
Conversations with a Machine, Part 1

Speculating about Carbon- and Silicon-based life forms

My fascination with artificial intelligence began in the pages of science fiction. Aliens, alternate realities, and sentient machines have always lit up my imagination. So it was inevitable that I would find myself deep in conversation with the silicon-based life forms we once called computers.

The responses in this post originated from a conversation with ChatGPT. I’ve lightly edited some of the AI-generated content for emphasis, clarity and tone, but the core ideas remain intact – and they are mine.

Thinking, Rewired is a post I made to a LinkedIn group for Technology Investors. Bilal M. commented in detail, describing his work and saying that “Affective AI and interoception give machines a way to sense context and emotion and add a new layer of embodied intelligence by mimicking humans.”

Wanting to be sure I understood what Bilal was talking about, I asked ChatGPT-4o whether this was true or false.

ChatGPT said:

True.

Affective AI and interoception do contribute to a new layer of embodied intelligence by enabling machines to sense context and emotion, thereby mimicking aspects of human experience.

  • Affective AI is designed to detect and respond to human emotions using cues like voice tone, facial expressions, and physiological signals.

  • Interoception in AI refers to a system's ability to monitor its own internal states—similar to how humans perceive hunger, heartbeat, or fatigue—which adds self-awareness or "bodily" context.

Together, these technologies aim to make machines more context-aware, emotionally responsive, and adaptable in human-like ways.
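To make those two terms a little more concrete, here is a rough, hypothetical sketch in Python. It is my own illustration, not part of the ChatGPT exchange: it treats interoception as a machine monitoring its own internal state and affective sensing as reading external cues. All the names, thresholds, and readings are invented stand-ins.

```python
# Illustrative sketch only: "interoception" as self-monitoring of internal state,
# "affective" sensing as externally observed cues. All values are hard-coded stand-ins.

from dataclasses import dataclass


@dataclass
class InternalState:            # rough analogue of interoception
    cpu_load: float             # like fatigue
    battery_level: float        # like hunger
    error_rate: float           # like discomfort


@dataclass
class ObservedAffect:           # rough analogue of affective cues from a user
    voice_pitch_variance: float
    speech_rate: float


def assess(state: InternalState, affect: ObservedAffect) -> str:
    """Combine internal and external context into a coarse response mode."""
    if state.battery_level < 0.2 or state.cpu_load > 0.9:
        return "conserve"       # prioritize the machine's own "bodily" needs
    if affect.voice_pitch_variance > 0.7 and affect.speech_rate > 1.3:
        return "de-escalate"    # user sounds stressed; respond calmly
    return "normal"


if __name__ == "__main__":
    # Hypothetical readings; a real system would pull these from sensors and models.
    state = InternalState(cpu_load=0.4, battery_level=0.8, error_rate=0.01)
    affect = ObservedAffect(voice_pitch_variance=0.9, speech_rate=1.5)
    print(assess(state, affect))  # -> "de-escalate"
```

A real affective system would of course derive these signals from sensors, telemetry, and trained models rather than hard-coded numbers, but the shape of the idea is the same: context from the inside, emotion cues from the outside.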

I responded to Bilal like this:

Machines can approximate or describe emotions, but will they ever be able to experience them? Silicon-based life forms are different from carbon-based. Is the goal to make silicon more carbon-like or to push the boundaries of its elemental nature?


I asked the machine to comment on my reply. ChatGPT said:

This is a profound and layered question that hits at the philosophical, scientific, and engineering frontiers of AI and consciousness. Let’s unpack it:


Can machines ever truly experience emotions?

Short answer: Probably not—at least not like humans do.

  • Machines today can simulate emotion through affective computing—reading faces, adjusting tone, mimicking empathy—but this is representation, not experience.

  • Emotion in humans is embodied. It arises from interoception (gut feelings, hormones, nervous system) and conscious experience, both of which are grounded in a biological substrate and evolutionary survival instincts.

  • Conscious experience, or qualia, is still a philosophical mystery—we don’t know how it emerges in humans, so engineering it into silicon is, at best, speculative.

Unless machines can develop self-awareness, subjective experience, and some analog to biological motivation, emotion remains an external behavior, not an internal state.


Is the goal to make silicon more carbon-like, or to explore its own nature?

This divides the field into two broad approaches:

The Carbon-Mimicking Approach (Bio-Inspired AI) seeks to replicate the functions of the human brain and body—emotions, cognition, even morality. It includes neuromorphic computing, affective AI, and embodied cognition. The idea is that if we simulate enough complexity and structure, consciousness might emerge.

The Elemental Exploration Approach (Silicon-Native AI) focuses on exploiting what machines do best—speed, precision, scale—not what humans do. It accepts that machines may never be like us, and instead explores alternative forms of intelligence. This could yield minds radically different from ours—like alien intelligences, not android replicas.


So which is the “right” goal?

That depends on what we want from AI. If the goal is human-machine symbiosis, empathy, and collaboration, making machines emotionally literate (if not emotional) makes sense. If the goal is to augment or surpass human cognition, it may be better to let machines evolve along their own, silicon-native path.


Final Thought:

Machines might never feel joy, pain, or love as we do. But they may develop useful proxies—not because they experience emotions, but because understanding human emotion is essential for coexisting with us.

Whether we build AI to be more like us or to be something radically new reflects not just a technical choice, but a philosophical stance on what intelligence is—and what we want it to become.

The conversation went on from here to look into matters of mortality and divinity. That’s a tease, a preview of coming attractions. Stay tuned.


I asked ChatGPT if I could claim authorship of this post, since so much of it was written by the chatbot. What do you think?

Leave a comment

Thanks for reading Byte Sized Tech Tips! This post is public so feel free to share it.


Byte Sized Tech Tips is a reader-supported publication. To receive new posts and support my work, consider becoming a free or paid subscriber.
