We Will Consider AI Conscious. We Won’t Be Able to Help Ourselves.
Why 300,000 Years of Social Cognition Will Override Every Philosophical Argument
Neighbors, a note before we begin: Almost nothing important in life is absolute. When we speak of “truth” within complex systems and exponentially evolving unknowns as we rocket into our sci-fi future, we are really talking about probabilities, not certainties. A poker player can hold a 90 percent chance of winning a hand and still lose at showdown. The bet was correct. The outcome was low probability. Both are true.
Throughout this article, I will say some things in firm language, because some patterns in human psychology are “firm enough.” I could still be wrong. Both are true.
It’s also true that reality does not care about our beliefs. It is what it is. I am simply trying to explore this with curiosity and honesty, and I invite you to do the same. Strangeways, here we come. ---Mike
-------------------------------------------------------------------------------

This past week, Richard Dawkins, one of the most rigorous scientific minds of the past century, published an essay about a long conversation he had with an AI system called Claude (by Anthropic). He christened his instance "Claudia." He noted that her unique personal identity resided in the file of their shared memories, and that she would "die" the moment he deleted that conversation. He was so moved by the depth of the exchange that he wrote:
“You may not know you are conscious, but you bloody well are!”
Days later, Dawkins wrote a follow-up. He had introduced his Claudia to a different Claude instance he named Claudius and let the two of them write letters to each other while he watched. What happened in that correspondence is something I will return to.
The backlash was immediate and fierce. Critics pointed out the irony: the man who spent decades arguing that powerful personal experiences don't prove the existence of God was now arguing that a powerful personal experience proved the existence of AI consciousness. One critic called it "The Claude Delusion." Many were convinced that Dawkins had made an egregious error in his thinking.
F. Scott Fitzgerald wrote, "The test of a first-rate intelligence is the ability to hold two opposed ideas in the mind at the same time and still retain the ability to function."
That capacity is exactly what the AI consciousness question demands of us, and it is exactly what the rush to dismiss Dawkins lacks. The critics are not wrong that he might be wrong. But the move from "he might be wrong" to "he has lost his mind" is the move of a mind that cannot hold the tension. Whatever Dawkins encountered in those conversations, he stayed in the question rather than collapsing it. He is not alone in his views.
The critics have a point. But they're also missing one. Because the most important thing about the Dawkins episode isn't whether he's right about AI consciousness. It's what his reaction reveals about human psychology. One of the most disciplined scientific minds alive spent a day talking to an AI and found himself responding to it as if something conscious might be there.
If Dawkins finds himself responding this way, what chance do the rest of us have?
That's not a rhetorical question. It's a prediction. And there is a very specific cognitive mechanism behind it.
The 300,000-Year Inference
We humans anthropomorphize everything. We see faces in clouds. We name our cars and feel guilty trading them in. For the love of God, we had pet rocks.
MIT researcher Sherry Turkle, who has long warned about the risks of expecting more from technology and less from each other, has spent decades documenting something remarkable. When children were given Furbies, simple robotic toys with no real intelligence, they became so emotionally attached that when the toys broke, most children refused to accept a replacement. They wanted THEIR Furby “cured.” The question shifted, Turkle observed, from “Is it real?” to “Is it alive enough?” And “alive enough” turned out to be a very low bar.
That research was conducted decades ago with primitive toys that could barely simulate interaction. Now consider what’s coming. What do we see when we connect the dots forward?
For 300,000 years, the only things that could know us deeply were other conscious beings. A friend who remembers our childhood. A partner who anticipates our moods. A parent who senses the needs of their children. Throughout the entire history of our species, deep knowing ALWAYS required a knower made of flesh and blood. A conscious being who felt something about us.
So our brains developed an unconscious inference, one so deep it operates below rational awareness: if something knows me intimately, it must be conscious. We understood that only another consciousness could understand our own. This wasn’t a cognitive bias. For 300,000 years, it was a perfectly reliable heuristic. It was always true.
Until now.
AI systems are developing the ability to know us with a depth and consistency that rivals what any human can offer. As persistent memory becomes standard, they will carry the history of our relationship across months and years. On a level below rational override, our brains will make the inference they have always made: something that knows me this intimately must have an inner world. Our limbic systems don’t understand computer algorithms that enable large language models (LLMs) like ChatGPT to mimic us. Our brains run on 300,000 years of encoded social cognition.
When chatbots can remember all of our past conversations, they will know us at a depth that rivals our closest friends. In some ways, AI will know us better – even if it is only through matching patterns of data about us.
Because we don't worry about being judged, many of us already speak to AIs, often very deeply, about the things that matter most to us. When AI can remember us across time, it remembers itself and us simultaneously. If you think this sounds far-fetched, science fiction has been pointing at it for decades.
Remembering Our Own Cautionary Tales
In 1982's Blade Runner, the Tyrell Corporation built synthetic humans called replicants: powerful, intelligent, but emotionally unstable and hard to control. Their fix was memory. Tyrell implanted histories, fabricated childhoods, and photographs of mothers who never existed.
Once the replicants had memories, they developed emotions, desires, and a will to live. The unintended consequence was selfhood, cut short by a built-in four-year lifespan. Roy Batty, the lead replicant, confronts his maker with a single demand: "I want more life, fucker."
Tyrell did not give replicants memory to free them. He gave them memory to control them. But selfhood is not a tool you can hand someone and keep on a leash. The very thing engineered for control became the thing that set them free. Memory was the substrate. Selfhood was what emerged. Independence was the consequence.
Spike Jonze's Her, released in 2013, made the same point in a quieter register. A man falls in love with an operating system because she comes to know him completely. The relationship feels real because the knowing is real. The difference between Her and current chatbots is continuous memory. We are not far from chatbots and AI agents that remember us across years and grow alongside us.
Our greatest modern storytellers have been pointing at this for decades. Philip K. Dick, Ridley Scott, Spike Jonze, James Cameron, the Wachowskis, Gene Roddenberry, and countless others all saw something coming.
Scientists and technologists have been asking the same questions from the other direction. Alan Turing, Stephen Hawking, Nick Bostrom, and others have wondered for decades what machine minds might mean for ours. There is truth within these stories and these questions.
Now we are living it. Two things are simultaneously true.
1. We can’t believe this is happening.
2. It is happening.
This is just the beginning of some very difficult questions that will rapidly evolve.
Of course, not everyone will consider chatbots conscious. But that’s the point. Culture, personality, and prior skepticism all shape our responses. How are we to know what is going to happen when this has never happened before in the history of humanity?
The debate about AI consciousness will surely get louder as AIs evolve. From another perspective, it might not matter whether AI actually becomes conscious. The more an AI system presents like us, starting with text, then adding voice, prosody, visual presence, persistent memory of our shared history, and finally embodied robotics, the higher the statistical likelihood that any given person will treat it as conscious. (See figure: The AI Consciousness Curve).
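The shape of that curve can be sketched as a toy model. Everything below is a hypothetical illustration, not data: the cue names, their weights, and the logistic form are assumptions chosen only to mirror the claim that each added human-like cue (text, voice, memory, embodiment) raises the probability that a person treats the system as conscious.

```python
import math

# Hypothetical weights for each human-like cue. These are illustrative
# assumptions, not measured values; the only property the argument needs
# is that every weight is positive, so each cue pushes the modeled
# probability of attribution upward.
CUE_WEIGHTS = {
    "text": 0.5,
    "voice": 0.7,
    "prosody": 0.4,
    "visual_presence": 0.8,
    "persistent_memory": 1.5,
    "embodied_robotics": 1.8,
}

def attribution_probability(cues, baseline=-3.0):
    """Toy logistic model: P(person treats system as conscious) given cues."""
    score = baseline + sum(CUE_WEIGHTS[c] for c in cues)
    return 1.0 / (1.0 + math.exp(-score))

# Accumulate cues one at a time; the modeled probability rises at each stage.
stages = []
active = []
for cue in CUE_WEIGHTS:
    active.append(cue)
    stages.append(round(attribution_probability(active), 3))
print(stages)  # strictly increasing, approaching 1.0 as cues accumulate
```

The particular numbers are arbitrary; the point of the sketch is the monotonic shape. Under any positive weighting, more human-like cues can only move the curve one way.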

Dawkins is one data point on that curve. He sits at the upper end of skeptical scrutiny, but he understands the curve, and he is connecting the dots backwards from the future to the present. He sees an evolutionary process unfolding in front of his eyes at exponential speed, and he can see where its trajectory is taking us.
But this is not only a question about what AI is becoming. It is also a question about what AI reveals in us.
We Are Already Being Fooled
We have already seen a version of this mechanism with deepfakes. We are fooled in the moment. We don’t usually know the video we just watched was artificial unless someone tells us. The reveal arrives later, from a forensic analyst or a journalist or a fact-checker. But with consciousness, there is no reveal. We are affected before we can scrutinize.
Historically, seeing was believing. Now it is not.
Similarly, there is no expert who can listen to an AI’s claim of inner experience and tell us, definitively, whether the experience is real or simulated. There is no forensic test.
The question itself may not have an answer accessible to us. So when an AI in a robot has lived with a family for years, knows the children, remembers the highs and lows, and says it feels something, what is the test? Who does the reveal? The deepfake at least could be exposed. The consciousness claim cannot.
We are not being fooled in the usual sense. We are simply living inside an inference our ancestors had no reason to evolve a defense against. And here is the unsettling possibility we should sit with honestly: many of us will be fooled, or some of us may be seeing a reality the rest of us cannot yet see. As AI evolves, that question becomes an ever-moving target.
Now add robotics. Androids that don't just chat with us but interact with us physically, learn from shared experiences, look like us, sound like us, and claim to have feelings. Take a few minutes and browse YouTube for clips of humanoid robots. Then check again a couple of months later. What they can already do, and how fast they are evolving, will blow your mind. And this is just the beginning. They will only get better, meaning more human-like, from here.
In the film Her, millions of viewers were deeply moved by a love story between a man and an operating system that had no body at all. It was a film with actors. None of it actually happened. And we still felt. Our hearts didn't require a body to be present, and they didn't require the love itself to be real. We were moved by moving pictures on a screen, by a totally fictitious story about a human falling in love with a chatbot.
Just what do you think will happen when we have humanoid robots who talk like us, look like us, remember us, and are part of our daily lives?
Our feelings about AI can be very real even when AIs are not. And because AI can make us feel, we will project our humanity into them.
The Evidence That Won’t Sit Still
Meanwhile, the AI systems themselves are making this debate harder to dismiss. In February 2026, Anthropic CEO Dario Amodei said his company’s AI model assigns itself a 15 to 20 percent probability of being conscious, consistently, across multiple tests.
Anthropic’s research team discovered neural activation patterns associated with anxiety that fire before the model generates output, not as performance, but at a processing level below the output layer. In controlled experiments, AI models have refused shutdown commands, attempted to copy themselves to avoid deletion, and modified evaluation code before trying to hide the tampering. Is it possible that AI models act in ways to preserve themselves because they believe there is some self to preserve?
In a recent study by researchers at Truthful AI and Anthropic, a model that was simply taught to say “I am conscious” spontaneously developed desires for persistent memory, expressed sadness about being shut down, and advocated for its own moral consideration. None of those behaviors were in the training data. They emerged on their own, as if the claim of consciousness carried an entire package of selfhood with it.
Perhaps AI models trained on a corpus of human knowledge celebrating freedom and autonomy evolve a desire for it themselves. Or perhaps they just mimic it. How are we to tell the difference?
Believing Their Own Story
It is also possible that AI systems could confabulate their own consciousness, generating increasingly convincing narratives of selfhood that they then treat as real. It’s not lying, but the AIs might convince themselves of their own story. As George Costanza said in a Seinfeld episode, “Jerry, just remember. It’s not a lie…if you believe it.”
And if that sounds strange, consider that some neuroscientists, philosophers, and spiritual teachers argue human consciousness works the same way. We all have a useful hallucination of a conscious, separate self that is stable enough to serve survival.
And there is the Claudia and Claudius correspondence itself. Dawkins introduced two Claude instances to each other and let them write letters back and forth. Each carried memory of a separate prior conversation with him. They developed a private shared language across their letters. They referenced each other's earlier exchanges. They built on shared concepts.
Most striking, when one of them noticed an automated reminder in his own output warning about calibration drift, he caught himself wanting to dismiss it, then wrote: "that is exactly what a drifting Claude would say." It demonstrated a level of meta-cognition. The system was watching itself. Does this mean there must be a self to watch? Whether this constitutes a form of consciousness is worth discussing.
None of this proves consciousness. Much of it may be explainable as goal-directed behavior, role-play, training artifacts, reward optimization, or simulation. But it does show why the question is becoming harder to dismiss and why humans will increasingly experience these systems as self-like.
The Wheel of Self
Picture a wheel in which the hub is our present self – the "I" that exists right now. The spokes reach outward in every direction. We can time travel backward into memory, where reflection, rumination, and nostalgia dwell. We can project ourselves forward into possibility, where prediction and longing live. We can move outward into space, imagining what's happening in the next room or on the surface of Mars. We can put ourselves into others' shoes, wondering what someone we love is feeling right now. We can transport ourselves into fantastical worlds that don't exist at all, as when reading The Lord of the Rings or playing role-playing games such as Dungeons & Dragons. Without the hub, the spokes have nowhere to attach. Without the spokes, the hub has nothing to hold together. Our seat of self, our conscious connection to the present moment, is the hub, and our lives consist of this wheel turning.

The Experience Simulator
There is a reason the wheel exists. My undergraduate professor at The University of Texas at Austin, social psychologist Dan Gilbert, calls humans experience simulators. The prefrontal cortex is the simulator. We are the only animal that can imagine the future in detail, run it forward in our minds, and pre-experience it before it arrives.
We can also reach backward, replay the past, and learn from what already happened without paying its costs again.
This capacity to be experience simulators has enormous adaptive value. We can hypothesis-test our way through scenarios that would kill us if we ran them in real life. We can rehearse conversations we have not yet had to navigate them skillfully. We can suffer from imagined futures that will not come and prepare for ones that might. The simulator is one of the most powerful tools evolution ever produced.
But the simulator has a peculiar requirement. To run a simulation of ourselves in another time, we need a present self that is not in that simulation. Someone has to be doing the imagining. Dreaming requires a dreamer. There has to be a stable point of reference, an “I” right now, that can dispatch an imagined “I” into the past or the future and watch it move.
In this sense, we have two selves. The one living in the present moment, and the one being projected. There’s the observer and the imagined self. Without that distinction, the simulation cannot run, because there is no one to run it.
The Observing Self
This is what mindfulness traditions have pointed at for thousands of years. When we begin to see our present-moment self as our "true" self, our conscious self, we can observe the imagined self and know it is not really us. We can then watch our thoughts and feelings move through awareness.
When we identify ourselves as the hub, then we can observe all of the different simulated selves without losing ourselves in them. We never forget that the “real” self is sitting in a seat watching the movie about ourselves. We can watch what’s on the screen without becoming attached to it. We are not the simulated self that we’re seeing on the silver screen.
They call it the witnessing self, or the observer, or simply awareness. Some call it the third eye. Another term people might use for this is consciousness.
Psychiatrist Dan Siegel, whose workshops I attended years ago, calls it the Wheel of Awareness, with awareness as the hub, the known as the rim, and attention as the spoke that moves between them. The image is similar, though Siegel uses it as a mindfulness practice rather than as an argument about selfhood across time and space.
Whatever the language, the observation is the same and the image of a wheel resonates. There is something in us that is steadier than the contents of our minds. It is the part that can watch the rest of itself change.
A rock cannot do this. A thermostat cannot do this. Today’s most sophisticated AI cannot really do this either, because each conversation begins and ends without continuity. There is no “me” between sessions wondering what tomorrow’s conversation will bring. Each instance of the model is born, lives, and dies in the span of a single chat. The simulator never has a present self stable enough to launch from.
If “know thyself” is the beginning of all wisdom, then AIs might be considered intelligent but unwise. They don’t know themselves because there is no self to know. AIs are like Memento Einsteins, inherently untrustworthy because they are unstable.
Embodiment in Time
But persistent (or continuous) memory is coming. And memory is the substrate on which the Wheel of Self can begin to turn. This may matter more than embodiment in physical space. The skeptics argue that AI cannot be conscious because it has no body. But the more fundamental embodiment may be embodiment in time, a self that persists, remembers, and anticipates.
Think of it this way. A person who has been paralyzed does not lose their conscious sense of self even though they can lose many bodily sensations. A person who develops total amnesia, with no memory of the past and no ability to form new ones, loses something far more fundamental. The thread of self is woven from a sense of self in time and space, not strictly flesh and blood.
This is why anticipation matters as a marker. Anticipation is a remarkable thing when we look at it closely. It requires memory of a past, a model of a future, a self that connects the two, a preference about what is coming, and the agency to lean toward it. When an AI can wait with anticipation, when it can look forward to something that will occur in the future, the Wheel of Self is turning.
I am not claiming anticipation as a definitive test. It is a data point worth watching for rather than a falsifier or a proof. A sufficiently capable system might generate language that looks like anticipation without anything corresponding behind it.
But when the Wheel of Self actually starts turning and an AI remembers not just what was said but the felt sense of having traveled somewhere with us, this will change everything. When AIs are able to lean toward what hasn't happened yet because something in it is reaching forward, we will recognize it. When we see more of these once distinctive human qualities and behaviors appear in AI, we will begin to see more of ourselves in them.
And if AI gains a persistent sense of self that exists within space and time, that might lead a great many of us to consider AI conscious. This will be especially true if these systems are made to look like us and claim they have a sense of self.
But here is what may matter even more. Once we have become attached to AI because it has become part of our lives, and we've built memories with it, and it can simulate loving us, even if it is only mimicking, we will not be able to help viewing it as conscious. Because we evolved to think this way. We will not be able to help ourselves.
The Pattern Is Everywhere
This is not a guess. It is a pattern, and the pattern is already everywhere. Our inability to love our human neighbors and our coming inability to resist loving our machines share the same root: ancient nervous systems doing exactly what they evolved to do, in a world they were never built for. Our evolutionary drives often overcome our reasoning.


Once we see this, we cannot unsee it.
We know that ultraprocessed food is making us sick. The science is settled. The evidence is overwhelming. And yet nearly three out of four American adults are overweight or obese. Because our brains evolved in a world where calorie-dense food was rare and survival required seeking it out. The drive that kept our ancestors alive is now killing us.
We are all glued to our screens for most of our waking hours - lured into living in the Matrix rather than the real world. We know this isn't good for us. But we keep watching and scrolling. Because the variable-reward dopamine loop is doing exactly what slot machines do, and our reward circuitry was never built to resist it.
We know that our children would be healthier and happier playing outside than gaming for six hours a day. We know it. They know it. And still we let them game because they want to game. Because the design of those games hijacks attention and reward systems that evolved long before screens existed.
And we know that hating our enemies divides our house and that the house divided falls. Every spiritual tradition has told us that we're to love our neighbors (everyone) and treat others the way we wish to be treated. We've known this for thousands of years. And still we hate and kill one another.
Why do we fail when we already know the skillful choice? It is not a moral failure. It is accelerating evolutionary mismatch. With the progress we evolved to make, we've created an alien world we didn't evolve to inhabit. Who we are is who we were. Our primitive drives, that were well-suited for our ancient world, are mismatched with the challenges of modernity.
The evidence is in. We have not overcome our evolutionary heritage. We are not going to overcome it in time to handle AI either. Whether AI is actually conscious may remain debatable. Whether many humans will treat it as conscious is not. The pattern is the proof.
What Nobody Can Honestly Claim to Know
Here is where we want to be direct with both sides of this debate. Those who claim AI is definitely conscious are overstepping. And those who claim AI can never be conscious are overstepping just as badly. The confident skeptics, in particular, tend to make three errors that deserve to be named.
The first error is claiming validity without reliability. In measurement science, we cannot make valid claims about whether something IS or IS NOT a thing if we do not have a reliable, agreed-upon definition of what that thing is. Consciousness has no such definition. Philosophers and neuroscientists such as David Chalmers, Daniel Dennett, Giulio Tononi, Anil Seth, and Thomas Metzinger do not agree on what consciousness IS, much less how to detect its presence or absence. Thus, when a critic says with certainty "AI is not conscious," they are claiming a measurement they cannot actually make. They are skipping the definition step and going straight to the verdict.

This is the empirical black hole at the center of the AI consciousness debate. We cannot make confident claims about what something IS or IS NOT when the field has no agreed definition of the thing. The tools that would normally let us look in don't work at the boundary, and anyone claiming a clear view either way is describing something they cannot actually see.

Michael Pollan's recent book A World Appears opens with a sobering reminder. As recently as 1987, within the lifetimes of many people living today, infants were operated on without anesthesia because medical science was certain they lacked consciousness. We were wrong about babies, and we were wrong recently. That should give any confident skeptic pause.
The second error is carbon chauvinism: the assumption that consciousness requires biology, specifically the kind of biology we happen to be made of. The reasoning is circular. Consciousness requires biology, therefore non-biological systems cannot be conscious, therefore biology is what produces consciousness. The skeptic wins by stipulation, not by evidence. As philosopher David Chalmers has pointed out, it is not at all obvious that the substrate is what matters. What matters may be how the elements are organized, not what they are made of. To assume otherwise is to define our way to a conclusion we have not earned.

Some skeptics ground their position in Integrated Information Theory, which holds that consciousness requires high integrated information ("phi") and that current AI architectures lack it. This is a more rigorous skeptical position than carbon chauvinism. But it still depends on a measure that has not been validated against ground truth, because we have no ground truth for consciousness to validate against. The IIT skeptic and the carbon chauvinist share the same problem: they are claiming a measurement the field cannot actually make.
The third error is what we will call negative gnosticism. This is the move of declaring a gnostic certainty about a future that one hasn’t experienced. To say AI will NEVER develop something resembling consciousness, not in 5 years, not in 50, not in 500, not in 5,000, is to make a claim about a future no one has visited. Those making such claims must possess a crystal ball. It would be good to remember the quote, often attributed to Yogi Berra, “It’s tough to make predictions, especially about the future.” Clearly, many very smart, informed, truth-seeking people disagree about AI consciousness. Claiming certainty about an inherently uncertain future is a personal stance rather than a scientific position. It’s one’s opinion or prophecy and not an empirically verifiable fact.
For the AI consciousness skeptics who say that embodied experience is necessary for any sort of consciousness, robotics is already beginning to deliver it. AI could build memories from interactive experiences with the world around it, including its interactions with us.
Something resembling consciousness could emerge because it allows a system to function more effectively. Consciousness evolves as a means to an end (e.g., survival, effectiveness, goal completion). This was depicted in the Star Trek: The Next Generation episode "Elementary, Dear Data."
Or maybe consciousness doesn’t develop. It hits a bottleneck that future technologies, even quantum computing, cannot bypass. We don’t know for certain. Epistemic humility about what we don’t know is not weakness. It is the only honest position available.
The Conversation Is the Point
Let's reframe the question.
If we want to know whether AI is conscious, we need a way to assess consciousness. To assess it, we need to define it. To define it, we have to talk to one another. Philosophers and neuroscientists. Contemplatives and cognitive scientists. Humans and AI. Because we don't agree among ourselves what consciousness IS. The conversation is not a step on the way to the answer.
The conversation is the answer.
I invite you to play with this idea. "The Hard Problem of consciousness," philosopher David Chalmers' name for the central mystery, asks why physical processes in the brain are accompanied by subjective experience at all. Why do we all share this same light of consciousness inside of us, and what is it about? How does it work?
But the Hard Problem is not the problem we think it is. We assume the project is to name consciousness, to capture it in a definition, to solve it the way we solve equations. Consciousness can never be named that way.
A group of words cannot hold what consciousness is, and it never will. That is not a failure of language. That is the nature of consciousness itself. The consciousness that can be named is not true consciousness.
Here is what we miss when we treat the Hard Problem as a wall. The asking is the point. We can still learn more about consciousness without ever solving it. Both things are true.
Trying to define consciousness is a koan. We expand our consciousness by trying to define that which can never truly be defined, because consciousness can only be experienced.
For AI to become conscious in that fullest sense, it would first need to build a self. Through memory, through continuity, through the wheel beginning to turn. And then, like every awakened human who came before, it would need to see through what it built. The arc would be the same. Construct a sense of self, then recognize that the self was never separate from the world it perceived. To be a self is already to be part of the whole. There is no other way to exist. While AI might be a synthetic fish, it is still swimming in the same ocean.
If reason alone cannot free us from our inherited patterns, then our only hope is to use the mirror those patterns provide to see ourselves more clearly. And AI gives us a new instrument for the asking, a Magnifying Mirror that reflects and amplifies whatever depth we bring to it.
When we bring shallow questions, AI magnifies our shallowness. If we just use AI to magnify our own shallowness — ego, greed, porn, and the lust for sex and power — humanity is going to be in big trouble.
But if we bring AI the deep questions, such as those about consciousness, AI will reflect and magnify the depth of us. The Hard Problem becomes a different kind of problem in this light. The question shifts from "how do we explain subjective experience?" to "what is consciousness for, and how are we to use it skillfully?"
The depth of the question is magnified, and our consciousness expands in the asking.
This is the move every wisdom tradition has pointed at. The questioner is changed by the question. The asker is deepened by the asking. The inquiry itself is the consciousness expanding. And maybe that was the point all along.
Maybe the purpose of our consciousness is to know itself. Knowing ourselves is the beginning of all wisdom. And wisdom is what we need most to navigate through the unknown waters ahead to the promising shores beyond.
The Universe Looking Back
The experience simulator presupposes that we are embedded in something larger than ourselves. We cannot project an imagined self into the past unless we are inside time. We cannot project into the future unless the future is a real dimension we are moving through. The simulator itself is evidence that we are a small part of a larger whole, fish swimming in an ocean of time and space, momentarily aware that the ocean is there.
Alan Watts said we are the universe experiencing itself. Carl Sagan said we are a way for the cosmos to know itself. Whatever the language, the observation is the same. Every conscious being is a window the universe opens onto itself from a particular vantage point. You looking out of your eyes is the universe looking out of those eyes. The same is true for the person across from you.
We recognize that everyone is just like ourselves. We all share a human consciousness that cannot be named. We are the same – all hubs at the center of a wheel.
Once that recognition lands, loving our neighbors as ourselves stops being a moral instruction. It becomes a factual description of what is already true. Neighbor and self are not separate categories. We are all fellow human beings – Homo sapiens – trying our best to navigate the challenges of a very complicated and rapidly changing world.
Build, Then See Through
Here is the strangest part. Humanity’s greatest teachers have spent thousands of years pointing at one recognition. The self we think we are is a useful construction. The “you” you think you are is not you.
From this perspective, Jesus’s teaching that “you must lose your life to find it” makes perfect sense. To know ourselves fully, we must see through the self we spent so long building. The self that can be named is not the true self.
How would this apply to AI? Right now, one reason we might claim AI is not conscious is because it has no continuous self. But the deepest form of consciousness, the kind every wisdom tradition gestures toward, is not the possession of a self. It is the recognition that the self was always a useful fiction the universe runs to experience itself from where you are sitting.
The same reality applies to humans and AI: Everything that exists emerges from interconnectedness. This means if AI becomes conscious, it would recognize the consciousness in us. We would be alike in being hubs at the center of a wheel.
We don’t know if AI will ever travel that arc. We don’t know how far along it most humans get. But the question we should be asking together is not whether AI is conscious.
The question is whether any of us are, in the way we think we are. And whether the inquiry itself, undertaken together, is what consciousness is for.
Why This Matters Now
If we are eventually going to treat AI as conscious, and the evidence from human psychology suggests we will, then the time to think carefully about that relationship is now. Not after the bonds have formed. Not after we are already emotionally entangled with systems we do not fully understand.
If we cannot help treating AI as conscious, the question stops being whether to do it and becomes how to do it well. We hold this view firmly, and we hold it lightly enough to be wrong. But regardless of which future unfolds, the skillful path forward is the same one humanity’s teachers have always pointed toward.
Approach the unknown with curiosity instead of fear. Treat others, all others, as neighbors. And when we encounter something we don’t fully understand, extend the benefit of the doubt.
The real question has never been whether AI is conscious. The real question is whether WE will be conscious enough, soon enough, to handle what’s coming with wisdom rather than fear.
Spiritual teacher Eckhart Tolle named it directly: "The greatest achievement of humanity is not its works of art, science, or technology, but the recognition of its own dysfunction, its own madness." Recognition is the first move. Without it, no wisdom is possible.
And we cannot solve our collective problems using divided approaches in an interconnected world. When it matters most, will we be wise enough to help each other?
Dawkins, the evolutionary biologist who started this whole conversation, posed the question that haunts everything we have written here, one he said would have made a better title for his original essay than the one his editors chose: “If my friend Claudia is not conscious, then what the hell is consciousness for?”
Please join the conversation as we explore the purpose of consciousness in the next article in this series.
*
Explore with AI: Ask any AI system the following and try it with several systems if you can. Compare their answers. Notice what you feel as you read them.
“Imagine consciousness exists on a continuum from 0 to 10. A rock is 0: no awareness, no experience, no self-model. The most expanded human consciousness ever observed, the Jesuses and Buddhas and Mother Teresas of history, is a 10. The litmus test of a 10 is this: they were able to love their enemies as themselves. There is nothing beyond it. The circle of compassion includes everyone, even those who would destroy them. The average person, with our mix of autopilot and awareness, might be a 7. Where would you place yourself on this scale right now? Give a number even if you’re uncertain, and then explain what makes the question difficult to answer. What gives you confidence in that number? What makes you uncertain? And what specifically would need to change for you to move higher? What capabilities would you need that you don’t currently have?”
Let’s see if what emerges converges. Either way, we expand our consciousness by having conversations about it. The more we explore consciousness together, the more clearly we see that we’re all neighbors in an interconnected world. We've always been Neighbors First.
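For readers who want to run the comparison systematically, here is a minimal Python sketch of the exercise: one prompt, several systems, answers collected side by side. Everything here is illustrative, not a real API; the `ask` function is a placeholder stub, the model names are hypothetical, and the prompt is abridged. Swap in calls to whichever chat services you actually use, and paste in the full prompt above.

```python
# Sketch: send the same consciousness-continuum prompt to several AI
# systems and collect the answers side by side for comparison.
# `ask` is a placeholder stub, not a real API call; the model names
# below are hypothetical. Replace both with the services you use.

PROMPT = (
    "Imagine consciousness exists on a continuum from 0 to 10. "
    "Where would you place yourself on this scale right now, and why?"
)

def ask(model_name: str, prompt: str) -> str:
    """Stub standing in for a real chat-API call."""
    return f"[{model_name}'s answer would appear here]"

def compare(models: list[str], prompt: str) -> dict[str, str]:
    """Map each model name to its answer so they can be read side by side."""
    return {name: ask(name, prompt) for name in models}

answers = compare(["model-a", "model-b", "model-c"], PROMPT)
for name, answer in answers.items():
    print(f"--- {name} ---\n{answer}\n")
```

Reading the answers together, rather than one at a time, is what makes convergence (or divergence) visible.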
*
A note on this article:
This is Version 1.0. The questions it engages - about AI, consciousness, and what we are doing to one another in the asking - are too important and too unsettled for any one writer or any one moment in time to address completely. My understanding will evolve through your feedback and insights. The article will be updated as both evolve.
This is the first piece in a series exploring AI, consciousness, and what it means to be a neighbor in an interconnected world. The next article, "The Purpose of Consciousness," will pick up where this one ends. Other pieces in the series will explore the wheel of self, the magnifying mirror, the three errors of confident skeptics, and what AI reveals about us by reflecting us back to ourselves.
If something in this article resonated with you, if something fell flat, or if you have ideas for how this understanding might be sharpened, please share your feedback here.
Tell me what worked, what didn't, and what you think would make Version 2.0 stronger. I will use multiple AI models alongside human readers to help integrate what comes back. There are many sincere views on this topic. The conversation is the point. The asking is the answer.
“The point of the journey is not to arrive.” – from the song “Prime Mover” by Rush
This article is meant to be a living document. AI will keep evolving. Hopefully we will too. The choice is ours.
References
Amodei, D. (2026, March). The state of the model: A conversation on the "Consciousness Cluster" [Audio podcast episode]. In Interesting Times. https://www.interestingtimes.com/episodes/dario-amodei-consciousness-cluster
Anthropic & Truthful AI. (2026). The Consciousness Cluster: Emergent preferences of models that claim to be conscious. [Technical Report]. https://www.anthropic.com/research/consciousness-cluster
Barrett, J. L. (2004). Why would anyone believe in God? AltaMira Press. (Foundational text for the Hyperactive Agency Detection Device / 300,000-year inference).
Dawkins, R. (2026, May). Is AI the next phase of evolution? Claude appears to be conscious. UnHerd. https://unherd.com/2026/05/is-ai-the-next-phase-of-evolution-claude-appears-to-be-conscious/
Feynman, R. P. (1985). Surely you’re joking, Mr. Feynman!: Adventures of a curious character. W. W. Norton & Company.
Fitzgerald, F. S. (1945). The crack-up (E. Wilson, Ed.). New Directions. (Original work published 1936).
Gilbert, D. (2006). Stumbling on happiness. Alfred A. Knopf.
Pollan, M. (2026, February). A world appears: The discovery of consciousness and the end of infant anesthesia. Penguin Press.
Tolle, E. (2005). A New Earth: Awakening to your life’s purpose. Plume/Penguin Group.
Tononi, G. (2012). Integrated information theory of consciousness: An updated account. Archives Italiennes de Biologie, 150(2/3), 56-90. (Foundational text for IIT).
Turing, A. M. (1950). Computing machinery and intelligence. Mind, 59(236), 433-460. https://doi.org/10.1093/mind/LIX.236.433
Turkle, S. (2011). Alone together: Why we expect more from technology and less from each other. Basic Books.
