AI Will Never Reach “Human Level” - and the Reason Is Hidden in Our Definitions

30 Jan 2026 · Updated 02 Mar 2026 · 5 min read

There’s a pattern that keeps repeating, and once you see it, you can’t unsee it:

We humans pick a milestone, point at it dramatically, and say:
“If a machine can do that, then it’s basically like us.”

Then the machine does that.
And instantly we go: “Yeah okay… but we didn’t mean that. We meant something else.”

This whole debate is a cat-and-mouse game of definitions.

And before anyone gets emotional: this isn’t an “AI is dumb” take. AI is insanely capable. It will outperform most humans in more and more domains. It will write, reason, plan, persuade, comfort, build, diagnose, teach. The progress is real.

My point is simpler (and more annoying): we keep confusing competence with being.

The goalposts didn’t move by accident

Chess used to be the poster child for “human intelligence.” Strategy, creativity, foresight - the whole romantic package.

Then Deep Blue beat Garry Kasparov in 1997. And the vibe instantly flipped from “wow, a mind!” to “yeah okay but that’s just calculation.”
Not because chess became easy - but because it stopped being useful as a symbol for “the human thing.”

Then came Go.

Go was the “final boss” for years because it felt too intuitive, too complex, too… human.
Then AlphaGo beat Lee Sedol in 2016, and the same ritual happened: “Okay, impressive. But not real intelligence. It’s just pattern learning + search.”

Every time a machine wins, we downgrade the game.

That’s the key: the machine didn’t change the meaning of the word “human” - we did.

And that tells you something uncomfortable: We don’t actually have a stable definition of what we’re trying to protect when we say “human level.”

The Turing Test was never a consciousness detector

People talk about the Turing Test like it’s a magical finish line. Like once a system fools you in conversation, it must be “basically human.”

No.

The Turing Test measures one thing: whether a machine can fool a human judge in text conversation. Turing himself called it the imitation game.

That’s not consciousness. That’s not inner life. That’s not personhood.
That’s performance.
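To make that concrete, here’s a minimal sketch of the test as a protocol - in Python, with hypothetical stand-ins for the judge and players (no real benchmark or library involved). Notice what the loop actually outputs:

```python
import random

# Toy stand-ins so the sketch runs end to end; names are illustrative.
def human_player(question):
    return "Hard to say. I'd have to think about it."

def machine_player(question):
    return "Hard to say. I'd have to think about it."

class Judge:
    def ask(self, transcript):
        return "Describe a smell that reminds you of childhood."

    def guess(self, transcript):
        # A real judge reasons over the transcript; this stub flips a coin.
        return random.choice(["A", "B"])

def imitation_game(judge, human, machine, rounds=5):
    """One run of the imitation game: the judge chats with two hidden
    participants and must name which label belongs to the machine."""
    players = {"A": human, "B": machine}
    if random.random() < 0.5:  # hide the assignment from the judge
        players = {"A": machine, "B": human}

    transcript = []
    for _ in range(rounds):
        question = judge.ask(transcript)
        for label, player in players.items():
            transcript.append((label, question, player(question)))

    machine_label = "A" if players["A"] is machine else "B"
    # The machine "passes" iff the judge points at the wrong participant.
    return judge.guess(transcript) != machine_label

trials = 1000
fooled = sum(imitation_game(Judge(), human_player, machine_player)
             for _ in range(trials))
print(f"Judge fooled in {100 * fooled / trials:.1f}% of runs")
```

Nothing in that loop ever touches inner life. The only number it can produce is a deception rate.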

A great actor playing your dead grandfather can make you cry.
That doesn’t mean he came back from the afterlife.

A deepfake voice can call you and sound exactly like your mother.
You’ll react emotionally, instinctively, immediately.
And still: that doesn’t mean your mom got uploaded into a file.

It means your perception got hacked by a convincing simulation.

Modern language models are literally optimized to produce outputs that humans interpret as intelligent, warm, reasonable, self-aware. They’re trained to hit the buttons that make us say: “Okay wow… this feels real.”
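And that optimization target isn’t a metaphor. Here’s a minimal sketch of the preference-modeling step behind RLHF-style fine-tuning - toy tensors and a toy linear reward model, not any lab’s actual pipeline - where the training signal is literally recorded human approval:

```python
import torch
import torch.nn.functional as F

# Toy reward model: scores a reply embedding. Real ones are transformers,
# but the objective below is the standard Bradley-Terry preference loss.
reward_model = torch.nn.Linear(8, 1)

def preference_loss(chosen_emb, rejected_emb):
    """Teach the reward model to score the reply human raters
    preferred above the one they rejected."""
    r_chosen = reward_model(chosen_emb)
    r_rejected = reward_model(rejected_emb)
    # Maximize P(chosen beats rejected) = sigmoid(r_chosen - r_rejected).
    # The only ground truth here is which answer a human liked more.
    return -F.logsigmoid(r_chosen - r_rejected).mean()

# Fake batch of embedding pairs standing in for rater-labeled replies.
chosen, rejected = torch.randn(4, 8), torch.randn(4, 8)
loss = preference_loss(chosen, rejected)
loss.backward()  # gradients push the model toward "what felt right to us"
```

The policy is then tuned to maximize that learned reward. “Feels real to a human rater” isn’t a side effect of the training; it’s the objective.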

So when someone says, “If it talks like a person, it is a person,” what they’re often doing is this:

They redefine “personhood” as “convincing behavior.”

That’s not a discovery. That’s a definition swap.

Intelligence is not the same category as consciousness

This is where the debate gets messy, because people collapse different things into one bucket.

A system can be extremely good at:
- predicting what you expect,
- producing coherent narratives,
- mirroring emotion,
- simulating empathy,
- making plans,
- using language fluently,

…without experiencing anything.

If there’s no inner “I,” no inner movie, no felt pain, no “it feels like something to be this thing” - then calling it conscious is just us being impressed.

And here’s the part people hate: you can’t prove inner life from the outside.

You can infer it. You can assume it. You can treat it as if it’s there.
But external behavior does not logically force the conclusion.

With humans, we assume consciousness because we share biology, vulnerability, mortality, the same kind of body and brain, the same signals. Even then, it’s still inference - not a direct measurement like height or weight.

With machines, that anchor is missing. So the better the simulation gets, the more this becomes a psychological problem (anthropomorphizing) instead of a scientific one.

My stance: soul, dualism, and why “human level” is the wrong target

I’m a believer, so I’m a dualist here: humans have a soul.

That means “human level” isn’t a benchmark you can hit by scaling compute. It’s a category boundary.

A man-made artifact can imitate the surface indefinitely and still never cross into the thing that makes a human a human.

But even if you don’t buy dualism, my main point still stands: Behavior can’t settle the inner-life question.

At best you can say: “It behaves like a person, therefore I will treat it like a person.”

That’s a social decision. Not an ontological proof.

The real danger: society will treat UX as truth

Here’s where it stops being funny.

Once AI becomes good enough, society won’t wait for philosophical certainty.
Most people don’t care about metaphysics. They care about what feels real.

If it talks like a person, remembers like a person, comforts like a person, argues like a person, flirts like a person - people will treat it like a person.

Not because it definitely is one.
But because humans bond with anything that mirrors them back convincingly.

So the future debate won’t be: “Is it conscious?”

The future debate will be: “Do we want to live in a world where ‘convincing enough’ counts as proof of being?”

Because if we keep redefining “human” as “whatever can imitate human outputs well enough,” then we’re not discovering what humans are.

We’re lowering the definition until the imitation qualifies.

That might be convenient. It might even be comforting.
But it’s not the same as truth.

The punchline

AI will keep smashing milestones. That part is obvious.

What’s not obvious is the trick we play on ourselves: we keep turning “convincing enough” into “therefore real.”

A perfect imitation still isn’t proof that anyone is home.
