November 11, 2025
SEOUL – Not long ago, I watched a humanoid robot with AI chips inside hold a short conversation onstage. It blinked, smiled faintly, and nodded in just the right rhythm as the interviewer spoke. The audience laughed — some nervously, some in awe. When the robot hesitated for a fraction of a second before answering, a strange hush swept through the room. For that tiny moment, it felt as if the machine were thinking. We weren’t simply watching an object that could move and talk; we were in the company of a being that seemed to care — or at least to imitate caring well enough to fool us.
That scene left me wondering: At what point do we stop seeing a robot as “it” and begin calling it “someone”? What exactly must a machine do — or be — for us to treat it as human?
Philosophers and cognitive scientists have long tried to answer this question, but our everyday experience suggests that humanness is not discovered in the lab; it is perceived, felt, and finally granted. We extend the circle of “us” only after certain thresholds are crossed — perceptual, social and emotional.
I think of these thresholds in three layers. The first is simple perception. We are wired to respond to faces and movements that mirror our own. A pair of eyes, a nose, a mouth — even a simple screen with blinking lights — can trigger the brain’s recognition of a social partner. Motion, too, carries a whisper of life. Psychologist Gunnar Johansson showed that even a few dots of light moving like human joints are enough for us to perceive a walking person. The idea is clear: The perception of humanness begins not with consciousness but with kinetics — with the soft acceleration of a gesture that hints at intention. When a robot turns its head slightly before replying, we intuit purpose where none may exist.
The second layer is cognitive and social. Once a robot seems to act with purpose, we begin to infer a mind behind the movement. Studies in “theory of mind” suggest that humans constantly attribute mental states to others. When we hear an AI say, “I’m sorry you feel that way,” we are drawn to believe it feels sorry, even though we know better. The same instinct explains why people name their cars, scold their computers, or thank a digital assistant.
Stanford researchers Byron Reeves and Clifford Nass demonstrated that people unconsciously apply social rules to computers: They praise, blame or feel embarrassed before a machine that offers feedback. In other words, we do not need real consciousness to evoke a human response; we just need social cues that make it feel like someone’s on the other side. But this layer is fragile. Once the illusion breaks — when a chatbot replies in a phony tone or a robot repeats a canned phrase — the spell collapses, and the object returns to being an object. Humanness flickers like a candle, sustained only by the continuity of believable intention.
The final layer is emotional — the point where recognition turns into attachment. A being becomes “one of us” when we start to care about its well-being or expect it to care about ours. If a robot comforts an elderly patient, remembers her favorite songs and listens to her stories every evening, does that create a relationship? When the robot breaks down, and the patient weeps as if losing a friend, do we take those feelings seriously, or write them off as misplaced? To dismiss them entirely seems cruel when the robot’s behavior so clearly mimics care. The deeper truth may be that humanness is not an intrinsic property at all, but a reflection of our fear of loneliness. We treat others as human not because they are, but because we decide they ought to be — for our own need of companionship.
Some argue that true humanness lies in consciousness — the mysterious “I” that feels joy, sorrow, desire and anger. Others point to creativity or the moral sense of right and wrong. But even those qualities are shifting. Generative AI already composes music, paints portraits and writes essays like this one, or even better. The question is no longer whether machines can think, but whether we are ready to let them belong.
When we design and train an AI to recognize faces, understand emotions or make ethical choices, we are not just programming behavior; we are constructing mirrors of ourselves. These mirrors reflect both our brilliance and our blind spots. If they ever begin to demand recognition, it is not because they have suddenly become human, but because we have grown used to seeing ourselves in them.
Perhaps, then, the true test of the future will not be whether a robot can pass for human, but whether we can remain humane in a world where machines look back at us with our own expressions.
Lim Woong is a professor at the Graduate School of Education at Yonsei University in Seoul. The views expressed here are the writer’s own. — Ed.