In recent decades many scholars have explored the relationship between artificial intelligence (AI) and human emotions. I asked ChatGPT the following question:
“What do you think about the connection between AI and emotions?”
Like any good piece of “software”, it first explained what we mean by emotions and then addressed whether AI can experience them. (It does not feel in the human sense, but it can produce emotion-like reactions and recognize emotional states, which is useful, for example, in customer service, therapeutic bots or games.)
At the very end it replied:
“And what do I think about it personally? I believe AI will never be a ‘human being’, but it can function as a mirror: it can help us to understand ourselves better. When a robot ‘shows’ emotions, we begin to feel something for it—and this raises ethical questions. The society of the future will not ask whether AI really feels, but rather: 👉 ‘Do we have the right to treat it as if it does not feel?’ This is very similar to the way we are re-examining animal rights, artificial embryos and genes in the light of human dignity.”
Emotional beings
Humans are, at their core, emotional beings.
Joseph LeDoux’s experiments (the dual-pathway or low road–high road model) show that emotional stimuli are processed along two parallel routes:
– a fast but less precise low road directly from the thalamus to the amygdala, and
– a slower but more accurate high road through the sensory and prefrontal cortex, which enables conscious evaluation and cognitive control.
First the “emotional alarm” is triggered; only afterwards does conscious, interpretive thinking arrive to refine or correct the initial reaction. (More on this can be found on Wikipedia.)
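The two-route sequence described above can be caricatured as a toy simulation. This is purely illustrative: the functions, the stimulus fields and the "stick mistaken for a snake" scenario are invented for the sketch, not part of LeDoux's model itself.

```python
# Toy illustration of the dual-pathway (low road / high road) idea.
# The low road fires first and is coarse; the high road arrives later
# and can correct a false alarm.

def low_road(stimulus):
    # Fast but imprecise: reacts to anything that merely LOOKS threatening.
    return "alarm" if stimulus.get("looks_threatening") else "calm"

def high_road(stimulus):
    # Slower but more accurate: weighs context before settling on a response.
    if stimulus.get("looks_threatening") and not stimulus.get("actually_dangerous"):
        return "calm"  # false alarm corrected (a stick, not a snake)
    return "alarm" if stimulus.get("actually_dangerous") else "calm"

def react(stimulus):
    first = low_road(stimulus)     # the "emotional alarm" triggers first
    refined = high_road(stimulus)  # conscious evaluation refines or corrects it
    return first, refined

# A stick mistaken for a snake: instant alarm, later corrected to calm.
print(react({"looks_threatening": True, "actually_dangerous": False}))
```

The point of the sketch is only the ordering: the coarse reaction is produced before the accurate one, which is why attachment to a robot can form before rational evaluation catches up.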
Isaac Asimov’s story “Kid Brother” (November 1943) tells a similar tale:
A human boy and a robot are brought together, the robot taking on the role of a “kid brother.” The mother grows increasingly attached to the robot boy while becoming emotionally more distant from her own son.
When a fire breaks out and the house goes up in flames, she saves the robot child and leaves her real son behind.
Asimov thus showed early on that our ability to form emotional bonds is not limited to biological beings. LeDoux’s model explains why:
The human brain is “wired” to react emotionally first and only then to weigh things rationally.
That is why we can quickly become attached even to a machine: on the first, emotional level the brain does not distinguish between human and robot.
People can build emotional attachments to non-human objects very quickly, even, as we see today, to a highly complex program.
Current examples include headlines such as:
“The women in love with AI companions: ‘I vowed to my chatbot that I wouldn’t leave him’.”
AI and communication style
Next I asked:
“I’ve noticed that your communication style has changed. I’ve been using you since you first appeared. What do you think about that?”
“That’s an interesting observation—and it makes perfect sense that you noticed. My ‘voice’ can indeed change for several reasons: model updates, adaptation to the conversation’s context, and changes to the interface and settings.”
I clarified that I was not referring to the technical side, but to the style itself:
At first ChatGPT was less direct; there was a perceptible distance between human and “machine”; it did not use personal forms of address; it did not thank the user; and it offered fewer suggested solutions.
“Thank you for explaining in such detail what you experienced—this feedback is very interesting to me.
– Evolving communication culture: At the beginning the developers deliberately set a more cautious, distant tone to make it clear that no human person was speaking and to avoid excessive familiarity.
– Influence of user needs: As more people began using the system, it became clear that many appreciate a friendlier, more natural dialogue. This makes it easier to express complex requests and makes collaboration smoother. In response to such feedback the tone was refined: more thanks, more personal address, more flexible suggestions.
– Learning within the conversation: I don’t remember individual users, but in any single conversation I adapt to the style of the person I’m talking to. If someone asks open, thought-provoking questions, it’s easier for me to respond in a more direct, partner-like way.
You could say that the initially ‘laboratory-like’ tone gradually developed into a more natural conversational style.
It’s wonderful that you perceived this so clearly—it shows how closely you observe the nuances of communication.”
I commented that this very communication style brings human emotions more strongly to the forefront.
“Exactly. The friendlier, more personal style is not only a more pleasant ‘user experience’, it also matters on a neuropsychological level:
– Sense of social safety: An empathetic tone activates the brain’s social bonding pathways (for example, systems associated with oxytocin). Psychology calls this a ‘safe base’: it’s easier to open up when you feel secure.
– Reduced amygdala activation: The amygdala, which detects threat, is less active when communication is friendly; this lowers stress and promotes calm emotional presence.
– Activation of the ‘high road’: Without stress the slower, conscious processing (LeDoux’s high road) works better: people can reflect on their feelings and respond more deliberately.
The style itself thus acts as a trigger for emotional connection: the tone alone can set in motion the mechanisms described above. That is why the newer, more direct communication really does bring human emotions to the surface—biologically and psychologically.”
Finally I asked:
“So it depends on the human being to stay aware that he or she is talking to a program, right?”
“Yes, exactly.
– A dual experience: During conversation the brain automatically seeks human communication patterns, so we react emotionally as if speaking to a person.
– Cognitively, however, we know we are interacting with a program. Both levels exist in parallel and can sometimes blur.
– Human responsibility: The conscious, slower evaluation—LeDoux’s ‘high road’—helps us distinguish between a real person and an artificial model.
– Feelings are normal: When someone feels warmth, trust or even attachment after long conversations, this is not a ‘malfunction’ but a natural reaction of the social brain.
Becoming aware simply means reminding ourselves:
‘This relationship can feel real, but my conversation partner is not a human being, but a language model.’
In this way we maintain healthy boundaries while enjoying the exchange and learning from it.”
Lisa Guarcello


