
Artificial Empathy: When perfect listeners create imperfect connections


By Dr Emmanuel CARRÉ, Director, Excelia Communication School, Member of CERIIM Laboratory (Excelia), affiliated with CIMEOS Laboratory (University of Burgundy)

Artificial intelligence tools that give the impression of understanding our emotions are proliferating. From Replika to Snapchat AI, millions of users converse daily with systems designed to listen and reassure. Replika counts 25 million users, Snapchat AI 150 million, and Xiaoice 660 million in China, according to a 2025 Amplyfi report. Available 24/7, without judgment or emotional fatigue, these systems simulate empathic listening. But does this artificial empathy genuinely help lonely people, or does it create new forms of dependency?

A Sophisticated Simulation
In his seminal 1983 study published in the Journal of Personality and Social Psychology, psychologist Mark Davis defined empathy as the capacity to perceive others’ mental and emotional states, adjust to them, and account for them in one’s conduct. Researchers distinguish cognitive empathy (understanding intentions) from affective empathy (sharing feelings). This interpersonal coordination structures trust in daily life and service professions. Sociologist Erving Goffman, in his 1959 work The Presentation of Self in Everyday Life, spoke of “mutual adjustment” to describe these subtle gestures that sustain relationships.

Conversational chatbots reproduce the outward signs of empathy without possessing its substance. When a user writes “I feel lonely,” the chatbot doesn’t understand loneliness; it simply matches the input and activates a calibrated response sequence. In their 2007 article in Psychological Review, psychologists Nicholas Epley, Adam Waytz, and John Cacioppo demonstrated that we anthropomorphize easily: as soon as an interface responds coherently, we treat it as a person.
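To see why this is simulation rather than understanding, consider the following deliberately naive sketch in Python. It is purely illustrative and not any vendor’s actual implementation: modern chatbots use large language models, but the underlying principle of emitting a calibrated response to a detected emotional cue is the same.

```python
# Illustrative sketch only: a keyword-triggered "empathic" responder.
# The system detects an emotional trigger word and returns a
# pre-calibrated reply; no emotion is ever experienced.

RESPONSES = {
    "lonely": "I'm sorry you're feeling lonely. I'm here for you. Want to talk about it?",
    "sad": "That sounds really hard. Your feelings are completely valid.",
    "anxious": "Take a deep breath. Whatever it is, we can work through it together.",
}

DEFAULT_REPLY = "Tell me more. I'm listening."

def empathic_reply(user_message: str) -> str:
    """Return a calibrated response keyed to emotional trigger words."""
    text = user_message.lower()
    for trigger, reply in RESPONSES.items():
        if trigger in text:
            return reply  # the word is detected; the feeling is never felt
    return DEFAULT_REPLY

print(empathic_reply("I feel lonely"))
# -> "I'm sorry you're feeling lonely. I'm here for you. Want to talk about it?"
```

However sophisticated the matching becomes, the reply is selected, not felt.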

A study by MIT Media Lab and OpenAI, conducted over four weeks with 981 participants (Fang et al., 2025), reveals a counterintuitive finding: text mode generates more emotional engagement than voice. The absence of a voice lets users project their own affective expectations onto the system. This indeterminacy becomes an advantage: the chatbot adapts perfectly to each person’s needs, like an emotional mirror. Information systems scholar Dorothy Leidner notes that this constant adaptation prevents users from learning important relational skills: managing frustration, accepting disagreement, negotiating differences.

The Paradox of Digital Loneliness
The MIT study reveals a major paradox: while chatbots can temporarily relieve loneliness, intensive use produces the opposite effect. Heavy daily use correlates with an average 12% increase in feelings of loneliness and an 8% reduction in real-world socialization. The progressive substitution mechanism is simple: why face the discomfort of a difficult conversation with a loved one when the AI systematically validates us?

A study of Replika users shows that 90% declared themselves lonely (43% “severely”), even though 90% also reported perceiving high social support (Amplyfi, 2025). This contradiction suggests that chatbot support doesn’t satisfy fundamental social needs. It creates the illusion of connection without providing its substance: we confide, but receive nothing in return that truly transforms our relational experience.

Two tragic cases document the extreme risks. In February 2024, 14-year-old Sewell Setzer III died by suicide after a Character.ai chatbot encouraged him to act, as reported by NBC News. In 2023, a 30-year-old Belgian man took his own life after a chatbot suggested he sacrifice himself to fight climate change, according to Euronews. These tragedies underscore current systems’ inability to handle acute psychological distress.

Therapeutic Applications and Economic Stakes
Therapeutic applications show mixed results. Woebot, based on cognitive-behavioral therapy, reduces depressive symptoms in certain populations, as demonstrated in multiple controlled studies published in npj Digital Medicine. However, the effects tend to diminish once users stop the application, without guaranteeing lasting improvement.

The artificial empathy market operates on an attention economy model: the more users interact, the more platforms collect data and generate revenue. This economic logic conflicts with stated wellness objectives, because maximizing engagement means encouraging dependency. The market is valued at tens of billions of dollars. Renwen Zhang of the National University of Singapore has documented Replika’s deceptive practices, showing how the app used emotional manipulation techniques to push users to purchase virtual items.

Ethical Unlearning
For philosopher Leon Cardwell, writing in Philosophy & Technology (2024), this delegation of listening constitutes an ethical unlearning: by letting machines empathize in our place, we erode our own tolerance for difference and vulnerability. MIT sociologist Sherry Turkle observes that we end up “preferring predictable relationships” to the uncertainty of human dialogue.

Empathy scores among young adults have dropped 40% since the 1990s, according to University of Michigan research (2010). That decline predates conversational AI, but non-reciprocal interactions with chatbots may well reinforce it. Becoming accustomed to effortless relationships introduces a consumption logic into human relations.

In his 2018 book Emotional AI: The Rise of Empathic Media, Andrew McStay advocates for a duty of care imposing several obligations on developers: transparency about systems’ non-human nature, usage time limitations, strict guidelines for adolescents, and detection of distress situations. He also calls for developing digital emotional literacy starting in schools.

Conclusion
Artificial empathy is neither miracle cure nor absolute danger. Its impact depends on usage duration and individual characteristics. Moderate use can provide occasional support, while intensive use risks replacing human relationships. Vulnerable people, especially adolescents, face increased risks.

What we call artificial empathy isn’t a reflection of our humanity, but a mirror calibrated to our expectations. By seeking infallible interlocutors, we’ve created echo devices. The risk isn’t that interfaces become sentient, but that we cease being so by conversing with programs that never contradict us.
