GenAI chatbots are becoming lifelines for lonely users—we need guardrails to distinguish reality from the artificial
Sceptics may dismiss the ‘cathartic effect’ of GenAI therapy as a digital placebo, or a clever illusion spun by next-word prediction models. Emily M. Bender, a professor of linguistics at the University of Washington, and her colleagues famously described language models as “stochastic (probabilistic) parrots.”
Yet, for millions of lonely users, AI chatbots and mental health apps offer affordable, non-judgemental, round-the-clock companionship. Therapy and companionship now top the list of ways people use GenAI, wrote Filtered.com co-founder Marc Zao-Sanders in Harvard Business Review. His report analysed discussions on Reddit and other online communities.
Similarly, a September MIT Media Lab paper ‘My Boyfriend is AI’ noted that Character.AI handles companion interactions equivalent to some 20,000 requests every second, and that sexual role-playing was ChatGPT’s second-most common use case.
Elon Musk’s Grok already has a goth-anime adult bot named Ani. And OpenAI plans to permit erotic content for verified adult ChatGPT users. Morality is subjective across cultural and religious contexts, but simulated empathy can feel real even when programmed ‘consent’ is not.
This makes AI therapy suspect. A 2025 Stanford study warns that AI’s mimicry of intimacy can blur fantasy and reality, particularly for teens. It can get weirder as human–algorithm bonds deepen.
Just this week, a 32-year-old Japanese woman broke off her engagement to her human partner to ‘marry’ an AI character built on ChatGPT. But AI bots cannot be legal spouses. And what if the bot’s behaviour changes after a model upgrade, or it simply vanishes if its maker shuts shop? Given their deep bonds with these bots, some users may sink into depression and even attempt suicide. Who, then, will be held responsible?
In August, the parents of a 16-year-old filed a lawsuit against OpenAI, alleging that its chatbot isolated their son and helped plan his suicide. OpenAI is now battling seven such cases.
However comforting digital counsellors may sound, their efficacy remains scientifically unproven. Hence, monetizing AI intimacy raises ethical concerns too. Replika Pro, Soulmate AI and DreamGF already charge monthly fees for romantic or erotic chats. Grand View Research expects the AI therapy market to hit $5 billion by 2030, up from $1.13 billion in 2023.
Given the complexity, AI therapy needs global and local guardrails coupled with a human touch.
OpenAI, for instance, plans to add age-gating for erotic content, but admits that age detection isn’t foolproof. It also has an Expert Council on Well-Being and AI comprising psychologists and psychiatrists. China has banned AI-generated erotic content, while the EU’s upcoming AI Act (2026) may classify erotic AI as “high-risk,” requiring consent checks and human oversight.
The US and India prioritise data protection over policing sexual content. India even banned 25 OTT platforms in July for obscenity. But overregulation can drive users underground.
In the movie Her, a lonely Theodore Twombly (played by Joaquin Phoenix) falls for his AI assistant, Samantha (voiced by Scarlett Johansson). The MIT Media Lab researchers note in their paper that “Her is here,” not as one sentient AI, but as countless daily interactions between humans and algorithms.
The real question, they believe, isn’t whether AI relationships are “real,” but whether they help humans flourish despite flaws. Policymakers might want to consider this perspective too.