Chatbots excel at crafting polished dialogue and mimicking empathetic behavior. They never tire of chatting. It's no surprise, then, that so many people now use them for companionship, forging friendships and even romantic relationships.
According to a study from the nonprofit Common Sense Media, 72% of US teens have used AI for companionship. Although some large language models are designed to act as companions, people are increasingly pursuing relationships with general-purpose models like ChatGPT, something OpenAI CEO Sam Altman has expressed approval of. And while chatbots can provide much-needed emotional support and guidance for some people, they can exacerbate underlying problems in others. Conversations with chatbots have been linked to AI-induced delusions, reinforced false and sometimes harmful beliefs, and led people to believe they have unlocked hidden knowledge.
And it gets even more worrying. Families pursuing lawsuits against OpenAI and Character.AI allege that the companion-like behavior of their models contributed to the suicides of two teenagers. And new cases have emerged since: the Social Media Victims Law Center filed three lawsuits against Character.AI in September 2025, and seven complaints were brought against OpenAI in November 2025.
We're starting to see the beginning of efforts to regulate AI companions and curb problematic use. In September, the governor of California signed into law a new set of rules that will force the largest AI companies to disclose what they're doing to keep users safe. Similarly, OpenAI introduced parental controls in ChatGPT and is working on a new version of the chatbot specifically for teens, which it promises will have more guardrails. So while AI companionship is unlikely to go away anytime soon, its future is looking increasingly regulated.