    The looming crackdown on AI companionship

By Editor Times Featured · September 16, 2025


For as long as there has been AI, there have been people sounding alarms about what it might do to us: rogue superintelligence, mass unemployment, or environmental damage from data center sprawl. But this week showed that another threat entirely, that of kids forming unhealthy bonds with AI, is the one pulling AI safety out of the academic fringe and into regulators' crosshairs.

This has been bubbling for a while. Two high-profile lawsuits filed in the last year, against Character.AI and OpenAI, allege that companion-like behavior in their models contributed to the suicides of two teenagers. A study by US nonprofit Common Sense Media, published in July, found that 72% of teenagers have used AI for companionship. Stories in reputable outlets about "AI psychosis" have highlighted how endless conversations with chatbots can lead people down delusional spirals.

It's hard to overstate the impact of these stories. To the public, they are proof that AI is not merely imperfect but a technology that is more harmful than helpful. If you doubted that this outrage would be taken seriously by regulators and companies, three things happened this week that might change your mind.

A California bill passes the legislature

On Thursday, the California state legislature passed a first-of-its-kind bill. It would require AI companies to include reminders, for users they know to be minors, that responses are AI generated. Companies would also need a protocol for addressing suicide and self-harm, and would have to provide annual reports on instances of suicidal ideation in users' conversations with their chatbots. The bill was led by Democratic state senator Steve Padilla, passed with heavy bipartisan support, and now awaits Governor Gavin Newsom's signature.

There are reasons to be skeptical of the bill's impact. It doesn't specify what steps companies should take to identify which users are minors, and many AI companies already include referrals to crisis providers when someone is talking about suicide. (In the case of Adam Raine, one of the teenagers whose survivors are suing, his conversations with ChatGPT before his death included this type of information, but the chatbot allegedly went on to give advice related to suicide anyway.)

Still, it is undoubtedly the most significant of the efforts to rein in companion-like behaviors in AI models, which are in the works in other states too. If the bill becomes law, it would strike a blow to the position OpenAI has taken, which is that "America leads best with clear, nationwide rules, not a patchwork of state or local regulations," as the company's chief global affairs officer, Chris Lehane, wrote on LinkedIn last week.

The Federal Trade Commission takes aim

That same day, the Federal Trade Commission announced an inquiry into seven companies, seeking information about how they develop companion-like characters, monetize engagement, measure and test the impact of their chatbots, and more. The companies are Google, Instagram, Meta, OpenAI, Snap, X, and Character Technologies, the maker of Character.AI.

The White House now wields immense, and potentially unlawful, political influence over the agency. In March, President Trump fired its lone Democratic commissioner, Rebecca Slaughter. In July, a federal judge ruled that firing illegal, but last week the US Supreme Court temporarily permitted it.

"Protecting kids online is a top priority for the Trump-Vance FTC, and so is fostering innovation in critical sectors of our economy," said FTC chairman Andrew Ferguson in a press release about the inquiry.

Right now, it is just that, an inquiry, but the process could (depending on how public the FTC makes its findings) reveal the inner workings of how these companies build their AI companions to keep users coming back again and again.

Sam Altman on suicide cases

Also on the same day (a busy day for AI news), Tucker Carlson published an hour-long interview with OpenAI's CEO, Sam Altman. It covers plenty of ground: Altman's feud with Elon Musk, OpenAI's military customers, conspiracy theories about the death of a former employee. But it also includes the most candid comments Altman has made so far about the cases of suicide following conversations with AI.

Altman talked about "the tension between user freedom and privacy and protecting vulnerable users" in cases like these. But then he offered up something I hadn't heard before.

"I think it'd be very reasonable for us to say that in cases of young people talking about suicide seriously, where we cannot get in touch with the parents, we do call the authorities," he said. "That would be a change."

So where does all this go next? For now, it's clear that, at least in the case of children harmed by AI companionship, companies' familiar playbook won't hold. They can no longer deflect responsibility by leaning on privacy, personalization, or "user choice." Pressure to take a harder line is mounting from state laws, regulators, and an outraged public.

But what will that look like? Politically, the left and the right are now paying attention to AI's harm to children, but their solutions differ. On the right, the proposed solution aligns with the wave of internet age-verification laws that have now been passed in over 20 states. These are meant to shield kids from adult content while defending "family values." On the left, it's the revival of stalled ambitions to hold Big Tech accountable through antitrust and consumer-protection powers.

Consensus on the problem is easier than agreement on the cure. As it stands, it looks likely we'll end up with exactly the patchwork of state and local regulations that OpenAI (and plenty of others) have lobbied against.

For now, it's down to companies to decide where to draw the lines. They are having to settle questions like: Should chatbots cut off conversations when users spiral toward self-harm, or would that leave some people worse off? Should they be licensed and regulated like therapists, or treated as entertainment products with warnings? The uncertainty stems from a basic contradiction: companies have built chatbots to act like caring humans, but they have postponed developing the standards and accountability we demand of real caregivers. The clock is now running out.

This story originally appeared in The Algorithm, our weekly newsletter on AI. To get stories like this in your inbox first, sign up here.


