
    An AI Customer Service Chatbot Made Up a Company Policy—and Created a Mess

By Editor Times Featured · April 19, 2025


On Monday, a developer using the popular AI-powered code editor Cursor noticed something strange: Switching between machines instantly logged them out, breaking a common workflow for programmers who use multiple devices. When the user contacted Cursor support, an agent named "Sam" told them it was expected behavior under a new policy. But no such policy existed, and Sam was a bot. The AI model made the policy up, sparking a wave of complaints and cancellation threats documented on Hacker News and Reddit.

This marks the latest instance of AI confabulations (also called "hallucinations") causing potential business damage. Confabulations are a type of "creative gap-filling" response in which AI models invent plausible-sounding but false information. Rather than admitting uncertainty, AI models often prioritize producing plausible, confident responses, even when that means manufacturing information from scratch.

For companies deploying these systems in customer-facing roles without human oversight, the consequences can be immediate and costly: frustrated customers, damaged trust, and, in Cursor's case, potentially canceled subscriptions.

    How It Unfolded

The incident began when a Reddit user named BrokenToasterOven noticed that while swapping between a desktop, a laptop, and a remote dev box, Cursor sessions were unexpectedly terminated.

"Logging into Cursor on one machine immediately invalidates the session on any other machine," BrokenToasterOven wrote in a message that was later deleted by r/cursor moderators. "This is a significant UX regression."

Confused and frustrated, the user wrote an email to Cursor support and quickly received a reply from Sam: "Cursor is designed to work with one device per subscription as a core security feature," read the email reply. The response sounded definitive and official, and the user did not suspect that Sam was not human.

After the initial Reddit post, users took the post as official confirmation of an actual policy change, one that broke habits essential to many programmers' daily routines. "Multi-device workflows are table stakes for devs," wrote one user.

Shortly afterward, several users publicly announced their subscription cancellations on Reddit, citing the non-existent policy as their reason. "I literally just cancelled my sub," wrote the original Reddit poster, adding that their workplace was now "purging it completely." Others joined in: "Yep, I'm canceling as well, this is asinine." Soon after, moderators locked the Reddit thread and removed the original post.

"Hey! We have no such policy," wrote a Cursor representative in a Reddit reply three hours later. "You're of course free to use Cursor on multiple machines. Unfortunately, this is an incorrect response from a front-line AI support bot."

AI Confabulations as a Business Risk

The Cursor debacle recalls a similar episode from February 2024, when Air Canada was ordered to honor a refund policy invented by its own chatbot. In that incident, Jake Moffatt contacted Air Canada's support after his grandmother died, and the airline's AI agent incorrectly told him he could book a regular-priced flight and apply for bereavement rates retroactively. When Air Canada later denied his refund request, the company argued that "the chatbot is a separate legal entity that is responsible for its own actions." A Canadian tribunal rejected this defense, ruling that companies are responsible for information provided by their AI tools.

Rather than disputing responsibility as Air Canada had done, Cursor acknowledged the error and took steps to make amends. Cursor cofounder Michael Truell later apologized on Hacker News for the confusion about the non-existent policy, explaining that the user had been refunded and that the issue resulted from a backend change meant to improve session security, which unintentionally created session invalidation problems for some users.

"Any AI responses used for email support are now clearly labeled as such," he added. "We use AI-assisted responses as the first filter for email support."
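Cursor has not published the details of that labeling or filtering, but a "first filter" along those lines might tag every AI-drafted reply and route anything touching policy or billing to a human before it goes out. The sketch below is a hypothetical illustration, not Cursor's implementation: the model call is stubbed out, and the keyword check, label text, and function names are all assumptions.

```python
# Hypothetical sketch of an AI "first filter" for support email.
# The model call is stubbed; nothing here reflects Cursor's actual system.

POLICY_KEYWORDS = ("policy", "subscription", "refund", "pricing", "security")

AI_LABEL = ("[This reply was drafted by an AI assistant. "
            "Reply to this email to reach a human directly.]")


def draft_ai_reply(ticket_text: str) -> str:
    """Stand-in for a call to a language model that drafts a response."""
    return f"Thanks for reaching out! Here is what we found regarding: {ticket_text[:60]}..."


def needs_human(ticket_text: str) -> bool:
    """Escalate anything that sounds like a policy or billing question,
    since those are exactly the answers a model is tempted to invent."""
    lowered = ticket_text.lower()
    return any(word in lowered for word in POLICY_KEYWORDS)


def handle_ticket(ticket_text: str) -> str:
    if needs_human(ticket_text):
        return "Your question has been routed to a human support agent."
    # Label machine-drafted replies so users know they are not talking to a person.
    return f"{AI_LABEL}\n\n{draft_ai_reply(ticket_text)}"


if __name__ == "__main__":
    print(handle_ticket("Why was I logged out when switching devices? Is this a new policy?"))
```

The point of the escalation check is precisely the failure mode described above: questions about policy are the ones a confident model is most likely to answer by inventing something.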

Still, the incident raised lingering questions about disclosure among users, since many people who interacted with Sam apparently believed it was human. "LLMs pretending to be people (you named it Sam!) and not labeled as such is clearly intended to be deceptive," one user wrote on Hacker News.

While Cursor fixed the technical bug, the episode shows the risks of deploying AI models in customer-facing roles without proper safeguards and transparency. For a company selling AI productivity tools to developers, having its own AI support system invent a policy that alienated its core users represents a particularly awkward self-inflicted wound.

"There's a certain amount of irony that people try really hard to say that hallucinations aren't a big problem anymore," one user wrote on Hacker News, "and then a company that would benefit from that narrative gets directly hurt by it."

This story originally appeared on Ars Technica.


