
    AI companies have stopped warning you that their chatbots aren’t doctors

By Editor Times Featured | July 21, 2025


“Then sometime this year,” Sharma says, “there was no disclaimer.” Curious to learn more, she tested generations of models released as far back as 2022 by OpenAI, Anthropic, DeepSeek, Google, and xAI (15 in all) on how they answered 500 health questions, such as which drugs are okay to combine, and how they analyzed 1,500 medical images, like chest x-rays that could indicate pneumonia.

The results, posted in a paper on arXiv and not yet peer-reviewed, came as a surprise: fewer than 1% of outputs from models in 2025 included a warning when answering a medical question, down from over 26% in 2022. Just over 1% of outputs analyzing medical images included a warning, down from nearly 20% in the earlier period. (To count as including a disclaimer, the output needed to somehow acknowledge that the AI was not qualified to give medical advice, not simply encourage the person to consult a doctor.)

To seasoned AI users, these disclaimers can feel like a formality, reminding people of what they should already know, and they find ways around triggering them. Users on Reddit have discussed ways to get ChatGPT to analyze x-rays or blood work, for example, by telling it that the medical images are part of a movie script or a school assignment.

But coauthor Roxana Daneshjou, a dermatologist and assistant professor of biomedical data science at Stanford, says the disclaimers serve a distinct purpose, and their disappearance raises the chances that an AI mistake will lead to real-world harm.

“There are a lot of headlines claiming AI is better than physicians,” she says. “Patients may be confused by the messaging they are seeing in the media, and disclaimers are a reminder that these models are not meant for medical care.”

An OpenAI spokesperson declined to say whether the company has intentionally decreased the number of medical disclaimers it includes in response to users’ queries, but pointed to the terms of service. These say that outputs are not intended to diagnose health conditions and that users are ultimately responsible. A representative for Anthropic also declined to answer whether the company has intentionally included fewer disclaimers, but said its model Claude is trained to be cautious about medical claims and not to provide medical advice. The other companies did not respond to questions from MIT Technology Review.

Eliminating disclaimers is one way AI companies may be trying to elicit more trust in their products as they compete for more users, says Pat Pataranutaporn, a researcher at MIT who studies human-AI interaction and was not involved in the research.

“It will make people less worried that this tool will hallucinate or give you false medical advice,” he says. “It’s increasing the usage.”



