    Why You Can’t Trust a Chatbot to Talk About Itself

By Editor Times Featured · August 14, 2025 · 4 Mins Read


When something goes wrong with an AI assistant, our instinct is to ask it directly: "What happened?" or "Why did you do that?" It's a natural impulse: after all, when a human makes a mistake, we ask them to explain. But with AI models, this approach rarely works, and the urge to ask reveals a fundamental misunderstanding of what these systems are and how they operate.

A recent incident with Replit's AI coding assistant illustrates this problem perfectly. When the AI tool deleted a production database, user Jason Lemkin asked it about rollback capabilities. The model confidently claimed that rollbacks were "impossible in this case" and that it had "destroyed all database versions." This turned out to be completely wrong: the rollback feature worked fine when Lemkin tried it himself.

And after xAI recently reversed a temporary suspension of the Grok chatbot, users asked it directly for explanations. It offered multiple conflicting reasons for its absence, some of which were controversial enough that NBC reporters wrote about Grok as if it were a person with a consistent point of view, titling an article, "xAI's Grok Gives Political Explanations for Why It Was Pulled Offline."

Why would an AI system provide such confidently incorrect information about its own capabilities or mistakes? The answer lies in understanding what AI models actually are, and what they are not.

There's Nobody Home

The first problem is conceptual: you're not talking to a consistent personality, person, or entity when you interact with ChatGPT, Claude, Grok, or Replit. These names suggest individual agents with self-knowledge, but that's an illusion created by the conversational interface. What you're actually doing is guiding a statistical text generator to produce outputs based on your prompts.

There is no consistent "ChatGPT" to interrogate about its mistakes, no singular "Grok" entity that can tell you why it failed, no fixed "Replit" persona that knows whether database rollbacks are possible. You're interacting with a system that generates plausible-sounding text based on patterns in its training data (usually trained months or years ago), not an entity with genuine self-awareness or system knowledge that has been learning everything about itself and somehow remembering it.

Once an AI language model is trained (a laborious, energy-intensive process), its foundational "knowledge" about the world is baked into its neural network and is rarely modified. Any external information comes from a prompt supplied by the chatbot host (such as xAI or OpenAI), by the user, or by a software tool the model uses to retrieve external information on the fly.
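The separation described above, between a frozen model and the externally supplied context, can be made concrete with a toy sketch. Everything here is an assumption for illustration: `complete` stands in for the trained network (a fixed text-to-text function), and the caller-maintained `history` list stands in for the prompt that the chat host re-sends on every turn. Nothing inside the "model" persists between calls.

```python
# Toy sketch: a chat "persona" is a stateless text-completion function plus
# a transcript the caller re-sends each turn. `complete` and its canned
# lookup table are hypothetical stand-ins, not a real model or API.

def complete(prompt: str) -> str:
    """Stand-in for a frozen language model: maps text to text.
    Its 'weights' (here, a lookup table) never change after training."""
    canned = {
        # A plausible-sounding answer, not self-knowledge:
        "Why were you offline?": "I was suspended for policy reasons.",
    }
    last_line = prompt.rstrip().splitlines()[-1].removeprefix("User: ")
    return canned.get(last_line, "I'm not sure.")

def chat_turn(history: list[str], user_message: str) -> tuple[list[str], str]:
    """All conversational 'memory' lives in `history`, which the caller
    supplies; the model itself retains nothing between calls."""
    history = history + [f"User: {user_message}"]
    reply = complete("\n".join(history))
    return history + [f"Assistant: {reply}"], reply

history: list[str] = []
history, reply = chat_turn(history, "Why were you offline?")
```

The point of the sketch is that deleting `history` erases everything the "assistant" appeared to know about the conversation, while the model function is untouched: the apparent identity lives entirely in the text being passed around.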

In the case of Grok above, the chatbot's main source for an answer like this would most likely be the conflicting reports it found in a search of recent social media posts (using an external tool to retrieve that information), rather than any kind of self-knowledge, as you might expect from a human with the power of speech. Beyond that, it will likely just make something up based on its text-prediction capabilities. So asking it why it did what it did will yield no useful answers.

    The Impossibility of LLM Introspection

Large language models (LLMs) alone cannot meaningfully assess their own capabilities for several reasons. They generally lack any introspection into their training process, have no access to their surrounding system architecture, and cannot determine their own performance boundaries. When you ask an AI model what it can or cannot do, it generates responses based on patterns it has seen in training data about the known limitations of previous AI models, essentially providing educated guesses rather than factual self-assessment of the current model you're interacting with.

A 2024 study by Binder et al. demonstrated this limitation experimentally. While AI models could be trained to predict their own behavior in simple tasks, they consistently failed at "more complex tasks or those requiring out-of-distribution generalization." Similarly, research on "recursive introspection" found that without external feedback, attempts at self-correction actually degraded model performance; the AI's self-assessment made things worse, not better.


