
    Water Cooler Small Talk, Ep. 9: What “Thinking” and “Reasoning” Really Mean in AI and LLMs

By Editor Times Featured · October 29, 2025


Water cooler talk is a distinctive form of small talk, often overheard in office spaces around a water cooler. There, employees frequently share all kinds of corporate gossip, myths, legends, inaccurate scientific opinions, indiscreet personal anecdotes, or outright lies. Anything goes. In my Water Cooler Small Talk posts, I discuss strange and often scientifically invalid opinions that I, my friends, or some acquaintance of mine have overheard in the office and that have genuinely left us speechless.

So, here's the water cooler opinion of today's post:

I was really disappointed using ChatGPT the other day to review Q3 results. This isn't Artificial Intelligence; it's just a search and summarization tool.

    🤷‍♀️

We often talk about AI imagining some advanced form of intelligence, straight out of a 90s sci-fi movie. It's easy to drift away and think of it as some cinematic singularity like Terminator's Skynet or Dune's dystopian AI. Commonly used illustrations of AI-related topics, full of robots, androids, and intergalactic portals ready to transport us to the future, only further mislead us into interpreting AI wrongly.

Some of the top results appearing for 'AI' on Unsplash;
from left to right: 1) photo by julien Tromeur on Unsplash, 2) photo by Luke Jones on Unsplash, 3) photo by Xu Haiwei on Unsplash

But, for better or worse, AI systems operate in a fundamentally different way, at least for now. At the moment, there is no omnipresent superintelligence waiting to solve all of humanity's unsolvable problems. That's why it's essential to understand what current AI models actually are and what they can (and can't) do. Only then can we manage our expectations and make the best use of this powerful new technology.


🍨 DataCream is a newsletter about what I learn, build, and think about AI and data. If you're interested in these topics, subscribe here.


Deductive vs Inductive Thinking

In order to get our heads around what AI in its current state is and isn't, and what it can and cannot do, we first need to understand the difference between deductive and inductive thinking.

Psychologist Daniel Kahneman devoted his life to studying how our minds operate, reaching conclusions and decisions and shaping our actions and behaviors, a vast and groundbreaking body of research that ultimately won him the Nobel Prize in Economics. His work is beautifully summarized for the general reader in Thinking, Fast and Slow, where he describes two modes of human thought:

    • System 1: fast, intuitive, and automatic, operating mostly unconsciously.
    • System 2: slow, deliberate, and effortful, requiring conscious attention.

From an evolutionary standpoint, we tend to prefer operating on System 1 because it saves time and energy, somewhat like living life on autopilot without thinking about things much. However, System 1's high efficiency often comes at the cost of low accuracy, leading to errors.


Inductive reasoning aligns closely with Kahneman's System 1: it moves from specific observations to general conclusions. This type of thinking is pattern-based and thus stochastic. In other words, its conclusions always carry a degree of uncertainty, even if we don't consciously acknowledge it.

    For instance:

Pattern: The sun has risen every day of my life.
Conclusion: Therefore, the sun will rise tomorrow.

As you may imagine, this type of thinking is prone to bias and error because it generalizes from limited data. In other words, the sun will most likely rise tomorrow as well, since it has risen every day of my life, but not necessarily.

To reach this conclusion, we also silently assume that 'all days will follow the same pattern as those we have experienced', which may or may not be true. In other words, we implicitly assume that patterns observed in a small sample apply everywhere.

Such silent assumptions, made in order to reach a conclusion, are exactly what make inductive reasoning produce results that are highly plausible, yet never certain. Similarly to fitting a function through a few data points, we can guess what the underlying relationship might be, but we can never be sure, and being wrong is always a possibility. We build a plausible model of what we observe, and simply hope it's the right one.
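The curve-fitting analogy is easy to make concrete. In this sketch (the data points and model degrees are illustrative, not from the article), two different models fit the same three observations, yet extrapolate to wildly different values, which is exactly the uncertainty that inductive generalization carries:

```python
import numpy as np

# Three observations of some unknown process.
x = np.array([0.0, 1.0, 2.0])
y = np.array([0.1, 1.1, 3.9])

# Induction: different models can fit the same few points.
linear = np.polyfit(x, y, deg=1)     # least-squares line
quadratic = np.polyfit(x, y, deg=2)  # parabola through all three points

# Both look plausible on the observed sample...
print(np.polyval(linear, x))     # close to y
print(np.polyval(quadratic, x))  # reproduces y exactly

# ...but they disagree wildly once we extrapolate beyond it.
print(np.polyval(linear, 10.0))     # ≈ 18.8
print(np.polyval(quadratic, 10.0))  # ≈ 91.1
```

Nothing in the three observed points tells us which model is right; choosing one is the silent assumption.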

Image by author

Put another way, different people operating on different data or under different circumstances will produce different results when using induction.


On the flip side, deductive reasoning moves from general principles to specific conclusions; this is essentially Kahneman's System 2. It's rule-based, deterministic, and logical, following the structure of 'if A, then certainly B'.

    For instance:

Premise 1: All humans are mortal.
Premise 2: Socrates is human.
Conclusion: Therefore, Socrates is mortal.

This type of thinking is less prone to errors, since every step of the reasoning is deterministic. There are no silent assumptions; as long as the premises are true, the conclusion must be true.

Back to the function-fitting analogy, we can think of deduction as the reverse process: calculating a data point given the function. Since we know the function, we can calculate the data point with certainty, and unlike multiple curves fitting the same data points better or worse, there is one definitive correct answer for the data point. Most importantly, deductive reasoning is consistent and robust. We can repeat the calculation at a specific point of the function a million times, and we will always get the exact same result.
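A minimal illustration of this contrast (the rule below is made up for the example): once the general rule is fully specified, evaluating it at a specific point is deterministic, and repetition never changes the answer:

```python
# Deduction as the reverse of curve fitting: the rule is given,
# and we simply compute a specific value from it.
def rule(x: int) -> int:
    return 2 * x + 1  # a known, fully specified function

# Repeating the computation any number of times yields the same value:
# consistent, robust, no silent assumptions.
values = {rule(21) for _ in range(1000)}
print(values)  # {43}: one definitive correct answer
```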

Image by author

Interestingly, humans can make mistakes even when using deductive reasoning. For instance, we may botch the calculation of the specific value of the function and get the result wrong. But that would just be a random error. In contrast, the error in inductive reasoning is systemic: the reasoning process itself is prone to error, since we include those silent assumptions without ever knowing to what extent they hold true.


    So, how do LLMs work?

It's easy, especially for people without a technical or computer science background, to imagine today's AI models as an extraterrestrial, godlike intelligence, able to provide intelligent answers to all of humanity's questions. However, this isn't (yet) the case, and today's AI models, as impressive and advanced as they are, remain limited by the principles they operate on.

Large Language Models (LLMs) don't "think" or "understand" in the human sense. Instead, they rely on patterns in the data they have been trained on, much like Kahneman's System 1 or inductive reasoning. Simply put, they work by predicting the next most plausible word for a given input.

You can think of an LLM as a very diligent student who memorized vast amounts of text and learned to reproduce patterns that sound correct, without necessarily understanding why they are correct. Most of the time this works, because sentences that sound correct have a higher probability of actually being correct. This means such models can generate human-like text and speech of impressive quality, and essentially sound like a very smart human. However, producing human-like text and arguments that sound correct doesn't guarantee they really are correct. Even when LLMs generate content that looks like deductive reasoning, it isn't. You can easily verify this by taking a look at the nonsense AI tools like ChatGPT occasionally produce.
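To make "predicting the next most plausible word" tangible, here is a toy bigram predictor over a made-up corpus. Real LLMs use neural networks over tokens rather than raw counts, so this is only a sketch of the principle, not of any actual model:

```python
from collections import Counter, defaultdict

# Toy "training corpus".
corpus = (
    "the sun rises in the east . "
    "the sun sets in the west . "
    "the moon rises at night ."
).split()

# Count, for each word, which words were observed to follow it.
following: defaultdict[str, Counter] = defaultdict(Counter)
for prev, nxt in zip(corpus, corpus[1:]):
    following[prev][nxt] += 1

def predict_next(word: str) -> str:
    # Pick the most frequently observed continuation:
    # plausible-sounding, but nothing is "understood".
    return following[word].most_common(1)[0][0]

print(predict_next("the"))  # "sun": the most frequent continuation
```

The prediction is pure pattern reproduction; the model has no idea what a sun is, only which word tended to come next.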

Image by author

It's also important to understand how LLMs arrive at these next most probable words. Naively, we might assume that such models simply count the frequencies of phrases in existing text and then somehow reproduce those frequencies to generate new text. But that's not how it works. There are about 50,000 commonly used words in English, which leads to a practically unbounded number of possible word combinations. Even for a short sentence of 10 words, the combinations would be 50,000^10, roughly 10^47, an astronomically large number. On the flip side, all existing English text in books and on the web amounts to a few hundred billion words (around 10^12). As a result, there isn't nearly enough text in existence to cover every possible phrase and generate text with this approach.
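The arithmetic above is easy to check directly (the vocabulary and corpus sizes are the article's rough order-of-magnitude figures, not precise counts):

```python
# Back-of-the-envelope: why lookup of whole sentences can never work.
vocabulary = 50_000        # roughly the number of commonly used English words
sentence_length = 10

possible_sentences = vocabulary ** sentence_length  # 50,000^10 ≈ 9.8 × 10^46
existing_words = 10 ** 12  # rough size of all digitized English text

# There are ~35 orders of magnitude more possible 10-word sentences
# than words ever written.
print(possible_sentences // existing_words)
```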

Instead, LLMs use statistical models built from existing text to estimate the probability of words and phrases that may never have appeared before. Like any model of reality, though, this is a simplified approximation, which results in AI making mistakes or fabricating information.


    What about Chain of Thought?

So, what about 'the model is thinking', or 'Chain of Thought (CoT) reasoning'? If LLMs can't really think like humans do, what do these fancy terms mean? Is it just a marketing trick? Well, kind of, but not exactly.

Chain of Thought (CoT) is primarily a prompting technique that allows LLMs to answer questions by breaking them down into smaller, step-by-step reasoning sequences. In this way, instead of making one large leap to answer the user's question in a single step, with a greater risk of producing an incorrect answer, the model makes multiple generation steps, each with higher confidence. Essentially, the user 'guides' the LLM by breaking the initial question into several prompts that the LLM answers one after the other. For example, a very simple form of CoT prompting can be achieved by adding something like 'let's think step by step' at the end of a prompt.
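A zero-shot CoT prompt really is that simple to construct. In the sketch below, `ask_llm` is a hypothetical placeholder for whatever chat-completion call your stack provides; only the prompt construction is shown:

```python
# Minimal sketch of zero-shot Chain-of-Thought prompting.
COT_SUFFIX = "\n\nLet's think step by step."

def with_cot(question: str) -> str:
    # The whole technique: nudge the model to emit intermediate
    # reasoning steps before committing to a final answer.
    return question + COT_SUFFIX

prompt = with_cot(
    "A bat and a ball cost $1.10 in total. The bat costs $1.00 "
    "more than the ball. How much does the ball cost?"
)
print(prompt.endswith("step by step."))  # True
# answer = ask_llm(prompt)  # hypothetical model call, not defined here
```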

Taking this concept a step further, instead of requiring the user to break down the initial question into smaller questions, models with 'long thinking' can perform this process by themselves. Specifically, such reasoning models can break down the user's query into a sequence of smaller, step-by-step queries, resulting in better answers. CoT was one of the biggest advances in AI, allowing models to effectively handle complex reasoning tasks. OpenAI's o1 model was the first major example that demonstrated the power of CoT reasoning.

Image by author

On my mind

Understanding the underlying principles that enable today's AI models to work is essential in order to have realistic expectations of what they can and can't do, and to make the best use of them. Neural networks and AI models inherently operate on inductive-style reasoning, even if they often sound as though they are performing deduction. Even techniques like Chain of Thought reasoning, while producing impressive results, still fundamentally operate on induction, and can still produce information that sounds correct but in reality is not.







