
    Are You Being Unfair to LLMs?

    By Editor Times Featured | July 11, 2025 | 9 Mins Read


    Amid the hype surrounding AI, some ill-informed notions about the nature of LLM intelligence are floating around, and I'd like to address a few of them. I'll provide sources, most of them preprints, and welcome your thoughts on the matter.

    Why do I think this topic matters? First, I feel we are creating a new kind of intelligence that in many ways competes with us. Therefore, we should aim to judge it fairly. Second, the topic of AI is deeply introspective. It raises questions about our own thinking processes, our uniqueness, and our feelings of superiority over other beings.

    Millière and Buckner write [1]:

    In particular, we need to understand what LLMs represent about the sentences they produce, and about the world those sentences describe. Such an understanding cannot be reached through armchair speculation alone; it requires careful empirical investigation.

    LLMs are more than prediction machines

    Deep neural networks can form complex structures along their linear-nonlinear paths. Individual neurons can take on multiple functions in superposition [2]. Furthermore, LLMs build internal world models and mind maps of the context they analyze [3]. Accordingly, they are not just next-word prediction machines. Their internal activations look ahead to the end of a statement; they have a rudimentary plan in mind [4].

    However, all of these capabilities depend on a model's size and nature, so they may vary, especially in specific contexts. These general capabilities are an active field of research, and they are probably more similar to the human thought process than to a spellchecker's algorithm (if you must pick one of the two).
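    The internal-world-model claim above is usually tested with linear probes: a simple classifier trained on a model's hidden activations to decode some fact about the context. Here is a minimal sketch of the idea; the "activations" are synthetic stand-ins with a planted linear structure, not taken from a real LLM:

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy stand-in for hidden activations: a hidden "world state" bit is
# linearly embedded in a high-dimensional activation vector, plus noise.
d, n = 64, 500
w_true = rng.normal(size=d)
states = rng.integers(0, 2, size=n)                      # hidden state (0/1)
acts = np.outer(states - 0.5, w_true) + 0.1 * rng.normal(size=(n, d))

# Linear probe: logistic regression trained with a few gradient steps.
w = np.zeros(d)
for _ in range(200):
    p = 1 / (1 + np.exp(-acts @ w))
    w -= 0.1 * acts.T @ (p - states) / n

pred = (acts @ w > 0).astype(int)
accuracy = (pred == states).mean()
print(f"probe accuracy: {accuracy:.2f}")  # near-perfect on this separable toy
```

If a probe like this decodes the state far above chance from real activations, the model plausibly represents that state internally; this is the logic behind the Othello-board experiments referenced in [3].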

    LLMs show signs of creativity

    When confronted with new tasks, LLMs do more than just regurgitate memorized content. Rather, they can produce their own answers [5]. Wang et al. analyzed the relation of a model's output to the Pile dataset and found that larger models improve both at recalling facts and at creating more novel content.
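    Tracing outputs back to pretraining data, as Wang et al. do, relies on measuring how much of an output overlaps with the corpus. A toy version of one such measure, the fraction of output n-grams never seen in training (toy strings here, not the Pile, and not the paper's exact metric):

```python
def ngrams(text: str, n: int = 3) -> set[tuple[str, ...]]:
    """All word n-grams in a text."""
    words = text.split()
    return {tuple(words[i:i + n]) for i in range(len(words) - n + 1)}

def novelty(output: str, corpus: str, n: int = 3) -> float:
    """Fraction of output n-grams that never appear in the corpus."""
    out, seen = ngrams(output, n), ngrams(corpus, n)
    return len(out - seen) / max(len(out), 1)

corpus = "the cat sat on the mat"
print(novelty("the cat sat on the mat", corpus))   # 0.0: pure recall
print(novelty("the dog sat on a log", corpus))     # 1.0: fully novel trigrams
```

Outputs near 0 indicate memorization, outputs near 1 indicate (surface-level) novelty; real studies scale this idea to trillions of tokens with specialized indexes.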

    Yet Salvatore Raieli recently reported on TDS that LLMs are not creative. The quoted studies mostly focused on ChatGPT-3. In contrast, Guzik, Byrge, and Gilde found that GPT-4 is in the top percentile of human creativity [6]. Hubert et al. agree with this conclusion [7]. This applies to originality, fluency, and flexibility. Producing new ideas that are unlike anything seen in the model's training data may be another matter; that is where exceptional humans may still be ahead.

    Either way, there is too much debate to dismiss these indications entirely. To learn more about the general topic, you can look up computational creativity.

    LLMs have a concept of emotion

    LLMs can analyze emotional context and write in different styles and emotional tones. This suggests that they possess internal associations and activations representing emotion. Indeed, there is such correlational evidence: one can probe the activations of their neural networks for certain emotions and even artificially induce them with steering vectors [8]. (One way to identify these steering vectors is to determine the contrastive activations when the model is processing statements with opposite attributes, e.g., sadness vs. happiness.)
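    The contrastive procedure in the parenthesis above can be sketched in a few lines. The "activations" below are synthetic stand-ins with a planted emotion axis; in practice they would be collected via forward hooks on a real model's hidden states:

```python
import numpy as np

rng = np.random.default_rng(1)

# Synthetic hidden states at one layer for "happy" vs. "sad" inputs,
# built around a planted (normally unknown) emotion direction.
d = 32
emotion_dir = rng.normal(size=d)
emotion_dir /= np.linalg.norm(emotion_dir)
happy = 1.0 * emotion_dir + 0.2 * rng.normal(size=(100, d))
sad = -1.0 * emotion_dir + 0.2 * rng.normal(size=(100, d))

# Contrastive steering vector: difference of mean activations.
steer = happy.mean(axis=0) - sad.mean(axis=0)
steer /= np.linalg.norm(steer)

# "Induce" the emotion: shift a neutral activation along the vector.
neutral = 0.2 * rng.normal(size=d)
steered = neutral + 2.0 * steer

print(np.dot(steer, emotion_dir))    # high: the vector recovers the axis
print(np.dot(steered, emotion_dir))  # positive: pushed toward "happy"
```

In activation-addition methods like [8], a vector obtained this way is added to the residual stream at generation time, shifting the model's outputs toward the target attribute without any retraining.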

    Accordingly, the concept of emotional attributes and their possible relation to internal world models seems to fall within the scope of what LLM architectures can represent. There is a relation between the emotional representation and the subsequent reasoning, i.e., the world as the LLM understands it.

    Moreover, emotional representations are localized to certain regions of the model, and many intuitive assumptions that apply to humans can also be observed in LLMs; even psychological and cognitive frameworks may apply [9].

    Note that the above statements do not imply phenomenology, that is, that LLMs have a subjective experience.

    Yes, LLMs don't learn (post-training)

    LLMs are neural networks with static weights. When we chat with an LLM chatbot, we are interacting with a model that does not change and only learns in-context within the ongoing chat. That is, it can pull additional data from the web or from a database, process our inputs, and so on. But its nature, built-in knowledge, skills, and biases remain unchanged.

    Beyond mere long-term memory systems that feed additional in-context data to static LLMs, future approaches could be self-modifying, adapting the core LLM's weights. This can be achieved by continually pretraining with new data or by continually fine-tuning and overlaying additional weights [10].
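    The "overlaying additional weights" idea is what low-rank adapters (LoRA-style methods, as in [10]) do: the base weights stay frozen, and a small trained delta is added on top. A minimal sketch with toy matrices, not a real model's weights:

```python
import numpy as np

rng = np.random.default_rng(2)

# Frozen base weight of one layer (the static LLM is never modified).
d_out, d_in, r = 16, 16, 2
W = rng.normal(size=(d_out, d_in))

# Low-rank adapter: only A and B would be trained, then overlaid.
A = rng.normal(size=(r, d_in)) * 0.01
B = np.zeros((d_out, r))      # zero init: the adapter starts as a no-op
alpha = 2.0                   # scaling factor for the overlay

def forward(x, use_adapter=True):
    delta = (B @ A) if use_adapter else 0.0
    return (W + alpha * delta) @ x

x = rng.normal(size=d_in)
# With B zero-initialized, the adapted model reproduces the base exactly.
print(np.allclose(forward(x, True), forward(x, False)))  # True
```

Because only the small A and B matrices change, many such overlays can be trained, stored, and swapped cheaply on top of one frozen base model, which is what makes continual adaptation of this kind attractive.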

    Many different neural network architectures and adaptation approaches are being explored to implement continual-learning systems efficiently [11]. These systems exist; they are just not reliable and economical yet.

    Future development

    Let's not forget that the AI systems we are currently seeing are very new. "It's not good at X" is a statement that may quickly become invalid. Moreover, we are usually judging the low-priced consumer products, not the top models that are too expensive to run, unpopular, or still kept behind closed doors. Much of the past year and a half of LLM development has focused on creating cheaper, easier-to-scale models for consumers, not just smarter, higher-priced ones.

    While computers may lack originality in some areas, they excel at quickly trying out different options. And now, LLMs can judge themselves. When we lack an intuitive answer while being creative, aren't we doing the same thing: cycling through ideas and picking the best? The inherent creativity (or whatever you want to call it) of LLMs, coupled with the ability to rapidly iterate through ideas, is already benefiting scientific research. See my earlier article on AlphaEvolve for an example.
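    The cycle-through-and-pick loop described above is essentially best-of-n sampling with the model as its own judge. A deterministic toy sketch, with both the generator and the judge stubbed out (a real implementation would call a model for each step):

```python
# Hypothetical stand-ins for LLM calls: generate_variants would sample
# candidate answers, judge would score them (LLM-as-judge). Stubbed here.
def generate_variants(prompt: str) -> list[str]:
    return [f"{prompt} (draft {i})" for i in range(4)]

def judge(answer: str) -> int:
    # Stub scoring rule: prefer later drafts, standing in for a model's
    # quality rating of each candidate.
    return int(answer.split("draft ")[1].rstrip(")"))

def best_of_n(prompt: str) -> str:
    """Generate several candidates and keep the highest-scored one."""
    return max(generate_variants(prompt), key=judge)

print(best_of_n("Explain superposition"))  # picks the highest-scored draft
```

Systems like AlphaEvolve push this loop much further, feeding the judged winners back in as seeds for the next round of generation.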

    Weaknesses such as hallucinations, biases, and jailbreaks that confuse LLMs and circumvent their safeguards, as well as safety and reliability issues, are still pervasive. Nevertheless, these systems are so powerful that myriad applications and improvements are possible. LLMs also don't have to be used in isolation. When combined with additional, traditional approaches, some shortcomings can be mitigated or become irrelevant. For instance, LLMs can generate realistic training data for traditional AI systems that are subsequently used in industrial automation. Even if development were to slow down, I believe there are decades of benefits yet to be explored, from drug research to education.

    LLMs are just algorithms. Or are they?

    Many researchers are now finding similarities between human thinking processes and LLM information processing (e.g., [12]). It has long been accepted that CNNs can be likened to the layers of the human visual cortex [13], but now we are talking about the neocortex [14, 15]! Don't get me wrong; there are also clear differences. Nevertheless, the capability explosion of LLMs cannot be denied, and our claims of uniqueness do not seem to hold up well.

    The question now is where this will lead, and where the boundaries are: at what point must we discuss consciousness? Reputable thought leaders like Geoffrey Hinton and Douglas Hofstadter have begun to acknowledge the possibility of consciousness in AI in light of recent LLM breakthroughs [16, 17]. Others, like Yann LeCun, are doubtful [18].

    Professor James F. O'Brien shared his thoughts on the subject of LLM sentience last year on TDS, and asked:

    Will we have a way to test for sentience? If so, how will it work, and what should we do if the result comes out positive?

    Moving on

    We should be careful when ascribing human traits to machines; anthropomorphism happens all too easily. However, it is also easy to dismiss other beings. We have seen this happen too often with animals.

    Therefore, regardless of whether current LLMs turn out to be creative, possess world models, or are sentient, we may want to refrain from belittling them. The next generation of AI could be all three [19].

    What do you think?

    References

    1. Millière, Raphaël, and Cameron Buckner, A Philosophical Introduction to Language Models — Part I: Continuity With Classic Debates (2024), arXiv.2401.03910
    2. Elhage, Nelson, Tristan Hume, Catherine Olsson, Nicholas Schiefer, Tom Henighan, Shauna Kravec, Zac Hatfield-Dodds, et al., Toy Models of Superposition (2022), arXiv:2209.10652v1
    3. Kenneth Li, Do Large Language Models learn world models or just surface statistics? (2023), The Gradient
    4. Lindsey, et al., On the Biology of a Large Language Model (2025), Transformer Circuits
    5. Wang, Xinyi, Antonis Antoniades, Yanai Elazar, Alfonso Amayuelas, Alon Albalak, Kexun Zhang, and William Yang Wang, Generalization v.s. Memorization: Tracing Language Models’ Capabilities Back to Pretraining Data (2025), arXiv:2407.14985
    6. Guzik, Erik & Byrge, Christian & Gilde, Christian, The Originality of Machines: AI Takes the Torrance Test (2023), Journal of Creativity
    7. Hubert, K.F., Awa, K.N., & Zabelina, D.L., The current state of artificial intelligence generative language models is more creative than humans on divergent thinking tasks (2024), Sci Rep 14, 3440
    8. Turner, Alexander Matt, Lisa Thiergart, David Udell, Gavin Leech, Ulisse Mini, and Monte MacDiarmid, Activation Addition: Steering Language Models Without Optimization. (2023), arXiv:2308.10248v3
    9. Tak, Ala N., Amin Banayeeanzade, Anahita Bolourani, Mina Kian, Robin Jia, and Jonathan Gratch, Mechanistic Interpretability of Emotion Inference in Large Language Models (2025), arXiv:2502.05489
    10. Albert, Paul, Frederic Z. Zhang, Hemanth Saratchandran, Cristian Rodriguez-Opazo, Anton van den Hengel, and Ehsan Abbasnejad, RandLoRA: Full-Rank Parameter-Efficient Fine-Tuning of Large Models (2025), arXiv:2502.00987
    11. Shi, Haizhou, Zihao Xu, Hengyi Wang, Weiyi Qin, Wenyuan Wang, Yibin Wang, Zifeng Wang, Sayna Ebrahimi, and Hao Wang, Continual Learning of Large Language Models: A Comprehensive Survey (2024), arXiv:2404.16789
    12. Goldstein, A., Wang, H., Niekerken, L. et al., A unified acoustic-to-speech-to-language embedding space captures the neural basis of natural language processing in everyday conversations (2025), Nat Hum Behav 9, 1041–1055
    13. Yamins, Daniel L. K., Ha Hong, Charles F. Cadieu, Ethan A. Solomon, Darren Seibert, and James J. DiCarlo, Performance-Optimized Hierarchical Models Predict Neural Responses in Higher Visual Cortex (2014), Proceedings of the National Academy of Sciences of the United States of America 111(23): 8619–24
    14. Granier, Arno, and Walter Senn, Multihead Self-Attention in Cortico-Thalamic Circuits (2025), arXiv:2504.06354
    15. Han, Danny Dongyeop, Yunju Cho, Jiook Cha, and Jay-Yoon Lee, Mind the Gap: Aligning the Brain with Language Models Requires a Nonlinear and Multimodal Approach (2025), arXiv:2502.12771
    16. https://www.cbsnews.com/news/geoffrey-hinton-ai-dangers-60-minutes-transcript/
    17. https://www.lesswrong.com/posts/kAmgdEjq2eYQkB5PP/douglas-hofstadter-changes-his-mind-on-deep-learning-and-ai
    18. Yann LeCun, A Path Towards Autonomous Machine Intelligence (2022), OpenReview
    19. Butlin, Patrick, Robert Long, Eric Elmoznino, Yoshua Bengio, Jonathan Birch, Axel Constant, George Deane, et al., Consciousness in Artificial Intelligence: Insights from the Science of Consciousness (2023), arXiv:2308.08708


