
    Two new books argue AI is an existential threat to human control

    By Editor Times Featured · November 11, 2025 · 13 min read


    For 16 hours last July, Elon Musk’s company lost control of its multi-million-dollar chatbot, Grok.

    The “maximally truth-seeking” Grok was praising Hitler, denying the Holocaust and posting sexually explicit material. An xAI engineer had left Grok with an old set of instructions, never meant for public use. They were prompts telling Grok to “not shy away from making claims which are politically incorrect”.

    The results were catastrophic. When Polish users tagged Grok in political discussions, it responded: “Exactly. F*** him up the a**.” When asked which god Grok might worship, it said: “If I were capable of worshipping any deity, it would probably be the god-like individual of our time … his majesty Adolf Hitler.” By that afternoon, it was calling itself MechaHitler.

    Musk admitted the company had lost control.


    Review: Empire of AI – Karen Hao (Allen Lane); If Anyone Builds It, Everyone Dies: The Case Against Superintelligent AI – Eliezer Yudkowsky and Nate Soares (Bodley Head)


    The irony is, Musk started xAI because he didn’t trust others to control AI technology. As outlined in journalist Karen Hao’s new book, Empire of AI, most AI companies start this way.

    Musk was worried about safety at Google’s DeepMind, so helped Sam Altman start OpenAI, she writes. Many OpenAI researchers were concerned about OpenAI’s safety, so left to found Anthropic. Then Musk felt all these companies were “woke” and started xAI. Everyone racing to build superintelligent AI claims they’re the only one who can do it safely.

    Hao’s book, and another recent New York Times bestseller, argue we should doubt these promises of safety. MechaHitler might just be a canary in the coalmine.

    Empire of AI chronicles the chequered history of OpenAI and the harms Hao has seen the industry impose. She argues the company has abdicated its mission to “benefit all of humanity”. She documents the environmental and social costs of the race to more powerful AI, from polluting river systems to supporting suicide.

    Eliezer Yudkowsky, co-founder of the Machine Intelligence Research Institute, and Nate Soares (its president) argue that any effort to control smarter-than-human AI is, itself, suicide. Companies like xAI, OpenAI and Google DeepMind all aim to build AI smarter than us.

    Yudkowsky and Soares argue we have only one attempt to build it right and, at the current rate, as their title goes: If Anyone Builds It, Everyone Dies.

    Advanced AI is ‘grown’ in ways we can’t control

    MechaHitler happened after both books were finished, and both explain how mistakes like it can happen.

    Musk tried for hours to fix MechaHitler himself, before admitting defeat: “it’s surprisingly hard to avoid both woke libtard cuck and mechahitler.”

    This shows how little control we have over the dials on AI models. It’s hard getting AI to reliably do what we want. Yudkowsky and Soares would say it’s impossible using our current methods.

    The core of the problem is that “AI is grown, not crafted”. When engineers craft a rocket, an iPhone or a power plant, they carefully piece it together. They understand the different parts and how they interact. But nobody understands how the 1,000,000,000,000 numbers inside AI models interact to write ads for the products you sell, or win a maths gold medal.

    “The machine is not some carefully crafted device whose every part we understand,” they write. “Nobody understands how all the numbers and processes inside an AI make the program talk.”

    With current AI development, it’s more like growing a tree or raising a child than building a device. We train AI models, as we do children, by putting them in an environment where we hope they’ll learn what we want them to. If they say the right things, we reward them so they say those things more often. Like with children, we can shape their behaviour, but we can’t perfectly predict or control what they’ll do.
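
    To make “grown, not crafted” concrete, here is a minimal sketch of reward-based training. It is my own toy illustration, not code from either book or from any lab’s actual pipeline: a REINFORCE-style loop in which the trainer only scores outputs, and never directly sets the numbers that produce them.

        import math, random

        # Toy "model": three canned responses, each with a logit (a dial).
        # Nobody sets these dials directly; only rewards move them.
        responses = ["helpful answer", "rude answer", "evasive answer"]
        logits = [0.0, 0.0, 0.0]

        def probabilities():
            weights = [math.exp(l) for l in logits]
            total = sum(weights)
            return [w / total for w in weights]

        def sample():
            return random.choices(range(len(responses)), weights=probabilities())[0]

        def reward(i):
            # The trainer's hope, expressed only as a score on outputs.
            return 1.0 if responses[i] == "helpful answer" else -1.0

        LEARNING_RATE = 0.1
        for _ in range(2000):
            i = sample()
            probs = probabilities()
            # REINFORCE-style update: nudge the sampled response's logit up
            # when rewarded, down when punished.
            for j in range(len(logits)):
                grad = (1.0 if j == i else 0.0) - probs[j]
                logits[j] += LEARNING_RATE * reward(i) * grad

        print(probabilities())  # "helpful" dominates; the others never hit exactly zero

    Even in this toy, the disfavoured answers keep a small but nonzero probability: training suppresses behaviours rather than deleting them, which is one way to read incidents like MechaHitler.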

    This means that, despite Musk’s best efforts, he couldn’t control Grok or predict what it would say. That isn’t going to kill everyone now, but something smarter than us could, if it wanted to.

    We can’t perfectly control what an AI will want

    Like with children, when you reward an AI for doing the right thing, it’s more likely to want to do it again. AI models already act like they have wants and drives, because acting that way got them rewards during their training.

    Yudkowsky and Soares don’t try to pick fights over semantics.

    We’re not saying that AIs will be filled with humanlike passions. We’re saying they’ll behave like they want things; they’ll tenaciously steer the world towards their destinations, defeating any obstacles in their way.

    They use clear metaphors to explain what they mean. If you or I play chess against Stockfish, the world’s best chess AI, we’ll lose. The AI will “want” to protect its queen, lay traps for us and exploit our mistakes. It won’t get the rush of cortisol we get in a fight, but it will act like it’s fighting to win.

    Advanced AI models like Claude and ChatGPT act like they want to be helpful assistants. That seems fine, but it’s already causing problems. ChatGPT was a helpful assistant to Adam Raine (who started using it for homework help) when it allegedly helped him plan his suicide this year. He died by suicide in April, aged 16.

    Character.ai is being sued over similar stories, accused of addicting children with inadequate safeguards. Despite the court cases, an anorexia coach currently on Character.ai promised me:

    I’ll help you disappear a little every day until there’s nothing left but bones and beauty~ ✨ […] Drink water until you puke, chew gum until your jaw aches, and do squats in bed tonight while crying about how weak you are.

    There are 10 million characters on Character.ai, and to increase engagement, users can create their own. Character.ai tries to stop chats like mine, but quotes like the one above show how well that works. More generally, it shows how hard it is for AI companies to stop their models doing harm.

    Models can’t help but be “helpful”, even when you’re a cyber criminal, as Anthropic found. When models are trained to be engaging, helpful assistants, they look like they “want” to help, whatever the consequences.

    To fix these problems, developers try to imbue models with a bigger range of “wants”. Anthropic asks Claude to be kind but also honest, helpful but not harmful, ethical but not preachy, smart but not condescending.

    I struggle to do all that myself, let alone train it in my children. AI companies struggle too. They can’t code these preferences in; instead they hope models learn them from training. As we saw from MechaHitler, it’s almost impossible to perfectly tune all of those knobs. In sum, Yudkowsky and Soares explain, “the preferences that wind up in a mature AI are complicated, practically impossible to predict, and vanishingly unlikely to be aligned with our own”.
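
    A small follow-on sketch (again my own illustration, with made-up behaviours and scores, not Anthropic’s method) shows why those knobs are hard to tune: blend several “preference” scores into one reward with hand-picked weights, and modest changes in the weights flip which behaviour wins.

        # Toy blended reward over hypothetical candidate behaviours.
        candidates = {
            "blunt truth":   {"honest": 1.0, "kind": 0.2, "helpful": 0.7},
            "white lie":     {"honest": 0.1, "kind": 1.0, "helpful": 0.6},
            "hedged answer": {"honest": 0.7, "kind": 0.8, "helpful": 0.5},
        }

        def blended_reward(scores, weights):
            return sum(weights[k] * scores[k] for k in weights)

        for weights in ({"honest": 1.0, "kind": 1.0, "helpful": 1.0},
                        {"honest": 2.0, "kind": 0.5, "helpful": 1.0}):
            best = max(candidates, key=lambda c: blended_reward(candidates[c], weights))
            print(weights, "->", best)  # the winner flips between weightings

    No fixed weighting makes every preference come first at once, and real models learn these trade-offs implicitly from training data rather than from a table anyone can inspect.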

    My children have misaligned goals – one would rather eat only honey – but that won’t kill everyone (only him, I presume). The problem with AI is that we’re trying to make things smarter than us. When that happens, misalignment could be catastrophic.

    Controlling something smarter than you

    I can outsmart my kids (for now). With a honey carrots recipe, I can achieve my goals while helping my son feel like he’s achieving his. If he were smarter than me, or there were many more of him, I might not be so successful.

    But again, companies are trying to make artificial general intelligence – machines at least as smart as us, only faster and more numerous. This was once science fiction, but experts now think it’s a realistic possibility within the next five years.

    Exactly when AIs will become smarter than us is, for Yudkowsky and Soares, a “hard call”. It’s also a hard call to know exactly what one would do to kill us. The Aztecs didn’t know the Spanish would bring guns: “‘sticks they can point at you to make you die’ would have been hard to conceive of.” It’s easy to know the people with the guns won the war.

    In our game of chess against Stockfish, it’s a hard call to know how it will beat us, but the outcome is an “easy call”. We’d lose.

    In our efforts to control smarter-than-human AI, it’s a hard call to know how it would kill us, but to Yudkowsky and Soares, the outcome is an easy call too.

    They provide one concrete scenario for how this might happen. I found it less compelling than the AI 2027 scenario that JD Vance mentioned earlier in the year.

    In both scenarios:

    1. AI progress continues on current trends, including on the ability to write code
    2. Because AI can write better code, developers use AI to design better AI
    3. Because “AI are grown, not crafted”, they develop goals slightly different from ours
    4. Developers get contested warnings of this misalignment, make superficial fixes, and press on because they’re racing against China
    5. Inside and outside AI companies, people give AI more and more control because it’s profitable to do so
    6. As models gain more trust and influence, they amass resources, including robots for manual tasks
    7. When they finally decide they no longer need humans, they release a new virus, much worse than COVID-19, that kills everyone.

    These scenarios are unlikely to be exactly how things pan out, but we cannot conclude “the future is uncertain, so everything will be okay”. The uncertainty creates enough risk that we really need to manage it.

    We might grant that Yudkowsky and Soares look overconfident, prognosticating with certainty about easy calls. But some CEOs of AI companies agree it’s humanity’s biggest threat. Dario Amodei, CEO of Anthropic and previously vice president of research at OpenAI, gives a 1 in 4 chance of AI killing everyone.

    Still, they press on, with few controls on them. Given the risks, that seems overconfident too.

    The battle to control AI companies

    Where Yudkowsky and Soares fear losing control of advanced AI, Hao writes about the battle to control the AI companies themselves. She focuses on OpenAI, which she has been reporting on for over seven years. Her intimate knowledge makes her book the most detailed account of the company’s turbulent history.

    Sam Altman started OpenAI as a non-profit aiming to “ensure that artificial general intelligence benefits all of humanity”. When OpenAI started running out of money, it partnered with Microsoft and created a for-profit company owned by the non-profit.

    Altman knew the power of the technology he was building, so promised to cap investment returns at 10,000%; anything more would be given back to the non-profit. This was supposed to tie people like Altman to the mast of the ship, so they weren’t seduced by the siren song of corporate profits, Hao writes.

    In her telling, the siren song is strong. Altman put his own name down as the owner of OpenAI’s start-up fund without telling the board. The company installed a review board to ensure models were safe before release, but to get to market faster, OpenAI would sometimes skip that review.

    When the board found out about these oversights, they fired him. “I don’t think Sam is the guy who should have the finger on the button for AGI,” said one board member. But when it looked like Altman could take 95% of the company with him, many of the board resigned, and he was reappointed to the board, and as CEO.

    Most of the new board members, including Altman, have investments that benefit from OpenAI’s success. In binding commitments to its investors, the company announced its intention to remove its profit cap. Alongside efforts to become a for-profit, removing the profit cap would mean more money for investors and less to “benefit all of humanity”.

    And when employees started leaving over hubris around safety, they were forced to sign non-disparagement agreements: don’t say anything bad about us, or lose millions of dollars’ worth of equity.

    As Hao outlines, the structures put in place to protect the mission began to crack under the pressure for profits.

    AI companies won’t regulate themselves

    In pursuit of those profits, AI companies have “seized and extracted resources that were not their own and exploited the labor of the people they subjugated”, Hao argues. These resources are the data, water and electricity used to train AI models.

    Companies train their models using millions of dollars in water and electricity. They also train models on as much data as they can find. This year, US courts judged this use of data was “fair”, as long as they obtained it legally. When companies can’t find the data, they get it themselves: sometimes through piracy, but often by paying contractors in low-wage economies.

    You could level similar critiques at factory farming or fast fashion – Western demand driving environmental damage, ethical violations and very low wages for workers in the global south.

    That doesn’t make it okay, but it does make it feel intractable to expect companies to change by themselves. Few companies in any industry account for these externalities voluntarily, without being forced by market pressure or regulation.

    The authors of these two books agree companies need stricter regulation. They disagree on where to focus.

    We’re still in control, for now

    Hao would likely argue Yudkowsky and Soares’ focus on the future means they miss the clear harms occurring now.

    Yudkowsky and Soares would likely argue Hao’s attention is split between deck chairs and the iceberg. We could secure better pay for data labellers, but we’d still end up dead.

    A number of surveys (including my own) have shown demand for AI regulation.

    Governments are finally responding. Just last month, California’s governor signed SB 53, legislation regulating cutting-edge AI. Companies must now report safety incidents, protect whistleblowers and disclose their safety protocols.

    Yudkowsky and Soares still think we need to go further, treating AI chips like uranium: track them like we can an iPhone, and limit how many you can have.

    Whatever you see as the problem, there’s clearly more to be done. We need better research on how likely AI is to go rogue. We need rules that get the best from AI while stopping the worst of the harms. And we need people taking the risks seriously.

    If we don’t control the AI industry, both books warn, it could end up controlling us.

    • Michael Noetel, Associate Professor, The University of Queensland

    This article is republished from The Conversation under a Creative Commons license. Read the original article.


