    Artificial Intelligence

    “This isn’t what we signed up for.”

By Editor Times Featured · February 27, 2026 · 4 Mins Read


    There was a palpable change in Silicon Valley this week.

More than 200 Google and OpenAI employees called on their employers to better define the limits of how AI can be used for military purposes. Explicitly. Loudly. In a private push detailed by Axios, staff made it clear they are increasingly uneasy about how the AI tools they are building are being deployed.

And honestly? You can see why.

AI no longer just helps compose email and generate graphics. It is being discussed in connection with war logistics, surveillance, and autonomous weaponry on the battlefield. That's serious. At least one person who participated in the effort questioned aloud whether these corporate safeguards are sufficient, or whether they merely amount to aspirational prose that can be bent when needed in the face of political exigencies.

The reason this feels like déjà vu is that we have been here before. In 2018, Googlers revolted against the company's work on Project Maven, a Pentagon project to analyze drone footage. Google responded with its AI principles, which promised the company would not build AI for use in weapons or weapons surveillance. The trouble is, technology moves faster than principles, and things that seemed clearly out of bounds in 2018 may seem less clear-cut in 2023.

OpenAI also has publicly accessible usage policies that ban weapons work. On paper, that's reassuring. But employees appear to be seeking answers to a more ambiguous question: What if AI technology is dual use? What if it helps doctors do research, but could also be employed in weapons work? Where is the boundary?

If you step back a little further, you will see the geopolitical context: AI has been designated one of the Department of Defense's top priority areas for modernization, and there is an entire website for the Chief Digital and Artificial Intelligence Office. They claim AI will enable faster decision-making, reduce loss of life, and deter threats. It all sounds very "sensible".

But critics, including some inside the tech companies, worry that this is the thin end of the wedge. AI in defense systems can lead to a lack of accountability. Autonomous systems, even non-lethal ones, are another step toward delegating decisions that some believe should always remain in human hands.

And the international argument is far from over. The UN has been debating lethal autonomous weapons for years and, as recent reports show, nations are still a long way from agreeing on what should happen next. Some want a ban. Others prefer to propose loose guidelines. AI models, meanwhile, get better every month.

The part that feels genuinely human is that the people speaking out are not opposed to technology. Many of them are AI enthusiasts. They have seen their systems enable earlier detection of diseases, real-time translation of languages, and easier access to learning. They support the good stuff. That's why this is such a charged situation. It isn't rebellion for its own sake; it is a disagreement over values.

There is a generational element, too. Younger engineers aren't so quick to shrug and say, "If we don't do it, someone else will." The old Silicon Valley standby no longer resonates. Instead, they are asking: If we are going to do it, shouldn't we set the boundaries, too?

Company leaders, of course, have a different perspective. Governments are big customers. Security concerns are a factor. And with an AI race underway (notably between the U.S. and China), they don't want to be left behind. It isn't easy to just walk away. It's strategy, it's money, it's politics, it's all of that.

But the internal pressure reveals something valuable. AI isn't just algorithms. AI is values. AI is a group of people sitting in front of a monitor and beginning to understand that what they are building may one day weigh on questions of life and death.

Maybe that is the crux of the matter. This is a moral argument as much as a policy one. Employees are being very clear: "We want guardrails." Not because they are opposed to progress, but precisely because they see its gravity.

What's next? It's unclear. The companies may tighten their pledges. Governments may develop more defined policies. Or the friction may simply be papered over with PR announcements.

But one thing is clear: the debate over military AI is no longer just theoretical. It is personal. And it is happening in the rooms where the future is being created.


