
    Anthropic Denies It Could Sabotage AI Tools During War

By Editor Times Featured · March 21, 2026 · 3 Mins Read


Anthropic cannot manipulate its generative AI model Claude once the US military has it running, an executive wrote in a court filing on Friday. The statement was made in response to accusations from the Trump administration that the company could potentially tamper with its AI tools during war.

“Anthropic has never had the ability to cause Claude to stop working, alter its performance, shut off access, or otherwise affect or imperil military operations,” Thiyagu Ramasamy, Anthropic’s head of public sector, wrote. “Anthropic does not have the access required to disable the technology or alter the model’s behavior before or during ongoing operations.”

The Pentagon has been sparring with the leading AI lab for months over how its technology can be used for national security, and what the limits on that use should be. This month, Defense Secretary Pete Hegseth labeled Anthropic a supply-chain risk, a designation that could prevent the Department of Defense from using the company’s software, including through contractors, over the coming months. Other federal agencies are also abandoning Claude.

Anthropic filed two lawsuits challenging the constitutionality of the ban and is seeking an emergency order to reverse it. However, customers have already begun canceling deals. A hearing in one of the cases is scheduled for March 24 in federal district court in San Francisco. The judge could decide on a temporary reversal soon after.

In a filing earlier this week, government lawyers wrote that the Department of Defense “is not required to tolerate the risk that critical military systems could be jeopardized at pivotal moments for national defense and active military operations.”

The Pentagon has been using Claude to analyze data, write memos, and help generate battle plans, WIRED reported. The government’s argument is that Anthropic could disrupt active military operations by turning off access to Claude or pushing harmful updates if the company disapproves of certain uses.

Ramasamy rejected that possibility. “Anthropic does not maintain any back door or remote ‘kill switch,’” he wrote. “Anthropic personnel cannot, for example, log into a DoW system to modify or disable the models during an operation; the technology simply does not function that way.”

He went on to say that Anthropic would be able to provide updates only with the approval of the government and its cloud provider, in this case Amazon Web Services, though he did not specify it by name. Ramasamy added that Anthropic cannot access the prompts or other data that military users enter into Claude.

Anthropic executives maintain in court filings that the company does not want veto power over military tactical decisions. Sarah Heck, head of policy, wrote in a court filing on Friday that Anthropic was willing to guarantee as much in a contract proposed March 4. “For the avoidance of doubt, [Anthropic] understands that this license does not grant or confer any right to control or veto lawful Department of War operational decision-making,” the proposal stated, according to the filing, which used an alternate name for the Pentagon.

The company was also willing to accept language that would address its concerns about Claude being used to help carry out lethal strikes without human supervision, Heck claimed. But negotiations ultimately broke down.

For the moment, the Defense Department has said in court filings that it “is taking additional measures to mitigate the supply chain risk” posed by the company by “working with third-party cloud service providers to ensure Anthropic leadership cannot make unilateral changes” to the Claude systems currently in place.