    Anthropic Hits Back After US Military Labels It a ‘Supply Chain Risk’

By Editor Times Featured | February 28, 2026 | 4 Min Read


United States Secretary of Defense Pete Hegseth directed the Pentagon to designate Anthropic as a “supply-chain risk” on Friday, sending shockwaves through Silicon Valley and leaving many firms scrambling to understand whether they can keep using one of the industry’s most popular AI models.

“Effective immediately, no contractor, supplier, or partner that does business with the US military may conduct any commercial activity with Anthropic,” Hegseth wrote in a social media post.

The designation comes after weeks of tense negotiations between the Pentagon and Anthropic over how the US military may use the startup’s AI models. In a blog post this week, Anthropic argued that its contracts with the Pentagon should not allow its technology to be used for mass domestic surveillance of Americans or for fully autonomous weapons. The Pentagon asked Anthropic to agree to let the US military apply its AI to “all lawful uses” with no specific exceptions.

A supply chain risk designation allows the Pentagon to restrict or exclude certain vendors from defense contracts if they are deemed to pose security vulnerabilities, such as risks related to foreign ownership, control, or influence. It is intended to protect sensitive military systems and data from potential compromise.

Anthropic responded in another blog post on Friday night, saying it would “challenge any supply chain risk designation in court,” and that such a designation would “set a dangerous precedent for any American company that negotiates with the government.”

Anthropic added that it had not received any direct communication from the Department of Defense or the White House regarding negotiations over the use of its AI models.

“Secretary Hegseth has implied this designation would restrict anyone who does business with the military from doing business with Anthropic. The Secretary does not have the statutory authority to back up this assertion,” the company wrote.

The Pentagon declined to comment.

“This is the most shocking, damaging, and over-reaching thing I’ve ever seen the US government do,” says Dean Ball, a senior fellow at the Foundation for American Innovation and the former senior policy advisor for AI at the White House. “We have essentially just sanctioned an American company. If you are an American, you should be thinking about whether or not you should live here 10 years from now.”

People across Silicon Valley chimed in on social media expressing similar shock and dismay. “The people running this administration are impulsive and vindictive. I believe that is sufficient to explain their behavior,” said Paul Graham, founder of the startup accelerator Y Combinator.

Boaz Barak, an OpenAI researcher, said in a post that “kneecapping one of our leading AI companies is about the worst own goal we can score. I hope very much that cooler heads prevail and this announcement is reversed.”

Meanwhile, OpenAI CEO Sam Altman announced on Friday evening that the company had reached an agreement with the Department of Defense to deploy its AI models in classified environments, seemingly with carveouts. “Two of our most important safety principles are prohibitions on domestic mass surveillance and human responsibility for the use of force, including for autonomous weapon systems,” said Altman. “The DoW agrees with these principles, reflects them in law and policy, and we put them into our agreement.”

Confused Customers

In its Friday blog post, Anthropic said a supply chain risk designation, made under the authority of 10 USC 3252, applies only to Department of Defense contracts directly with suppliers, and does not cover how contractors use its Claude AI software to serve other customers.

Three experts in federal contracting say it is impossible at this point to determine which Anthropic customers, if any, must now cut ties with the company. Hegseth’s announcement “is not grounded in any law we can divine right now,” says Alex Major, a partner at the law firm McCarter & English, which works with tech companies.


