Anthropic can’t manipulate its generative AI model Claude once the US military has it running, an executive wrote in a court filing on Friday. The statement was made in response to Trump administration accusations that the company could tamper with its AI tools during wartime.
“Anthropic has never had the ability to cause Claude to stop working, alter its functionality, shut off access, or otherwise affect or imperil military operations,” Thiyagu Ramasamy, Anthropic’s head of public sector, wrote. “Anthropic does not have the access required to disable the technology or alter the model’s behavior before or during ongoing operations.”
The Pentagon has been sparring with the leading AI lab for months over how its technology can be used for national security, and what the limits on that use should be. This month, Defense Secretary Pete Hegseth labeled Anthropic a supply-chain risk, a designation that could bar the Department of Defense from using the company’s software, including through contractors, in the coming months. Other federal agencies are also abandoning Claude.
Anthropic filed two lawsuits challenging the constitutionality of the ban and is seeking an emergency order to reverse it. In the meantime, customers have already begun canceling deals. A hearing in one of the cases is scheduled for March 24 in federal district court in San Francisco. The judge may decide on a temporary reversal soon after.
In a filing earlier this week, government attorneys wrote that the Department of Defense “is not required to tolerate the risk that critical military systems could be jeopardized at pivotal moments for national defense and active military operations.”
The Pentagon has been using Claude to analyze data, write memos, and help generate battle plans, WIRED reported. The government’s argument is that Anthropic could disrupt active military operations by cutting off access to Claude or pushing harmful updates if the company disapproves of certain uses.
Ramasamy rejected that possibility. “Anthropic does not maintain any back door or remote ‘kill switch,’” he wrote. “Anthropic personnel cannot, for example, log into a DoW system to modify or disable the models during an operation; the technology simply does not function that way.”
He went on to say that Anthropic would be able to provide updates only with the approval of the government and its cloud provider, in this case Amazon Web Services, though he did not identify it by name. Ramasamy added that Anthropic cannot access the prompts or other data military users enter into Claude.
Anthropic executives maintain in court filings that the company does not want veto power over military tactical decisions. Sarah Heck, head of policy, wrote in a court filing on Friday that Anthropic was willing to guarantee as much in a contract proposed March 4. “For the avoidance of doubt, [Anthropic] understands that this license does not grant or confer any right to control or veto lawful Department of War operational decision-making,” the proposal stated, according to the filing, which referred to an alternative name for the Pentagon.
The company was also willing to accept language that would address its concerns about Claude being used to help carry out lethal strikes without human supervision, Heck claimed. But negotiations ultimately broke down.
For now, the Defense Department has said in court filings that it “is taking additional measures to mitigate the supply chain risk” posed by the company by “working with third-party cloud service providers to ensure Anthropic leadership cannot make unilateral changes” to the Claude systems currently in place.

