Anthropic Disputes Pentagon's National Security Risk Claims in Court
Synopsis

The company has said that the government’s concerns are based on misunderstandings and retaliation for its stance on AI safety.

Anthropic has submitted sworn declarations in a California federal court challenging the Pentagon’s claim that it poses an “unacceptable risk to national security.” The filings, cited by TechCrunch, accompany the company’s reply brief and argue that the government’s case is based on technical misunderstandings and mischaracterisations of its position during earlier negotiations.

A hearing is set for March 24 before Judge Rita Lin.

The dispute began last month, when Donald Trump and Defense Secretary Pete Hegseth announced they were ending engagement with Anthropic. This followed the company’s refusal to permit unrestricted military use of its artificial intelligence (AI) systems. Soon after, the Pentagon labelled Anthropic a supply-chain risk, the first time such a classification has been applied to a US-based AI firm.

Two senior executives at Anthropic — Sarah Heck, head of policy, and Thiyagu Ramasamy, head of public sector — filed declarations disputing the government’s account. According to the report, Heck strongly rejected the Pentagon’s claim that Anthropic had sought an approval role in military operations. She said this was incorrect and wrote, “At no time during Anthropic’s negotiations with the Department did I or any other Anthropic employee state that the company wanted that kind of role.”

Heck also stated that concerns about Anthropic potentially altering or disabling its technology during active operations were never raised in discussions with officials. She said these issues appeared only later in court filings, leaving the company without an opportunity to address them at the time.

Her declaration highlights another development. On March 4, a day after the Pentagon finalised its supply-chain risk designation, Under Secretary Michael emailed Anthropic CEO Dario Amodei, saying both sides were “very close” to agreement on two key issues — autonomous weapons and surveillance of Americans — the same issues now being cited as grounds for the national security concern.

In a separate filing, Ramasamy challenged the suggestion that Anthropic could interfere with military systems. He explained that once its Claude AI is deployed within secure, “air-gapped” government environments, the company has no access or control: there is no kill switch or backdoor, and any updates must be approved and installed by the Pentagon.

He also noted that Anthropic personnel working on such projects are cleared through US government security processes, which he said distinguishes the company in classified settings. Anthropic’s lawsuit argues that the Pentagon’s decision is retaliation for its public stance on AI safety and violates its First Amendment rights.

The government has rejected this argument, maintaining that the designation is purely a national security measure.