Synopsis
A legal battle is unfolding as AI firm Anthropic challenges the Pentagon's decision to label it a national security risk. The Defense Department blacklisted Anthropic after the company refused to allow its Claude AI to be used for military surveillance or in autonomous weapons. Anthropic argues the designation is an overreach that violates its rights. The company says AI models are not yet reliable enough to be safely used in autonomous weapons, and that it opposes domestic surveillance as a violation of civil liberties.
ANTHROPIC DESIGNATION FIRST FOR U.S. COMPANY
U.S. District Judge Rita Lin in San Francisco, an appointee of former Democratic President Joe Biden, is set to hold a hearing at 1:30 p.m. PT (2030 GMT) on Anthropic's request for an initial order blocking the designation while the case plays out. The move marked the first time a U.S. company has been publicly designated a supply chain risk under an obscure government-procurement statute aimed at protecting military systems from foreign sabotage.
In its March 9 lawsuit, Anthropic alleged the government violated its right to free speech under the First Amendment of the Constitution by retaliating against its views on AI safety. The company said it was not given a chance to dispute the designation, in violation of its Fifth Amendment right to due process.
The lawsuit calls the decision unlawful, unsupported by facts, and inconsistent with the military's past praise of Claude.
The Justice Department countered in a court filing that Anthropic's refusal to lift the restrictions could leave the Pentagon uncertain about how it could use Claude and risk disabling military systems during operations.
The government said the designation stemmed from Anthropic's refusal to accept contractual terms, not its views on AI safety. Anthropic has a second lawsuit pending in Washington, D.C., over a separate Pentagon supply chain risk designation that could lead to its exclusion from civilian government contracts.