Synopsis
Google has joined other tech firms in a deal with the US Department of Defense. The agreement allows the Pentagon to use Google's AI for lawful government purposes. This includes sensitive work like mission planning and weapons targeting. The Pentagon is signing deals worth up to $200 million with major AI labs. Google's agreement includes adjustments to AI safety settings.
The agreement allows the Pentagon to use Google's AI for "any lawful government purpose," the report added, putting it alongside OpenAI and Elon Musk's xAI, which also have deals to supply AI models for classified use.
Classified networks are used to handle a wide range of sensitive work, including mission planning and weapons targeting.
The Pentagon signed agreements worth up to $200 million each with major AI labs in 2025, including Anthropic, OpenAI, and Google. Reuters had earlier reported that the Pentagon had been pushing top AI companies such as OpenAI and Anthropic to make their tools available on classified networks without the standard restrictions they apply to users.
Safety and oversight
Google's agreement requires it to help adjust its AI safety settings and filters at the government's request, according to The Information report.
The contract includes language stating, "the parties agree that the AI System is not intended for, and should not be used for, domestic mass surveillance or autonomous weapons (including target selection) without appropriate human oversight and control."
However, the agreement also says it does not give Google the right to control or veto lawful government operational decision-making, the report added.
The U.S. Department of Defense, which President Donald Trump has renamed the Department of War, declined to comment on the matter.
Google said it supports government agencies across both classified and non-classified projects. A spokesperson said the company remains committed to the consensus that AI should not be used for domestic mass surveillance or autonomous weaponry without appropriate human oversight.
"We believe that providing API access to our commercial models, including on Google infrastructure, with industry-standard practices and terms, represents a responsible approach to supporting national security," a spokesperson for Google told Reuters.
The Pentagon has said it has no interest in using AI to conduct mass surveillance of Americans or to develop weapons that operate without human involvement, but wants "any lawful use" of AI to be allowed.
Anthropic clashed with the Pentagon earlier in the year after the startup refused to remove guardrails against using its AI for autonomous weapons or domestic surveillance, and the department designated the Claude-maker a supply-chain risk.