Google enters into a classified AI agreement with the Pentagon.
The Information reported on Tuesday that Google has entered into a classified AI agreement with the US Department of Defense, allowing the Pentagon to utilize Google's AI models for "any lawful government purpose." This arrangement does not include the restrictions that resulted in Anthropic being blacklisted in February. Google joins a series of AI companies, including OpenAI and xAI, in providing classified AI capabilities to the US military.
The deal was disclosed shortly after over 560 Google employees published an open letter to CEO Sundar Pichai on Monday, advocating against such classified military AI agreements. As of the time of this article's publication, Google had not publicly acknowledged or commented on the agreement.
According to The Information, this agreement is structured without the ethical constraints that Anthropic had in its contract with the Pentagon. These constraints contributed to Anthropic being identified as a national security supply chain risk and subsequently banned by the Trump administration in February 2026. While Anthropic declined to eliminate its contractual restrictions on mass domestic surveillance and fully autonomous weapons without human oversight, Google's contract is described as allowing "any lawful government purpose" without such exclusions.
These terms align Google's deal with the unrestricted model preferred by the Trump administration, in contrast to the modified terms negotiated by OpenAI, which retained limitations on domestic surveillance while conforming to the Pentagon's contracting framework.
The Pentagon has now formed classified AI agreements with four of the largest AI companies in the United States: OpenAI, xAI, Google, and, until its ban, Anthropic. The sequence is significant: Anthropic was excluded for maintaining ethical restrictions, OpenAI renegotiated to stay compliant while keeping some restrictions, xAI signed without apparent limitations, and now Google has entered into an agreement that appears to grant the Pentagon the most extensive discretion of all.
The outcome is a classified AI supplier network from which Anthropic is absent, and where the three remaining suppliers possess varying degrees of freedom to offer AI capabilities for military uses.
The timing relative to Monday's employee letter is particularly striking. The more than 560 employees who signed the letter to Pichai on Monday morning were, by Tuesday, working for a company that had signed the very deal they urged Pichai to reject. That direct contradiction is one Pichai will likely have to address in town halls, press conferences, and potentially in the courtroom during the Musk v. Altman trial, should Google's stance on AI ethics become relevant to the proceedings.
Google has not confirmed the specific terms of its Pentagon AI arrangements, and the “any lawful government purpose” characterization stems from a single anonymous source cited by The Information.
The employee letter and the Pentagon contract together highlight the dilemma that all major AI companies are currently facing. On one side is the US government's requirement for unrestricted AI capabilities for classified military purposes, and on the other are the published AI ethical principles that companies adopted in part in response to the 2018 Project Maven controversy, which commit them to steering clear of AI weapons without human oversight. Anthropic adhered to its principles and was blacklisted.
OpenAI and Google appear to have opted for the contracts. Whether this decision will be temporary, commercially reversible, or permanent will depend on the evolution of the political landscape and whether the 560 signatories of Monday’s letter, along with potential future supporters, can influence internal decisions.