Google finalizes a confidential AI agreement with the Pentagon.
The Information reported on Tuesday that Google has signed a classified AI agreement with the US Department of Defense (DoD), allowing the Pentagon to use Google’s AI models for "any lawful government purpose" — bypassing the kinds of constraints that led to Anthropic being blacklisted in February. The deal places Google among the AI firms, including OpenAI and xAI, now providing classified AI capabilities to the US military.
This report came just hours after over 560 Google employees sent an open letter to CEO Sundar Pichai on Monday, urging him to reject this type of classified military AI arrangement. At the time of this article's publication, Google had not made any public confirmation or comment regarding the agreement.
The agreement, as described by The Information, lacks the ethical restrictions that Anthropic incorporated into its Pentagon contract, which ultimately led to Anthropic being categorized as a national security supply chain risk and subsequently blacklisted by the Trump administration in February 2026. Unlike Anthropic, which declined to eliminate contractual prohibitions on mass domestic surveillance and fully autonomous weapons lacking human oversight, Google’s deal allows for “any lawful government purpose” without such limitations.
This framing aligns more closely with the unrestricted approach favored by the Trump administration rather than the revised terms negotiated by OpenAI, which included limitations on domestic surveillance while still conforming to the Pentagon’s contracting framework.
The Pentagon has now formed classified AI contracts with four of the largest AI companies in the US: OpenAI, xAI, Google, and, until its blacklisting, Anthropic. The order in which these deals were signed is noteworthy. Anthropic was excluded from the supplier list for adhering to ethical constraints, OpenAI renegotiated to preserve some limitations, xAI entered into a deal with no evident restrictions, and now Google has signed an agreement that appears to grant the Pentagon the widest discretion of all.
As a result, a classified AI vendor pool has emerged that excludes Anthropic and gives the remaining three suppliers varying degrees of freedom to deliver AI capabilities for military uses.
The timing of this news relative to Monday’s employee letter is particularly striking. The 560 employees who urged Pichai to reject the deal on Monday morning found themselves, by Tuesday morning, working for a company that had already signed it.
This juxtaposition presents a direct and contentious issue that Pichai will have to address in town halls, press conferences, and potentially in the courtroom during the Musk v. Altman trial if questions regarding Google’s AI ethics arise.
Google has not confirmed the specific details of its Pentagon AI agreements, and the "any lawful government purpose" phrasing comes from a single anonymous source cited by The Information.
The employee letter and the Pentagon deal highlight the divide that every major AI company is currently navigating. On one side lies the US government's demand for unrestricted AI capabilities for classified military purposes. On the other are the AI ethics principles these companies adopted, largely in response to the 2018 Project Maven controversy, committing them to avoid AI weapons that lack human oversight. Anthropic adhered to its principles and was blacklisted; OpenAI and Google chose the contracts. Whether that choice proves short-lived, commercially reversible, or permanent will depend on how the political landscape evolves and whether the 560 signatories of Monday’s letter, along with potential supporters, can shift the internal dynamics.