Over 580 Google employees, including researchers from DeepMind, are calling on Pichai to reject a classified AI contract with the Pentagon.
Over 580 Google employees have signed a letter to CEO Sundar Pichai, urging him to decline classified military AI projects for the Pentagon. The group includes more than 20 directors, VPs, and senior researchers from DeepMind. The letter emphasizes that on air-gapped classified networks, Google cannot monitor the use of its AI, making "trust us" the only safeguard against potential autonomous weapons and mass surveillance. In 2018, Google employees successfully opposed Project Maven, but the company has since altered its AI principles, secured a portion of the $9 billion JWCC cloud contract, rolled out Gemini to three million Pentagon staff, and is now negotiating classified access under "all lawful uses" terms.
The letter addressed to Pichai expresses deep concern over ongoing negotiations between Google and the Department of Defense, emphasizing the risks inherent in AI systems that can centralize power and make errors. The signatories demand that Google reject all classified workloads since such systems, isolated from public internet access, would limit the company's ability to monitor their application, stating that the only way to ensure Google is not associated with harmful uses is to avoid classified projects.
Historically, Google employees have taken a stand against Pentagon collaborations. In 2018, around 4,000 employees signed a petition and at least 12 resigned in protest of Project Maven, which involved AI in drone surveillance. This backlash led Google to create AI principles pledging to avoid weapons or surveillance technology and to allow the Maven contract to expire. Palantir later took over this contract. Although the 2018 protest was a significant win, it was the last successful limitation of Google's military ambitions, as the company has since rebuilt ties to defense contracts.
In December 2022, Google secured part of the Pentagon's $9 billion Joint Warfighting Cloud Capability contract alongside Amazon, Microsoft, and Oracle. In February 2025, Google removed language from its AI principles that had committed the company to avoiding weapons technology that causes harm and surveillance that violates international norms, citing global competition for AI leadership. The decision drew criticism from organizations including Human Rights Watch and Amnesty International. In December 2025, the Pentagon launched GenAI.mil, a service built on Google's Gemini chatbot for defense personnel, and Defense Secretary Pete Hegseth said AI would play a crucial role in the future of American warfare. In March 2026, Google deployed Gemini AI tools to the Pentagon's unclassified workforce, creating agents for various administrative tasks.
The negotiations for the classified agreement mark the next stage. Emil Michael, undersecretary of defense for research and engineering, said the Pentagon plans to start with unclassified systems before moving to classified applications, and discussions about running Gemini agents on classified cloud infrastructure are already underway. Recent reports indicate the talks are leaning toward "all lawful uses" terms, language far looser than the restrictions Anthropic insisted on before the Pentagon designated it a supply-chain risk for refusing to relax its limits on autonomous weapons and domestic surveillance. The Pentagon rejected Anthropic's stance, arguing that commercial firms should not set policy limits in wartime.
OpenAI signed its Pentagon contract shortly after Anthropic's designation, with three clear restrictions: no mass domestic surveillance, no autonomous weapons, and no high-stakes automated decisions. However, enforcing these conditions on classified networks remains a concern raised by Google's employees. On an air-gapped system, AI functions in a network disconnected from Google’s infrastructure, preventing the company from monitoring queries, outputs, or decisions made. The only assurance against violations of negotiated red lines relies on trust in Pentagon leadership. Sofia Liguori, an AI researcher at DeepMind UK and one of the letter's signatories, noted that employees have been encouraged to trust company leadership without clear guidelines. She expressed concern that powerful AI tools could be deployed without proper control over their usage.
The Pentagon's AI budget shows the scale of money likely behind the classified deal: the fiscal 2026 defense budget allocated $13.4 billion for AI and autonomy, while the fiscal 2027 request dedicates $54.6 billion to the Defense Autonomous Warfare Group, a roughly fourfold increase. Military AI investment is moving from experimental programs like Project Maven to an industrial-scale capability treated as foundational to the U.S. military, and the classified projects Google employees oppose would sit at the center of that expansion.
The letter's organizers emphasize that the fight against the weaponization of Google's AI technology is not over, vowing to continue advocating for clear and enforceable restrictions. The current struggle differs from the 2018 protest against a single contract; now, the issue concerns whether Google’s entire AI ecosystem, including Gemini, DeepMind’s research, and related technologies, will become part of military infrastructure on classified networks, obscured from external oversight.
The gap between employee dissent and corporate decision-making has widened since 2018. Then, 4,000 signatures and a dozen resignations led to the expiration of the Maven contract and the creation of Google's AI principles; now, a letter signed by more than 580 employees, including senior DeepMind researchers, confronts a company that has already stripped those principles of their restrictions and embedded itself in the Pentagon's cloud infrastructure.
