The Pentagon has entered into classified AI agreements with Nvidia, Microsoft, and AWS after removing Anthropic due to safety concerns.

TL;DR: The Pentagon has established classified AI agreements with Nvidia, Microsoft, AWS, and Reflection AI, increasing the total to seven companies—including SpaceX, OpenAI, and Google—working on covert military networks under the terms of "lawful operational use." This terminology intentionally replaces the safety limitations that Anthropic attempted to impose, which led to its removal from Pentagon supply chains. The implication is clear: any AI firm that places restrictions on military applications will be substituted by one that does not.

On May 1, the Pentagon announced it had signed agreements with Nvidia, Microsoft, Amazon Web Services, and Reflection AI to expand the use of advanced artificial intelligence on classified military networks. They join existing arrangements with SpaceX and OpenAI, as well as Google, which concluded its own classified AI deal earlier this week. Collectively, these seven agreements allow for "lawful operational use," a term Defense Department officials describe as enabling the shift toward an AI-first U.S. fighting force. The phrasing is deliberate: it displaces the limitations that Anthropic tried to impose on military use of its technology. Anthropic's insistence on keeping those restrictions led to its exclusion from the Pentagon's supply chain, clearing the way for seven companies that accepted broader terms.

The significance of these terms lies in how "classified military AI" is defined in practice. Before being labeled a supply chain risk in February, Anthropic's position was that it would not allow its models to be used for widespread domestic surveillance or fully autonomous weapon systems. These were firm contractual boundaries that Anthropic stipulated in a $200 million Pentagon agreement awarded in July 2025. When the Pentagon rejected the restrictions during renegotiations in late 2025 and early 2026, Anthropic held its position, prompting the Defense Department to remove the company and seek competitors willing to accept less restrictive terms.

"Lawful operational use" emerged as a broad formulation covering assistance with targeting, intelligence gathering, and operational strategy on classified networks, without the specific prohibitions that Anthropic sought. According to defense officials briefed on the agreements, the new contracts give the Pentagon significant flexibility to apply advanced AI technologies to clandestine combat operations, including targeting assistance. Negotiations for the AWS agreement ran late into Thursday night, indicating urgency in finalizing all the deals. An AWS spokesperson, commenting on the agreement, referred to the Defense Department as "the Department of War," its name prior to 1947, and expressed AWS's eagerness to continue supporting its modernization initiatives.

The seven companies involved in classified Pentagon networks represent nearly the entire foundational layer of the American AI industry. Nvidia supplies the chips, while Microsoft and AWS provide the cloud infrastructure. Google contributes its Gemini AI, OpenAI offers GPT, and SpaceX supplies satellite communications along with AI models trained on data from X, thanks to its acquisition of xAI. Smaller, defense-focused AI firms are also developing applications for sovereign military needs, but the Pentagon's primary focus is clearly on the larger providers. Among the group, Reflection AI is the least recognizable name; it specializes in AI for classified and intelligence community purposes.

The comprehensive nature of these agreements is noteworthy. Defense officials have emphasized their aim to prevent the military from depending on a single company or a single set of constraints, a pointed reference to the Anthropic dispute. The Pentagon wants to avoid a scenario in which the ethical boundaries defined by one AI company limit military operations. The approach is to diversify across seven providers, all of whom agreed to terms without the restrictions Anthropic insisted upon. The envisioned "AI-first fighting force" requires AI technologies available for any lawful purpose the military defines, free from pre-existing limitations set by the companies that develop them.

In contrast, the situation for Anthropic has unfolded quite differently. The company was labeled a supply chain risk, a designation previously applied to Chinese firms such as Huawei and ZTE. As a result, its $200 million Pentagon contract was effectively annulled. High-ranking defense officials openly criticized the company, and the Trump administration broadened the conflict to critique Anthropic's Mythos model and its deployment restrictions within government systems. So far, the commercial repercussions have been minimal: Anthropic's valuation has surged to about $900 billion, up from $380 billion in February. Its largest computing deal, with Google and Broadcom, far exceeds the lost Pentagon contract, and the company's revenue run rate sits around $30 billion. Being barred from the Pentagon's classified networks has thus not significantly harmed Anthropic's business in the short term.

However, this situation has set a precedent: any AI firm that imposes explicit limits on military usage of its technology will be replaced by one that does not. Through seven concurrent agreements with alternative companies, the Pentagon's message is clear: the Department of Defense will not negotiate the parameters of military AI usage with the developers of that technology. "Lawful operational use" indicates that those parameters are now set by the military alone.
