The guardrail conflict: the implications of America's AI crackdown for the rest of the world
On the afternoon of February 27, 2026, Pete Hegseth took to his phone and posted on X. The US Secretary of Defense had just officially categorized Anthropic, a San Francisco-based AI firm, as a “supply chain risk to national security.”
This classification, under 10 USC 3252, had previously been applied to Chinese companies Huawei and ZTE, which were accused of embedding surveillance backdoors in their products.
Now it was directed at an American company founded by former OpenAI employees. Its offense: refusing to let the US military use its AI models for mass surveillance of American citizens or for fully autonomous lethal weapons.
Later that afternoon, shortly after Anthropic was blacklisted, OpenAI CEO Sam Altman revealed that his company had reached an agreement with the Pentagon. He stated that his models would be available for all lawful purposes.
That same evening, OpenAI's leading hardware executive, Caitlin Kalinowski, who dedicated 16 months to developing the company’s robotics program, announced her resignation.
In her statement, she expressed concern: “Surveillance of Americans without judicial oversight and lethal autonomy without human authorization,” she asserted, “are lines that deserved more deliberation than they received.”
In the end, those lines received no deliberation at all. They were drawn in a contract dispute and erased in a press release issued on a Friday afternoon.
This story is usually framed as a conflict between two American companies and one American administration, a Washington power struggle over AI. That framing is not wrong, but it is incomplete.
What transpired between Anthropic, OpenAI, and the Pentagon in the first three months of 2026 is also a story about democratic governance: about who gets to set the terms on which the most consequential technologies of our time are deployed, and about what happens when a government decides that what matters most is who complies first.
The dynamics of a purge
It is important to clarify the sequence of events, as their rapid progression has obscured their importance. Anthropic held a $200 million Pentagon contract, awarded in July 2025, for classified project work.
The contract included two key restrictions: Claude could not be used for mass surveillance of American citizens, and it could not be used to operate fully autonomous weapons without a human in the loop on targeting decisions. These demands were not new.
They were aligned with longstanding prohibitions under international humanitarian law and US constitutional protections. By any reasonable standard, these represented the kinds of safeguards a democratic government would want integrated into its AI systems.
The Pentagon saw things differently. It wanted, in the words of its final ultimatum, “unrestricted access to AI for all lawful purposes.” When Anthropic refused to lift its restrictions, Hegseth set a deadline: 5:01 PM on February 27. The deadline passed without an agreement. Trump, posting on Truth Social, called the company’s leadership “leftwing nut jobs” and ordered all federal agencies to stop using Anthropic’s technology immediately.
A federal judge in San Francisco reviewed the designation in terms less colorful but more precise. In her March ruling, Judge Rita Lin wrote that the supply-chain-risk designation is “generally reserved for foreign intelligence agencies and terrorists, not American companies,” and called the administration’s actions “classic First Amendment retaliation.”
She issued a preliminary injunction halting the ban.
Despite this, a federal appeals court later denied Anthropic’s request for a stay, asserting that “the equitable balance here cuts in favor of the government.”
As of this writing, Anthropic is barred from Pentagon contracts but free to work with other agencies. It is fighting two ongoing lawsuits while simultaneously courting enterprise partners, launching a $100 million partner program, and piloting its new model, Mythos, with Wall Street banks, quietly encouraged by the Treasury Secretary and the Federal Reserve chair.
The administration that blacklisted the company is also directing those banks to assess it for critical financial infrastructure.
The contradiction here is not merely bureaucratic confusion; it constitutes a policy.
The implications of OpenAI’s deal
A more troubling aspect of this situation is OpenAI’s involvement. Altman has asserted that his company shares Anthropic’s fundamental principles: no mass surveillance of citizens, no autonomous weapons. On paper, the companies’ stated red lines are nearly identical.
The difference is that OpenAI signed the agreement and Anthropic did not. What exactly OpenAI agreed to with the Pentagon, and how the deal’s terms compare to the safeguards Anthropic sought, has not been disclosed.
Pentagon officials have claimed that existing US laws already prohibit the uses of concern to Anthropic. However, Anthropic’s legal team and a group of 37 researchers from OpenAI and Google DeepMind who submitted an amicus brief supporting the lawsuit do not share that belief.
What can be