The guardrail conflict: what America’s AI purge means for the rest of us.

      On the afternoon of February 27, 2026, Pete Hegseth picked up his phone and posted on X. The US Secretary of Defense had just designated Anthropic, a San Francisco-based AI firm, a “supply chain risk to national security.” That designation, made under 10 USC 3252, had previously been applied to the Chinese companies Huawei and ZTE, both accused of building surveillance backdoors into their products.

      Now it was being aimed at an American company founded by former OpenAI researchers, whose offense was this: it had declined to let the US military use its AI models for mass domestic surveillance of American citizens or for fully autonomous lethal weapons.

      That same afternoon, shortly after Anthropic landed on the blacklist, OpenAI CEO Sam Altman announced that his company had struck its own deal with the Pentagon. His models, he said, would be available for all lawful purposes.

      Later that evening, OpenAI’s lead hardware executive, Caitlin Kalinowski, who had spent 16 months building the company’s robotics program, announced her resignation. “Surveillance of Americans without judicial oversight and lethal autonomy without human authorization,” she wrote, were questions that deserved more deliberation than they had received.

      Those boundaries, it turned out, had never been seriously debated; they had simply been set in a contract dispute and then buried under a Friday afternoon press release.

      The story is usually told as a fight between two American companies and a US administration: a Washington power struggle with AI at its center. That reading is not wrong, but it is incomplete.

      What happened between Anthropic, OpenAI, and the Pentagon in the first quarter of 2026 is also a story about democratic governance: who sets the terms on which the most consequential technologies of our time are deployed, and what happens when a government decides the answer is whoever complies first.

      The details of the purge are worth dwelling on, because the speed at which events unfolded has obscured their significance. Anthropic held a $200 million Pentagon contract, awarded in July 2025, to work on classified systems.

      The contract carried two stipulations: Claude could not be used for mass domestic surveillance of American citizens, and it could not be used to run fully autonomous weapons without human oversight. Neither restriction was novel; both tracked long-standing international humanitarian law and US constitutional protections. They were, by any fair standard, exactly the kind of guardrails a democratic government should want built into its AI systems.

      The Pentagon disagreed, insisting on “unrestricted access to AI for all lawful purposes,” as its final ultimatum put it. When Anthropic refused to lift its restrictions, Hegseth set a deadline: 5:01 PM on February 27. It passed without a resolution. Trump, posting on Truth Social, called the company’s leadership “left-wing nut jobs” and ordered all federal agencies to stop using Anthropic’s technology immediately.

      A federal judge in San Francisco, reviewing the designation, was less colorful but more precise. In her March ruling, Judge Rita Lin noted that the supply chain risk label is “usually reserved for foreign intelligence agencies and terrorists, not for American companies,” and described the administration’s conduct as “classic First Amendment retaliation.” She issued a preliminary injunction blocking the ban.

      That did not stop a federal appeals court from later denying Anthropic’s request for a stay, finding that “the equitable balance here favors the government.”

      As things stand, Anthropic is barred from Pentagon contracts but free to work with other agencies. It is fighting two lawsuits at once while also recruiting enterprise partners, launching a $100 million partner program, and piloting its newest model, Mythos, with Wall Street banks at the quiet encouragement of the Treasury Secretary and the Federal Reserve chair.

      The same administration that blacklisted Anthropic is urging those banks to evaluate it for critical financial infrastructure.

      This contradiction is not baffling bureaucratic behavior; it is policy.

      The uncomfortable part of the story is OpenAI’s role. Altman has said that his company shares Anthropic’s basic principles: no domestic mass surveillance, no autonomous weapons. On paper, the two companies’ stated red lines are nearly identical.

      The difference is that OpenAI signed and Anthropic did not. The terms of OpenAI’s agreement with the Pentagon, and how they compare with the assurances Anthropic sought, have not been disclosed.

      Pentagon officials have argued that existing US law already prohibits the uses Anthropic was worried about. Anthropic’s lawyers, along with a coalition of 37 researchers from OpenAI and Google DeepMind who backed the lawsuit in an amicus brief, evidently do not share that confidence.

      What we can say with reasonable confidence is this: a government intent on stripping enforceable safety restrictions from AI models used in classified military systems found a way to do it, by blacklisting the company that refused and rewarding the one that complied.
