The US told China to stop copying American AI. Enforcing that demand is another matter.
Summary: The White House OSTP issued a policy memo accusing China of large-scale theft of U.S. AI models, pledging to share intelligence with American AI firms and to consider accountability measures. In February, OpenAI accused DeepSeek of distilling its models, and Anthropic attributed to DeepSeek, MiniMax, and Moonshot AI roughly 24,000 fake accounts that generated over 16 million interactions with Claude. The Deterring American AI Model Theft Act (H.R. 8283) was introduced on April 15. The memo arrived just three weeks before a scheduled Trump-Xi summit on May 14.
On Wednesday, the White House accused China of “industrial-scale” theft of U.S. artificial intelligence, releasing a policy memorandum that commits the government to sharing intelligence with American AI companies about foreign distillation efforts and to exploring ways to hold offenders accountable. Michael Kratsios, director of the Office of Science and Technology Policy, said the evidence points to foreign actors, chiefly in China, running extensive distillation campaigns to capture American AI technology, and pledged action to safeguard American innovation. The memo lands three weeks before a planned Trump-Xi summit in Beijing on May 14, framing the protection of AI technology as both a national security priority and a potential bargaining chip.
Distillation, the technique at the center of the dispute, involves no stolen model weights and no hacked servers. Instead, a distiller sends thousands or millions of carefully crafted queries to a leading AI model, collects the outputs, and trains a cheaper competitor model to mimic the original's capabilities at a fraction of the cost. The copy learns from the answers, not from the underlying model. The technique's legal status is ambiguous; its strategic consequences are not.
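The query-and-copy mechanics can be sketched in a toy example. Everything here is hypothetical: the "teacher" function stands in for a frontier model behind an API, and the "student" is a trivial keyword classifier rather than a fine-tuned neural network. The point is only the shape of the technique: the copy is trained on harvested outputs, never on the original's weights.

```python
# Toy sketch of distillation (hypothetical names; no real API is called).
# A "teacher" model answers queries; the distiller harvests (query, answer)
# pairs and trains a cheaper "student" to mimic the teacher's behavior.

def teacher_model(query: str) -> str:
    """Stand-in for an expensive frontier model behind an API."""
    # Toy behavior: classify a query by a single keyword.
    return "code" if "function" in query else "prose"

# Step 1: craft many queries and harvest the teacher's outputs.
queries = [
    "Write a function to sort a list",
    "Summarize this paragraph",
    "Write a function to reverse a string",
    "Explain the French Revolution",
]
training_pairs = [(q, teacher_model(q)) for q in queries]

# Step 2: "train" a student on the harvested pairs. Here the student just
# learns keyword-to-label counts from the data; a real distiller would
# fine-tune a neural network on millions of such pairs.
def train_student(pairs):
    keyword_labels = {}
    for query, label in pairs:
        for word in query.lower().split():
            counts = keyword_labels.setdefault(word, {})
            counts[label] = counts.get(label, 0) + 1

    def student(query: str) -> str:
        votes = {}
        for word in query.lower().split():
            for label, count in keyword_labels.get(word, {}).items():
                votes[label] = votes.get(label, 0) + count
        return max(votes, key=votes.get) if votes else "prose"

    return student

student_model = train_student(training_pairs)

# The student now imitates the teacher on similar inputs without ever
# touching the teacher's weights -- only its answers.
print(student_model("Write a function to merge two lists"))  # mimics "code"
```

At scale, the same loop is what OpenAI and Anthropic describe: automated accounts issuing millions of queries, with the responses used as supervised training data for a competing model.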
Evidence
The OSTP memo builds upon claims made by U.S. AI firms since February. On February 12, OpenAI presented a formal memo to the House Select Committee on China accusing DeepSeek of distilling its models. OpenAI noted it identified accounts linked to DeepSeek employees that developed methods to bypass access restrictions, routing queries through hidden third-party proxies to extract data at scale. OpenAI’s terms of service explicitly forbid the use of outputs to create “imitation frontier AI models.” DeepSeek has not responded publicly to the accusations.
On February 23, Anthropic provided more detailed evidence, naming three Chinese labs. The report indicated that DeepSeek had conducted over 150,000 interactions with Claude, focusing on foundational logic and alignment methods. MiniMax generated the most interactions, exceeding 13 million, while Moonshot AI accounted for more than 3.4 million interactions, concentrating on areas such as agentic reasoning and coding. Anthropic identified around 24,000 fraudulent accounts across the three companies that produced over 16 million exchanges with Claude, employing jailbreaking techniques to access proprietary information and circumvent geofencing using commercial proxy services.
By early April, OpenAI, Anthropic, and Google began sharing intelligence about distillation threats via the Frontier Model Forum, a coalition originally established in 2023 alongside Microsoft. The arrangement mirrors cybersecurity threat-sharing frameworks, in which one company alerts the others after detecting an attack pattern. That three fierce competitors are cooperating at all underscores how seriously they take the threat. DeepSeek demonstrated that frontier AI performance no longer requires Silicon Valley-scale resources, leaving the U.S. government to ask how much of that efficiency was earned and how much was extracted.
The policy response
The OSTP memo serves as a policy statement, not an executive order or enforceable regulation. It instructs federal agencies to share intelligence regarding foreign distillation efforts with U.S. AI developers, assist the industry in bolstering technical defenses, and investigate accountability options for foreign entities. No specific sanctions, additions to entity lists, or enforcement actions were disclosed on Wednesday, making the memo's practical impact contingent on subsequent developments.
Congress is concurrently addressing the issue. On April 15, Representative Bill Huizenga introduced the Deterring American AI Model Theft Act of 2026, co-sponsored by Representative John Moolenaar, chair of the House Select Committee on China. The bill aims to direct the government in identifying entities utilizing “improper query-and-copy techniques” and imposing sanctions via the Commerce Department blacklist. The House Select Committee held a hearing on April 16 titled “China’s Campaign to Steal America’s AI Edge,” featuring witnesses from Brookings, the Silverado Policy Accelerator, and the America First Policy Institute. This issue has garnered bipartisan support, with reports indicating that “winning the AI arms race resonates with both parties.”
The legal basis for prosecution remains unclear. The Protecting American Intellectual Property Act, enacted in January 2023, allows for sanctions in cases of trade secret theft, though whether extracted model outputs qualify as trade secrets under existing laws is an open question. The South China Morning Post noted that Anthropic’s allegations regarding distillation “highlight an
