Trump's efforts to prevent state regulation of AI are encountering opposition from both states and Congress.
The Trump administration is mounting a multi-front effort to block states from regulating AI: a DOJ litigation task force, a Commerce Department review of “burdensome” state laws, and a legislative push for Congress to establish a national standard that preempts state regulation. States have moved in the opposite direction, introducing 1,208 AI bills in 2025 and enacting 145 of them. Congress, too, has rejected preemption twice, including a 99-1 Senate vote stripping an AI moratorium from the One Big Beautiful Bill Act.
Doug Fiefia, a first-term Republican state representative from Herriman, Utah, and a former Google salesperson, introduced House Bill 286, known as the Artificial Intelligence Transparency Act, earlier this year. This bill aimed to require leading AI companies to publish safety and child protection plans and included protections for whistleblowers who report safety issues. The bill passed a House committee unanimously, but the White House ultimately derailed it.
On February 12, the White House Office of Intergovernmental Affairs sent a letter to Utah Senate Majority Leader Kirk Cullimore Jr. opposing HB 286, calling it an “unfixable bill” at odds with the Administration’s AI agenda. White House officials spent the next two weeks in talks with Fiefia, urging him not to advance the bill while offering no specific amendments. The bill died in the Senate.
Fiefia responded firmly, arguing that states’ rights must be defended even when a Republican holds the White House, precisely to show the principle transcends partisanship. His bill targeted only “frontier developers,” defined as companies training models with at least 10^26 floating-point operations, and capped penalties at $1 million. By the standards of AI legislation it was modest; the administration nonetheless treated it as a significant threat.
The federal campaign against state AI regulation has three components, each building on the last. The first is Executive Order 14365, signed on December 11, 2025, which establishes a national policy framework for AI. The order created an AI Litigation Task Force within the DOJ, operational as of January 10, 2026, to challenge state AI laws in federal court as unconstitutional burdens on interstate commerce or as federally preempted. It directed the Secretary of Commerce to complete a detailed evaluation of state AI laws by March 11, identifying those deemed “burdensome,” and charged the FTC with clarifying when state laws might be preempted by the FTC Act. It also tied federal broadband funding to states’ agreement not to enact AI laws the administration considers excessive, while carving out certain child safety, data center zoning, and state procurement laws from preemption.
The second component was the Commerce Department’s evaluation released in March, which specifically scrutinized laws in Colorado, California, and New York. This evaluation is intended to guide the DOJ task force, which is expected to initiate federal legal challenges by summer 2026, with cases anticipated to take two to three years to resolve.
The third component, a National Policy Framework for AI released on March 20, outlined legislative recommendations across seven areas: child protection, AI infrastructure, intellectual property, free speech and censorship, innovation, workforce preparation, and the preemption of state AI laws. The framework asserts that “Congress should preempt state AI laws that impose undue burdens to maintain a minimally burdensome national standard, rather than allowing fifty conflicting ones.” The administration's stance on copyright asserts that training AI models using copyrighted material does not infringe copyright laws. Regarding content moderation, it calls for measures to prevent the federal government from pressuring technology providers, including AI developers, to alter or ban content based on political or ideological reasons.
David Sacks, who previously served as the AI and crypto czar before transitioning to a presidential advisory committee role in late March, succinctly articulated the rationale: “You’ve got 50 different states regulating this in 50 different ways, which creates a patchwork of regulation that’s challenging for our innovators to navigate.” He raised concerns about Colorado’s algorithmic discrimination regulations, stating they posed “very serious First Amendment issues.” On the broader issue of blue states, he remarked, “We disapprove of blue states trying to impose their progressive ideology in AI models and aim to put a stop to that.”
Meanwhile, states have not remained passive during the ongoing disputes regarding their regulatory authority. In 2023, fewer than 200 AI-related bills were introduced in state legislatures. This number increased to 635 across 45 states in 2024, with 99 enacted. In 2025, there were 1,208 AI bills introduced, marking the first year every state contributed at least one, with 145 laws enacted. In the early months of 2026, 78 bills focused on chatbot safety were submitted across 27 states.
California’s Transparency in
