The Trump administration’s new executive order on artificial intelligence is more than a deregulatory maneuver—it is the opening shot in a high‑stakes federalism fight that will determine who sets the rules for one of the most consequential technologies of the century. By moving aggressively to preempt state AI laws, threaten litigation, and condition billions in federal funding on regulatory compliance, the White House has transformed a policy debate about innovation and risk into a constitutional clash over the balance of state and federal power.
At its core, the order, titled “Ensuring a National Policy Framework for Artificial Intelligence” and issued December 11, 2025, aims to replace a patchwork of ambitious state AI rules with a “minimally burdensome” national framework geared toward accelerating U.S. AI competitiveness. The administration casts state initiatives—especially in California, New York, and Colorado—as “onerous and excessive” laws that threaten to “stymie innovation” and undermine national economic and security interests.
January 10, 2026, marked a turning point. On that date, the Department of Justice’s AI Litigation Task Force began operations, empowered to challenge state AI laws in federal court on interstate commerce and preemption grounds. With that, the executive order’s legal machinery moved from paper to practice, setting the stage for cases that are likely to climb to the Supreme Court.
—
A Strategy to Neutralize State AI Governance
The executive order responds directly to a surge of state-level AI regulation in recent years: algorithmic accountability rules in New York City, expansive AI and automated decision-making bills in California and Colorado, and a growing wave of local transparency and audit mandates.
Trump’s order seeks to reverse that trajectory through four primary mechanisms:
– Litigation: A DOJ task force organized to systematically challenge state laws.
– Funding leverage: Conditioning federal broadband funds on state regulatory restraint.
– Administrative preemption: Directing agencies like the FTC and FCC to reinterpret existing federal authority to crowd out state rules.
– Future legislation: Laying the groundwork for a federal AI statute that would formally preempt conflicting state law.
This strategy leverages the executive branch’s full toolkit: courts, purse strings, and regulatory agencies. It also signals to industry that the federal government intends to be the primary, and eventually exclusive, regulator of AI.
—
The AI Litigation Task Force: From Policy Statement to Courtroom
The most immediate and aggressive tool in the order is the AI Litigation Task Force housed in the Department of Justice. The Attorney General was directed to establish the task force within 30 days of the order, and it is now operational with a single, clear mission: challenging state AI laws deemed inconsistent with federal policy.
The task force is instructed to bring cases on several legal theories:
– Dormant Commerce Clause: Arguing that state AI laws impermissibly burden interstate commerce or regulate extraterritorially—especially where AI models and services flow across state lines.
– Federal preemption: Claiming that existing federal statutes or regulations, including those enforced by agencies like the FTC and FCC, displace conflicting state standards.
– Other constitutional grounds: Including First Amendment challenges where states require AI systems to alter “truthful outputs” or impose disclosure mandates the administration characterizes as compelled speech.
The order specifically singles out California’s SB 53 (a broad AI governance bill) and New York City’s Local Law 144 (which imposes audit and bias assessment requirements on automated employment decision tools) as paradigmatic targets. These laws, which require algorithmic explanations, independent audits, and bias mitigation, are precisely the kinds of measures the administration labels “overly burdensome.”
For state attorneys general, particularly in California and New York, defending these statutes is no longer just a matter of policy—it is a test of the scope of state police powers in the AI era. For AI companies, the task force offers a potential pathway to invalidate some of the most demanding state compliance obligations.
—
Commerce Department Review: The Official Hit List of State AI Laws
While the DOJ prepares litigation, the Department of Commerce has been assigned a parallel gatekeeping role.
Within 90 days of the order—by March 11, 2026—the Secretary of Commerce must publish a comprehensive evaluation of existing state AI laws, cataloging which measures:
– Are “onerous” or “in conflict” with the national AI policy.
– Require AI systems to alter truthful outputs, such as forcing models to modify content or suppress certain information in ways the administration views as constitutionally problematic.
– Impose disclosure or reporting mandates that the department believes may violate the First Amendment or otherwise unduly burden AI developers and deployers.
This evaluation serves multiple purposes:
– It identifies specific state laws for referral to the AI Litigation Task Force.
– It signals to states which statutes put their federal funding at risk.
– It highlights “good” state approaches deemed pro‑innovation and “aligned” with federal policy, potentially rewarding states that hold back on strict regulation.
In effect, Commerce will publish a public map of where the federal government intends to focus its pressure—and where states may be encouraged to retreat.
—
Funding Leverage: $42 Billion in Broadband Money on the Line
The order’s most potent enforcement lever is financial. It ties $42 billion in Broadband Equity, Access, and Deployment (BEAD) program funding—a major federal investment in broadband infrastructure—to state compliance with the administration’s AI policy preferences.
Under the order:
– States that enact or enforce AI regulations deemed inconsistent with the executive order’s national policy risk losing access to BEAD funds.
– Commerce, which administers the program, must incorporate this compliance condition into its oversight and grant-making processes.
For many states, especially those with large rural or underserved populations, BEAD dollars are central to long-term broadband deployment strategies. The message is clear: restrain your AI laws, or jeopardize your broadband buildout.
This approach mirrors other contentious uses of conditional spending in federal policy—such as tying highway funds to drinking age requirements—but applies that logic to a new frontier: AI governance. It raises immediate questions about coercion, the limits of conditional spending under Supreme Court precedents, and whether conditioning broadband infrastructure on AI regulation is sufficiently related to the underlying program’s purpose.
—
FTC and FCC: Administrative Preemption Through Policy and Standards
The executive order also pushes key independent agencies into the federalism battle.
Federal Trade Commission: Bias Mitigation as “Deception”
The White House directs the FTC Chair to issue a policy statement within 90 days explaining how Section 5 of the FTC Act, which prohibits unfair or deceptive acts or practices, applies to AI models.
According to the order and related analysis:
– The FTC is asked to clarify circumstances where state laws that require companies to alter truthful AI outputs—for instance, to rebalance results for fairness or to suppress certain lawful content—should be viewed as mandating “deceptive” conduct.
– In practical terms, the order pushes the FTC to characterize state-mandated bias mitigation or outcome adjustments as per se deceptive trade practices, implying they are inconsistent with federal law and thus preempted.
This is a dramatic reframing. Many state and local AI rules treat bias mitigation and algorithmic fairness obligations as consumer protection and civil rights safeguards. The executive order, by contrast, recasts them as potential violations of truthfulness and transparency—flipping the normative frame from protection to distortion.
For businesses, this promises relief from complex fairness mandates. For civil rights and consumer advocates, it signals a rollback of emerging tools to combat discrimination, especially in employment, housing, lending, and public services.
Federal Communications Commission: A National AI Reporting Standard
The order also instructs the FCC Chair to open a proceeding, within 90 days after Commerce publishes its state-law evaluation, to consider federal AI reporting and disclosure standards.
The goal is to determine whether the FCC should:
– Create a federal standard for disclosures and reporting by AI models.
– Explicitly preempt conflicting state requirements, including transparency rules and notice obligations imposed by states and municipalities.
If the FCC moves forward, tech and telecom companies could face a single federal rulebook for AI-related disclosures instead of navigating divergent state regimes. For states like California and New York that have used disclosure mandates to pry open black-box systems, FCC action could effectively sideline their most powerful regulatory tools.
—
A Federal Framework on the Horizon—With Narrow State Carveouts
Beyond immediate administrative action, the executive order looks ahead to comprehensive federal AI legislation.
The White House tasks the Special Advisor for AI and Crypto and the Assistant to the President for Science and Technology with drafting a legislative recommendation for a uniform federal AI framework. The envisioned statute would:
– Establish national AI standards aligned with the administration’s “minimally burdensome” philosophy.
– Formally preempt state laws that conflict with that national policy.
– Preserve narrow carveouts for specific state domains, including:
  – Child safety protections
  – AI compute and data center infrastructure
  – State procurement and governmental use of AI
  – Additional topics the administration may later designate
This approach builds on the administration’s AI Action Plan released in July 2025 and echoes the proposed “One Big Beautiful Bill Act,” which floated a 10-year moratorium on new state AI regulations. Although that legislation has not advanced, the executive order effectively implements parts of the strategy through executive and administrative means while waiting for Congress to act.
—
Federalism Tension: A Conservative Administration vs. State Sovereignty
Perhaps the most striking dimension of the order is its ideological reversal of traditional conservative rhetoric about federalism.
Historically, Republican administrations and lawmakers have often championed state sovereignty, criticizing federal overreach and celebrating states as “laboratories of democracy.” In the AI context, however, the Trump administration is expressly seeking to:
– Override state regulatory experimentation in areas like bias audits, transparency, and risk management.
– Threaten litigation against state governments for exercising their police powers.
– Leverage federal funding to coerce regulatory retreat.
The administration justifies this shift by framing AI as a strategic national asset—essential to economic growth, job creation, and national security, especially in competition with China. From this perspective, fragmented state rules are not a healthy laboratory but a destabilizing force undermining coherent national strategy.
This framing sets up a clash with states like California and New York, which see themselves as frontline regulators of powerful technologies that can exacerbate inequality, discrimination, and disinformation. Their attorneys general are likely to argue that:
– AI deployment has significant local and regional impacts—on workers, tenants, patients, and voters—that justify robust state intervention.
– The Tenth Amendment and longstanding doctrines preserving state police powers in consumer protection and civil rights should limit federal attempts to strip them of authority in these domains.
That conflict is not just about policy design. It goes to fundamental questions about who gets to govern transformative technologies when federal legislation remains incomplete and contested.
—
Stakeholders: Winners, Losers, and Those Caught in the Middle
The emerging battle lines over the executive order bring a diverse set of stakeholders into the fray.
– State governments: California, New York, and Colorado—already at the forefront of AI regulation—face direct challenges. Some may revise statutes to preserve funding; others may dig in and prepare for constitutional litigation.
– AI companies and technology developers: Large platforms and enterprise AI providers stand to gain from reduced compliance variability, fewer overlapping audits and reporting obligations, and a friendlier federal posture. Many have quietly or openly backed calls for national preemption to avoid state-by-state rulemaking.
– Consumer and civil rights advocates: Groups focused on algorithmic discrimination, worker surveillance, and data privacy warn that the order will erode hard-won protections and stifle innovative state efforts to address harms that federal law has yet to fully recognize.
– Congress: Lawmakers across both parties now face pressure to either codify a national framework or push back against perceived overreach. While industry may support preemption, bipartisan concern about AI harms could complicate efforts to pass a strictly deregulatory statute.
– Federal agencies: DOJ, Commerce, FTC, and FCC are thrust into politically charged roles, balancing their statutory missions with the administration’s directive to prioritize innovation and competitiveness over aggressive oversight.
Each of these actors will shape how the order plays out in practice—and how quickly the issues reach the federal courts.
—
From Executive Order to Supreme Court
The executive order is already reshaping expectations about the future of AI regulation in the United States. But its most significant effects will likely be decided not in agency conference rooms, but in federal courtrooms.
As the AI Litigation Task Force files suits and states respond with constitutional defenses, judges will be forced to grapple with novel questions:
– How far does the Dormant Commerce Clause restrict states from regulating AI services that cross borders digitally?
– When does federal administrative guidance or policy—such as an FTC policy statement—provide a sufficient basis to preempt state law?
– Can the federal government condition broadband infrastructure funding on state AI policy choices without crossing the line into coercion under modern spending clause doctrine?
– To what extent are state AI transparency and fairness mandates constrained by First Amendment protections for “truthful outputs” of AI systems?
The answers to these questions will define the constitutional boundaries of AI governance for years to come. Whatever one’s view of the policy merits, the executive order has ensured that AI will be at the center of a new chapter in American federalism—one in which the traditional roles of state and federal power may be redrawn around lines of code.