One AI Rulebook, Fifty States, and the Old Federal Power Grab in New Clothing
United States – March 21, 2026 – The White House wants to preempt state AI laws. Fine, but show the guardrails in daylight before you grab the keys.
I have spent enough time in public buildings to recognize the smell of a power transfer before you see it. Courthouse air, copier toner, and a fresh stack of forms replacing the old stack. No marching band. Just a new checkbox that sounds like efficiency and behaves like control.
That is what this week’s White House AI push feels like. Not the part where Washington says AI matters. Of course it matters. The part where Washington says: let us be the one hand on the wheel, and while we are at it, loosen a few seatbelts for speed.
What the White House released (and what it is really doing)
On March 20, the White House released legislative recommendations for Congress on artificial intelligence. The package pitches a national approach: protecting kids, addressing community impacts like power costs, navigating copyright fights, resisting government censorship, speeding innovation, building an AI-ready workforce, and, crucially, preempting state AI laws it deems overly burdensome.
It is not a bill. Congress would still have to pass something for it to become binding law. But the administration is not waiting politely in the lobby.
Back on December 11, 2025, President Trump signed an executive order titled Ensuring a National Policy Framework for Artificial Intelligence. It directs DOJ to stand up an AI Litigation Task Force to challenge state AI laws the administration says conflict with federal policy. It also directs Commerce to evaluate state AI laws and identify those it considers onerous. And it lays out a path to restrict certain BEAD broadband funds and to consider conditioning other discretionary federal grants based on whether states enact or enforce AI laws the White House dislikes.
So the March 20 framework is the friendly face. The December 11 order is the crowbar in the trunk.
The Orwell check: “Minimally burdensome” for whom?
Watch the language. “Minimally burdensome” sounds like a diet plan for bureaucracy. It can also be a diet plan for accountability.
A national standard can be sensible. But “minimally burdensome” is a choice about which burdens count. Burden on companies building models? Or on the person denied a job interview by an algorithm? Or on a kid being steered toward self-harm? Or on a town whose electric bills jump when a data center plugs into the grid?
The framework nods to protecting children and empowering parents, including age-assurance requirements and limits on data collection for training. It also urges avoiding ambiguous standards and open-ended liability. Translation: protect kids, yes, but do not create too many avenues for lawsuits. That tradeoff deserves daylight.
The liberty ledger: Who gains freedom, who loses it?
- Industry gets fewer referees: one federal framework that preempts state efforts means fewer places regulators can poke around.
- States lose a big slice of their “laboratory” role, including messy, practical safeguards like disclosure rules and anti-discrimination-style provisions. The AP noted that states such as Texas and Colorado have pursued different approaches.
- Citizens risk losing the closest levers of accountability. It is easier to pack a state hearing room than a Capitol Hill committee room.
The Paine test and the tradeoff: Unity is not a substitute for rights
A federal AI framework could expand liberty if it delivers enforceable rights: privacy limits, transparency, the ability to contest automated decisions, meaningful discrimination protections, and clear limits on government use of AI for surveillance or speech control.
But if the main mission is to swat states and speed deployment, power concentrates: in Washington, and in the boardrooms that benefit when the nearest regulator is a thousand miles away.
What would make preemption legitimate
- A real federal floor: baseline privacy, transparency for high-impact uses, data-retention rules, and a right to redress.
- Limits on grant leverage: tight statutory boundaries and public reporting for any funding conditions.
- Audits and whistleblower protections, especially in hiring, housing, credit, education, health care, and law enforcement.
- Sunsets: preemption should expire unless Congress renews it after evidence and hearings.
If this is really about trust, trust is earned with enforceable rights, not demanded with preemption. If Washington wants the keys to every state’s AI rules, what exact protections is it promising to install before it turns the engine over?