The U.S. Needs More Legal Patchworks
The legal landscape for AI should be diverse, and if Big Tech can't comply with state laws, then it should re-evaluate its products.
President Trump says he’s about to rescue American AI from a “patchwork” of state laws. In a repudiation of the union’s basic premise, he wants a “one rule” executive order so companies don’t have to get “50 approvals every time they want to do something.” Meanwhile, firms like the big-tech investor Andreessen Horowitz and OpenAI are aggressively lobbying against state laws. The president cannot pre-empt state laws via executive order. Trump’s post on Truth Social about this would-be order comes after Congress balked at freezing state AI laws; the White House is now reaching for an order that would steer the Justice Department against them. This is pitched as temporary triage until Congress writes a national framework. In reality, it’s an attempt to decide who governs a general-purpose technology that will seep into every area of life. And it points in the wrong direction. AI is exactly the kind of domain where we should lean into legal patchworks.
Start with uncertainty. We don’t have a settled catalogue of “AI harms.” We have familiar problems like fraud, discrimination, error, and opacity showing up differently in criminal justice, health care, hiring, housing, and finance. A single federal framework that pretends to know the right balance for all of that might be more efficient in the abstract, but it introduces risks that outweigh the benefits. If a few large states adopt different limits on how insurers or employers can use AI, we’ll learn who overreached, who under-reached, and which rules actually changed outcomes. A bad state statute can be repealed; a sprawling federal code that took years to pass, and would take years more before anyone admits it is broken, is much harder to unwind.
AI also isn’t one product. It’s a layer that will sit inside systems we already run mostly through states and localities. Police will buy AI-assisted tools, school districts will adopt “personalized” platforms, state agencies will use models to flag fraud or sort applications. Those choices involve value judgments about due process, acceptable error, and tolerance for automation in sensitive domains. A dense, tech-heavy state and a rural, low-trust state are not going to want the same knobs turned the same way. A national one-size framework doesn’t resolve that conflict; it just centralizes it.
Patchworks are also a hedge against capture. When Washington is the only game in town, the rational move for large firms is to pour lobbying money into one set of agencies and a handful of committee chairs. If they win there, they win everywhere. When states retain real authority, they become additional veto points: attorneys general who can bring their own suits, state courts with their own precedents, sector regulators and procurement officers who can say no. That doesn’t guarantee smart regulation, but it makes quietly locking in bad regulation more expensive. The “50 approvals” line that bothers Trump is the whole point. What industry wants is not just clarity, but a single chokepoint it can learn to manage and, if it’s lucky, to own.
Centralizers sometimes concede this logic in the short run, then assume that once we “know more” about AI, the mature system belongs in Washington. That gets the structure wrong. The places where AI will matter most over time—health care, education, criminal justice, land use, benefits administration, licensing—are structurally state-heavy and will stay that way. If a state judiciary is going to let judges consult a risk-assessment tool, it makes more sense for that same state to set guardrails than for a federal agency to do it for everyone. The right long-term picture is a thin but firm national floor: civil-rights protections and basic due process in high-stakes automated decisions; national-security and critical-infrastructure guardrails; and competition rules that states can’t undercut. The federal government should stop states from going below that floor, not from trying to do better. Above it, states should own the bulk of deployment rules and remedies. That’s not “let the states play until D.C. is ready.” It’s a permanent division of labor.
There’s also the question of power. We’re already drifting toward an AI stack where a handful of firms control the leading models, the cloud infrastructure, and the main distribution channels. A unified approval regime layered on top of that is an invitation to consolidate. Secure one friendly settlement in Washington and you effectively have a hall pass across the country. State-based patchworks don’t magically end monopoly, but they change the geometry. States have their own antitrust and consumer-protection statutes. They can bring cases when firms tie access to models to their own cloud or lock in agencies with exclusivity clauses. They can use tools like procurement policies, licensing, privacy and unfair-practice laws to push back on abusive deployments even if federal antitrust enforcers drag their feet.
Patchworks also compound the cost of capture over time. Lobbying one national regulator is cheaper than lobbying three dozen statehouses, attorneys general, and utility commissions. No one should romanticize state politics; plenty of states will under-regulate or turn AI into culture-war theater. But a map of overlapping authorities is harder to quietly rig than a single vertical chain running from a few executive-branch lawyers to a few committee chairs.
Finally, patchworks keep more of the AI fight where people actually live. The harms that will enrage voters will hit specific counties and neighborhoods. It is easier to show up at a statehouse, pressure a governor, or deny an attorney general re-election than it is to move a federal commission whose members mostly answer to the White House and congressional leadership. Trump’s order is aimed at sweeping those centers of power aside, another example of his broader penchant for authoritarian leadership. The choice isn’t between chaos and order. It’s between a tidy national framework that’s comfortable for a handful of firms, and a rougher landscape of state-led rules that are messier, more varied, and more contestable. If some companies really can’t handle different rules in ten or fifteen states, that’s not an argument against patchworks; it’s a prompt to go back to the drawing board. If AI is going to become infrastructure embedded in American life, the burden is on firms to build it so it can survive contact with more than one legal and moral regime.
We are destined to make serious mistakes with AI. We are better off making smaller mistakes, containing them, and letting other states learn from them than we are concentrating all of the risk at a single point of failure.






