US AI Framework and the Politics of Deregulation
The Trump Administration’s Framework is built around a clear policy choice: federal pre-emption and lighter regulation are presented as conditions for faster innovation, stronger competitiveness, and national leadership in AI. The March 2026 document asks Congress to pre-empt “cumbersome” state AI laws, avoid creating a new AI regulator, and rely largely on sector-specific rules and industry-led standards instead. It also combines limited safeguards in areas such as child safety, fraud, copyright, and energy costs with a broader effort to reduce what the administration sees as compliance burdens on developers and deployers.

The administration’s approach has not resolved regulatory fragmentation. The White House argues that a “patchwork” of state AI laws could undermine US competitiveness and has explicitly called for federal pre-emption of “cumbersome” state rules. Yet the March framework remains non-binding, and the December 2025 executive order “Ensuring a National Policy Framework for Artificial Intelligence” did not itself invalidate any state law. Instead, it ordered a 90-day federal review of existing and proposed state AI laws and created a Justice Department task force to challenge those deemed “onerous and excessive”. With that deadline having passed in mid-March without a public Commerce Department report, firms face the prospect of future federal intervention without clarity on which state measures may be targeted first. Rather than simplifying compliance, the result is a more unsettled regulatory environment.

That uncertainty is especially significant because states continue to legislate. California’s 30 March executive order N-5-26 imposed new contracting safeguards on AI vendors, including expectations around bias, child safety, surveillance, and unlawful discrimination, thereby diverging from the administration’s more deregulatory posture. More broadly, states have advanced measures on deepfakes, child safety, transparency, discrimination, and incident reporting, with more than 100 AI-related state laws reportedly already enacted or newly adopted. What is emerging, therefore, is not merely a jurisdictional dispute between federal and state authorities but a broader contest over who should define acceptable risk in the US AI market. In effect, states are advancing the view that innovation without enforceable safeguards is politically difficult to sustain, while the administration has suggested that such safeguards may themselves impede innovation.

From a governance perspective, the framework raises three interlinked concerns. First, consumer protection remains partial: the document acknowledges public concerns but does not establish concrete national obligations on auditing, testing, disclosure, or liability. Second, legal uncertainty persists for firms operating across jurisdictions where state rules remain active but may later face federal challenge. Third, it sharpens the question of federalism in digital governance, namely whether states should retain authority to regulate technologies whose effects are already being felt in schools, labour markets, procurement systems, and public services.

For companies, this may reduce formal regulatory pressure in theory while increasing strategic uncertainty in practice. Large firms are better placed to absorb ambiguity, manage multi-state compliance, and shape future rulemaking. Smaller firms, public-sector deployers, and downstream users are less able to do so.
In that respect, pre-emption without a clear replacement framework may not yield a more efficient regulatory environment; it may instead deepen existing asymmetries in scale, legal capacity, and market power.