
Artificial Intelligence Update - August 2025

August 18, 2025
Business Litigation Reports

Navigating the U.S. AI Regulatory Landscape After Defeat of Federal Moratorium

In July 2025, the U.S. Senate voted 99 to 1 to remove a proposed federal moratorium on state and local AI regulation from the budget bill.  This provision, originally part of the “One Big Beautiful Bill Act,” would have blocked state and local governments from enforcing any AI-related laws for ten years.  Its removal leaves states as the primary regulators of AI technologies.

For businesses, this means closely monitoring and following state-level requirements.  Without a single federal standard, companies face varying rules across states and need to plan for long-term compliance with a wide range of laws.  Because there is no federal preemption, businesses cannot simply wait for a future federal law to bring uniformity before ensuring compliance.

Four Different State Approaches to AI Regulation

States have begun adopting a range of legislative models.  Many target narrow issues, such as deepfakes or government procurement.  But four states, Colorado, California, Texas, and Utah, have established broader frameworks that illustrate divergent regulatory approaches.

Colorado’s Risk-Based Framework:  Colorado’s AI Act, effective February 1, 2026, establishes the most comprehensive regime to date.  Modeled partly on the EU AI Act, it applies across industries and imposes obligations on developers and deployers of “high-risk” AI systems, meaning those that influence employment, housing, healthcare, financial services, or similar consequential opportunities.  The statute requires companies to exercise reasonable care to mitigate foreseeable risks of algorithmic discrimination.  Businesses must adopt written risk management policies, conduct annual impact assessments, and ensure transparency in system operations.  The law recognizes adherence to established standards, specifically the NIST AI Risk Management Framework (NIST AI RMF), as a potential affirmative defense.  This approach centers on risk management and internal accountability.

California’s Transparency-Oriented Approach:  California’s AI Transparency Act, effective January 1, 2026, targets risks from undisclosed AI-generated content, especially from large generative AI systems.  It applies to providers with over one million monthly users and focuses on consumer awareness.  The law requires providers to offer a free public AI detection tool and to embed both latent and optional visible disclosures in AI-generated outputs.  It also requires providers to ensure third-party users maintain these features.  While narrower than Colorado’s law, California’s statute emphasizes transparency and consumer protection, fitting into a broader agenda addressing deepfakes, algorithmic bias, and child safety.

Texas’s Targeted, Pro-Innovation Strategy:  The Texas Responsible AI Governance Act, also effective January 1, 2026, adopts a focused approach designed to avoid regulatory overreach.  Rather than regulating by risk level, the statute prohibits only certain uses of AI: those developed or deployed with the intent to cause unlawful discrimination, behavioral manipulation, or the creation of deepfakes for illegal purposes.  Unlike Colorado’s effects-based framework, Texas requires proof of intent, setting a higher bar for enforcement.  The law establishes an AI sandbox program for controlled experimentation.  Enforcement rests exclusively with the Attorney General; the law does not create a private right of action.  Texas’s model limits liability and encourages innovation through regulatory restraint.

Utah’s Minimalist Disclosure Regime:  Utah’s AI Policy Act, in effect since May 1, 2024, is the least intrusive of the four.  The law focuses on disclosure for generative AI, requiring businesses to inform consumers when they are interacting with AI—but only if the consumer asks.  For licensed professionals in fields like law, medicine, or accounting, proactive disclosure is required at the outset.  Utah also offers a regulatory sandbox under its AI Learning Laboratory Program, encouraging innovation by providing temporary relief from obligations during testing.  The statute imposes few substantive obligations and reflects a light-touch approach.

Regulatory Fragmentation and Compliance Challenges

The patchwork of state laws creates a complex compliance environment for companies that operate nationwide.  What is allowed in one state may be banned in another, raising legal risks and costs.  To address these challenges, companies should avoid piecemeal compliance efforts.  Instead, they should create a centralized AI governance program that meets or exceeds the strictest state standards.  A first step is to inventory all AI systems and assess whether they implicate sensitive areas such as employment, finance, or generative content.

High-impact systems should undergo formal impact assessments to evaluate risks related to bias, privacy, or accuracy.  Where issues are identified, companies should adopt appropriate mitigations, such as modifying models or introducing human oversight.  Transparency is critical, and companies should disclose AI use in consumer applications and provide explanations for AI-driven decisions when appropriate.

Adopting recognized frameworks like the NIST AI RMF can offer both practical benefits and potential legal protections.  Colorado and Texas, for example, specifically mention such frameworks as possible safe harbors.  With no federal standard, companies should assign staff or committees to track new state laws and update compliance programs as needed.

Although future federal AI regulation remains possible, businesses that already follow best practices in risk assessment and transparency will be better prepared for any future national standards.  Integrating AI oversight into broader corporate governance—including privacy and cybersecurity—will help companies manage this evolving regulatory environment effectively.