Sarah Abrams

If you have been following the news out of Washington surrounding the One Big Beautiful Bill, you know that one of the features of the Bill, passed by the House of Representatives, was a provision barring states from enacting laws or regulations relating to AI for ten years. It looks like this provision will not be part of the Senate version of the bill, but the provision still raises important questions about AI regulation. In the following guest post, Sarah Abrams, Head of Claims at Baleen Specialty, a division of Bowhead Specialty, takes a look at the larger context of AI regulation. I would like to thank Sarah for allowing me to publish her article as a guest post on this site. I welcome guest post submissions from responsible authors on topics of interest to this site’s readers. Please contact me directly if you would like to submit a guest post. Here is Sarah’s article.

************************

Last night, the Senate struck the proposed AI regulation ban from the One Big Beautiful Bill (OBBB), a tax and budget reconciliation bill. The OBBB had passed the U.S. House of Representatives on May 22, 2025, with the AI regulation moratorium intact, and the provision garnered a lot of attention. In particular, the provision, located on pages 278–279 of the 1,000-page bill, prohibited states from enacting or enforcing laws or regulations relating to AI for a decade. Despite the United States Senate Parliamentarian allowing the proposed ban on AI legislation to survive procedural scrutiny, Senate lawmakers voted 99-1 to strike the ban from the bill. Interestingly, soon-to-be-retired North Carolina Republican Senator Thom Tillis was the only one who voted to retain the ban.

Readers of the D&O Diary will recall that while President Trump declared in his January 23, 2025, Executive Order that he would remove “barriers to American AI innovation,” AI-washing has remained a focus of the Atkins SEC. If the OBBB passes the Senate and the House without the AI regulation ban, the absence of federal preemption may have an impact on private litigation and on enforcement actions by the SEC or other agencies for AI-washing. The House still may adopt some form of legislation impacting state AI regulation before the President can sign the OBBB. The following discusses the proposed OBBB AI regulation ban (the AI regulation ban) in more detail, existing AI regulation, and the potential impact on companies and D&O insurers.

First, what did the House Bill’s AI regulation ban state? Nestled under TITLE IV–ENERGY AND COMMERCE, Subtitle C—Communications, Part 2–Artificial Intelligence and Information Technology Modernization of the OBBB:

This section prohibits states and localities from limiting, restricting, or otherwise regulating artificial intelligence (AI) models, AI systems, or automated decision systems entered into interstate commerce for 10 years.

There are exceptions to the above. In particular, states could still pass laws involving artificial intelligence if the laws were meant to help AI adoption, did not regulate AI more strictly than other technology, or merely charged reasonable fees or required bonds related to AI systems; states could also continue to pass AI-related criminal laws (e.g., laws addressing fraud or misuse of AI). The regulation ban defined AI in broad terms. The proposed federal preemption of AI regulation may have created confusion for executives, and perhaps shareholders, about what initiatives to undertake in order to comply with anticipated international and state regulations.

The AI moratorium provision survived the Senate’s Byrd Rule, which restricts the inclusion of non-budgetary provisions in reconciliation bills. This ruling by the Senate Parliamentarian shielded the provision from a filibuster hurdle and allowed it to advance through the budget reconciliation process with a simple majority vote. Even with the procedural path forward clear, nearly every Senator signaled that the moratorium, as proposed in the House Bill, was not going to survive.

It is important for D&O carriers to understand that there may still be attempts to revive the AI moratorium, and to consider the potential for a significant shift in corporate exposure. For example, Senate Commerce Committee chair Ted Cruz proposed cutting the AI regulation ban to five years and allowing states to regulate issues such as protecting artists’ voices or child online safety, so long as those laws did not impose an “undue or disproportionate burden” on AI.

Immediately after the House Bill passed, Representative Marjorie Taylor Greene posted on X: “I am adamantly OPPOSED to [the state AI regulation ban] and it is a violation of state rights and I would have voted NO if I had known this was in there.” In addition to Representative Greene’s statement, on June 3, 2025, 260 state lawmakers from all 50 states (half Republicans, almost half Democrats, and one independent) sent a letter to Congress strongly opposing the OBBB AI regulation ban. The letter states in part:

“As state lawmakers and policymakers, we regularly hear from constituents about the rise of online harms and the impacts of AI on our communities. In an increasingly fraught digital environment, young people are facing new threats online, seniors are targeted by the emergence of AI-generated scams, and workers and creators face new challenges in an AI-integrated economy. Over the next decade, AI will raise some of the most important public policy questions of our time, and it is critical that state policymakers maintain the ability to respond.”

Nearly every state has introduced legislation to address AI, particularly as it affects constituents, and international governing bodies have done the same. For example, the EU AI Act went into effect on August 1, 2024, with the intent of promoting responsible AI development and deployment by regulating high-risk AI systems. As a result, many companies may already have spent time and money to ensure compliance with the most restrictive of the proposed regulations. Given the Senate’s recent vote, that money may have been well spent.

Colorado was the first U.S. state to enact comprehensive AI legislation with the Colorado Artificial Intelligence Act (CAIA). The CAIA is set to go into effect on February 1, 2026, and, similarly to the EU AI Act, is intended to regulate the development and deployment of high-risk AI systems that make “consequential decisions” (those relating to education, employment, financial services, healthcare, or legal services). The CAIA also requires that all AI systems that interact with consumers make those consumers aware that they are interacting with an AI system.

The proposed and enacted state regulations are important for D&O underwriters to consider because the internet does not recognize borders, and many companies may have already taken the approach of complying with the most restrictive proposed regulation. Given the CAIA’s breadth of impact, many businesses operating in the United States would fall under its regulatory scope. However, if the OBBB AI regulation ban is revived or modified, would the CAIA be unenforceable as written? Companies implementing a Responsible AI Framework to comply with the CAIA or the EU AI Act may then face scrutiny from shareholders.

Perhaps it will be worse for companies if there ends up being a patchwork of state regulations governing AI in the US. Disclosing to consumers and regulators that AI is interacting with a consumer may impair a company’s revenue-generating ability; thus, variance among states’ disclosure requirements may cause leadership to pick and choose where and when to file disclosures. And how can there be AI-washing when filings may not make clear that AI is impacting a company’s financial health?

In addition, if there ends up being federal AI regulation that prevents or impairs state-law oversight, companies may save on audit costs and D&O carriers may save on the cost of defending causes of action stemming from state-mandated disclosures. For example, allegations of fraud may be harder to bring without underlying information about a company’s development or deployment of AI being available to the public. Similarly, if states begin passing their own AI legislation, corporate leadership will need to create a compliance tracking mechanism or risk potential regulatory action.

Perhaps the Senate was heeding the warning of Anthropic CEO Dario Amodei about the original House Bill’s attempt to limit AI regulation. In his June 5, 2025, guest essay, Amodei stated that “without a clear plan for a federal response, a moratorium [of 10 years] would give us the worst of both worlds – no ability for states to act and no national policy as a backstop.” If the CEO of a company that develops AI is asking for guardrails, they may be needed. If the OBBB passes with no moratorium, states may begin an AI regulation-o-rama that may or may not create more risk for corporate leadership and D&O underwriters.

The views expressed in this article are exclusively those of the author, and all of the content in this article has been created solely in the author’s individual capacity. This article is not affiliated with her company, colleagues, or clients. The information contained in this article is provided for informational purposes only, and should not be construed as legal advice on any subject matter.