
In the following guest post, Sarah Abrams, Head of Claims at Baleen Specialty, a division of Bowhead Specialty, takes a look at California’s recently enacted SB 53, state AI-related legislation concerning “large frontier developers,” and considers the legislation’s liability implications. I would like to thank Sarah for allowing me to publish her article as a guest post on this site. I welcome guest post submissions from responsible authors on topics of interest to this site’s readers. Please contact me directly if you would like to submit a guest post. Here is Sarah’s article.
***********************
California’s new law, SB 53, the “Transparency in Frontier Artificial Intelligence Act,” may create a new risk of liability for D&O underwriters of companies that develop or heavily rely on Artificial Intelligence (AI). Given the increasing amount companies are investing in AI tools, SB 53 and its regulatory framework may cast a wider net than the large “frontier AI” developers identified in the recently enacted bill.
Governor Newsom signed SB 53 into law on September 29, noting that the law builds on recommendations from California’s March 2025 working group report on the responsible implementation and use of AI. Notably, the Governor had previously vetoed SB 1047, a bill passed by the California legislature in September 2024, which also focused on large AI developers but included more stringent requirements aimed at protecting cybersecurity and safety. In part, SB 1047 would have required safety testing and “kill switches” for AI systems and a 72-hour timeframe to report a safety incident involving AI.
SB 53 does not require “kill switches” and instead provides a 15-day timeframe to report a critical safety incident involving AI to California’s Office of Emergency Services. However, if the incident poses an imminent risk of death or serious injury, reporting to public safety or law enforcement must happen within 24 hours. As D&O Diary readers may recall, recent litigation against OpenAI alleges that ChatGPT’s product design contributed to the death of a California teenager. Moving forward, an alleged AI-bot defect of that kind, one that causes bodily injury, would likely constitute a safety incident requiring more immediate reporting.
With SB 53 now in force, D&O insurers may want to consider whether “frontier AI” developers and companies integrating AI into day-to-day workplace functions face increased executive and board liability exposure. To assess the prospective impact, the discussion below reviews the SB 53 framework in more detail, including its scope and the potential D&O risks stemming from it.
SB 53
First, as indicated above, SB 53 covers “large frontier developers” of “frontier models.” A “frontier model” is defined as a foundation model: a large AI model trained on vast datasets (text, images, code, etc.) that can then be adapted (“fine-tuned”) for many downstream uses; GPT-4, Claude, and Gemini are examples. A “large frontier developer” is further defined in SB 53 as a frontier developer that, together with its affiliates, had more than $500 million in annual revenue in the prior calendar year.
SB 53 focuses on qualifying developers, requiring transparency in their work and reporting on internal controls, particularly with respect to safety and cybersecurity. The law requires developers to adopt, implement, and publish frontier AI frameworks that describe, in part, how the developer defines thresholds for assessing catastrophic risk, applies mitigations, reviews and updates the framework annually, uses third-party evaluations, secures model weights through cybersecurity controls, and responds to “critical safety incidents.” Developers within SB 53’s scope may not make materially false or misleading statements about catastrophic risk or about compliance with their frontier AI framework.
In addition, employees of frontier AI developers (referred to as “covered employees”) may anonymously disclose concerns internally (or externally) if they reasonably believe that the developer’s activities present a catastrophic risk or that misleading statements have been made. In sum, SB 53 provides whistleblower protection. Finally, if an AI developer within the scope of SB 53 fails to publish compliant documents or fails to comply with its internal frontier AI framework, a civil penalty of up to $1,000,000 may be assessed and enforced by the Attorney General.
With the SB 53 framework in mind, there may be ripple effects for companies outside the law’s direct scope, via contractual flow-down obligations, procurement diligence, and vendor risk assessments.
Discussion
While narrower than the EU’s AI Act, SB 53 nevertheless represents one of the first legally binding attempts in the U.S. to translate AI safety principles into enforceable obligations. As readers of the D&O Diary may recall, the effective date of Colorado’s AI Act has been delayed until June 2026. The target of SB 53 may be large AI developers; however, the law may have unintended consequences. In particular, SB 53 may also affect companies with master service agreements with large frontier AI developers for AI integration and services. First, I will review the direct impact.
For D&O insurers of large frontier AI developers, the public-facing disclosure obligations may create a new source of potential risk. Transparency reports, safety frameworks, and compliance certifications may be used against a company if an incident, particularly one involving cybersecurity or individual safety, reveals gaps between what was disclosed and what its leadership and board knew. As D&O Diary readers are aware from the regulatory and follow-on litigation stemming from gaps in cybersecurity disclosures (e.g., SolarWinds), mandated disclosure statements regarding AI security may become the foundation of claims alleging misrepresentation, breach of fiduciary duty, or failure of oversight.
In addition, SB 53’s whistleblower protection may also heighten the governance challenge for large AI developers. Employees who believe the company is mismanaging AI risks or making misleading statements now have statutory avenues to raise concerns internally or externally, with retaliation protections built in. While this whistleblower protection covers employees of the AI developers, a company that has a services agreement to use AI technology for its business may also become aware of misrepresentations or other risks contemplated by SB 53.
Thus, a company using a large frontier developer’s AI tool may want to review the developer’s SB 53 risk disclosures ahead of implementation. If it does not, company stakeholders may question the decision to enter into a particular vendor agreement and integrate the developer’s AI tool. For corporate boards, this may mean more pressure to document oversight, respond to red flags, and ensure independent review of AI risk. For insurers, this may mean derivative suits alleging Caremark-style oversight failures, fueled by a paper trail of internal complaints and statutory whistleblower protections.
Finally, SB 53 is likely to influence contracting norms, investor expectations, and the standard of care in AI governance. A mid-sized company integrating AI to improve workplace efficiencies may not face direct reporting obligations; however, it could still be held to account if it represents that its AI tools are “safe” or “compliant” while failing to implement meaningful safeguards. As these standards migrate across industries, D&O insurers may want to pay sharper attention to AI usage, governance structures, and disclosure practices.
The views expressed in this article are exclusively those of the author, and all of the content in this article has been created solely in the author’s individual capacity. This site is not affiliated with the author’s company, colleagues, or clients. The information contained in this site is provided for informational purposes only, and should not be construed as legal advice on any subject matter.