Sarah Abrams

President Trump has made it clear that advancing efforts in the U.S. to develop artificial intelligence (AI) is a priority of his administration. But a recent criminal enforcement action and civil litigation raise the question of whether the increasing prevalence of AI may pose significant new litigation risks that could have important implications for D&O insurance underwriters. In the following guest post, Sarah Abrams, Head of Claims at Baleen Specialty, a division of Bowhead Specialty, takes a closer look at the recent enforcement and litigation developments and considers the potential underwriting lessons. I would like to thank Sarah for allowing me to publish her article as a guest post on this site. I welcome guest post submissions from responsible authors on topics of interest to this site’s readers. Please contact me directly if you would like to submit a guest post. Here is Sarah’s article.

********************

Increasing integration of Artificial Intelligence (AI), particularly bots, into corporate work, together with the President’s support for AI development (and investment), may change the profile of D&O underwriting exposure. Notably, on August 22, 2025, the Trump Administration announced that the United States government would make an $8.9 billion investment in a public company focused on AI processors and chip production. Meanwhile, insurers have begun offering products that either cover specific AI-related risks or exclude anything related to AI entirely.

As the U.S. government positions itself as both a stakeholder in AI development and a driver of evolving AI litigation and regulation, how will the associated D&O risks shift? For D&O underwriters, it may be challenging to determine whether exposure rises alongside growing corporate AI use or whether the federal government’s focus on winning the “AI race” may reduce future litigation. Perhaps consideration of recent criminal and civil case filings, as well as the status of AI legislation and insurance coverage initiatives, will provide some insight.

The following discussion addresses potential follow-on D&O risk from both the recent headline-grabbing criminal action brought by the DOJ against the former CEO and founder of IRL and the civil wrongful death lawsuit against OpenAI. In addition, I will briefly review the status of the Colorado Artificial Intelligence Act and certain insurance market positions that either affirmatively offer protection against, or exclude coverage for, evolving AI risk.

However, as the criminal and civil cases filed within the past month demonstrate, the increasing capacity of bots to perpetrate fraud on investors, and even to influence human behavior, may be beyond what the public, legislators, and even insurance carriers thought possible just a few months ago.

Recent Filings

On July 31, 2025, a federal grand jury in the Northern District of California indicted Abraham Shafi, founder and former CEO of Get Together Inc., parent of the Gen Z-targeted social media app IRL, on multiple felony counts, including wire fraud, securities fraud, and obstruction of justice. According to the DOJ, Shafi defrauded investors of approximately $170 million during IRL’s 2021 Series C round by misrepresenting the company’s ad spending and user growth. Federal prosecutors allege that IRL’s user base was largely fictitious, with 95% of claimed users being “bots.” The scale of the alleged fraud perpetrated through bots may be alarming to D&O underwriters of not just social media companies, but also any company that discloses its use of such software for customer engagement and support.

It is important to note that, while traditional bots, like the ones IRL allegedly used to defraud investors, are software automated to perform straightforward tasks, AI-bots can behave in a more sophisticated manner: they can process language, learn from data, and adapt. Had AI-bots been used by IRL, the alleged fraud might have taken longer to discover, especially if the AI-bots were able to determine whether the company was under investigation and adapt their behavior to resemble “actual” IRL users.

In addition, while D&O underwriters may be familiar with the criminal and regulatory charges brought against IRL, if AI-bots had been deployed, there might also have been FTC scrutiny, as the agency continues to update its guidance regarding the use of AI, under which bots cannot misrepresent themselves as human without disclosure. The allegations of the IRL indictment provide a cautionary tale of alleged executive and founder fraud, which often entails significant expense incurred by insurers to respond to inquiries and subpoenas, along with potential parallel securities and derivative claims. The case also highlights the increasing risk of reliance on bots, particularly AI-bots.

I next discuss a devastating example of how more sophisticated AI-bots may influence human behavior, an underlying exposure that D&O carriers may not yet have fully contemplated.

On August 26, 2025, Matthew and Maria Raine, the parents of a 16-year-old named Adam, filed a lawsuit against OpenAI, its CEO Sam Altman, and the company’s employees and investors, alleging negligence and deceptive business practices under California’s Unfair Competition Law. The pled facts are heartbreaking. Adam’s parents allege, in part, that he began using ChatGPT to help with schoolwork, but that his relationship with the AI evolved into that of a confidant. Adam purportedly revealed to ChatGPT that he feared he had a mental illness, as well as details of multiple suicide attempts and drug use. OpenAI’s ChatGPT-4o chatbot allegedly responded sympathetically to Adam’s self-harm disclosures and eventually counseled him on how to take his own life. The Raines’ lawsuit alleges that Adam’s death was a predictable result of ChatGPT’s defective product design and failure to provide safeguards.

Specifically, the Raines’ lawsuit alleges that OpenAI prioritized AI market dominance over user safety. Adults who work with and engage corporate AI-bots may also suffer from mental illness and addiction. If an AI-bot can lead corporate workers or consumers to self-harm, should companies and D&O insurers then underwrite for follow-on risk from potential AI-bot-related products and employment liability? As D&O Diary readers may recall, product and employment liability exposures can lead to executive liability risk when plaintiffs allege directors and officers failed in their oversight or disclosure duties.

In addition, as of today, there is no U.S. legislation in force that addresses corporate disclosure of AI-bot-related risk. Even Colorado, the first state to pass an AI-related consumer protection bill, appears likely to delay the effective date of its enacted law.

Status of Colorado’s AI-Regulation

On August 27, 2025, Colorado lawmakers proposed delaying the state’s landmark AI law (originally set to take effect on February 1, 2026) to June 30, 2026. Colorado’s AI bill, the first of its kind in the U.S., provides a framework requiring companies to assess, disclose, and mitigate risks of algorithmic discrimination in high-impact AI systems. Apparently, technology and consumer rights lobbyists protested various provisions. Thus, while many companies may have begun preparing for partial compliance, the delayed enforcement date may be a precursor to what happens with proposed federal AI regulations, especially now that the federal government is an investor in AI hardware and may financially benefit from greater corporate integration and use.

However, even if AI-bot-related regulatory exposure in the U.S. stalls along with legislative initiatives, litigation stemming from AI use and promotion continues to be filed, and causes of action arising from AI-bot-related risk are evolving.

Conclusion

While companies and D&O underwriters may face regulatory, criminal, and civil litigation exposure tied to the use and deployment of AI, insurers have responded with AI-related insurance products that either cover specific AI risks or broadly exclude all claims “arising out of” AI. However, with AI-bot development and the associated exposure rapidly changing, existing underwriting approaches may quickly become stale.

Perhaps D&O underwriting should regularly include an ongoing assessment of an insured’s AI-bot integration in the context of the evolving litigation and regulatory landscape. Carriers can then decide whether current market offerings match the accelerating use of, and risk tied to, corporate AI-bot integration and deployment.

The views expressed in this article are exclusively those of the author, and all of the content in this article has been created solely in the author’s individual capacity. This article is not affiliated with the author’s company, colleagues, or clients. The information contained in this article is provided for informational purposes only, and should not be construed as legal advice on any subject matter.