Sarah Abrams

The incidence of AI-related securities litigation is by this point well-established. But as the laws, regulations, and legal environment relating to AI have continued to evolve, so too has the AI-related litigation risk. In the following guest post, Sarah Abrams, Head of Claims, Baleen Specialty, a division of Bowhead Specialty, examines the recent settlement of the securities class action litigation involving Snap and considers its potential implications for future AI-related litigation risk. I would like to thank Sarah for allowing me to publish her article as a guest post on this site. I welcome guest post submissions from responsible authors on topics of interest to this site’s readers. Please contact me directly if you would like to submit a guest post. Here is Sarah’s article.

***********************

The anticipated $65 million settlement between Snap Inc. (SNAP) and its investors to resolve a putative securities class action (the SNAP SCA) highlights a potential exposure for D&O underwriters of companies integrating artificial intelligence (AI) into core infrastructure. The SNAP SCA, filed in 2021, alleged, in part, misrepresentations surrounding the impact of major privacy-related platform changes on SNAP’s revenue. As AI-specific disclosure and transparency laws, which may have an immediate impact on revenue, continue to be passed, should D&O underwriters anticipate similar litigation against companies and their directors and officers?

As D&O Diary readers may recall, disclosure-related AI D&O risk has already begun to emerge. A securities class action lawsuit filed in June against Reddit (the Reddit SCA) alleged that the company downplayed the potential impact of artificial intelligence developments on its site traffic and advertising revenue. With California’s SB 53 now enacted and other state AI governance laws set to take effect in the near term, companies deploying or relying on consumer-facing AI may need to evaluate whether more robust disclosures are necessary regarding the technology’s potential impact on revenue and operations.

The following will discuss the SNAP SCA and how emerging AI disclosure and transparency requirements may create increased exposure for companies, executives, and D&O underwriters.

SNAP SCA

The SNAP SCA, filed by plaintiff shareholders against SNAP and its top executives, CEO Evan Spiegel and Chief Business Officer Jeremi Gorman, alleged that the defendants violated federal securities laws by making false and misleading statements about SNAP’s ability to withstand privacy-related changes introduced by a major mobile operating system provider. The case was brought under Sections 10(b) and 20(a) of the Securities Exchange Act of 1934 and Rule 10b-5, on behalf of investors who purchased SNAP securities between February 5 and October 21, 2021.

According to the complaint, SNAP—a social media company that generated the majority of its revenue through advertising on its Snapchat app—relied heavily on the operating system’s advertising identifier to target and measure ads. When the platform announced in 2020 that it would overhaul its data collection framework and require users to “opt in” to tracking, SNAP’s shareholders alleged that the company’s advertising model was at risk. To calm investor concern, SNAP publicly stated that it was working closely with the platform provider, that it was “well prepared” for the upcoming changes, and that advertisers representing a majority of its direct-response ad revenue had “successfully implemented” the new privacy framework.

Purportedly, SNAP’s assurances led analysts to conclude that the company was insulated from these privacy changes, and its stock price climbed nearly 50% over the following months, reaching an all-time high of $83.11 in September 2021. However, according to the shareholder plaintiffs, on October 21, 2021, SNAP announced disappointing earnings, missing its revenue guidance for the first time in its history. SNAP attributed the shortfall to the privacy updates, explaining that the new ad-measurement framework was “unreliable” and that many advertisers were only beginning to test and adapt to it. SNAP’s stock allegedly fell 26% in a single day, erasing $27 billion in market value. The complaint further alleged that Spiegel engaged in insider trading, selling nearly $700 million in SNAP stock during the class period.

Discussion: AI Disclosure and Litigation Risk

SNAP and the plaintiff shareholders ultimately agreed to settle the case for $65 million before the plaintiff class was certified. Of note, the SNAP SCA settlement amount is slightly greater than the reported average securities class action settlement in the first half of 2025, which, according to NERA, was $56 million. To the extent that a comparable AI-related disclosure risk emerges, D&O underwriters may wish to view the SNAP settlement as an early indicator of potential exposure magnitude for AI-driven securities litigation.

With that potential benchmark in mind, the question becomes: what might this new generation of AI-disclosure claims look like? The recently filed Reddit SCA may offer a glimpse of what is to come.

The Reddit SCA plaintiff shareholders alleged that Reddit failed to warn investors that emerging artificial intelligence developments would reduce the visibility of its content and, consequently, diminish ad impressions and user engagement. Like the allegations in the SNAP SCA, Reddit’s alleged missteps center on external technological change colliding with optimistic corporate disclosure. Reddit’s early “AI-disclosure” suit mirrors SNAP’s privacy-era allegations: both involve revenue disruptions tied to major platform-level technology shifts outside the company’s control, and both focus on what management knew, when they knew it, and how candidly they disclosed it.

The Reddit SCA and SNAP SCA parallels highlight how evolving technology and regulation can expose companies to new forms of disclosure liability, now increasingly focused on artificial intelligence. For example, under frameworks like California’s SB 53 (the “Transparency in Frontier Artificial Intelligence Act”) and proposed federal AI accountability legislation, companies deploying or developing AI systems must now report “critical safety incidents,” disclose the provenance of training data, and document safeguards against bias or consumer harm.

As these requirements evolve, executives who publicly tout their companies’ AI systems as “safe,” “transparent,” or “aligned with regulatory best practices” without maintaining a robust compliance infrastructure may find themselves vulnerable to SNAP-like securities allegations, facing claims that they made material misstatements or omissions regarding the reliability and oversight of complex technological systems. At first blush, such allegations sound much like those pled in “AI-washing” securities cases, which, as D&O Diary readers may recall, have been brought in the wake of a company publicly overstating its AI capabilities.

Unlike AI-washing, however, a SNAP SCA-like scenario might involve a publicly traded company that embeds generative AI into its consumer products or financial operations, publicly assuring investors that its algorithms are compliant with forthcoming state or federal AI standards. If, following an investigation or safety incident disclosure, the company’s testing protocols were determined to be insufficient, or if its AI models produced discriminatory or unsafe outcomes, shareholders could argue that management’s statements misrepresented both technical readiness and regulatory resilience—a theory of liability similar to that articulated in the SNAP SCA.

In such a case, the class period would likely track the timeline between the company’s initial AI-related assurances and the subsequent revelation of non-compliance or system failure. Insiders who sold shares during that window could also face scrutiny under insider-trading theories if it is alleged that they knew of internal AI system deficiencies before those deficiencies were disclosed.

Thus, as state and federal governments move toward mandatory AI transparency and safety reporting, D&O underwriters may want to consider the next generation of litigation, and the significant exposure, that may result from AI governance misrepresentations like those alleged in the SNAP SCA.

The views expressed in this article are exclusively those of the author, and all of the content in this article has been created solely in the author’s individual capacity. This article is not affiliated with her company, colleagues, or clients. The information contained in this article is provided for informational purposes only and should not be construed as legal advice on any subject matter.