
As readers of this blog know, in recent months there have been a number of AI-related corporate and securities suits filed against companies and their executives (as discussed, for example, here). In general, these suits have mostly involved “AI-washing” allegations – that is, allegations that the defendant company misrepresented its AI-related prospects or capabilities. More recently, however, the cases increasingly have involved allegations not that the defendant company overstated its AI-related opportunities, but rather that it understated its AI-related risks.
Last week, in the latest example of this type of suit, a plaintiff shareholder filed a derivative suit against executives of the digital ad tracking firm DoubleVerify, alleging that the defendants had caused the company to fail to disclose that AI-related developments were undercutting the company’s revenues. A copy of the derivative suit complaint can be found here.
Background
DoubleVerify provides statistical services to Internet advertisers. The company measures the performance of digital ads placed on websites. The company’s data allows advertisers to verify that their ads are being viewed. Among other things, the company also offers advertisers fraud detection.
The company has also expanded into advertising optimization services, a process by which the company uses AI to assist in obtaining the desired ad placement for global brands, a service it calls “Activation Services.” Activation Services are priced at a premium and allow the company to realize substantially higher profit margins than its basic measurement services.
Advertisers have discovered that a significant and growing number of ad impressions are being displayed to robotic agents (“bots”) as opposed to human consumers. This development enabled bad actors to use invalid bot traffic schemes to create false data patterns. The derivative lawsuit complaint alleges that DoubleVerify’s technology could not adequately discern real human traffic from bot traffic, making the company’s Activation Services less useful to advertisers.
The invalid bot traffic schemes caused many advertisers to move their ads from open exchanges to closed platforms, such as Meta Platforms, Google, TikTok, and Amazon. On these closed platforms, access to data is heavily restricted, making it expensive for DoubleVerify to integrate its AI-powered Activation Services. These developments undercut DoubleVerify’s profits and margins.
On March 28, 2025, market research firm Adalytics Research released a report claiming that DoubleVerify’s web advertisement verification and fraud protection services were ineffective and that the company’s customers were regularly billed for ad impressions served to bots. Media reports also alleged that the company regularly failed to detect non-human traffic, contradicting the company’s claims that it helps brands avoid serving ads to non-human bot accounts.
The Derivative Complaint
On December 9, 2025, a plaintiff shareholder filed a derivative lawsuit in the Southern District of New York against certain of the company’s directors and officers. The complaint alleges that the defendants should be liable for a series of alleged misstatements and omissions during the period beginning in November 2023.
The complaint alleges that during the relevant period the company failed to disclose that: “(i) DoubleVerify’s customers were moving their ad spending from open exchange to closed platforms where the Company’s technological capabilities were limited and competed directly with native tools provided by those platforms including Meta Platforms and Amazon; (ii) DoubleVerify’s ability to monetize its Activation Services was limited due to the significant expense on the development of technology for closed platforms; (iii) DoubleVerify’s Activation Services in connection with certain closed platforms would take years to monetize; (iv) DoubleVerify’s competitors were in a better position to incorporate AI into their offerings on closed platforms, which hindered DoubleVerify’s ability to compete and impacted their profits; (v) DoubleVerify overbilled its customers for ad impressions served to bots operating out of known data center server farms; (vi) DoubleVerify’s risk disclosures were materially false and misleading because they characterized existing adverse facts as possibilities, when those facts had already begun to impact the Company; and (vii) as a result, the Defendants’ positive statements about the Company’s business, operations and prospects were materially false and misleading and/or lacked a reasonable basis.”
The plaintiff alleges that the defendants violated Sections 14(a), 10(b), and 20(a) of the Securities Exchange Act of 1934 and Rule 10b-5 thereunder; breached their fiduciary duties; and also should be liable for unjust enrichment; abuse of control; gross mismanagement; and insider selling and misappropriation of information.
The same or similar allegations were previously raised in a separate securities class action lawsuit, as noted here.
Discussion
There are a host of allegations in this lawsuit, many but not all of which have to do with AI or with the consequences of AI deployment. Some of the AI-related allegations arguably reflect “AI washing” type allegations, such as the allegations concerning the company’s alleged disclosures and omissions about the capabilities of its AI products as compared to those of its competitors. But the bulk of the AI-related allegations have to do with the impacts on the company, its product offerings, and its ability to compete, due to AI deployment by bad actors, on the one hand, and by competitors, on the other hand.
This is a particularly interesting case to consider in thinking about AI-related risks and the related corporate disclosure challenges. What makes this case interesting is the extent to which the case relates not necessarily to the company’s own AI deployment, but rather to the consequences for the company from AI deployment by others, in this case, bad actors and the company’s competitors. To be sure, there are important allegations in this suit concerning the company’s disclosures about its own AI deployment. But the complaint largely relates to the company’s alleged omissions concerning the risks the company faced due to the bad actors’ and competitors’ AI deployment.
There undoubtedly will continue to be lawsuits filed based on AI-washing allegations. However, I believe that in the weeks and months ahead, we will see more and more lawsuits based on alleged misrepresentations or omissions concerning reporting companies’ AI-related risks. Importantly, the risks that may give rise to these types of allegations may not involve just the risks associated with the defendant company’s own AI deployment, but also the risks associated with AI deployment by customers, competitors, suppliers, vendors, regulators – and even bad actors.
These developments may have D&O underwriting implications. As underwriters seek to develop tools to help them evaluate companies’ AI-related risks, the underwriters may have to try to develop ways to assess companies’ AI-related disclosures, not just with respect to a company’s own AI-related strategies and efforts, but also with respect to the increasing deployment of AI in the general business and operating environment. A component of this underwriting assessment will necessarily have to entail the evaluation of the company’s AI risk environment, including the risks associated with AI deployment by competitors, customers, suppliers, regulators, and bad actors.
For further discussion of the D&O related risks associated with AI-powered bots, please see the recent guest post from frequent contributor Sarah Abrams, here.