
AI-related news dominates the business pages these days. Many companies are adapting their business processes to incorporate AI-related operations, and a growing number are adjusting their business strategies to accommodate AI. While these changes present a host of opportunities, they also involve risks. A securities lawsuit recently filed against the integrated circuit (IC) design software company Synopsys shows how these kinds of AI-related risks can translate into securities litigation. In the complaint, the company is alleged to have understated the additional customization requirements that its customers' AI-adapted operations would entail. A copy of the October 31, 2025, complaint can be found here.
Background
Synopsys provides electronic design automation software products used to design and test integrated circuits. It operates in two segments, Design Automation and Design IP. The Design IP segment provides pre-designed components that semiconductor companies can use to build chips more quickly and cost-effectively. The Design IP segment has in the recent past been the company's fastest-growing part. The securities complaint alleges that during the class period, the company touted its growth prospects, particularly with respect to its Design IP segment.
On September 9, 2025, Synopsys released its fiscal third quarter 2025 financial results, disclosing that the company’s “IP business underperformed expectations.” Quarterly revenue and net income came in below the company’s guidance. Reported net income represented a 43% year-over-year decline. The Design IP segment revenue declined over 7% on a year-over-year basis. The company also lowered its full year guidance. According to the subsequently filed securities lawsuit complaint, the company’s share price declined nearly 36% on this news.
The Lawsuit
On October 31, 2025, a plaintiff shareholder filed a securities class action lawsuit in the Northern District of California against Synopsys and certain of its officers. The complaint purports to be filed on behalf of investors who purchased the company’s securities between December 4, 2024, and September 9, 2025.
The complaint alleges that during the class period, the defendants failed to disclose to investors: “(1) the extent to which the Company’s increased focus on artificial intelligence customers, which require additional customization, was deteriorating the economics of its Design IP business; (2) that, as a result, ‘certain road map and resource decisions’ were unlikely to ‘yield their intended results’; (3) that the foregoing had a material negative impact on financial results; and (4) that, as a result of the foregoing, Defendants’ positive statements about the Company’s business, operations, and prospects were materially misleading and/or lacked a reasonable basis.”
The plaintiff alleges that the defendants violated Sections 10(b) and 20(a) of the Securities Exchange Act of 1934 and Rule 10b-5 thereunder. The complaint seeks to recover damages on behalf of the class.
Discussion
There have of course been prior AI-related securities lawsuits filed. By and large, these lawsuits have involved so-called “AI-washing” type allegations – that is, allegations that the defendant company overstated its AI-related prospects or opportunities.
This lawsuit involves an arguably different kind of AI-related allegation.
Here, rather than alleging that Synopsys overstated its AI-related prospects or opportunities, the plaintiff alleges that the company understated the risks associated with servicing AI-related business. Synopsys is alleged to have omitted to disclose that its increased focus on AI-related customers could cause the economics of its Design IP business to deteriorate because of the increased customization that the AI-related customers require.
This lawsuit is not the first AI-related securities suit to involve allegations not that the company overstated its AI-related prospects but rather that the company understated its AI-related risks. For example, and as discussed in detail here, earlier this year the online community bulletin board company Reddit was hit with a securities suit after Google's adoption of AI-enhanced search results caused Reddit's user numbers and revenue to shrink. Reddit is alleged to have understated the risk to the company from Google's adoption of AI-enhanced search results.
While it is interesting to me that this new lawsuit involves allegations of the under-disclosure of AI-related risks (rather than the over-disclosure of AI-related opportunities), there is another aspect of the allegations that is particularly interesting to me. That is, it was not Synopsys's own adoption of AI that created the risks involved here; rather, it was its customers' AI adoption that created the supposedly undisclosed risks for Synopsys. Similarly, in the Reddit case, it was not Reddit's own adoption of AI that created the allegedly undisclosed risks, but rather it was Google's adoption of AI that created the risks for Reddit.
I emphasize this aspect of the allegations in this case and in the Reddit case because in a business environment in which so many companies are adopting AI-related processes or strategies, risks will be proliferating for every enterprise, even those that are not themselves expressly adopting AI. The fact is, AI adoption by a host of economic players could affect any given firm's operations, and as customers, suppliers, even regulators adopt AI, it could give rise to a host of risks for the firm.
The potential proliferation of these kinds of AI-related risks does put a premium on company disclosures. The disclosures of companies that experience disruption as their business partners adopt AI will be subject to hindsight scrutiny, as plaintiffs' lawyers seek to establish that the companies failed to disclose the risks involved.
This kind of AI-related disclosure scrutiny is interesting in the context of this case, as the complaint here expressly quotes the company’s AI-related risk factor disclosure in its periodic SEC filings. In each SEC report filed during the class period, the company expressly disclosed that its AI initiatives “could” impact the company’s results, stating that “we may not be successful in our AI initiatives, which could adversely affect our business, operating results or financial condition.”
The risk factor disclosure goes on to say, among other things, that “While these AI initiatives can present significant benefits, the AI landscape is rapidly evolving and may create risks and challenges for our business,” adding that “If we fail to develop and timely offer products with AI features, if such products fail to meet our customers’ demands, if these products fail to operate as expected, or if our competitors incorporate AI into their products more quickly or more successfully than we do, we may experience brand or reputational harm or lose our competitive position.”
These AI-related risk factor disclosures are interesting. The company may well contend that through these risk factor disclosures it tried to identify and disclose its AI-related risks. The problem for the company in trying to make this argument is that the risk factor disclosures seek to address the company's risks associated with its own adoption of AI processes and strategies. The plaintiffs may well contend that the risk factor disclosure does not address, or even attempt to address, the risks to the company from its customers' adoption of AI-related processes or strategies. For me, this just highlights the litigation risks firms will face as their customers, suppliers, and regulators adopt AI-related processes or strategies.
In any event, the widespread adoption of AI is going to continue to be a significant contributing factor to corporate and securities litigation in the weeks and months ahead.