Since OpenAI launched ChatGPT in November 2022, the race to capitalize on emerging artificial intelligence (AI) technologies has supercharged the financial markets. The stock prices of AI-associated companies, such as Nvidia and Super Micro Computer, have soared. Several AI-related companies, such as Astera Labs and Rubrik, have recently completed successful IPOs, so much so that the long-moribund IPO market is showing definite signs of life. Other AI companies, including Zapata and MultiplAI Health Ltd., recently became public through mergers with SPACs.

With the financial markets' consuming interest in AI, many companies want to try to catch some of the lightning for themselves. However, what companies say about AI, their AI prospects, and their AI risks could have significant consequences for their corporate and securities litigation exposure, as well as for their risk of regulatory scrutiny.

A May 1, 2024, Law360 article entitled “AI is Top of Mind for Companies – and Securities Regulators” (here, subscription required) discusses the disclosure challenges that publicly traded companies face when it comes to AI. As the article puts it, a company’s public reference to AI can cut one of two ways: on the one hand, companies that are developing AI-related products will want to highlight their growth opportunities; on the other hand, many companies find that they must disclose potential security or competitive risks. Either way, and for some companies both ways, increasing numbers of companies are finding AI-related issues material to their operations. Indeed, according to a source the article cites, since the launch of ChatGPT, reporting companies have referred to AI or artificial intelligence in SEC filings more than 20,200 times.

The SEC has already made it clear that it is keenly interested in reporting companies’ AI-related disclosures. As I noted in a post on this site at the time, last December SEC Chair Gary Gensler specifically warned against AI-related misrepresentations and cautioned reporting companies against so-called “AI-washing,” a term that echoes climate change-related “greenwashing” concerns and refers to companies that attempt to burnish their investment profiles with outsized claims about their AI-related opportunities. More recently, SEC Enforcement Director Gurbir Grewal warned about the potential for AI-washing to mislead investors, harm consumers, and violate the securities laws.

The SEC has not yet brought an enforcement action against a reporting company for “AI-washing” or otherwise based on AI-related disclosures, although in March the agency did fine two investment advisers for making misleading statements about their use of AI. Even without SEC enforcement activity on this front, there have already been several AI-related securities class action lawsuits filed this year, including the action brought in February 2024 against software platform Innodata (discussed here) and the lawsuit filed in March 2024 against security screening company Evolv (also discussed here).

The Law360 article doesn’t mention it, but there is an added regulatory concern when it comes to companies’ use of AI: the EU’s Artificial Intelligence Act, which the European Parliament approved in March of this year. The Act seeks to classify and regulate AI applications based on their risk of causing harm, with the highest-risk uses banned entirely and other high-risk uses subject to security, transparency, and quality requirements. The Act applies to both suppliers and users of AI within the EU, and it can apply to companies from outside the EU if they offer products or services within the EU. The Act will take effect 20 days after its publication in the Official Journal of the EU, which is expected sometime this month, and many of its provisions will phase in over the next two years. The newly created EU AI Office will oversee the Act’s implementation and enforcement. The consequences for noncompliance can be hefty, with penalties ranging from €7.5 million or 1.5 percent of global revenue up to €35 million or 7 percent of global revenue, depending on the infringement and the size of the company. With potential penalties that sizeable, the risk attendant to the Act extends not only to enforcement but also to the possibility of follow-on civil actions brought by investors alleging that management misrepresented their company’s compliance with the Act or violated their duty of care with respect to its implementation.

Given all of these developments, it should come as no surprise that many companies are increasingly finding it advantageous, useful, or simply prudent to refer to AI in their periodic reports. Notably, even companies that are “not traditionally thought of as technology companies are incorporating AI-related disclosures in their periodic reports.” As noted in the Law360 article to which I linked above, these disclosures have taken a variety of forms.

Many companies, for example, have included AI-related references in their periodic risk disclosures. As cited in the article, some companies have noted that they could be harmed if their AI applications produce faulty analyses or recommendations.

Other companies have noted that they could be competitively disadvantaged if competitors deploy AI faster than they do.

The article notes that even companies that are not themselves developing AI-related products or services may need to consider whether to make AI-related disclosures, addressing, for example, whether and to what extent their operations could be affected by competitors’, customers’, suppliers’, or vendors’ use of AI.

Companies, even non-technology companies, will want to consider disclosing the extent to which they use AI systems internally, such as in hiring decisions or processes, or for customer support services.

Companies may also want to consider AI-related disclosures in connection with the SEC’s recently adopted cybersecurity disclosure rules; for example, a company may want to disclose the ways in which AI-related issues could affect its data security and privacy-related vulnerabilities and capabilities.

Finally, in light of the potential applicability of the EU’s Artificial Intelligence Act, companies may want to address the attendant regulatory risks in their risk factor disclosures.

All of these issues underscore the ways in which AI represents a developing disclosure challenge. Many companies, even well-advised and well-intentioned ones, may struggle to match their disclosures to the evolving risks and opportunities that AI presents. The SEC has made it very clear that it is monitoring companies’ AI-related disclosures. And plaintiffs’ lawyers have already shown that they will target companies that overstate their AI capabilities and prospects or understate their AI-related risks.

I don’t have a sense that, to this point, AI-related disclosure has been a high-priority D&O underwriting item, although I suspect that underwriters are wary of companies’ AI-related hype, particularly at companies whose business models are built around the development or use of AI-related products or services. I suspect that underwriters will increasingly scrutinize companies’ AI-related risk-factor disclosures, particularly with respect to the operational and competitive risks that AI presents. I also suspect that even if AI issues are not an underwriting priority today, they will become one over the next few months.