The risks and opportunities that AI presents have emerged quickly and may be evolving even faster; the whole AI phenomenon has developed much more quickly than legislators’ and regulators’ ability to respond. Among the many AI effects that regulators and other observers are struggling to assess is the extent of the AI-related litigation potential, including but not limited to the prospects for AI-related corporate and securities litigation.
While the nature and scope of the AI-related litigation is yet to be fully revealed, there are some clues emerging that could indicate what we should be looking for. In a December 5, 2023 speech before an AI industry group, SEC Chair Gary Gensler suggested some of the kinds of AI-related corporate behaviors his agency is concerned about, which could prefigure the kinds of allegations that could make their way into AI-related corporate and securities lawsuits. Gensler’s speech is reported in a December 5, 2023 Law360 article, here.
Gensler delivered his remarks in a December 5, 2023 speech before The Messenger AI Summit in Washington, D.C. In particular, Gensler commented on the kinds of AI-related corporate behaviors that he and his agency consider to be problematic.
The first corporate behavior on which Gensler commented draws upon existing litigation risk exposures arising from what has been called “greenwashing” – that is, the actions of some companies to overstate their efforts to make their operations or products more sustainable from a climate change or environmental perspective. Gensler said that just as companies should not “greenwash,” they should not “AI wash.” By that, Gensler apparently meant that companies should not overstate or mislead investors as to their true AI capabilities or the extent to which they have incorporated AI into their operations or products.
In particular, Gensler noted that as AI-related technology is emerging, a variety of processes and operations are encompassed within the term “AI,” and that ambiguity means that businesses may seek to promote their use of AI when they are in fact using only a small amount of AI, or even none at all. The implication is that in the current environment, companies may be tempted to throw around the term AI when they may not in fact have incorporated true AI processes or functions into their operations.
Gensler also expressed a concern about the gap that could arise between what companies may be promising with respect to their adoption of AI-related tools and the reality of what the tools may actually be able to deliver. Gensler expressed particular concern for companies within the financial services industries. The specific concern Gensler noted is that the AI-related tools financial services companies might adopt could be built on overlapping or identical base data sets, which could in turn lead to a “monoculture” that drives decision making within the industry into a narrow groove. Gensler expressed the concern that this pattern could lead to “financial instability.”
The basic message Gensler delivered is that in many ways the rules that apply to company statements about a new and emerging technology like AI are the same as those that apply in other contexts; that is, the statements are governed by the “same basic set of laws,” and the obligation not to mislead is the same in the AI context as it is in all contexts. As Gensler put it, “You can sell a security, you can promote an opportunity, but fairly and accurately describe the material risks.”
Gensler’s point that companies might be tempted to oversell their AI capabilities, in the same way that some companies have been tempted to overstate the sustainability of their products or operations, is valid. AI is one of those hot topics that just about everyone right now (including, for example, even bloggers) feels obliged to talk about. Many companies may feel pressure from investors and others to demonstrate that they are “with it” when it comes to adaptation to the new AI era. The risk, of course, is that as companies try to show they are responding to the AI wave, they go too far and overstate their actual AI reality.
Gensler’s larger message — that even in the AI context the basic rules apply and that companies cannot mislead investors — is the more important point. Just because AI is a newly emerging technological tool does not mean that the long-standing rules about corporate disclosures have been suspended or that they do not reach this context. A misrepresentation about AI is no different from a liability standpoint than any other misrepresentation about a company’s operations or financial condition.
In his remarks, Gensler did not address a related but slightly different point that I am also worried about; that is, just as companies may be tempted to overstate their AI capabilities, companies may also be tempted to understate the risks and threats that AI may represent to the companies’ prospects and future financial results. Some industries are going to be significantly disrupted by AI; as corporate managers become aware of AI’s disruptive potential, investors will want to know about it. Companies that are not forthcoming about AI’s disruptive potential may face later allegations, when the AI impact has become apparent, that corporate executives were not sufficiently forthcoming when they first discerned the rising threat.
Yet another exposure could arise for companies that are too slow to respond to the changing AI circumstances. Corporate managers may be aware that their competitors’ adoption of AI-related tools or processes is changing the competitive landscape. When the competitive threat starts to translate into lagging financial performance, corporate executives could be open to potential claims that the company failed to disclose to investors the competitive AI-related threat when it first became apparent.
All of which is a long way of saying, as Gensler’s comments underscored, that along with the many changes that AI is bringing to industry and financial markets, AI is also bringing a host of potential litigation risks and exposures.
Gensler’s point – that though AI is a new technology, the same disclosure principles still apply – is an important one. The sense in the current climate that we all have to be talking about AI because it is the hot topic du jour does not take away from the fact that there are some real and identifiable litigation risks associated with corporate adoption of AI tools. I suspect we will not have to wait long to see AI-related corporate and securities lawsuits.