The SEC wants everyone to know that it is watching what the companies and firms it regulates are saying about their use of Artificial Intelligence (AI). SEC Chair Gary Gensler set the stage in a speech he made last December in which he warned companies about “AI Washing” – that is, making unfounded AI claims to the public. Now the agency has brought settled enforcement actions against two investment advisers for making allegedly false statements about their use of AI technology. As if the enforcement actions themselves were not enough to send the message that the SEC is on the AI beat, the agency also released a video statement from Gensler emphasizing the agency’s AI-related concerns.

The SEC’s March 18, 2024, press release about the enforcement actions can be found here. The SEC’s March 18, 2024, Administrative Order against Delphia (USA) Inc. can be found here. The SEC’s March 18, 2024, Administrative Order against Global Predictions, Inc. can be found here. The link to Gensler’s March 18, 2024, video can be found here.

The Enforcement Actions

According to the SEC’s Administrative Order against the Toronto-based firm Delphia, from 2019 to 2023 the firm made misleading statements in its SEC filings and other disclosure statements regarding its “purported use of AI and machine learning” that incorporated client data in its investment process. Among other things, the Order alleges that Delphia claimed that it “puts collective data to work to make our artificial intelligence smarter so it can predict which companies and trends are about to make it big and invest in them before everyone else does.” The Order alleges that these statements were false and misleading because “Delphia did not in fact have the AI and machine learning capabilities that it claimed.”

In the SEC’s Administrative Order against the San Francisco-based Global Predictions, the SEC alleges that the firm made false and misleading claims in 2023 on its website and on social media about its supposed use of AI. Among other things, the firm claimed to be the “first AI financial advisor” and misrepresented that its platform provided “expert AI-driven forecasts.”

Both firms consented to the entry of orders finding that they had violated the Investment Advisers Act of 1940, censuring them, and directing them to cease and desist from further violations. Delphia agreed to pay a civil penalty of $250,000, and Global Predictions agreed to pay a civil penalty of $175,000.

The agency’s press release quotes Gensler as saying that the agency had found that Delphia and Global Predictions “marketed to their clients that they were using AI in certain ways when, in fact, they were not.” He added that “We’ve seen time and again that when new technologies come along, they can create buzz from investors as well as false claims by those purporting to use those new technologies. Investment advisers should not mislead the public by saying they are using an AI model when they are not.”

The press release also quotes SEC Enforcement Division Director Gurbir Grewal as saying that as AI-powered tools become more widespread, the agency is “committed to protecting [investors] against those engaged in ‘AI washing,’” adding a note to investment firms that “if you claim to use AI in your investment processes, you need to ensure that your representations are not false or misleading.”

The SEC clearly wanted to emphasize its concerns about AI-related disclosure and to underscore that it will be watching what companies and firms say. On the same day it announced the settled enforcement actions against the two investment advisers, the agency also released a video statement by Gensler. In the video, Gensler says, among other things, that:

Investment advisers or broker dealers might want to tap into the excitement about AI by telling you that they’re using this new technology to help you get a better return. Public company execs, they might think that they will enhance their stock price by talking about their use of AI.

Well, here at the SEC, we want to make sure that these folks are telling the truth. In essence, they should say what they’re doing, and do what they’re saying. Investment advisers or broker dealers should not mislead the public by saying they are using an AI model when they’re not, nor say that they’re using an AI model in a particular way, but not do so.

Public companies should make sure they have a reasonable basis for the claims they make and yes, the particular risks they face about their AI use, and investors should be told that basis.

AI washing, whether it’s by financial intermediaries such as investment advisers and broker dealers, or by companies raising money from the public, that AI washing may violate the securities laws.

Discussion

The agency is working hard to communicate its concerns about AI-related disclosures, particularly with respect to AI washing. Indeed, the agency has a number of AI-related concerns. These two settled enforcement actions follow the agency’s January 2024 release of an Investor Alert about “Artificial Intelligence (AI) and Investment Fraud” (here). The alert warns investors about firms making unwarranted claims about AI capabilities or services; about being scammed into investing in companies that claim to have AI-related products or services; and about the possible use of “deepfake” videos or audio to mislead investors.

The agency’s AI-related messages and warnings are noteworthy not only because they highlight the risk of future SEC enforcement actions, but also because they illustrate the kinds of concerns that could lead to private securities litigation brought by investors who claim they were misled by AI-related disclosures.

The possibility of AI-related securities litigation is not merely theoretical. As I noted in a post last month (here), investors brought an AI washing-related securities lawsuit against the software platform company Innodata alleging that the company “misrepresented the extent to which the company’s products and services actually employ AI technology and also the extent of the company’s investment in AI.”

Although I said at the time the Innodata lawsuit was filed that I thought it was the first AI-related securities lawsuit, I have subsequently learned that there have been prior AI-related securities suits. For example, in May 2022, plaintiff shareholders brought a securities class action lawsuit against Upstart, which claimed to be a cloud-based artificial intelligence lending platform. Among other things, the plaintiffs alleged that the defendants misrepresented the extent to which the company’s AI platform could adequately account for interest rates and other macroeconomic factors.

In addition, in November 2021, a plaintiff shareholder launched a securities class action lawsuit against the online real estate firm Zillow, alleging, among other things, that the company misrepresented the capabilities of its Zillow Offers tool. The company allegedly claimed that the tool allowed consumers to buy or sell houses quickly, and that the tool’s neural networks used artificial intelligence capabilities to map millions of data points, providing predictive power. However, the plaintiff alleged, the tool was unable to accurately predict home prices, and the company eventually had to shut it down. The Zillow case is currently set for trial in June 2025.

Another consideration that must be added to the mix is the advent of governmental regulation of AI, such as the EU’s adoption earlier this month of its sweeping Artificial Intelligence Act, which imposes not only AI implementation and use requirements but also various transparency requirements, as discussed at length here. The Act’s new requirements carry not only the risk of regulatory compliance enforcement actions, but also the possibility of follow-on civil litigation building on the allegations in the regulatory enforcement actions.

My point here is that AI, in addition to representing an emerging technological opportunity, also brings with it associated risks of corporate and securities litigation. Among these risks is the possibility of an AI-related SEC enforcement action; indeed, the SEC has taken unusually proactive steps to reinforce the message that it is monitoring and policing AI-related disclosures. But beyond the possibility of SEC enforcement action, companies making statements about their adoption of AI or their AI capabilities also face the risk of separate corporate and securities litigation, including in particular private securities class action litigation.

It is clear that these days any list of emerging D&O liability exposure risks has to have AI at or near the top.