The SEC has already made clear that it intends to pursue enforcement actions against firms that misrepresent their artificial intelligence (AI) capabilities. In the latest example of the SEC’s commitment in that regard, earlier this week the SEC filed an enforcement action against an investment advisory firm, its holding company, and the two firms’ CEO, based in part on allegations that the advisory firm claimed it would provide exceptional returns for investors through its use of AI. The firm also sought to attract investors with claims about its plans to go public and its relationships with well-known banks and law firms. The SEC’s August 27, 2024, complaint against the firm and its CEO can be found here. The SEC’s August 27, 2024, press release can be found here.

QZ Asset Management Limited is a China-based investment advisory firm. Its South Dakota-based holding company is QZ Global Ltd. The CEO of both companies is Blake Yeung Pu Lei. In its complaint, the SEC said that it “cannot corroborate” Yeung’s identity and that Yeung’s state or country of residence is unknown.

In its complaint, the SEC alleges that the defendants made a series of false statements in order to attract U.S. and foreign-based investors. Specifically, the SEC alleges that the defendants “engaged in a concerted scheme involving multiple false statements on social media to defraud hundreds of individuals out of millions of dollars.” After raising the funds, the defendants stopped communicating with investors, and QZ Asset’s website went dark in May 2023. As many as 285 clients “lost all access to their accounts and funds.”

The SEC alleges that the defendants deceived the investors by falsely claiming that “(1) QZ Asset would provide exceptional returns using artificial intelligence while ensuring that clients’ ‘capital was 100% protected’; (2) QZ Global had taken steps to go public, including submitting an application to have its common stock listed on the Nasdaq Global Select Market; and (3) certain well-known and reputable firms were providing financial and legal services to QZ Asset.”

The complaint further alleges that in order to “provide an air of legitimacy to their fraudulent enterprise,” the defendants “pointed clients and prospective clients to QZ Global’s SEC filings,” which were available to view on the SEC’s EDGAR filing system, “but which were materially deficient and incomplete.”

With respect to the defendants’ alleged AI claims, the SEC’s complaint alleges that QZ Asset told prospective clients that it would provide “exceptional investment returns” by “using artificial intelligence – the QZ Big Data and Artificial Intelligence (“BDAI”) analytics.” The complaint further alleges that QZ Asset claimed that BDAI “provides high returns” in “all market conditions,” and that BDAI trading would “generate guaranteed ROIs between 2.5% and 7% weekly,” “capped at 400%” total. QZ Asset “touted the simplicity of its services,” allegedly stating “all you do is invest, sit back & enjoy your weekly profit.”

The SEC’s complaint alleges that the defendants violated numerous provisions of the Securities Act of 1933, the Securities Exchange Act of 1934, and the Investment Advisers Act of 1940. The complaint seeks a permanent injunction against all defendants; disgorgement of ill-gotten gains, with pre-judgment interest; civil penalties; and an order barring Yeung from serving as an officer or director of a public company.

Discussion

The SEC’s complaint basically alleges that Yeung, using the two firms as cover, executed a flat-out fraudulent scheme to fleece investors. He gave the scheme a contemporary twist by claiming that the use of artificial intelligence would supercharge investment returns.

As I noted at the outset, this case is far from the first example of an enforcement action in which the SEC charged that a firm had overstated (or outright fabricated) its claimed AI-related capabilities. For example, in March 2024, the SEC filed AI-related enforcement actions against two investment advisory firms, as discussed here. In addition, and as discussed here, in June 2024, the SEC filed an enforcement action against an employment recruiting firm and its founder. In each of these cases, the SEC alleged that the defendants had misrepresented their AI capabilities to investors.

But while this case undeniably has an AI-related twist, that was not the SEC’s greatest concern. What really aggravated the SEC is that the defendants abused the SEC’s own filing process in an effort to deceive investors. The SEC’s press release includes a statement from an SEC spokesperson who said that “the defendants’ brazen fraud alleged in our complaint” included “their abuse of the SEC’s filing process to prey on individuals” and to “provide an air of legitimacy to their fraudulent endeavors.”

That said, this case is the latest example underscoring that the SEC is targeting firms’ exaggerated or false claims about their AI capabilities. Interestingly enough, though the SEC has now brought a number of AI-related enforcement actions, to date none of the actions has involved a reporting company. I may be proven wrong, but I think it may only be a matter of time before the SEC files an AI-related enforcement action against a listed company.

Another AI-Related Risk to Worry About: Readers may have seen the news last week that the U.S. Department of Justice, along with the Attorneys General of several U.S. states, charged that RealPage, Inc. had engaged in a scheme to fix rental prices. The DOJ’s August 23, 2024, press release about the lawsuit can be found here. The complaint alleges that RealPage gathered rent price data from landlords around the country and then used the data to “train and run” RealPage’s algorithmic software so that the software could provide pricing recommendations. The DOJ in effect alleges that property owners using RealPage colluded to inflate rental prices, in violation of the antitrust laws.

In an interesting August 27, 2024, Law360 article about the RealPage action, entitled “RealPage Suit Shows Growing Algorithm, AI Pricing Scrutiny” (here, subscription required), Andre Geverola and Leah Harrell of the Arnold & Porter law firm examine the ways in which “the use of algorithmic and artificial intelligence tools to assist with pricing decisions has drawn increasing scrutiny across the government.” Government agencies were already wary of firms’ use of algorithmic pricing; the ability of AI tools to supercharge the number-crunching creates an even greater risk that algorithmic pricing could harm consumers and run afoul of laws targeting anti-competitive behavior.

As the authors put it, “The development of AI offers companies significant promise in optimizing pricing and other business decisions. But companies need to remain alert to antitrust and privacy issues to avoid government peril.”

In other words, this is just one more way in which the use of AI tools may come with a myriad of complex associated risks.