Investors and entrepreneurs everywhere are impressed with the potentially transformative promise of artificial intelligence. Unfortunately, AI's seemingly unlimited promise has also attracted companies and other players who, in order to participate in the current AI wave, overstate their AI capabilities. These kinds of statements have already attracted the attention of plaintiffs' lawyers and the SEC. Now the Federal Trade Commission (FTC) has gotten into the act. The agency has launched a "crackdown on deceptive AI claims and schemes" called Operation AI Comply. In a September 25, 2024, press release (here), the FTC announced five recent law enforcement actions the agency has launched against "operations that use AI hype or sell AI technology that can be used in deceptive and unfair ways." The agency's initiative highlights the regulatory scrutiny companies can face with respect to their AI-related operations and marketing.

In launching Operation AI Comply, the agency noted that “claims around artificial intelligence have become more prevalent in the marketplace, including frequent promises about the ways it could potentially enhance people’s lives through automation and problem solving.” The agency said its recent enforcement actions show that “firms have seized on the hype surrounding AI and are using it to lure consumers into bogus schemes, and are also providing AI powered tools that can turbocharge the deception.”

The agency's September 25 press release lists five specific enforcement actions the agency has brought based on deceptive conduct involving AI-related products or services:

  • DoNotPay: The FTC has filed an agency enforcement action against UK-based DoNotPay, a company that claimed to offer the "world's first robot lawyer" but that, according to the agency, "failed to live up to its lofty claims that the service could substitute for the expertise of a human lawyer." Among other things, the company allegedly claimed in its marketing materials that it would "replace the $200-billion-dollar legal industry with artificial intelligence." The complaint alleges that the company did not conduct testing to determine whether its chatbot's output was equal to the level of a human lawyer, and that the company itself did not hire or retain any attorneys. The company has agreed to a $193,000 fine and to provide notice to consumers who used the service warning them about the limitations of its law-related features.
  • AscendEcom: The FTC has filed a lawsuit in the Central District of California against this company, which allegedly operated an "online business opportunity scheme" and allegedly "falsely claimed its 'cutting edge' AI-powered tools would help consumers quickly earn thousands of dollars a month in passive income by opening online storefronts." The firm allegedly "claimed the company was a leader in ecommerce, using proprietary software and artificial intelligence to maximize clients' business success." As it turned out, for nearly all consumers, "the promised gains never materialized." The agency has received numerous consumer complaints. The agency's action against the firm is ongoing, but the court has issued a temporary order halting the scheme.
  • Ecommerce Empire Builders: The FTC filed a complaint in the Eastern District of Pennsylvania against this firm, alleging that the company also operated a business opportunity scheme. The FTC alleges that the company made false claims that it could "help consumers build an 'AI-powered Ecommerce Empire' by participating in training programs that can cost almost $2,000 or by buying a 'done for you' online storefront." Numerous consumers apparently complained to the FTC that the stores they purchased from this firm made little or no money. The FTC's enforcement action is ongoing, but the federal court has issued an order temporarily halting the scheme.
  • Rytr: In the agency enforcement action against this firm, the FTC alleges that since April 2021, the company has marketed and sold an AI "writing assistant" service. Among other things, the service offered a "Testimonial & Review" generation feature, which allowed subscribers to generate an unlimited number of detailed consumer reviews based on very limited, generic input. The agency alleges that the reviews often contained material that had no relation to the user's input, adding that the reviews almost certainly would be false for the users who copied them. The agency alleges further that some Rytr customers used the service to produce hundreds, and in some cases tens of thousands, of reviews potentially containing false information. The agency has presented a proposed order that would bar the company from engaging in similar conduct.
  • FBA Machine: The FTC has filed a complaint in the District of New Jersey against FBA Machine, alleging that the company operated another business opportunity scheme that allegedly "falsely promised consumers that they would make guaranteed income through online storefronts that utilized AI-powered software." Among other things, the company's sales agents told consumers that the business was "risk-free" and "falsely guaranteed refunds to consumers who did not make back their initial investments." The court action against the company is ongoing, but the court has issued an order temporarily halting the scheme.

The agency's press release emphasized that these actions build upon and follow at least a half-dozen other actions alleging misrepresentations or deceptions in the use of artificial intelligence, including at least one action charging a firm with using facial recognition technology without reasonable safeguards.

The FTC's press release, its launch of the AI-related crackdown, and the various enforcement and court actions the agency has filed underscore that AI-related claims and the use of AI-related technology are being closely monitored. It is already well established that the SEC is monitoring companies' claims about their AI-related products and services. The FTC's initiative illustrates the expanding scope of regulatory oversight surrounding AI-related misrepresentations and deceptions.

These regulatory initiatives are of course salutary to the extent they help police the marketplace in order to protect consumers and investors from AI-related misrepresentations and deceptions. Many companies are already aware of the potential risks associated with AI-related representations, and well-advised companies are already taking steps to try to avoid attracting the unwanted attention of regulators or even of plaintiffs' lawyers. The FTC's new initiative represents one more aspect of the AI-associated regulatory risk that companies now face.

It is worth noting that the FTC is not an agency that I frequently have occasion to write about on this site. Some readers may question whether this particular agency's actions represent the kind of thing that is relevant to the world of directors' and officers' liability and insurance. For whatever it may be worth, I note that several of the complaints and enforcement actions to which I link above name individual directors and officers as defendants in the proceedings. So while the FTC may not be featured regularly on this site, the agency's actions can and sometimes do involve claims against corporate directors and officers, and to that extent at least the agency is relevant to the issues on which this site focuses.

Special thanks to a loyal reader for sending me a link to the FTC press release.