Some readers may recall that at the end of last year, the New York Times very publicly sued OpenAI and Microsoft for copyright infringement, in connection with the defendants’ alleged use of the newspaper’s content for purposes of training chatbots and other AI tools. Although this kind of lawsuit is pretty far outside the blog’s usual bailiwick, the litigation is still of interest as the landscape of AI-related litigation continues to develop. Now other media organizations appear to be jumping on the bandwagon, as two separate groups have filed lawsuits against OpenAI and, in one case, against Microsoft as well. These latest cases are described in an interesting March 5, 2024 post on the SDNY Blog (here).

The two new lawsuits were both filed by news organizations, one brought by The Intercept Media (complaint here), and the other brought by Raw Story Media and AlterNet Media (complaint here). Both lawsuits allege copyright infringement in violation of the Digital Millennium Copyright Act (DMCA). The plaintiffs allege that their copyrighted works were used to train OpenAI’s generative artificial intelligence systems and large language models, including ChatGPT.

The complaint filed by Raw Story Media and AlterNet Media alleges that in deciding what information to include in the ChatGPT training materials,

Defendants had a choice: they could train ChatGPT using works of journalism with the copyright management information protected by the DMCA intact, or they could strip it away. Defendants chose the latter, and in the process, trained ChatGPT not to acknowledge or respect copyright, not to notify ChatGPT users when the responses they received were protected by the journalists’ copyrights, and not to provide attribution when using the works of human journalists.

The plaintiffs allege that the defendants knew that including the plaintiffs’ content in their training sets without the identifying author, title, and copyright information would “induce ChatGPT to provide responses to users that incorporated material from Plaintiffs’ copyright protected works or regurgitated copyright-protected works verbatim or nearly verbatim.” The plaintiffs further allege that ChatGPT users would be “less likely to distribute ChatGPT responses” if they were made aware of the author, title, and copyright information applicable to the material used to generate those responses.

The plaintiffs seek injunctive relief requiring the removal of all of the plaintiffs’ copyrighted material from the defendants’ training sets, as well as statutory damages.

Discussion

These new lawsuits, along with similar suits that other media organizations have filed against OpenAI and other AI companies, certainly represent an interesting test for the DMCA. The statute is now 26 years old and was built for a very different technological time and place. The Act’s effectiveness in addressing alleged AI-related violations will depend on how courts interpret its provisions and adapt them to emerging technologies like AI.

Even if the DMCA can be made to apply to AI, as the plaintiffs seek to do in these lawsuits, the defendants are not without defenses. Among other things, the defendants are likely to argue that their use of the plaintiffs’ material is permitted under copyright law, which allows users to draw on copyrighted material to create new, different, or innovative products.

It may well be that the plaintiffs’ objective in bringing this litigation is to try to bring OpenAI to the negotiating table, so that the parties can hammer out acceptable use policies for the plaintiffs’ copyrighted material.

However, if the litigation ever reaches the damages stage, the potential statutory damages available could be significant, depending on how many of their works the plaintiffs are able to show the defendants infringed; the Act provides for statutory damages of not less than $750 and not more than $30,000 per work infringed, and if the infringement is willful, the court can award up to $150,000 per work. If the plaintiffs are able to show that many of their works were infringed, the aggregate damages could be substantial.
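To put some purely hypothetical numbers on that last point, using the per-work figures cited above: if the plaintiffs were to establish infringement of, say, 1,000 works, the statutory range would run from $750,000 (at $750 per work) to $30 million (at $30,000 per work), and could reach as much as $150 million if willfulness were shown. The actual number of works at issue, and the figures a court would apply, remain to be seen.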