Sarah Abrams

Readers may be aware of Anthropic’s recent settlement of its high-profile AI copyright infringement case. The lawsuit and its settlement clearly involve important intellectual property issues. In the following guest post, Sarah Abrams, Head of Claims at Baleen Specialty, a division of Bowhead Specialty, considers whether the lawsuit and its settlement may also point toward D&O liability and insurance issues. I would like to thank Sarah for allowing me to publish her article as a guest post on this site. I welcome guest post submissions from responsible authors on topics of interest to this site’s readers. Please contact me directly if you would like to submit a guest post. Here is Sarah’s article.

****************************

In a first-of-its-kind settlement of an AI copyright infringement case, Anthropic agreed to a $1.5 billion payment to resolve the Bartz et al. v. Anthropic PBC litigation (the Anthropic Case). Given that intellectual property disputes are often excluded from D&O coverage, underwriters may wonder how the Anthropic Case and its settlement would impact D&O insurers. In reviewing the Anthropic Case and its settlement, I found similarities to the early 2000s Napster litigation and the potential for a new breed of Caremark claims, particularly duty-of-oversight risks arising from the increasing use of large language models (LLMs) and other artificial intelligence tools by companies and their employees.

As described in more detail below, the Anthropic Case was filed by a group of authors and publishers who sued Anthropic for allegedly pirating their work to train its large language model, Claude. The plaintiff authors alleged that Anthropic infringed their copyrights by downloading their books from online repositories and using their work without permission or compensation. Class certification was granted shortly before the parties’ settlement.

The following discusses certain settlement terms of the Anthropic Case, the plaintiffs’ allegations against Anthropic, and whether the alleged large-scale theft of creative works by an AI company to train LLMs may have a broader impact on D&O underwriters. Are we about to enter a “New Era” of Caremark claims?

Anthropic Case

Andrea Bartz, Charles Graeber, and Kirk Wallace Johnson are bestselling authors with works registered at the U.S. Copyright Office. On behalf of a class of authors and publishers, they filed the Anthropic Case, alleging that Anthropic built its Claude models through mass copyright infringement by downloading and copying pirated books from online repositories such as Books3 (a dataset derived from the “Bibliotik” piracy site) and The Pile. The plaintiff authors alleged that their works were used without permission or compensation to train Anthropic’s Claude models, and that Anthropic concealed the full extent of the LLMs’ reliance on infringing sources.

According to the complaint, beginning in 2021, Anthropic’s co-founders and engineers amassed up to seven million pirated works, retaining them in a central repository for LLM training. The plaintiff authors contended this constituted “Napster-style” infringement, as Anthropic avoided licensing costs and relied on metadata and hashing techniques to catalog the stolen material. The works allegedly copied include Bartz’s The Lost Night and The Herd, Graeber’s The Good Nurse and The Breakthrough, and Johnson’s To Be a Friend Is Fatal and The Feather Thief.

Settlement

The Anthropic Case settlement is considered the largest copyright recovery to date: a minimum $1.5 billion non-reversionary fund. This means that once Anthropic pays money into the settlement fund, it cannot take it back, even if not all class members file claims. Rather than making a lump-sum payment, Anthropic will fund the settlement in staged installments over two years (e.g., $300 million at preliminary approval, $300 million at final approval, and the balance in two tranches with interest). In addition, all copyrighted material at issue in the Anthropic Case has been compiled into a “Works List,” with each work assigned a dollar value. If the total number of works on the Works List exceeds 500,000, Anthropic will pay an additional $3,000 per work above that threshold in compensation to the infringed authors.

In addition, Anthropic agreed to a release of the authors’ claims limited to past conduct only (pre–August 25, 2025), leaving open the possibility of future claims and disputes over AI-generated outputs. The certified class consists of copyright owners of books downloaded from LibGen and PiLiMi. Beyond monetary relief, Anthropic must destroy all datasets derived from LibGen/PiLiMi within 30 days of final judgment and certify compliance. Administration will be handled by JND Legal, with notice via direct outreach, trade groups, and a dedicated settlement website.

Discussion

The comparison in the Anthropic Case complaint to Napster triggered the thought that there may be potential for future Caremark claims arising out of the use of LLMs like Claude, especially because the Anthropic Case settlement does not apply to future copyright infringement claims.

For those who were not in college with me sharing and downloading Backstreet Boys songs for free courtesy of Napster: Napster was a pioneering peer-to-peer (P2P) file-sharing service founded in 1999. It allowed users to share and download MP3 music files directly from each other’s computers over the Internet. Napster quickly gained millions of users by making it easy to find and share songs for free, disrupting the traditional music industry, and was just as quickly sued by record labels and artists for copyright infringement. In 2000, the Northern District of California ruled that Napster was liable for enabling users to share copyrighted songs without authorization, a decision upheld by the Ninth Circuit Court of Appeals.

Napster was ordered to block access to infringing works. Unable to comply, it was forced to shut down operations in July 2001.  Notably, while the Anthropic Case settlement requires destroying pirated work, the injunctive relief will likely not result in shutting down Claude. In addition, future claims for copyright infringement can be made, along with future payments to authors whose work was not part of the original “Works List.” However, now both Anthropic and the general public, including members of corporate boards, may be considered on notice that infringing work was used to train LLMs.

As a result, it may also be important to note that in 2003, after Napster’s free music-sharing service was shut down, the Recording Industry Association of America (RIAA) began suing thousands of individual file-sharing users (often college students and households) who continued to download or share copyrighted songs. Many users settled for amounts between $3,000 and $5,000 to avoid statutory damages of up to $150,000 per song.

What if, post-Anthropic Case settlement, plaintiffs begin suing individuals and companies for copyright infringement, as the RIAA did? This emerging risk underscores the potential Caremark-style oversight exposure for boards. With the massive Anthropic Case settlement in place, plaintiff authors, publishers, or artists may next target corporations and executives that deploy Claude or similar AI systems, alleging that downstream use amounts to copyright infringement. If that happens, shareholders may assert that corporate directors and officers breached their duty of oversight by failing to implement and monitor controls over AI systems, including LLM use, to ensure proper licensing and evaluate third-party IP risks.

As D&O Diary readers may recall, under Delaware law, liability can attach where directors “utterly fail” to implement reporting or monitoring systems for mission-critical risks. AI adoption and compliance with copyright licensing, particularly now that the Anthropic Case has been settled and its terms made public, may require proactive oversight and documented board-level engagement to meet governance standards. Thus, while the Anthropic Case settlement is the first example of a billion-dollar copyright infringement exposure, it may also serve as a warning about AI-governance risk for boards, executives, and their D&O underwriters.

The views expressed in this article are exclusively those of the author, and all of the content in this article has been created solely in the author’s individual capacity. This site is not affiliated with the author’s company, colleagues, or clients. The information contained in this site is provided for informational purposes only, and should not be construed as legal advice on any subject matter.