Sarah Abrams

Many readers may have seen the recent news that the New York Times has sued Microsoft and OpenAI, alleging that OpenAI's use of New York Times content to train its AI tools infringed the newspaper's copyright. The lawsuit raises its own set of issues, but lawsuits of this type relating to AI development also pose an interesting set of insurance coverage questions. In the following guest post, Sarah Abrams, Head of Professional Liability Claims at Bowhead Specialty, takes a look at the insurance questions that these kinds of lawsuits present. The views of the author are her own and not necessarily those of Bowhead Specialty Underwriters. I would like to thank Sarah for allowing me to publish her article on this site. I welcome guest post submissions from responsible authors on topics of interest to this blog's readers. Please contact me directly if you would like to submit a guest post. Here is Sarah's article.


Artificial intelligence (AI) is having a very public moment.  In particular, the ouster and then rehiring of OpenAI's CEO Sam Altman brought to light OpenAI's not-for-profit, for-profit-ish corporate structure.  Microsoft has invested $13.1B in OpenAI, all while OpenAI maintains its mission statement "that [the] principal beneficiary [of its Generative AI technology] is humanity."[i]

OpenAI's altruistic view is now being challenged by the New York Times (NYT) in its copyright infringement lawsuit filed against Microsoft, OpenAI, and others in the United States District Court for the Southern District of New York: New York Times v. Microsoft Corporation, OpenAI, Inc., et al., Case No. 1:23-cv-11195.[ii]

Specifically, NYT alleges that Microsoft and OpenAI used its content to train their Generative AI Large Language Models (LLMs), "to collect and incorporate its material for public use without providing appropriate compensation to the NYT for the material."[iii]  NYT posits that OpenAI is getting a "free-ride on the [NYT]'s massive investment in journalism" and using the aggregated material to "steal audiences away" from reading the NYT.[iv]

In response, OpenAI and Microsoft argue that using NYT [and other] copyrighted works to train AI products amounts to “fair use,” a legal doctrine governing the unlicensed use of copyrighted material.[v] Fair use “promotes freedom of expression by permitting the unlicensed use of copyright-protected works in certain circumstances.”[vi] Section 107 of the Copyright Act identifies certain types of uses—such as criticism, comment, news reporting, teaching, scholarship, and research—as examples of activities that may qualify as fair use.[vii]

NYT is not seeking a specific amount of damages but estimates damages in the "billions of dollars." It also wants OpenAI and Microsoft to destroy chatbot models and training sets that incorporate its material.[viii] Given the Copyright Act definition of "fair use," and given that Generative AI has surged in use over the last three years while theoretical AI as a discipline originated at Dartmouth in the 1950s,[ix] the NYT's pursuit of shutting LLMs down sounds quixotic.

Notably, Generative AI takes information from all over the internet, not just the NYT archives.  Garbage-in, garbage-out.  And proprietary information-in, proprietary information-out; original-thought-in, original-thought-out?  Is there really still an ability to claim that “trade secrets” or “proprietary information” exists when it is being pulled and repurposed regularly?

This begs the question: in the context of management liability coverage, is there really the ability to exclude "[the] theft of any intellectual property rights by the insured, including, but not limited to, patent, copyright or trademark, service mark, trade dress, trade secret, or trade slogan"?  Because, is it theft?[x] The typical "intellectual property" exclusion contains this provision, which is meant to preclude from coverage the poaching of original thought from one party for the profit of another.[xi]

However, if Generative AI is really just allowing for "fair use" of information and data readily available on the internet, can an insurer take the position that the catchall provision of the "intellectual property" exclusion applies?  Not only is Generative AI drawing on a wealth of information, it is also capable of producing original "thought," as was the case in Mata v. Avianca, Inc.,[xii] where ChatGPT fabricated case citations for the lawyer representing the plaintiff in support of his client's case.

Despite being wholly inaccurate, the citations and cases were original. Therefore, if a company purchases a technology platform (e.g., a billing review system or a financial trading or research tool) that regularly uses AI, particularly Generative AI, how can a competitor company that uses the same technology platform be alleged to have stolen from its competitor?  The Generative AI is just learning from the company employees that use the platform.

There has not yet been a coverage case addressing the applicability of the "intellectual property" exclusion in a NYT-style copyright or trade secret scenario.  Given the continued widespread use of AI in virtually every sector of the economy, the question certainly warrants consideration by executive liability insurers.

[i] OpenAI

[ii] new-york-times-microsoft-open-ai-complaint.pdf

[iii] Why the New York Times is unlikely to win its AI lawsuit against Microsoft

[iv] Id.

[v] Id.

[vi] Section 107 of the Copyright Act, U.S. Copyright Office

[vii] Id.

[viii] new-york-times-microsoft-open-ai-complaint.pdf

[ix] Crevier, Daniel (1993). AI: The Tumultuous Search for Artificial Intelligence. New York, New York: BasicBooks. p. 109. ISBN 0-465-02997-3.

[x] Overcoming Intellectual Property Exclusions In Insurance Policies: Ervin Cohen & Jessup LLP

[xi] Overcoming Intellectual Property Exclusions In Insurance Policies: Ervin Cohen & Jessup LLP

[xii] Mata v. Avianca, Inc., No. 22-cv-1461 (PKC), 2023 WL 4114965, at *2 (S.D.N.Y. June 22, 2023).