
Whenever the discussion turns to the question of emerging risks, among the first topics to come up these days is artificial intelligence (AI). But just as AI technology itself is still taking shape, the legal risks that the emergence of AI may present are still forming as well. On October 30, 2023, in what is unquestionably a key step in the development of a regulatory and legal framework for the administration of AI, the White House issued an Executive Order on the development and use of AI. The Order, which is entitled “Executive Order on the Safe, Secure, and Trustworthy Development and Use of Artificial Intelligence,” can be found here. At a minimum, the Executive Order has important implications for AI-related corporate risk management. The order may also point toward the future development of regulatory and legal standards pertaining to AI, as well as the litigation risks that AI may present.
The Executive Order
The White House’s release last week of the Executive Order follows the January 2023 release of the National Institute of Standards and Technology’s AI Risk Management Framework (here) [hereafter, the Framework], the content of which is incorporated into certain of the new Executive Order’s initiatives.
The 63-page Executive Order opens with a paragraph emphasizing both the promises and risks that AI presents. With respect to the risks, the Order notes that the “irresponsible use” of AI “could exacerbate societal harms such as fraud, discrimination, bias, and disinformation; displace and disempower workers; stifle competition; and pose risks to national security.” The Order seeks to initiate a coordinated effort at the federal government level to ensure the “development and use of AI safely and responsibly.”
In focusing this effort, the Order identifies a list of significant concerns, including security and privacy issues; intellectual property concerns; the protection of American workers against job loss and economic disruption; the interests of equity and civil rights; protection of consumers from fraud and deception; the limits of the federal government's ability to regulate the rapidly emerging technology; and the impact of the emergence of AI on the global order, including with respect to national and international security.
To address these concerns, the Executive Order sets out a variety of actions and initiatives, which the Wachtell Lipton law firm, in a November 3, 2023 memo on The CLS Blue Sky Blog (here), summarizes into four categories.
First, the Order requires a number of federal agencies to assess AI risk within the industries the agencies oversee, after which the regulatory agencies must publish guidelines for companies within those industries to incorporate the Framework. For example, the Order directs the Commerce Department to help address the risks posed by AI-based synthetic content (such as, for example, “deep fakes” that misuse voice content or images) by proposing the development of science-based standards and techniques for authenticating and detecting synthetic content.
Second, the Order directs agencies to evaluate and determine the extent of AI-related discrimination and bias. For example, the Department of Health and Human Services is directed to review the risk of AI-based discrimination in healthcare, while other agencies are directed to evaluate how existing consumer protection laws and laws regarding racial equality in housing might govern the use of AI technology in lending and housing.
Third, in light of existing concerns about protecting competition in arenas dominated by large technology companies, the Order directs federal agencies to consider and assess the possibilities for concentration of market power, in order to ensure continued competition in the AI marketplace.
Fourth, the Order directs the Department of Labor to prepare guidelines to protect workers, given concerns about the use of AI to “undermine rights, worsen job quality, encourage undue worker surveillance, lessen market competition, introduce new health and safety risks, or cause harmful labor-force disruptions.”
As the Robinson & Cole law firm notes in a November 3, 2023 post on the firm’s Data Security and Cybersecurity Insider blog (here), the Order also requires the head of each agency with regulatory authority over critical infrastructure, within 90 days and annually thereafter, to consider cross-sector risks and evaluate potential risks related to the use of AI in critical infrastructure sectors.
The Order also establishes the White House Artificial Intelligence Council (the White House AI Council) with representatives from 28 federal agencies and departments. The White House AI Council is to be responsible for “coordinating the activities of agencies across the Federal Government to ensure the effective formation, development, communication, industry engagement related to, and timely implementation of AI-related policies.”
Discussion
The recently issued Executive Order is not itself prescriptive; instead, the Order requires future federal agency actions that will lay down the AI-related requirements to which companies must adhere. But while the actual requirements are yet to come, the Order clearly points to an environment in which the federal government aims to impose a wide variety of AI-related regulatory requirements.
The regulatory requirements that are to follow clearly will create an environment in which companies must work to manage their AI-related regulatory risks. The likely future development of an AI-related regulatory environment also raises the question of companies’ future AI-related legal and liability risks.
The fact is that while AI itself is still an emerging technology, AI-related litigation has already arrived. An interesting September 5, 2023 memo from the K&L Gates law firm (here) details the “growing number of recently filed lawsuits associated with generative artificial intelligence (AI) training practices, products, and services.” The memo describes the legal theories that have served as the basis of existing AI-related claims, including invasion of privacy and property rights; patent, trademark, and copyright infringement; libel and defamation; and violations of consumer protection laws, among others.
For readers of this blog, one rather urgent question is the extent to which AI also presents corporate and securities litigation risks. It doesn’t take much of an imagination to prognosticate that AI-related risks could lead to significant corporate and securities litigation, although it perhaps does take more imagination than this author can muster to envision all of the various ways that AI-related issues could lead to future litigation. For now at least, there are certain categories of claims that seem to be the most likely.
First, the very regulatory efforts that the recently issued Executive Order has launched foreshadow a business environment in which companies’ use of AI tools could involve significant regulatory risk. Companies’ use of AI in their operations could lead to a host of regulatory concerns, particularly when it comes to workplace issues (including, for example, in hiring and supervision).
Second, particularly for publicly traded companies, there will be a host of disclosure-related issues, including, for example, disclosures around a company’s use of AI, the impact of AI on its operations and competitive environment, and the impact of AI on future revenues and overall financial performance. For example, a company touting a supposed business advantage from incorporating AI technology into its business strategy may be subject to allegations that investors were not fully informed about the risks associated with the strategy.
Third, boards likely will be subject to claims based on allegations pertaining to the board’s duties of oversight. For example, picture a company that is using images or other content generated by AI and that subsequently faces claims that it has been using synthetic content deceptively. These kinds of allegations could not only lead to consumer protection-type claims but could also lead to allegations that the company’s board failed to properly oversee a critical corporate function.
There undoubtedly will be other types of claims to emerge as well, not the least of which may be employment-related claims based on allegations of discrimination or bias, as well as other workplace-related allegations, such as invasion of privacy.
There are some other kinds of claims that I can imagine when I really try to stretch myself on the question of future AI-related claims. Imagine, if you can, a board that uses AI to identify merger candidates or to assess the potential merits of a possible merger; if the subsequent merger goes sour, claimants might allege that the board failed to use sufficient care in its reliance on AI technology.
Whether and to what extent these kinds of claims may emerge remains to be seen. But given the fast-moving environment of AI-related technology, I don’t think we will have to wait long to find out.