On March 13, 2024, the European Parliament approved the adoption of the EU Artificial Intelligence Act, legislation that the Wall Street Journal, in a front-page article, called the “World’s First Comprehensive AI Law.” The sweeping law, whose requirements will be staged in over the next several years, will affect all companies deploying or using Artificial Intelligence (AI) in the EU. As discussed below, the passage of the Act, which has been several years in the making, could have significant implications for the adoption and deployment of AI worldwide, and could have significant liability risk implications as well. A copy of the EU’s March 13, 2024, press release about the Act’s adoption can be found here. The Act’s text as adopted can be found here.

Background

The initial proposal for the legislation was published in 2021, before the current generation of AI tools, such as ChatGPT and Copilot, were released. The initial proposal was substantially revised and updated in the interim. In negotiations in December 2023, the EU member states reached a political agreement on the adoption of the Act. The European Parliament voted in favor of the Act’s adoption on March 13, 2024. Each of the member states must now separately agree to the adoption of the Act, but given the December 2023 agreement, member state acceptance is expected to be a formality.

The Act

The Act itself is voluminous; the text runs some 459 pages. The Act is also wide-ranging and covers a host of different topics and considerations. As a general matter, the Act does several things:

Classification System: The AI Act introduces a classification system that assesses the level of risk posed by an AI technology to health, safety, and fundamental rights of individuals. This system helps categorize AI systems based on their potential impact.

Development and Use Requirements: The legislation mandates various requirements for the development and deployment of AI systems. These include rules related to data quality, transparency, human oversight, and accountability. The goal is to ensure that AI technologies adhere to ethical standards and respect fundamental rights. Companies operating AI systems that are deemed by legislators to be high risk, such as those used for immigration or critical infrastructure, must conduct risk assessments.

Transparency Requirements: The legislation seeks to impose transparency around the use of AI tools. The law requires clear labeling of images, audio, or video that have been generated or manipulated by AI and might otherwise appear to be authentic.

Ethical Considerations: The AI Act aims to address ethical questions related to AI deployment across different sectors such as healthcare, education, finance, and energy.

Ban on Certain Uses: Notably, the Act prohibits the use of AI technology in biometric surveillance and requires generative AI systems (like ChatGPT) to disclose when content is AI-generated. The Act bans biometric categorization systems based on sensitive characteristics and untargeted scraping of facial images from the Internet. It also bans emotion recognition in the workplace and schools, social scoring, and predictive policing.

The new legislation applies to AI products in the EU market, regardless of where they were developed. The regulation imposes requirements on companies designing and/or using AI in the European Union. Despite the Act’s avowed EU focus, it is, as the Journal put it, “expected to have a global impact,” because large AI companies are unlikely to want to forgo access to the bloc. The legislation could also serve as something of a signpost for other jurisdictions, as other countries could use the new law as a model for their own AI legislation.

The law establishes a framework for liability in cases where AI systems cause harm or damage. Most violations of the Act will subject companies to fines of up to €15 million or 3% of annual global turnover, but fines can go as high as €35 million or 7% of annual global turnover for violations related to AI practices that the Act prohibits (e.g., using AI-enabled manipulative techniques, or using biometric data to infer private information).

The law will enter into force twenty days after its publication in the Official Journal and will be fully applicable 24 months after its entry into force, except for the bans on prohibited practices, which apply six months after entry into force. Codes of practice go into effect nine months after entry into force, with other requirements staging in thereafter.

Discussion

The EU’s adoption of the Artificial Intelligence Act is a development with important significance for all organizations doing business in the EU, whether or not the organizations are based in the EU. The Act is also extraordinarily sweeping, with relevance not just to the development of AI tools but also to the use and deployment of AI-enabled functions by all sorts of users. As if all of this were not challenge enough, it comes at a time when many firms are struggling to understand what the availability of AI-enabled tools may mean for their operations.

Clearly, the EU’s new Act represents a significant compliance challenge for all organizations trying to adapt to the AI era. Boards and senior management must develop processes and controls to ensure compliance with the regulatory requirements.

Although I have many concerns about the new rules, my biggest concern has to do with companies that are alleged to have failed to comply with the Act’s requirements. The prospect of fines for non-compliance under the Act is daunting enough. But there are further risks here as well.

First, I am concerned about follow-on corporate and securities litigation — that is, lawsuits filed in the wake of an Artificial Intelligence Act enforcement action. Here, I have in mind the kinds of lawsuits that were filed as follow-on actions to General Data Protection Regulation (GDPR) enforcement actions, as discussed, for example, here. I can easily foresee follow-on actions in which the claimants contend that the companies hit with Artificial Intelligence Act enforcement actions either misrepresented their compliance with the Act’s requirements or the effectiveness of their AI-related controls and processes, or failed to disclose the risks associated with their use of AI.

Second, I can also foresee claimants using the Act’s various development, use, or transparency requirements, or its ethical standards, as guidelines against which to measure alleged corporate misconduct, as a way to show that the AI-related corporate conduct at issue in a lawsuit fell below legal standards.

Finally, I am also concerned about allegations against corporate boards and officers built around failure-to-monitor claims. The reputational and operational risks associated with corporate use of AI, as underscored by the Act itself, will likely be alleged to put special burdens on senior executives to have information reporting systems in place and to monitor for “red flags.” The Act could be argued to provide a roadmap for the kinds of information that the reporting systems should provide to senior management and the kinds of red flags that executives should be responding to.

AI as a social, economic, and legal phenomenon is still emerging. Some level of governmental regulation was inevitable – indeed, many jurisdictions other than the EU are also moving forward to develop their own regulatory frameworks. There is no doubt that AI-enabled tools present almost every organization with opportunities. The EU’s adoption of the Act is one more reminder that, along with the opportunities, the use of AI-enabled tools also presents risks, including regulatory, compliance, and legal risks.