
The recent meteoric rise of Artificial Intelligence (AI) has not only upended many traditional business processes and set financial markets ablaze, but it has also captured the attention of the world’s political leadership. The leaders’ response includes not only excitement about AI’s impressive potential, but also concern about the legitimate risks that AI presents. At the recent Artificial Intelligence Action Summit, held in Paris on February 10 and 11, 2025, many participants advanced the view that for AI to realize its full potential, a regulatory “light touch” is required. While this restrained regulatory perspective has many advocates, the concerns associated with AI will still have to be addressed one way or another – which underscores the question of what the appropriate approach to AI regulation should be.
For all of AI’s vaunted promise, there is a long list of issues and concerns that the rapidly evolving AI phenomenon presents. To tick off just a few: there is the risk of consumer or business fraud through the use of synthetic content (think: phony videos or phone calls); the risk of discrimination or bias in AI-enabled tools for hiring, educational admissions, insurance, or health care; data privacy concerns as AI tools access user data or population databases; and intellectual property concerns, both about the use of proprietary content in AI model training and about the ownership of AI output produced using the content of others. Then there are the even larger issues, such as the human safety risks associated with the use of AI in healthcare, transportation, or warfare, and the potential for job losses or employee displacement as ever more capable AI tools are deployed.
These and many other legitimate concerns have led various governmental authorities to implement regulatory structures to address these issues. The most comprehensive approach is the EU Artificial Intelligence Act, which, as discussed here, was enacted in March 2024 and which will be phased in over the coming months. The EU Act includes extensive measures intended to address privacy and transparency concerns, as well as the avoidance of discrimination or bias, and the prohibition of certain uses (such as social scoring, certain forms of biometric identification, and deceptive practices). The EU Act has a broad reach, applying to non-EU companies with certain levels of business in the EU. The potential penalties under the EU Act are steep, with fines for the most serious violations reaching up to €35 million or 7% of worldwide annual turnover.
The EU has certainly taken the initiative on AI-related regulation, just as it did a few years ago with data privacy and the GDPR. But at the recent Artificial Intelligence Action Summit, at least some of the European participants seemed to be having serious second thoughts about the EU’s regulatory approach to AI.
French President Emmanuel Macron, one of the event’s co-hosts, urged Europe to “cut red tape, foster more AI start-ups, and invest in computing abilities.” According to a February 10, 2025, New York Times article entitled “Macron Pitches Lighter Regulation to Fuel A.I. Boom in Europe” (here), Macron wants to “position Europe as a top contender – not just a leading regulator” in the global AI competition. The article quotes Macron as saying that “If we regulate before we innovate, we won’t have any innovation of our own.” Investors at the conference, according to the article, “warned that Europe was not as competitive as the United States or China because it had layers of regulations, higher taxes, and fewer financial incentives.”
It also became clear at the Summit that the current U.S. political leadership is prepared to preach to the Europeans about the virtues of the U.S.’s lighter-touch AI regulation. Vice President JD Vance, in his first major international appearance, warned conference participants to “get on board” with the U.S.’s light-touch approach to tech regulation or risk being left out, according to a February 11, 2025, Wall Street Journal article entitled “Vance Warns U.S. Allies to Keep AI Regulation Light” (here). The article quotes Vance as saying that the development of AI will require a regulatory landscape that “fosters the creation of AI technology rather than strangles it,” adding that “we believe that excessive regulation of the AI sector could kill a transformative industry just as it’s taking off.”
To be sure, Vance did not completely overlook the possibility that there might be risks associated with the adoption of AI. He specifically noted that AI tools could reflect discrimination or bias. However, he was not talking about discrimination based on, say, race, gender, or genetic information; no, what Vance was concerned about was the possibility that AI tools could reflect “ideological bias.”
The upshot of all of this is that in the great global rush not to be left behind in the development of AI, political leaders everywhere are seeking to suppress regulatory urges that, it is feared, might otherwise leave them trailing their competitors. It also seems clear that, at least for the foreseeable future, there will be no comprehensive AI regulation at the federal level in the United States.
All of that said, there still are the legitimate AI concerns I noted above. Those concerns are not going to go away; if anything, as AI tools develop and as the use of AI becomes more pervasive, these concerns are only likely to grow in significance.
To be sure, while global political leaders may be trying to outdo each other in demonstrating their commitment to a regulatory light touch, there is not and will not be a regulatory vacuum. For starters, the EU regulations remain in place, although how vigorously they will be enforced remains to be seen. In addition, there has been regulatory experimentation even in the U.S., at least at the state level.
For example, as discussed here, in May 2024, Colorado’s legislature passed what was at the time the first comprehensive state-level AI regulatory bill in the U.S. The Colorado bill is primarily focused on the avoidance of algorithmic discrimination in the design and deployment of AI systems, particularly with respect to “consequential decisions” (relating, for example, to educational enrollment, employment, healthcare services, and insurance). The bill also addresses concerns relating to the use of AI-generated synthetic content.
Other U.S. states are adopting or considering adopting their own approaches. As discussed in a February 14, 2025 Law360 article (here), there are now two bills pending before the Virginia General Assembly to provide guidance to both public and private organizations. The Virginia bills, like the Colorado bill, address the use of AI in “consequential decisions.” The proposed Virginia bills not only address AI “developers” and “deployers,” but also address “integrators” – that is, firms or individuals who knowingly integrate an AI system “into a software application and places such software application” on the open market. The proposed bills also contain provisions specifically designed to protect consumers.
The states’ consideration and adoption of these legislative initiatives show that, notwithstanding the “light touch” rhetoric, AI is not going to be entirely unregulated. It is not going to be the Wild West, even in the U.S.
However, even with the passage of this state legislation, there is a lot about AI that is simply going to have to be worked out in practice. If even some of the many AI concerns are realized, there could be whole classes and categories of people who are harmed. I strongly suspect that, in the U.S. at least, these aggrieved persons will seek remedies from the courts for their harms. The upshot of the recent, emerging consensus among political leaders that regulatory authorities should step back from regulating AI could be that the burden of addressing AI’s risks will fall on the courts. I expect that in the months and years ahead, there will be a rapidly expanding body of AI-related case law addressing discrimination and bias; privacy issues; intellectual property issues; slander, libel, and other reputational issues; and consumer fraud issues, among many other concerns.
Within the confines of this blog’s bailiwick, I think the coming AI-related litigation will also involve extensive corporate and securities litigation. To date, much of the AI-related corporate and securities litigation has involved so-called AI-washing, in which the defendants are alleged to have overstated their firms’ AI-related capabilities or prospects. This kind of litigation will undoubtedly continue to be filed.
But going forward, I think we are going to see increasing amounts of corporate and securities litigation having to do with AI-related risks – and not just the failure to disclose AI-related risks, but also allegations relating to AI misuse or the faulty deployment of AI tools, or the failure to adapt to or address the competitive urgency of AI development. Litigants will also seek to hold corporate managers and their employers accountable for failing to avoid discrimination, privacy and IP violations, or consumer fraud in the deployment of AI tools.
On one level, these lawsuits will be like the kinds of lawsuits that have always been filed; they will simply involve AI-related allegations. But on another level, these lawsuits could involve not only novel content (that is, the AI tools themselves) but also novel areas of corporate liability exposure, as AI becomes an ever more pervasive part of the business environment.
There is no doubt, as the political leaders expressed at the recent AI summit, that the adoption of AI holds great potential. The adoption of AI also involves many risks. From both of these perspectives, there is a lot of change ahead.