At this point, there is nearly universal agreement that artificial intelligence (AI) is (or at least will be) transformative. It is also clear that as companies struggle to adapt to the new technology, they also face a host of challenges, including disclosure and regulatory risks, and the related risk of litigation. As a result, AI poses an exceptionally difficult set of circumstances for corporate directors, as discussed in an August 14, 2024, Wall Street Journal article entitled “Why AI Risks Are Keeping Board Members Up at Night” (here). As the article makes clear, while many directors recognize the importance of getting a handle on AI and how it might affect their companies, they are struggling to find the right approach even as AI-related questions become more pervasive.
One thing the Journal article highlights is that AI is a predominant topic – if not the predominant topic – at current conferences and training sessions for corporate directors. The directors attending these sessions know they are dealing with issues that are both technical and important. And challenging. Underlying all of these issues is a persistent concern “that they could be held liable in the event AI leads to company problems.”
As the article makes clear, most directors recognize the opportunities that AI-related technologies may present for their companies. They also fully understand that with the opportunities come risks, including data security and privacy, employee use of proprietary code on AI platforms, and even so-called “hallucinations” where AI produces false or inaccurate responses. At the same time, they and company management recognize the competitive threat that the advance of AI presents; companies that shun AI “risk becoming obsolete or disrupted.” And just to make things even more complicated, AI “is a moving target,” as the technology advances and develops.
The rapid advent of AI technology “has many boards racing to catch up.” The Journal article cites a recent survey by the National Association of Corporate Directors, which found that while 95% of corporate directors said that they believed the increased adoption of AI tools would affect their businesses, 28% said AI was not regularly discussed at board meetings.
Corporate boards of course have in the past been through other technological transformations. Commentators quoted in the Journal article cite “the early days of the internet, cloud computing, and cybersecurity” as “key technological inflection points.” While there are lessons that can be drawn from these past circumstances, AI could, the article notes, “have an even broader effect.”
The article suggests several available avenues for boards struggling to get into a position to appropriately address AI for their companies. The first is to get a handle on what AI may represent for their companies. The article quotes one board member as saying that there are several very basic questions board directors should be asking, including: Who is the senior leadership at our company focused on AI? Where and how is AI being used within the company? How are risks being identified and monitored? How are our competitors using AI? What are the mechanisms for upward reporting (to senior management and to the board) on AI?
The article also quotes directors on the importance of board processes and structures to address AI. Specifically, one commentator cited the importance of having a board audit or risk committee that is focused on developing an understanding of how a company is using AI, as well as related privacy, confidentiality, and disclosure issues.
Several commentators quoted in the article also emphasized the importance for directors of staying up to date on AI risks and keeping AI issues front and center at the board level, including consulting with AI experts where appropriate.
The article ends with what arguably is a cautionary note. The article quotes an influential corporate governance figure as saying that “AI is exceedingly complex” in a way that is “putting stressors on generalist boards and their reticence to demand explanations from management.” Some board members may not be cut out for this kind of work, the commentator observed, as technology is “quickly changing business practices, and directors who are no longer active executives at companies may struggle to keep up with emergent uses.”
Discussion
As I read the Journal article, the ways that boards might best respond to the challenges and opportunities that AI may represent fall into two categories, the procedural and the substantive.
The procedural responses encompass board training; board structure (audit or risk committees); and board process, including reporting and oversight. There is also the interrelated question of board composition – that is, making sure that the board is constituted in a way that allows it to realistically grapple with these issues.
The substantive responses are more encompassing and perhaps better addressed as a series of questions: What are the opportunities and risks that AI presents for our company? How is our company now using AI, and what are the ways we could better use AI? In particular, what are the ways that our company might use AI to help provide service to customers and clients? What are the risks associated with AI for our company, and how are those risks being monitored? How are our competitors using AI, and what risks does that present for our company?
Although it was not a key focus of the Journal article, questions concerning possible AI-related board liability exposures pervade all of these issues. The article does correctly recognize that oversight and monitoring issues could be very important in this context. Companies that experience AI-related problems or disruption could well face the unwanted attention of plaintiffs’ lawyers, who, armed with the benefit of hindsight, might well scrutinize prior company actions, particularly board activity.
While there is nothing any company can do to prevent these kinds of lawsuits from being filed, companies and boards can take steps to put themselves in a position where they are better able to defend themselves. In order to be best positioned, boards will want to be able to cite minutes of board meetings showing not only that AI-related issues were actively considered, but that the board actively sought to be informed about AI activities and issues and to act on the information provided as appropriate.