Burkhard Fassbach

The increasing prevalence of artificial intelligence (AI) tools and processes presents companies with a host of opportunities and risks. These opportunities and risks in turn create challenges for corporate boards as they try to navigate the changing environment. In the following guest post, Burkhard Fassbach considers the corporate governance implications AI presents for companies and their boards. Burkhard is a D&O lawyer in private practice in Germany. I would like to thank Burkhard for allowing me to publish his article as a guest post on this site. I welcome guest post submissions from responsible authors on topics of interest to this site’s readers. Please contact me directly if you would like to submit a guest post. Here is Burkhard’s article.

**************************

The corporate landscape is undergoing a profound transformation driven by artificial intelligence (AI). For corporate boards, gaining a comprehensive understanding of AI and establishing effective oversight of it are critical necessities. Key areas of focus include evaluating the board’s current readiness, recognizing the strategic opportunities and challenges, and diligently addressing the inherent risks.

Board Preparedness for AI Governance

A recent Deloitte survey reveals that while AI is increasingly appearing on board agendas, significant gaps in understanding and engagement persist. One-third (33%) of respondents are “not satisfied” or “concerned” with the amount of time their boards devote to discussing AI. Two-thirds of boards (66%) still report “limited to no knowledge or experience” with AI. Forty percent are rethinking board composition because of AI. An insightful article about the survey can be found here.

For boards to effectively oversee AI, a strategic governance framework is essential. Maureen Bujno, Deloitte Governance Services Leader, introduces a valuable AI Governance Roadmap recommending key questions for the board to consider. Among the instructive questions related to strategy, risk, and performance:

- Does management have a position on AI’s relevance and how the organization is currently using or planning to use AI?
- How does management evaluate risks and opportunities related to AI, and how is the evaluation incorporated within the AI strategy?
- Does management have a process in place to identify and assess risks related to current AI use cases as well as those under development?
- How is management addressing identified risks, and what monitoring and reporting processes are in place to facilitate oversight?
- What metrics and KPIs should be used to measure the success of AI initiatives?
- How frequently are these KPIs reviewed to ensure they remain relevant?
- How does management monitor the AI regulatory and compliance landscape?
- What will trigger the board’s involvement in a regulatory or compliance matter?

Notably, Leo E. Strine, Jr., Of Counsel at Wachtell, Lipton, Rosen & Katz, recommends that boards use AI to improve their own thinking. He argues that although human judgment should ultimately drive corporate decision making and policy, human beings are themselves subject to cognitive bias and blind spots. His research paper on the “Governance of Corporate Use of Artificial Intelligence,” well worth reading, is available on SSRN.

Mitigating AI Risks

As AI systems become more complex, companies are increasingly exposed to reputational, financial, and legal risk from developing and deploying AI systems that do not function as intended or that yield problematic outcomes. The range of potential risks is wide and can include fostering discriminatory practices, causing products to fail, and generating false, misleading, or harmful content, according to an instructive Skadden publication on “The Role of the Board in Assessing and Managing AI Risk,” which also covers the AI regulatory landscape in the U.S. and the EU.

Cybersecurity is another key risk area. The protection of data from cybersecurity and hacking threats will be a central consideration in how AI is adopted and governed. Given the sensitivity and proprietary nature of the data fed into AI models, significant steps will need to be taken to protect against unauthorized access. The risk will be higher for large corporations with multiple connection points to suppliers, customers, and employees. Reference is made to a Stanford University working paper on “The Artificially Intelligent Boardroom.”

Of utmost relevance to boards is a suite of AI risk management tools published by the National Institute of Standards and Technology (NIST), a Department of Commerce agency leading the U.S. government’s AI risk management approach. The suite includes an AI Risk Management Framework and a risk management profile for generative AI. A complete list of NIST statements and publications on AI can be found at the NIST Trustworthy and Responsible AI Resource Center.

On the upside, AI offers considerable potential to enhance risk management processes by simplifying and transforming data gathering and analysis. According to a PwC memorandum, an enterprise risk management (ERM) process can begin to incorporate real-time data analysis and predictive analytics with AI, allowing organizations to identify emerging risks earlier and manage vulnerabilities more proactively.
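To make that idea concrete, the sketch below shows one simple form such real-time monitoring can take: a rolling statistical check that flags a risk indicator when it deviates sharply from its recent baseline. This is a minimal illustration only, not drawn from the PwC memorandum; the function name, window size, and threshold are hypothetical, and a production ERM system would rely on far richer models and data feeds.

```python
from collections import deque
import statistics

def flag_emerging_risk(readings, window=30, min_history=10, threshold=3.0):
    """Flag readings that deviate sharply from the recent rolling baseline.

    readings: iterable of (label, value) pairs, e.g. daily risk-indicator scores.
    Returns the labels whose values lie more than `threshold` standard
    deviations from the mean of the preceding `window` observations.
    """
    history = deque(maxlen=window)   # rolling baseline of recent values
    flagged = []
    for label, value in readings:
        if len(history) >= min_history:  # wait for a minimal baseline
            mean = statistics.fmean(history)
            stdev = statistics.pstdev(history)
            if stdev > 0 and abs(value - mean) / stdev > threshold:
                flagged.append(label)
        history.append(value)
    return flagged

# Hypothetical usage: 60 days of a stable supplier-risk indicator,
# followed by a sudden spike the dashboard should surface early.
if __name__ == "__main__":
    stable = [("day %d" % d, 50 + (d % 5)) for d in range(60)]
    spike = [("day 60", 95)]
    print(flag_emerging_risk(stable + spike))  # -> ['day 60']
```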

AI Insurance

The rapid integration of AI systems into various sectors presents unique challenges for traditional liability frameworks, as highlighted in a Munich Re whitepaper. Its authors, Iris Devriese and Mike Crowl, warn of potentially protracted and costly lawsuits stemming from AI-related incidents. While some AI risks may fall under existing insurance policies, significant gaps in coverage remain, leaving many AI-related exposures uninsured. The analysis also provides valuable insights into the burgeoning AI insurance market, drawing parallels with the historical emergence of cyber insurance.

AI insurance policies address a spectrum of critical and unique AI risks, including protection against claims alleging algorithmic bias and discrimination; intellectual property (IP) infringement claims directly related to AI products; defense costs for investigations into AI-specific regulatory violations; and AI product failures or technical errors. Covered incidents include, for example, a medical decision-support large language model (LLM) suggesting an incorrect treatment or a supply chain intelligence system inaccurately predicting excessive orders. Reference is made to information from Vouch, a U.S. insurance platform for startups, which can be found here.

Evolving Expectations for Board Oversight

In approaching oversight of AI, directors can rely on the same fiduciary mindset and attention to internal controls and policies as they apply to other matters, according to Holly J. Gregory of the law firm Sidley Austin. Her recommended article on “AI and the Role of the Board of Directors” can be found here.

The phrase “Noses In, Fingers Out” describes how an effective board collaborates with management to stay informed about the company’s key operational risks (“Noses In”) while staying out of operational management issues (“Fingers Out”). It is the board’s responsibility to ask insightful questions, while management is responsible for carrying out the company’s operations with high-level direction and guidance from the board. Reference is made to an article by the law firm Paul Hastings and the public relations firm Edelman Smithfield, which can be found here.

Large institutional investors, such as BlackRock, Vanguard, and State Street, have varying policies on board oversight of material risks. Some investors have already submitted shareholder proposals demanding greater transparency on the use and impact of AI technology. The share of companies disclosing board oversight of AI is also expected to grow, according to an article by Institutional Shareholder Services, which can be found here.

Caremark Doctrine and Securities Disclosures

AI-related risks create two significant sources of litigation risk: suits filed under the Delaware judiciary’s Caremark doctrine concerning a board’s duty to monitor, and securities class action litigation related to materially misleading statements and omissions in corporate communications.

Under the Caremark doctrine, directors (and, following recent judicial decisions, officers) may face personal liability for failing to monitor certain risks that cause damage to their company. The Delaware Court of Chancery held in In re Caremark International Inc. Derivative Litigation, 698 A.2d 959 (Del. Ch. 1996), that directors’ fiduciary duties include an obligation to implement and maintain an information and reporting system to detect and respond to wrongdoing or other serious legal risks facing the company. Stanford Law School Professor Curtis J. Milhaupt points out that the bar for a finding of liability was initially set extremely high: essentially, directors must do nothing in the face of red flags indicating that the company is in legal jeopardy, such that bad faith can be inferred from their “utter failure” to act. Recent cases suggest that the Delaware judiciary is softening its application of the standard, particularly with respect to a board’s failure to monitor “mission critical” risks facing the company, where the corporation’s reporting system did not ensure that the board of directors (as opposed to officers or compliance personnel) would be apprised of those risks. Further references to the case law can be found in his law working paper, well worth reading, published by the European Corporate Governance Institute (ECGI) and available on SSRN.

Under federal securities laws, “material” risks to a corporation’s business and operations must be publicly disclosed for the protection of investors. Material AI-related risks must therefore be disclosed to avoid potential securities fraud claims arising from misleading statements or omissions in corporate communications. Appropriate risk factor disclosure is also crucial to address SEC concerns about “AI-washing.” As a White & Case client alert points out, companies should ensure that they accurately address the risks related to their particular use of AI technologies.

Conclusion and Outlook

The integration of AI into corporate operations underscores the urgent need for boards to enhance their governance frameworks, ensuring robust oversight of AI strategies, risks, and ethical implications. As AI-related risks and expectations for oversight continue to evolve, boards should actively engage with C-suite management to ensure responsible adoption. Securities disclosure requirements reinforce the necessity of vigilant and informed board oversight. Looking ahead, proactive board engagement with AI-specific policies and metrics will be pivotal in mitigating risks and capitalizing on AI’s transformative potential.