

In the following guest post, Evan Bundschuh and Burkhard Fassbach share and analyze their research into the AI-related Form 10-K disclosures of 26 U.S.-listed public companies, in order to assess the level and significance of public companies’ disclosure statements pertaining to artificial intelligence. Evan is Vice President at GB&A, a retail insurance brokerage in New York, and Burkhard is a D&O lawyer in private practice in Germany. My thanks to Evan and Burkhard for allowing us to publish their article on this site. Here is their article.
**************************
Introduction: The 2025 10-K Season as a Pivotal Moment for AI Transparency
The fiscal year 2025 represents the definitive watershed moment for corporate transparency regarding artificial intelligence. As AI transitioned from an experimental capability to a core operational engine, the Securities and Exchange Commission (SEC) 10-K filings for the year ending December 31, 2025, offer the first comprehensive mapping of the global AI risk landscape. These filings reflect a systemic shift in the corporate narrative: AI is no longer merely a strategic opportunity highlighted in investor decks; it is now a documented material risk managed at the highest levels of corporate governance.
The trajectory of cyber disclosures may also provide some insight into where AI disclosures could be headed. While initial cyber disclosures focused mainly on security incidents, they quickly matured in breadth, transparency, and specificity, extending well beyond material incidents to include regulatory compliance, emerging threats, and the implementation of formal policies and procedures. AI disclosures are on track to follow a similar path. What began as disclosures focused on the integration of AI and its capabilities is developing into a much more mature, multi-faceted governance framework, and the trajectory will continue upward as corporate officers better understand the full range of AI-specific risks and investors expect greater transparency.
For securities litigation attorneys and D&O insurance professionals, these disclosures provide the evidentiary baseline required to assess fiduciary oversight and the potential for “Event-Driven” litigation in the years ahead.
Cross-Sector Analysis: The Expansion of AI Risk Profiles
The 2025 filing season demonstrates that AI risk has successfully permeated every major industrial sector. Our analysis of 26 unique companies across 10 distinct sectors (totaling 27 sectoral entries as Tesla’s profile spans both Technology and Automotive) underscores that AI-related materiality is now a universal concern.
The Technology and Automotive Pioneers: Leaders such as Alphabet, Amazon, Meta, and Netflix have fundamentally shifted their disclosures from “innovation-led growth” to “material operational risk.” These filings highlight the immense capital expenditures required to maintain AI parity and the potential for model failure to disrupt core revenue streams. In the automotive sector, Ford and General Motors, alongside Tesla, emphasize the liability risks associated with autonomous systems and the reliance on complex, AI-driven supply chain logistics.
Financial Services: Banking and Insurance: Within the financial sphere, JPMorgan Chase, Citigroup, Wells Fargo, and Bank of America have introduced granular disclosures regarding AI in credit scoring, fraud detection, and algorithmic trading. These filings are designed to preempt allegations of “black box” decision-making. Similarly, insurance heavyweights AIG and Berkshire Hathaway are now documenting risks associated with AI-driven underwriting models and the potential for systemic loss events triggered by automated financial correlations—what underwriters are increasingly viewing as “Interrelated Wrongful Acts.”
Healthcare and Life Sciences: Healthcare giant UnitedHealth Group and pharmaceutical leaders Pfizer, Eli Lilly, and Merck have identified AI as a critical, yet volatile, component of drug discovery and patient care delivery. Their 2025 filings emphasize the risks of clinical reliance on unproven algorithms and the legal vulnerabilities inherent in AI-assisted diagnostics, where a failure could trigger catastrophic professional liability claims.
Consumer, Infrastructure, and Energy: A sophisticated wave of disclosures has emerged in the Food & Beverage (Coca-Cola, PepsiCo, McDonald’s) and Telecommunications (AT&T, Verizon) sectors, focusing on supply chain automation and the ethical implications of AI-driven consumer targeting. In the Energy sector, Exxon Mobil and Chevron have integrated AI risk factors not just for efficiency, but as a mechanism to mitigate Environmental, Social, and Governance (ESG) underperformance and ensure operational safety in high-stakes infrastructure.
Industrial and Chemical Leaders: In the Industrial and Chemical sectors, 3M Company and Dow Inc. have introduced disclosures concerning AI’s role in material science R&D. These filings highlight the risk that AI-generated chemical formulations may carry unforeseen long-term liabilities or environmental impacts, necessitating robust oversight of the “Inquiry” prong of the board’s fiduciary duties.
Core Thematic Clusters in AI Disclosures
The 2025 filings reveal several critical themes, indicating which disclosures are currently being made and which may need to be considered going forward, providing a “ready-made” roadmap for potential plaintiffs.
Implementation and Integration of AI: The implementation of AI systems, and the risks associated with it, is the core driver of disclosures. Companies are increasingly disclosing the risk of discriminatory outcomes in automated systems. From a litigation perspective, these disclosures acknowledge that failure to mitigate bias can lead to severe reputational damage and regulatory enforcement actions that impair corporate value. In addition to hallucinations, bias, and other AI failures, companies also need to consider disclosing any financial impact posed by the implementation and integration of their AI infrastructure, such as increased data processing costs, potential cannibalization of other products, and even AI-related workforce reductions, as evidenced by disclosures made by companies like IBM, Salesforce, and Pinterest following large layoffs.
Cybersecurity and Data Privacy: The integration of Large Language Models (LLMs) has introduced new vulnerabilities. Filings highlight the risk of proprietary data leakage through the use of third-party AI tools and the threat of AI-enhanced cyber-attacks that could compromise sensitive corporate data. Unlawful or unrecognized collection of data is another emerging risk. Failing to disclose that LLMs collect data in order to train their models can subject companies to litigation and enforcement alike. The third-party governance required to mitigate such risk can also prove challenging. While the direct risk posed by such wrongful acts should be covered by an appropriate cyber policy, such claims can spawn follow-on exposure, creating D&O liability in the form of securities and derivative litigation and enforcement actions.
Intellectual Property (IP): Challenges regarding the ownership of AI-generated content and the “fair use” of protected data for training are now categorized as material risks. Companies are disclosing the high degree of uncertainty surrounding pending training-data litigation, which could strip them of key competitive advantages.
Regulatory Compliance: The current regulatory environment is a patchwork of AI-specific regulations and amendments to existing cybersecurity, discrimination, and consumer protection laws, among others, all being developed and implemented at a rapid pace and varying greatly at the state level, creating considerable compliance challenges for corporate officers. To demonstrate the pace at which AI legislation is being introduced and passed: in 2023, fewer than 200 such bills were introduced, with only a minority of those passing; as of today, that number has grown to over 1,500 bills introduced, with 100 AI regulations passed in 2025 alone. Reference is made to this report. Compliance with emerging frameworks is cited as a risk that could result in substantial fines and operational injunctions.
Policies, Procedures, and Third-Party Governance: According to a recent Glass Lewis report assessing the AI governance of S&P 100 companies, “65% of US investors believe companies should provide clear disclosure of the board’s oversight of AI governance issues and AI ethics,” yet only 28% of those companies were reported to have an established AI policy with board-level oversight. The Glass Lewis report can be found here. This percentage is likely even smaller for companies with lower market caps. These figures indicate a significant disconnect from investor expectations, and stand in stark contrast to the robust cybersecurity governance frameworks currently maintained by the vast majority of public companies.
Implications for D&O Liability and Board Oversight
For the D&O community, the 2025 10-Ks are a double-edged sword.
Securities Litigation Risks: The specificity of these 10-K risk factors raises the bar for defending against “AI-washing” allegations. If a company fails to deliver on its touted AI capabilities, overstates the extent to which services are performed by AI, falsely advertises its AI, or if a disclosed risk (such as a bias event) manifests after being dismissed as “immaterial” in previous statements, the company and its officers face a high probability of Rule 10b-5 class action lawsuits. Recently, some companies have even been accused of falsely attributing mass layoffs to AI, which can also be considered a form of AI-washing. The Guardian article can be found here. These detailed risk factors make it significantly harder for defendants to argue a lack of scienter when trying to survive a motion to dismiss.
Caremark Duties and Board Oversight: Under the Caremark doctrine, boards have a fiduciary duty to monitor “critical” business risks. By moving AI risks into the 10-K, companies have formally conceded that AI is a critical operational component. Boards that fail to implement “reasonable” reporting systems for AI performance and safety are now vulnerable to derivative claims for breach of the duty of oversight.
Individual Accountability: While there is currently no specific SEC rule requiring the designation of individuals to oversee AI risk, such disclosure requirements may emerge as regulatory scrutiny increases. Similar requirements have previously been proposed in California’s SB468 and have already been adopted in the EU’s AI Act, which imposes “designated party” requirements on certain providers. Much of the increased liability related to the integration of AI systems will ultimately fall on the shoulders of companies’ CISOs and technology officers (particularly in the context of privacy, security, and data collection), who will now need to navigate a more complex risk landscape, including more complicated third-party governance and emerging internal risks such as employees’ usage of AI systems. Individual liability is also likely to arise from regulatory enforcement, as regulators target individuals responsible for AI failures and misleading statements. Last year, in a landmark enforcement action, the DOJ and SEC jointly brought their first AI-washing action against the CEO of Nate Inc., over allegations that the company misrepresented its AI capabilities to its investors. The press release can be found here.
Determining Materiality: Determining materiality in the context of cyber disclosures created considerable challenges for public companies, particularly while SEC guidance was in its infancy and still developing. Given the operational impact AI has on organizations, and considering the complexity of the risks associated with AI systems, companies are likely to encounter even greater challenges when assessing materiality for AI-related disclosures. When does a known LLM error meet the threshold of materiality? At what point do inflated AI expenses, or a competitor’s adoption of AI that could absorb a company’s market share, become material enough to require disclosure?
Insurance Underwriting Impact: D&O underwriters are using these expanded risk factors to distinguish between companies with mature AI governance and those relying on boilerplate language. The presence of localized, specific AI risk factors is becoming a primary differentiator for Side A/B/C coverage pricing. Furthermore, underwriters are increasingly focused on “Interrelated Wrongful Acts” to ensure that a single AI failure does not trigger multiple policy years or different coverage towers.
Actionable Insights for Boards and Underwriters
Enforce Disclosure Precision: To prevent “AI-washing” allegations, internal legal teams must ensure that every AI claim in the 10-K is backed by demonstrable technical capabilities and documented internal audits.
Mandate Board-Level AI Oversight: Boards should formalize AI governance through dedicated committees that include External AI Ethics Counsel. This bolsters the defense against Caremark claims by proving a robust information-gathering system was in place.
Adopt Holistic Governance Frameworks: Litigation risk is best mitigated by adopting standardized AI ethical guidelines and rigorous technical testing protocols that are applied consistently across all global business units.
Execute Integrated Data Audits: Companies must conduct regular audits of datasets used for AI training. This explicitly links the mitigation of IP infringement risks to the reduction of algorithmic bias, creating a cohesive narrative of proactive risk management.
Dynamic Regulatory Mapping: Establish a real-time compliance map to stay ahead of AI legislative shifts. For underwriters, the absence of such a map should be viewed as a significant red flag during the renewal process.
Careful Policy Coordination and D&O Audits: From an insurance policy perspective, public and private companies alike will need to perform refreshed insurance coverage audits on all of their policies, specifically with AI in mind. While still rare, some D&O insurers are attaching AI-specific exclusions. Equally problematic, however, are any “silent” exclusions that could preclude coverage, such as overly broad professional services exclusions, IP exclusions, contract exclusions, product defect exclusions, bodily injury and property damage exclusions (that also preclude coverage for invasion of privacy), and cyber-specific exclusions. CISOs, any designated officers, and those serving on AI oversight committees should ensure they qualify as insured persons under the policy’s definitions, and corporate officers should reassess their Side A limits, accounting for the increased potential for new derivative litigation.
Lessons for the Private Sphere: Applying 10-K Insights to Non-Public Entities
While non-public companies are not bound by the same SEC reporting mandates as the 26 market leaders identified in recent filings, the 2025 10-K disclosures serve as a critical risk-mapping tool for private firms. The transparency provided by giants across sectors such as Technology (Alphabet, Meta), Banking (JPMorgan Chase, Citigroup), and Healthcare (UnitedHealth Group) offers a blueprint for identifying “blind spots” in AI implementation.
Regulatory Trickle-Down and Standard Setting: Private companies often operate within the supply chains or service ecosystems of the large public corporations listed in the sources. For instance, the rigorous AI risk frameworks disclosed by the banking sector—including Wells Fargo and Bank of America—frequently become the de facto industry standard. Private firms should anticipate that D&O liability standards will eventually mirror the “best practices” established by these public leaders, particularly regarding algorithmic transparency and data privacy.
Cybersecurity as a Shared Burden: The 10-K filings for 2025 emphasize that AI is a “double-edged sword” in cybersecurity. Companies like AT&T and Verizon highlight the risks of AI-driven social engineering and sophisticated phishing. For non-public companies, the lesson is clear: AI-related cyber-readiness is no longer optional. Directors and officers of private firms can be held liable for failing to oversee the implementation of AI-resistant security protocols that are now considered standard in the public sphere. As previously mentioned, improper data collection and failures to obtain consent are also driving litigation. The recent class action lawsuit Lisota v. Heartland Dental is one example of the risks posed by such failures. In the complaint, the plaintiff alleges that Heartland Dental and RingCentral’s AI-enabled phone answering system, used by her dental practice, collected patient data without consent, in violation of the Federal Wiretap Act. Similarly, Runway AI, Perplexity AI, and Clearview AI are among the growing list of private companies that have faced litigation alleging unlawful data collection.
Disclosure Failures and Operational and Competitive Displacement: The disclosures from the Automotive (Ford, GM) and Chemical (3M, Dow) sectors illustrate how AI is fundamentally altering production and R&D. Non-public entities in these sectors must recognize that “AI-washing” is not their only risk; the failure to adopt AI—a risk frequently cited by Amazon and Netflix—can be just as damaging. Private boards must document their AI strategy to protect themselves against claims that they breached their duty of care by ignoring transformative technological shifts.
Ethical and Biometric Risks: With Meta Platforms and UnitedHealth navigating complex landscapes of user data and health algorithms, private firms in the consumer and medical space should use these 10-Ks to benchmark their own ethical AI guidelines. The litigation risks regarding bias and discrimination, which public companies must now disclose in detail, apply equally to private organizations and can lead to significant D&O claims if not proactively managed.
In summary, the 2025 10-K filings are not just a regulatory hurdle for the “world-famous” public elite; they are a forward-looking risk registry for the entire corporate world.
Reference Appendix
The following 26 unique companies provided the foundational data for this analysis through their Form 10-K filings for the fiscal year ended December 31, 2025.
Technology
- Alphabet Inc.
- Amazon.com, Inc.
- Meta Platforms, Inc.
- Netflix, Inc.
- Tesla, Inc. (Also listed under Automotive)
Automotive
- Ford Motor Company
- General Motors Company
- Tesla, Inc. (Also listed under Technology)
Banking
- JPMorgan Chase & Co.
- Citigroup Inc.
- Wells Fargo & Company
- Bank of America Corporation
Insurance
- American International Group, Inc. (AIG)
- Berkshire Hathaway Inc.
Healthcare / Health Insurance
- UnitedHealth Group Incorporated
Pharmaceutical
- Pfizer Inc.
- Eli Lilly and Company
- Merck & Co., Inc.
Food & Beverage
- The Coca-Cola Company
- PepsiCo, Inc.
- McDonald's Corporation
Telecommunications
- AT&T Inc.
- Verizon Communications Inc.
Energy / Oil & Gas
- Exxon Mobil Corporation
- Chevron Corporation
Chemical
- 3M Company
- Dow Inc.
______________________________________
Evan Bundschuh is Vice President at GB&A, a retail insurance brokerage in New York with a special focus on management liability risks including D&O, E&O, and cyber insurance programs.
Burkhard Fassbach is a D&O lawyer in private practice in Germany.