Mayme Donohue

In recent months, the SEC's position with respect to AI regulation and enforcement has emerged, with important implications for reporting companies. In the following guest post, Mayme Donohue, a partner in the Hunton Andrews Kurth law firm, takes a detailed look at the SEC's emerging approach and provides specific pointers for reporting companies' AI-related disclosures. I would like to thank Mayme for allowing me to publish her article as a guest post on this site. Here is Mayme's article.

***********************

2025 was a transition year in the SEC's posture toward artificial intelligence (AI). The Commission continued to signal that "AI washing" and other AI-linked misstatements remain classic enforcement targets, even as it leaned into AI internally to modernize its own operations. The Staff's disclosure review program probed AI-related narratives through targeted comments, and the SEC clearly messaged that AI would be an examination focus for 2026.

The SEC’s 2025 Message on AI Focused on Existing Securities Principles

Across speeches and public statements in 2025, a consistent theme emerged. AI does not require a new set of investor-protection principles to trigger SEC scrutiny. Instead, the SEC repeatedly framed AI issues as variations on familiar securities law concepts:

  • Accuracy and completeness of statements, particularly around "AI-enabled" products, revenue drivers, competitive differentiation, and R&D claims;
  • Reasonable basis and substantiation for AI-related assertions, including performance, automation, "proprietary models," and the role of humans; and
  • Material risk disclosure where AI meaningfully affects operations, cybersecurity, data use, IP, regulatory exposure, or human-capital impacts.

These themes were reinforced late in 2025 when SEC leadership highlighted an Investor Advisory Committee (IAC) workstream on AI disclosure. At the IAC meeting in December, Chairman Atkins reinforced that the "principles-based rules were intentionally designed to allow companies to inform investors of material impacts of any new development, including how AI affects their financial results, how AI can be a material risk factor to an investment, and how AI is a material aspect of their business model." Additionally, Commissioner Uyeda emphasized that the SEC has proposed, at a minimum, that issuers define what they mean by "AI," describe board oversight (if any), and separately discuss AI's impacts on internal operations versus customer-facing products and services.

Practical Point for Drafting 10-K/20-F/Registration Statement Disclosures

In 2025, the SEC's posture strongly suggested that "AI" is not a safe buzzword; rather, it is a potentially material disclosure topic that should be treated accordingly within the existing framework of materiality-based disclosure principles.

The SEC Embraced AI Internally with AI Task Force and Chief AI Officer

One of the clearest 2025 developments was institutional: the SEC announced the creation of an AI Task Force to centralize and accelerate responsible AI integration across the agency, with an emphasis on governance and lifecycle management. The SEC also publicly identified its Chief AI Officer as leading the task force, underscoring that the initiative is meant to be durable and cross-divisional rather than ad hoc experimentation. Separately, the SEC maintained an Artificial Intelligence at the SEC landing page that highlights internal planning, including the SEC's 2025 AI Compliance Plan aligned with OMB AI guidance.

Why This Matters for Issuers

The Commission's internal build is not just operational; it is also a signal that AI governance and controls are becoming table stakes. As the SEC adopts AI-enabled tools, its expectations for how registrants manage similar risks (such as data provenance, human oversight, testing/validation, vendor management, and documentation) are likely to become more concrete in exams, comment letters, and enforcement.

“AI Washing” Remains an Enforcement Focus

The SEC's messaging in 2025 continued to highlight AI-related misstatements, often labeled "AI washing" (i.e., overstating or mischaracterizing AI capabilities), as a priority area. The SEC kicked off 2025 by settling charges against Presto Automation Inc., a restaurant-technology company that was listed on the Nasdaq until September 2024, for making materially false and misleading statements about critical aspects of its flagship AI product, Presto Voice.

Additionally, the SEC staff issued AI-related comment letters in 2025, including requests for more detail on development, validation, third-party dependencies, and the real operational role of AI/ML. Examples from publicly available correspondence show staff asking companies to expand and operationalize AI-related discussions, for example by describing governance policies around AI use or by revising business and risk factor disclosure to more fully address the state of AI adoption and the regulatory landscape.

Practical Tips to Avoid Inadvertent AI Washing

The risk is not only in investor decks or marketing pages. It can show up in:

  • Business descriptions that portray AI as core to differentiation without describing the actual state of deployment;
  • Risk factors that acknowledge generic AI risks but do not align with how the company truly uses data/models/vendors;
  • MD&A narratives that attribute efficiencies or margin expansion to AI without a clear basis; or
  • Forward-looking claims about “AI roadmaps” that are inconsistent with budget, staffing, vendor contracts, or product readiness.

AI Is on the List of 2026 Examination Priorities

The SEC's Division of Examinations identified AI as a focus area in its Fiscal Year 2026 Examination Priorities, emphasizing that it will be analyzing registrants' AI-related disclosures with a focus on "recent advancements in AI and will review for accuracy registrant representations regarding their AI capabilities." The SEC is not hiding the ball. Taken together with the public statements from the Chair and other commissioners and the 2025 comment letter trends, this means companies should not take their AI-related disclosures lightly.

2026 Practice Pointers for AI-Related Public Disclosures

  • Inventory and Map AI Use Cases: Identify where AI/ML is used across the business (product, operations, finance, HR, cybersecurity, compliance, legal, customer service) and separate pilot, internal-only, third-party enabled, and customer-facing uses.
  • Pressure-Test External Statements: Validate claims in earnings scripts, roadshow decks, investor presentations, web copy, and product collateral to confirm that "AI-enabled" statements reflect real functionality and not marketing shorthand.
  • Align Risk Factors to the Company's Actual AI Profile: Consider topics like data provenance and usage rights; IP risks (training data, outputs, open-source/model licensing); cyber and fraud risks (including deepfakes and social engineering); and regulatory exposure (sector-specific rules, cross-border data regimes).
  • Evaluate Governance and Disclosure Controls: Document oversight (board/committee, management steering group, escalation paths), implement vendor and model risk management (testing/validation, monitoring, change management), and treat AI-related disclosure as a disclosure-controls topic, not just "innovation messaging."