Just about every company these days is grappling with the arrival of Artificial Intelligence (AI). But what should companies be telling their investors about the impact of AI deployment on their operations and financial results? At a recent meeting, the SEC’s Investor Advisory Committee recommended that the agency issue guidance requiring issuers to provide disclosures about AI’s impact on their companies. As discussed below, while the committee’s recommendations may be unlikely to cause the agency to issue AI disclosure rules or guidance, they do provide a useful framework for considering corporate AI-related disclosure best practices.

Background

The SEC’s Investor Advisory Committee was created by the Dodd-Frank Act to provide findings and recommendations to the SEC. The Committee meets quarterly, with the most recent meeting held at the SEC on December 4, 2025. At the meeting, the Committee discussed a variety of topics, including questions concerning AI-related disclosures. Specifically, the Committee considered a discussion draft prepared in advance of the meeting. The draft presented the Committee’s findings with respect to current AI-related disclosures and set out recommendations for the agency to adopt in order to provide “comprehensive guidance” from the SEC on AI disclosures.

The Committee’s Recommendations

In presenting the case for agency guidance on AI disclosures, the Committee’s discussion draft noted that while AI poses a variety of risks and opportunities, there is “a lack of consistency” in company disclosures about AI that “can be problematic for investors seeking clear and comparable information,” making it “challenging for investors to assess and compare the risks and opportunities effectively.”

The discussion draft recommended that the Commission issue guidance for the benefit of issuers that have “struggled with providing consistent disclosures to investors.” Specifically, the discussion draft recommended that the Commission:

  1. Require issuers to adopt a definition of the term “Artificial Intelligence”;
  2. Require issuers to disclose board oversight mechanisms, if any, for overseeing the deployment of AI at the company; and
  3. Require issuers to report separately on how they are deploying AI and, if material, the effects of AI deployment on (a) internal business operations, and (b) consumer-facing matters.

The Commissioners’ Response

SEC Chair Paul Atkins and Commissioner Hester Peirce both delivered remarks at the committee meeting, and in their remarks both addressed the committee’s recommendations about AI disclosures. Both of them signaled pretty strongly that they do not see the need for the Commission to issue specific guidance on AI disclosures.

In his remarks, Atkins noted that “with every new development, the question for the SEC to consider is not necessarily its novelty, but whether our existing disclosure framework sufficiently provides investors with material information about it.” Atkins said that “I believe that investors can rely on our current principles-based rules to inform them of how AI impacts companies,” adding that “we should resist the temptation to adopt prescriptive disclosure requirements for every ‘new thing’ that affects a business.” He also said the agency’s principles-based rules “have stood the test of time because they rely on the fundamental principles of materiality rather than on ever-expanding checklists.”

In her remarks, Peirce also focused on what she called “the evergreen tug-of-war between principles-based rules and prescriptive rules responding to the hottest issue du jour.” She also questioned whether the differences in AI-related disclosures among reporting companies are a problem, given the differences between the companies themselves and the ways they are using AI. She asked “if company and industry adoption of AI is not entirely homogenous why should our disclosure regime force conformity?” She was also concerned that prescriptive rules would shape corporate behavior; for example, she said, “a requirement to disclose board oversight, if any, of AI implementation and utilization could nudge companies to set up a superfluous AI oversight function.”

Notwithstanding the Commissioners’ skepticism of the Committee’s AI-related disclosure recommendations, the Committee voted to recommend that the Commission provide the suggested guidance.

Discussion

For readers who saw my post last week about Atkins’s speech to the NYSE entitled “Revitalizing America’s Markets at 250,” his comments about the Investor Advisory Committee’s proposed AI-related disclosure guidelines are no surprise. In the prior speech, Atkins decried what he called the history in the recent past of the agency’s “regulatory creep” as a result of which “rules have multiplied faster than the problems they were intended to solve.” He blamed this past proliferation of rules for undermining America’s competitiveness. He also proposed as a general principle with respect to the agency’s disclosure requirements that “the SEC must root its disclosure requirements in the concept of financial materiality.”

These earlier comments foreshadowed the skepticism Atkins evinced in his remarks at the Investor Advisory Committee meeting about the proposed AI-disclosure requirements. New prescriptive rules are not likely to be a thing at the Commission with Atkins in the Chair. Indeed, a Law360 article about the committee meeting quotes one committee member as explaining his vote against the committee’s adoption of the recommendations by saying “I’m concerned that these recommendations are headed in the exact opposite direction as the Commission.”

It does seem unlikely that the current Commission will adopt the Investor Advisory Committee’s recommendations. However, I do not think the Committee’s recommendations – or its actions in voting in favor of the recommendations – are superfluous. Even if the Commission does not adopt guidance consistent with the recommendations, the committee’s recommendations do provide a useful perspective on the question of what investors should expect from reporting companies about AI. Think of the committee’s recommendations as a framework for AI disclosure best practices, even if they never become actual rules.

To be specific, I think it would be a good idea for a company that proposes to communicate with its investors about its adoption of AI to say expressly what it means when it refers to “artificial intelligence.” I also think it is a good idea for companies to tell investors what their boards are doing to oversee AI – among other things, investors would want to know if the board is not doing anything to oversee AI. Finally, I think it would be a good idea, as the committee recommends, for companies to report separately on how they are deploying AI, and on the effects of deployment on internal business operations and consumer-facing matters.

While I do think the committee’s proposed disclosure framework is both interesting and useful, I would fault the proposal for not going far enough in at least one respect. I think the recommendations would be even more useful if they expressly called not only for disclosure concerning the effects of AI deployment on business operations, but also for disclosure of the risks associated with that deployment.

Moreover, I think the recommendations would be even more useful if the disclosures concerning AI addressed not only the effects on operations of the reporting company’s own deployment of AI, but also, if material, the effects on operations of AI deployment by others – including competitors, customers, suppliers, and even regulators.

I emphasize these last two points about risk and deployment by others because I think that going forward we are going to see a host of investor actions in which the claimants allege that the company failed to disclose the risks associated with AI deployment. I refer readers to my recent post (here) about the securities class action lawsuit filed against Reddit, in which the plaintiffs allege the company soft-pedaled the impact on the company from AI deployment. The Reddit lawsuit also provides an example of a case where it was not the company’s own deployment of AI that caused the company’s problems, it was the deployment by a supplier (Google) that disrupted the company.

The Reddit case highlights why I think best practices related to AI disclosures should aim specifically at disclosing the risks associated with the deployment of AI, with an eye not only to the reporting company’s own AI deployment but also to AI deployment by others that has affected or could affect the company’s operations.

In any event, even if the Commission ultimately disregards the Committee’s AI-related disclosure recommendations, we have by no means heard the last about AI-related disclosures. I think that, one way or the other, we are going to be hearing a lot about AI-related disclosures in the weeks and months ahead.