As I have noted in recent posts (most recently, for example, here), the proliferation of AI in many industries is changing the way business gets done. According to a new study, AI could also be changing the language companies use to report to regulators and to communicate with their investors, in ways that potentially could increase the companies’ securities class action litigation exposure.

In their quarterly and annual SEC filings, publicly traded companies publish a section called Management's Discussion and Analysis, or MD&A. Analysts have long studied companies' MD&A in order to discern important trends and the companies' possible future prospects. Academics have also studied company MD&A. For example, a 2008 study found that better-performing companies had shorter, simpler reports, while longer and more complex ones often signaled trouble.

AI has changed things. As discussed in a June 16, 2025, Wall Street Journal opinion article entitled “Quarterly Reports are Written for AI” (here), Hebrew University Business School Professor Keren Bar-Hava notes that the use of AI has changed the way analysts study companies’ MD&A. As Professor Bar-Hava notes, with recent advances in AI, “anyone can analyze dozens of MD&A sections using tools like ChatGPT.” With a single prompt, “artificial intelligence can identify trends in tone, complexity and word choice, spotting patterns that once took teams of analysts weeks of work.” But that is not all.

If, as Professor Bar-Hava observes, AI can quickly analyze company MD&A, companies can use AI to shape their MD&A. The “most innovative companies” are already doing so. The companies “know their reports are being scanned, scored, and compared by machines before any human reads them. And they are writing accordingly.” The result is a “quiet but significant shift in corporate reporting.”

Professor Bar-Hava studied 108 MD&A reports from 27 top U.S. firms during the period 2021-24. She found that, in contrast to the earlier studies noted above, in which better-performing companies tended to have simpler, shorter reports, a different pattern has emerged. She found that positive tone has steadily increased, even when financial performance declined. Words like "growth," "resilient," and "opportunity" have become more common. Terms signaling uncertainty, such as "might" or "could," have declined.
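To make the word-frequency point concrete, the kind of tone measurement Professor Bar-Hava describes can be illustrated with a toy scorer that counts positive versus uncertainty-signaling words. This is a minimal sketch only; the word lists below are hypothetical stand-ins, not the dictionaries or methodology she actually used.

```python
import re

# Hypothetical word lists for illustration, loosely echoing the terms
# the article mentions ("growth," "resilient," "might," "could").
POSITIVE = {"growth", "resilient", "opportunity", "strong", "improve"}
UNCERTAIN = {"might", "could", "may", "uncertain", "risk"}

def tone_score(text: str) -> float:
    """Net tone: (positive hits minus uncertainty hits) per 100 words."""
    words = re.findall(r"[a-z]+", text.lower())
    if not words:
        return 0.0
    pos = sum(w in POSITIVE for w in words)
    unc = sum(w in UNCERTAIN for w in words)
    return 100.0 * (pos - unc) / len(words)

print(tone_score("Resilient growth and opportunity ahead."))        # 60.0
print(tone_score("Results might decline and could be uncertain."))  # negative
```

A real tone-scoring system would of course be far more sophisticated, but even this toy version shows why a drafter who knows the scorer's word lists could raise a report's score without changing the underlying facts.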

Even “more strikingly,” she observed, the “most positive reports often came from the worst-performing firms.” Professor Bar-Hava says this is “no coincidence,” it is, rather, “strategy.” Tone, she says, has “become a tool to manage how algorithms ‘feel’ about performance.” It also “creates a risk” – that is, “the growing gap between what’s said and what’s true.”

What is happening, Professor Bar-Hava explains, is that companies are responding to “AI-induced disclosure pressure – the incentive to write in a way that performs well under algorithmic scrutiny.” The result “isn’t always more transparency.” It may be the opposite; the result may be “performative optimism crafted to influence machines, not people.”

Professor Bar-Hava identifies three levels on which the “AI-induced disclosure pressure” operates:

Exposure pressure. AI flags vague or evasive language. Companies feel compelled to sound confident, even when the outlook is uncertain.

Competitive pressure. Algorithms benchmark tone across peer firms. If a competitor sounds stronger, you look weak by comparison.

Reputational pressure. AI feeds analyst dashboards, investor platforms and news summaries. One poorly framed sentence can ripple fast.

Professor Bar-Hava rightly notes the potential implications of these developments for possible liability under the securities laws. She notes that the SEC in the past has issued rules intended to improve narrative disclosure, "encouraging clarity, conciseness, and plain English." But tone, she says, is "now a powerful driver of perception," and it remains unregulated. That, she says, is a "blind spot." AI-driven tone scores are "influencing market behavior." And if markets are being gamed, she says, "investors are misled."

Professor Bar-Hava suggests that "tone" should be treated as "a material disclosure element." We should monitor linguistic choices, as we do accounting choices, especially as "algorithms become the first line of interpretation." Otherwise, we risk "building a world where clarity is polished but meaning is lost."

The final paragraph of Professor Bar-Hava's article makes an important point, which is that corporate boards "must understand that they're writing for two audiences, people and machines." Machines, she says, don't read between the lines, "they read the lines." If we care about truth in reporting, "we must care how it sounds, not merely what it says."

Discussion

Professor Bar-Hava is right to draw out the securities law concerns arising from the use of AI in writing MD&A, and right that boards should be aware of the problems the new AI-focused methods could create.

I can easily envision that companies that experience operating problems or financial setbacks, but that have been publishing overly optimistic MD&A to satisfy AI analysis, will be subject to harsh hindsight scrutiny from plaintiffs' lawyers (who may themselves be armed with AI tools). This is not just conjecture on my part; Professor Bar-Hava expressly notes that companies' use of AI tools in writing their MD&A could lead to "a growing gap between what's said and what is true," which is the very essence of what plaintiffs' lawyers (and/or the SEC) would allege in a securities law action.

Professor Bar-Hava is also correct that these observations have important implications for corporate boards at companies that are using AI tools to write, or to “improve,” their companies’ MD&A. That is, if a company is using AI to improve the way the company’s MD&A is scored under AI-driven analysis, the board must try to ensure that there is no gap between what’s said and what’s true. Otherwise, there is a risk that investors could be misled, and that certainly could be what plaintiffs’ lawyers will allege.

There is another audience for whom Professor Bar-Hava's observations should be concerning, and that is D&O insurance underwriters. To the extent that AI-aided MD&A writing could lead to the kinds of overly optimistic statements that drive securities litigation, these corporate practices potentially could increase D&O claims frequency. There is a lot of talk these days about possible roles for AI in the insurance world; perhaps one way AI could be used is for D&O underwriters to deploy AI tools to scrutinize MD&A disclosures for signs of excessive optimism. At a minimum, D&O underwriters may want to adopt their own MD&A tone analysis, to try to discern whether the company is crafting its MD&A to satisfy AI analysis tools, perhaps at the cost of accuracy.

Special thanks to a loyal reader for sending me a copy of Professor Bar-Hava’s Journal article.