As I have noted on this site in discussing artificial intelligence, among the risks that the recent rapid emergence of AI presents for organizations of all kinds are those associated with AI-related regulatory oversight and supervision. Until now, references to AI-related regulatory concerns have mostly pertained to the EU’s Artificial Intelligence Act, which the European Parliament approved in March of this year. It is now clear that AI-related regulatory concerns will likely also extend to the supervisory efforts of individual U.S. states, as reflected in the Colorado legislature’s May 8, 2024 passage of the Colorado Artificial Intelligence Act. This legislation, if signed into law by Colorado Governor Jared Polis, would make Colorado the first U.S. state to enact comprehensive AI-related regulation.
As discussed below, the Act may or may not become law, but either way it contains key signposts concerning the likely course of future AI-related regulation, as well as key AI risk management measures that well-advised companies will take to try to address their AI-related regulatory risk.
Background
In Summer 2023, a group of more than 60 state lawmakers from more than half of the U.S. states participated in a multistate artificial intelligence workgroup, both to provide interested lawmakers with information and to explore regulatory alternatives. The group’s goal, as described in a May 8, 2024 post on the Husch Blackwell firm’s ByteBack blog (here), was to create a “legislative structure that balanced allowing for continued innovation while at the same time providing for basic guardrails to protect consumers.”
After the group met multiple times over the past year, various legislator members of the workgroup began circulating draft state-level legislation, and AI-related bills were subsequently introduced in both the Connecticut and Colorado legislatures. The Connecticut version of the bill was passed by the state’s Senate, but a vote in the state’s House was blocked after the state’s governor threatened to veto the legislation. Unlike the Connecticut bill, the proposed Colorado legislation progressed through the entire legislative process, securing passage in both the House and the Senate.
The Colorado Legislation
As a general matter, the Colorado bill requires both AI developers and deployers to use reasonable care to avoid algorithmic discrimination in their AI systems. The Act applies to companies doing business in Colorado that develop or use “high-risk artificial intelligence systems,” which are defined as AI systems that make or significantly contribute to making “consequential decisions” in various specified areas.
“Algorithmic discrimination” is defined as “any condition in which the use of an artificial intelligence system results in an unlawful differential treatment or impact” that disfavors an individual or group on the basis of several specified protected classifications.
A “consequential decision” is one that has a “material legal or similarly significant effect on the provision or denial to any consumer” of educational enrollment; employment or an employment opportunity; a financial or lending service; healthcare services; housing; insurance; or a legal service. Crucially, the legislation does not limit its definition of AI to any specific technology; it covers any form of AI, not only generative AI and large language models but even technologies like the optical character readers used, for example, to scan resumes.
The legislation requires companies doing business in Colorado to disclose to the state’s attorney general “any known or reasonably foreseeable risk of algorithmic discrimination, within 90 days after the discovery or receipt of a credible report.”
In addition to its scrutiny of “algorithmic discrimination,” the bill also addresses AI-generated content. As described in the May 10, 2024 CIO Magazine article entitled “Colorado AI Legislation Further Complicates Compliance Equation” (here), “if an artificial intelligence system, including a general purpose model, generates or manipulates synthetic digital content, the bill requires the deployer of the artificial intelligence system to disclose to a consumer that the synthetic digital content has been artificially generated or manipulated.”
The bill creates a rebuttable presumption that developers used reasonable care if they comply with the statute’s requirements, including that developers make the following available to deployers: a general statement of the reasonably foreseeable uses and known harmful uses of the high-risk artificial intelligence system; documentation disclosing a high-level summary of the type of data used to train the system, as well as the known or foreseeable limitations of the system; documentation of the ways the system was evaluated for performance and for mitigation of algorithmic discrimination; and documentation and information necessary for a deployer to complete an impact assessment.
Similarly, the bill creates a rebuttable presumption that deployers used reasonable care if they take certain steps specified in the statute, including: implementing a risk-management policy and program to govern their deployment of a high-risk artificial intelligence system; completing an impact assessment for the high-risk artificial intelligence system; notifying consumers if the deployer uses a high-risk artificial intelligence system to make, or be a substantial factor in making, a consequential decision; and making available on the deployer’s website a statement summarizing information such as the types of high-risk artificial intelligence systems the deployer uses.
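For companies trying to operationalize these deployer steps, it may help to track each element of the rebuttable presumption explicitly, on a system-by-system basis. The following is a minimal illustrative sketch in Python of how a compliance team might record that status; the class and field names are my own hypothetical constructs, not terms drawn from the bill:

```python
from dataclasses import dataclass
from datetime import date
from typing import Optional

@dataclass
class DeployerComplianceRecord:
    """Hypothetical tracker for the deployer 'reasonable care' elements
    described in the Colorado bill; all names here are illustrative."""
    system_name: str
    risk_management_program_adopted: bool = False  # risk-management policy and program
    impact_assessment_completed: bool = False      # impact assessment for the high-risk system
    consumers_notified: bool = False               # notice when the system makes consequential decisions
    website_statement_published: bool = False      # public summary of high-risk systems in use
    last_reviewed: Optional[date] = None

    def presumption_elements_met(self) -> bool:
        # True only when all four statutory elements have been addressed
        return all([
            self.risk_management_program_adopted,
            self.impact_assessment_completed,
            self.consumers_notified,
            self.website_statement_published,
        ])

# Example: a hypothetical resume-screening tool still awaiting its impact assessment
record = DeployerComplianceRecord("resume-screening-model",
                                  risk_management_program_adopted=True)
print(record.presumption_elements_met())  # False
```

Even a simple tracker along these lines would make it easier to demonstrate, after the fact, that each statutory element was considered for each high-risk system.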
The bill is to be enforced exclusively by the Colorado Attorney General. There is no private right of action. The bill provides further that in any enforcement action, there is an affirmative defense if the developer, deployer or other person discovers and cures the violation.
The bill, if signed into law, would go into effect on February 1, 2026. Importantly, a companion bill creates a task force to meet and discuss whether any changes should be made to the bill before it takes effect.
Colorado Governor Jared Polis has until June 7, 2024, to decide whether he will sign the bill. According to the CIO Magazine article to which I linked above, Polis has so far declined to say whether he plans to sign the legislation into law. His office reportedly issued a statement saying that “This is a complex and emerging technology and we need to be thoughtful in how we pursue any regulations at the state level,” adding that the Governor will “review the final language of the bill when it reaches his desk.”
Discussion
It remains to be seen whether the Colorado legislation will be signed into law. Were the Colorado bill to become law, it could, as Microsoft’s ChatGPT-powered tool Copilot informed me in response to my query, “set a precedent for other states considering similar legislation.” Indeed, whether or not the Colorado bill becomes law, I think it points to a possible future in which companies developing or deploying AI are subject to a patchwork of different regulatory requirements from different jurisdictions, imposing potentially conflicting and potentially onerous compliance obligations.
Among other challenges I can see for companies seeking to comply with a regulatory regime like the one contemplated by the Colorado legislation are the difficulties that could arise from the fact that so many companies outsource many aspects of their IT functions. As a practical matter, many companies may not be aware of all of the AI-related tools and services in use throughout their IT programs (including, for example, where companies deploy third-party cloud-based services or implement software-as-a-service [SaaS] apps).
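One practical first step, whether or not this bill is enacted, is simply to inventory where AI is in use across the organization, including AI functionality embedded in vendor and SaaS products. Below is a minimal hypothetical sketch of what one entry in such an inventory might capture; the field names and categories are my own illustrative assumptions, not terms drawn from the legislation:

```python
from dataclasses import dataclass

@dataclass
class AISystemInventoryEntry:
    """One row in a hypothetical enterprise AI inventory."""
    system: str          # internal name of the tool or service
    vendor: str          # "internal" for home-grown systems
    hosting: str         # e.g., "on-prem", "third-party cloud", "SaaS"
    decision_area: str   # e.g., "employment", "lending", "none"
    consequential: bool  # does it make, or substantially factor into, consequential decisions?

# Embedded SaaS features belong in the inventory too, since the deploying
# company, not the vendor, may bear the compliance obligation.
inventory = [
    AISystemInventoryEntry("resume-scanner", "AcmeHR (SaaS)", "SaaS",
                           "employment", consequential=True),
    AISystemInventoryEntry("chat-summarizer", "internal", "third-party cloud",
                           "none", consequential=False),
]

high_risk = [entry.system for entry in inventory if entry.consequential]
print(high_risk)  # ['resume-scanner']
```

An inventory of this kind would at least give a company a starting point for assessing which of its systems might count as “high-risk” under a statute like Colorado’s.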
On the other hand, whether or not the Colorado bill becomes law, it provides a useful road map for companies seeking to identify potential areas of risk associated with their AI-related initiatives. The bill highlights the risks associated with discriminatory AI-based decision-making, and it also identifies a number of high-sensitivity areas where potential discriminatory bias could be particularly problematic.
The bill also helpfully identifies certain categories of information that AI deployers should be receiving from AI developers in order to understand and to try to manage the risks associated with deployment of AI-based services. Along the same lines, the bill also directs deployers to develop and implement AI-related risk management programs and specifies features such a program ought to incorporate. It seems to me that whether or not the Colorado bill becomes law, this type of proactive risk management process would be a prudent measure that well-advised companies should implement.
I know that for many readers one area of particular focus with respect to this legislation concerns enforcement. Many will consider it significant that the bill expressly does not provide for a private right of action, and that enforcement authority rests exclusively with the Colorado Attorney General. While the absence of a private right of action under the bill eliminates the kind of consumer litigation that we have seen, for example, with respect to the Illinois Biometric Information Privacy Act, it does not eliminate the possibility of other kinds of claims.
In particular, were this legislation to become law, I would be concerned that companies could face not only the possibility of an enforcement action by the Colorado Attorney General, but also the possibility of a follow-on action brought in the wake of an AG enforcement action. Shareholders might allege that the failure of company management to implement compliance procedures breached their duty of care, or that the company and its executives misrepresented the company’s compliance with the statute’s requirements.
Indeed, the possibility of this type of follow-on action is one of the reasons I think the AI-related regulatory burden represents one of the significant AI-related litigation risks that companies developing or deploying AI face. These risks are one more reason why, as I noted above, well-advised companies will now be proactively taking steps to try to manage the risks associated with their AI-related initiatives.
One final point is that, whether or not this Colorado bill becomes law, many other jurisdictions may seek to implement their own regulatory regimes. As one commentator quoted in the CIO Magazine article to which I linked above put it, “Various state governments, federal governments, and foreign governments are tripping over themselves to regulate AI.”