SB 205 addresses bias in automated decision-making tools, sets a precedent for future legislation, and imposes new obligations on companies.
Amid growing concerns over artificial intelligence (AI) bias, Colorado has become the first state to enact a comprehensive law requiring employers to mitigate bias in automated decision-making tools. The law, known as SB 205, is scheduled to take effect in February 2026, but not before a period of rulemaking and potential legislative changes requested by Gov. Jared Polis.
This pioneering legislation places Colorado at the forefront of state, federal, and international regulatory efforts to curb discrimination facilitated by AI and algorithmic decision tools. The law requires annual impact assessments to detect unintentional bias and obligates companies to notify consumers negatively affected by AI-assisted decisions and offer them a formal appeal process.
Michelle Duncan, an employment lawyer with Jackson Lewis PC, highlights the challenges employers may face in complying with the notice and appeal requirements, particularly given the volume of applicants who could be screened out by AI tools. Meanwhile, Matt Scherer of the Center for Democracy & Technology has expressed concern that the rights to challenge an employer's decision are too vaguely written to be practically useful to workers.
Colorado Attorney General Phil Weiser, in a recent interview, emphasized that while it's too early to determine how the law will be interpreted, his office will seek input from various stakeholders and focus enforcement on flagrant violations.
The law's broad scope extends beyond employment to AI use in sectors such as education, health care, housing, and lending. It imposes obligations on both technology developers and the companies that deploy the automated tools, including upfront notice of AI usage and an option for consumers to opt out in favor of human decision-making.
By comparison, New York City's law, which took effect in July 2023, is limited to AI tools used in hiring and promotion decisions within the employment context. Rachel V. See, an employment lawyer at Seyfarth Shaw LLP, points out that Colorado's law could cover a wider range of employment-related decisions and requires AI developers to report any known risks of discrimination.
Enforcement of the Colorado law will fall to the attorney general's office; there is no private right of action. Michael Schulman of Morrison & Foerster LLP suggests that this enforcement approach may be lenient at first as employers navigate their legal obligations amid rapidly evolving AI technologies.
Overall, Colorado's law sets a precedent and could influence the legislative landscape as other states, including California and New York, consider their own measures to address AI bias in employment and beyond.