AI vendors can be held directly liable for violations of the California Fair Employment and Housing Act.
The California Supreme Court recently ruled that third-party entities can be held directly liable for violations of the California Fair Employment and Housing Act (FEHA), broadening accountability under California's anti-discrimination laws. The ruling creates new avenues of liability for third-party entities involved in the hiring process, and potentially for the employers who engage them.
The case originated from a class action filed by employees alleging they were asked inappropriate medical questions during job screenings conducted by third-party medical providers. The court concluded that a business-entity agent with at least five employees can be held directly liable for violating FEHA's anti-discrimination provisions, provided it carries out FEHA-regulated activities on behalf of the employer.
Notably, the ruling does not allow employers to insulate themselves from liability by delegating their FEHA obligations to these agents; rather, it expands the list of parties that can be held liable under FEHA. California employers must now ensure FEHA compliance not only in their in-house practices but also in the practices of the entities they engage during the hiring process.
“[W]e recognize as a necessary minimum that, consistent with the FEHA’s language and purpose, a business-entity agent can bear direct FEHA liability only when it carries out FEHA-regulated activities on behalf of an employer,” Justice Martin Jenkins wrote for the court.
The ruling carries significant implications for California employers and for third-party entities involved in hiring. Because third-party agents can now face direct liability, employers have strong reason to carefully select and oversee those entities to ensure they adhere to FEHA standards. While the ruling does not establish a blanket requirement that employers monitor their third-party agents, it does introduce the possibility that plaintiffs' lawyers will argue for a heightened duty of oversight.
“The major impact of this is with outsourcing, and that’s what a lot of companies are doing for a lot of their employment-related functions,” said attorney Randy Erlewine of the San Francisco law firm Phillips, Erlewine, Given & Carlin. “Everyone is being hired these days online, and you’ve got these major third-party platforms that may have discriminatory biases or metrics, and the employer may not know it is discriminatory.”
As such, the ruling underscores the growing importance of thoroughly vetting and monitoring third-party agents to guard against legal disputes and maintain FEHA compliance. It also places a heavier burden on third-party agents themselves to comply with anti-discrimination laws.
The ruling could also drive positive changes in how AI is used in hiring. Businesses may scrutinize their algorithms more closely and invest in bias testing and mitigation, helping ensure that AI-driven screening is fair and equitable; one common form such testing takes is sketched below.
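As a purely illustrative sketch of what "bias testing" can look like in practice, the snippet below applies the EEOC's "four-fifths" adverse-impact rule to hypothetical screening data. The group names, counts, and function names are all invented for this example; real audits involve far more than this single metric.

```python
# Hypothetical adverse-impact check based on the EEOC "four-fifths" rule:
# a group whose selection rate falls below 80% of the highest group's rate
# is a common red flag warranting closer review of the screening tool.

def selection_rates(outcomes: dict[str, tuple[int, int]]) -> dict[str, float]:
    """Return each group's selection rate; outcomes[group] = (selected, applicants)."""
    return {group: selected / applicants
            for group, (selected, applicants) in outcomes.items()}

def adverse_impact_ratios(outcomes: dict[str, tuple[int, int]]) -> dict[str, float]:
    """Return the ratio of each group's selection rate to the highest group's rate."""
    rates = selection_rates(outcomes)
    best = max(rates.values())
    return {group: rate / best for group, rate in rates.items()}

if __name__ == "__main__":
    # Invented example counts: (candidates advanced by the tool, total applicants).
    screening_results = {
        "group_a": (48, 120),  # 40% selection rate
        "group_b": (30, 100),  # 30% selection rate
    }
    for group, ratio in adverse_impact_ratios(screening_results).items():
        flag = "review" if ratio < 0.8 else "ok"
        print(f"{group}: impact ratio {ratio:.2f} ({flag})")
```

In this made-up example, group_b's impact ratio of 0.75 falls below the 0.8 threshold, which would prompt a closer look at the tool rather than serve as proof of unlawful discrimination.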
The ruling comes at a time when state legislators and policymakers are struggling to determine the government's role in regulating automated tools that use algorithms to make decisions about employment, health care, and insurance eligibility.