- The policy emphasizes accountability: judges, attorneys, and litigants are responsible for ensuring AI-generated content meets ethical and legal standards.
- Safeguards against bias and inaccuracy are built in, as the policy prohibits using AI in ways that compromise due process, perpetuate bias, or mislead decision-making.
- Privacy and confidentiality are paramount: AI tools must protect sensitive data, including PII and PHI, and comply with judicial conduct standards.
The Illinois Supreme Court has issued a policy allowing the use of generative AI in courts, provided users adhere to legal and ethical requirements. Set to take effect on January 1, the policy applies to attorneys, judges, court staff, and even self-represented litigants.
“Courts must do everything they can to keep up with this rapidly changing technology,” Chief Justice Mary Jane Theis said in a press release cited by LawNext.
The policy, drafted by the Illinois Judicial Conference Task Force on Artificial Intelligence, underscores accountability. Regardless of AI’s involvement, the individuals using it remain responsible for their submissions to the court.
“All users must thoroughly review AI-generated content before submitting it in any court proceeding to ensure accuracy and compliance with legal and ethical obligations,” the policy states.
“Prior to employing any technology, including generative AI applications, users must understand both general AI capabilities and the specific tools being utilized.”
Bias, Privacy, and Compliance
A significant focus of the policy is mitigating the risks posed by AI. It explicitly prohibits using the technology to create misleading or biased content. “The Illinois Courts will be vigilant against AI technologies that jeopardize due process, equal protection, or access to justice. Unsubstantiated or deliberately misleading AI-generated content that perpetuates bias, prejudices litigants, or obscures truth-finding and decision-making will not be tolerated,” the policy warns.
The policy also stresses the need to maintain confidentiality. Any AI application employed in court must protect sensitive information, including personal data, health information, and security-related materials. “AI applications must not compromise sensitive information, such as confidential communications, personal identifying information (PII), protected health information (PHI), justice and public safety data, security-related information, or information conflicting with judicial conduct standards or eroding public trust,” it adds.
Task Force Recommendations
The policy was shaped by the Illinois Judicial Conference Task Force on Artificial Intelligence, established earlier this year. Chaired by Judge Jeffrey A. Goffinet and Court Administrator Thomas R. Jakeway, the task force included judges, attorneys, court staff, and other stakeholders.
The task force’s work reflects the court’s acknowledgment of AI’s rapid evolution. “This policy recognizes that while AI use continues to grow, our current rules are sufficient to govern its use. However, there will be challenges as these systems evolve and the court will regularly reassess those rules and this policy,” Chief Justice Theis said.
In addition to the policy, the court released a reference sheet for judges to help navigate AI’s use in judicial settings.