Attorneys, judges, and judicial clerks in Illinois may use artificial intelligence tools as long as they comply with the rules of professional conduct, the state supreme court announced, issuing a policy governing the use of such tools.
The policy emphasizes accountability, with judges, attorneys, and litigants being responsible for ensuring AI-generated content meets ethical and legal standards.
The policy also safeguards against bias and inaccuracy, prohibiting uses of AI that compromise due process, perpetuate bias, or mislead decision-making.
Privacy and confidentiality are paramount, and AI tools must protect sensitive data, including PII and PHI, and comply with judicial conduct standards.
The Illinois Supreme Court has issued a policy allowing the use of generative AI in courts, provided users adhere to legal and ethical requirements. Set to take effect on January 1, the policy applies to attorneys, judges, court staff, and even self-represented litigants.
“Courts must do everything they can to keep up with this rapidly changing technology,” Chief Justice Mary Jane Theis said in a press release cited by LawNext.
The policy, drafted by the Illinois Judicial Conference Task Force on Artificial Intelligence, underscores accountability. Regardless of AI’s involvement, the individuals using it remain responsible for their submissions to the court.
“All users must thoroughly review AI-generated content before submitting it in any court proceeding to ensure accuracy and compliance with legal and ethical obligations,” the policy states.
“Prior to employing any technology, including generative AI applications, users must understand both general AI capabilities and the specific tools being utilized.”
A significant focus of the policy is mitigating the risks posed by AI. It explicitly prohibits using the technology to create misleading or biased content. “The Illinois Courts will be vigilant against AI technologies that jeopardize due process, equal protection, or access to justice. Unsubstantiated or deliberately misleading AI-generated content that perpetuates bias, prejudices litigants, or obscures truth-finding and decision-making will not be tolerated,” the policy warns.
The policy also stresses the need to maintain confidentiality. Any AI applications employed in court must protect sensitive information, including personal data, health information, and security-related materials. “AI applications must not compromise sensitive information, such as confidential communications, personal identifying information (PII), protected health information (PHI), justice and public safety data, security-related information, or information conflicting with judicial conduct standards or eroding public trust,” it adds.
The policy was shaped by the Judicial Conference Task Force on Artificial Intelligence, established earlier this year. Chaired by Judge Jeffrey A. Goffinet and Court Administrator Thomas R. Jakeway, the task force included judges, attorneys, court staff, and other stakeholders.
The task force’s work reflects the court’s acknowledgment of AI’s rapid evolution. “This policy recognizes that while AI use continues to grow, our current rules are sufficient to govern its use. However, there will be challenges as these systems evolve and the court will regularly reassess those rules and this policy,” Chief Justice Theis said.
In addition to the policy, the court released a reference sheet for judges to help navigate AI’s use in judicial settings.