Judge Kevin Newsom of the U.S. Court of Appeals for the Eleventh Circuit embraced the use of ChatGPT in judicial decision-making, highlighting its potential benefits.
Judge Kevin Newsom of the U.S. Court of Appeals for the Eleventh Circuit has openly embraced the use of generative AI in judicial decision-making.
In a detailed 32-page concurrence to a May 28 opinion, Judge Newsom explained his unconventional decision to use ChatGPT to inform his analysis of a pivotal issue in an insurance appeal.
Judge Newsom acknowledged that his approach might be considered "heresy" by traditional legal experts. He stands by it nonetheless, and it marks a significant step toward integrating artificial intelligence into the judicial process.
Unpacking the Unthinkable
Judge Newsom's concurrence opens with a provocative proposal about the "ordinary meaning" approach to legal interpretation: he urges traditionalists to consider leveraging AI-powered large language models (LLMs) to enhance their analysis of cases.
He admits that the use of ChatGPT is in no way conventional, but lays out a compelling argument for its potential benefits. The case before Judge Newsom involved an in-ground trampoline and hinged on whether a specific wooden "cap" constituted "landscaping" under the terms of an insurance policy.
Judge Newsom was faced with the challenge of interpreting these terms. He described spending hours consulting traditional dictionaries and case photos. Eventually, he decided to explore a more novel approach—consulting ChatGPT.
The AI Consultation
Judge Newsom's work with ChatGPT started with a basic question, "Is it absurd to think that ChatGPT might be able to shed some light on what the term 'landscaping' means?"
He was initially skeptical, but he was surprised by the coherence and relevance of the AI's responses. ChatGPT's first definition of "landscaping" aligned well with his own impressions, prompting him to delve deeper.
He then pushed further, asking ChatGPT critical questions about the case, including, "Is installing an in-ground trampoline 'landscaping'?" To be thorough, he also posed the question to another LLM, Google Bard, and received a similar answer.
Both LLMs suggested that the trampoline-related work (excavation, retaining wall construction, mat installation, and the addition of a decorative wooden cap) could plausibly be considered landscaping.
While Judge Newsom's appellate decision did not ultimately hinge on defining "landscaping," he found the exercise valuable. It prompted him to consider the broader implications of using LLMs in legal interpretation, and it moved him from skepticism to cautious endorsement.
The Case for LLMs in Legal Interpretation
In his detailed concurrence, Judge Newsom outlines several advantages of using LLMs for ordinary-meaning analyses in legal contexts:
LLMs like ChatGPT process and generate human-like text based on vast datasets that include a great deal of ordinary language, making them well suited to identifying the common meaning of terms.
LLMs can grasp the context in which a term is used, allowing for more nuanced interpretations than traditional methods.
Because LLMs are readily available to legal professionals and the general public alike, they make advanced interpretive aids accessible to anyone and help democratize the legal process.
LLMs draw on large bodies of real-world usage data, allowing them to offer more empirical, data-driven insights than other interpretive methods.
Despite these advantages, Newsom does not overlook the potential pitfalls of LLMs. He warns about the risk of AI hallucinations, where a model generates inaccurate or nonsensical information, and acknowledges concerns about the exclusion of offline speech, manipulation of AI outputs, and the broader dystopian implications of over-reliance on AI.
A Cautious Optimism
Judge Newsom's call for caution when using AI for legal work is echoed by other legal professionals. U.S. District Judge Paul Engelmayer, for example, harshly criticized a law firm's reliance on ChatGPT for fee estimation in a recent case, underscoring the ongoing skepticism within the judiciary.
Other judges, such as Chief Magistrate Judge Helen Adams and Judge Xavier Rodriguez, advocate a balanced approach: they support using LLMs for initial drafting of legal documents and for research, but only with safeguards in place to ensure the results are factual and accurate.
The use of technology within the legal ecosystem is continuously evolving, and Judge Newsom's account reflects that. His cautious endorsement of AI is one for legal professionals across the country to consider, and it is likely to be followed by growing interest in how generative AI can support their everyday work.