A federal judge turned to AI programs to help interpret a key legal term in a man's appeal of his more than 11-year prison sentence and, despite being "spooked" by variations in the programs' answers, concluded that the software could be a "valuable" tool.
A federal judge used AI to analyze a legal definition in an armed robbery case.
The AI experiment aimed to determine the ordinary meaning of "physically restrained," a key term in the case.
Slight variations in the generated answers mirror everyday speech patterns, suggesting AI models can accurately predict the ordinary meaning of words, according to Judge Newsom.
A U.S. federal judge recently conducted an experiment using artificial intelligence to help resolve a legal dispute. More specifically, U.S. Circuit Judge Kevin Newsom turned to AI programs like ChatGPT to help him interpret the meaning of “physically restrained” in an armed robbery case, according to a Reuters report.
The judge detailed his research in a concurrence to an opinion by the Atlanta-based 11th U.S. Circuit Court of Appeals rejecting a defendant's appeal of his 11-year sentence for an armed robbery of a Florida convenience store.
Newsom's opinion follows one he wrote earlier this year, in which he called for courts to use AI programs to help interpret words and phrases in contracts, regulations and other legal texts.
The defendant is Joseph Deleon, who was charged with and convicted of armed robbery and sentenced to 11 years in prison.
According to the case record, Deleon walked into the store, pointed a gun at the cashier, demanded that the register be emptied, and then left.
The sentencing judge said that by doing so, Deleon "physically restrained" the cashier, even though he never touched the victim.
In his appeal, Deleon argued that the judge wrongly applied an enhancement under the federal sentencing guidelines that applies when a victim is "physically restrained" to facilitate an armed robbery or an escape.
The appeals court rejected his appeal, holding that defendants physically restrain their victims when they create "circumstances allowing the persons no alternative but compliance."
Newsom agreed with the panel's ruling in a separate concurring opinion, in which he also detailed his AI experiment.
Because there is no dictionary definition of "physically restrained" as a combined phrase, Judge Newsom said he asked OpenAI's ChatGPT and two other generative AI programs for the phrase's ordinary meaning.
The "humble little mini-experiment" revealed that the programs produced answers similar to an initial one ChatGPT generated, which said the phrase "refers to the act of limiting or preventing someone’s movement by using physical force or some kind of device."
When the same question was posed repeatedly, the programs produced answers that varied slightly in wording and length, which initially "spooked" and "rocked" Judge Newsom.
The technical explanation, according to the judge, is that the slight variations "accurately reflect real people's everyday speech patterns," which suggests AI models can accurately predict the ordinary meaning of words.
The results of the experiment, variations included, led Judge Newsom to conclude that generative AI programs "may well serve a valuable auxiliary role as we aim to triangulate ordinary meaning."
The experiment also highlights the potential of generative AI to support legal interpretation. Though the judge emphasized that AI will not replace traditional legal reasoning, the experiment opens the door to deeper technological integration in courtrooms.