- A federal judge in Minnesota dismissed testimony from Stanford AI expert Jeff Hancock after finding fabricated citations in his declaration.
- Hancock admitted to relying on ChatGPT, which "hallucinated" fake sources in the expert filing.
- The case involves a challenge to Minnesota’s deepfake law, which bans using AI-generated content to influence elections.
A Minnesota federal judge has excluded an expert’s testimony in a case challenging the state’s deepfake law after discovering that the filing contained fake citations generated by artificial intelligence, Reuters reports.
The ruling came in a lawsuit brought by Republican lawmaker Mary Franson and satirist Christopher Kohls, who are contesting the constitutionality of Minnesota’s 2023 deepfake statute. The law prohibits using AI-generated content to sway elections.
Jeff Hancock, a Stanford University communications professor specializing in misinformation, had submitted a declaration supporting Minnesota Attorney General Keith Ellison’s defense of the law.
However, Hancock had relied on ChatGPT while drafting the filing, and the chatbot generated two citations to non-existent articles.
“Irony” in Expert Testimony
U.S. District Judge Laura Provinzino highlighted the irony of the situation, noting that Hancock, an expert on AI dangers, fell victim to the very technology he critiques. Although the judge did not believe Hancock intentionally fabricated sources, she ruled that his credibility had been “shattered.”
Provinzino excluded Hancock’s testimony from consideration in deciding whether to issue a preliminary injunction against the deepfake law and barred Ellison’s office from submitting revised testimony.
A lawyer for the plaintiffs, Upper Midwest Law Center president Doug Seaton, said in a statement that "AG Ellison’s ‘expert’s’ opinion has proven to be all AI, and the Judge is correct not to allow him to cover his tracks by changing his flawed report."
Ellison’s office has not yet commented on the court’s ruling.
Deepfake Law Under Fire
The Minnesota deepfake law is part of a broader legal debate over regulating AI-generated content in elections. The plaintiffs argue that the law infringes on free speech protections under the First Amendment.
Kohls, known online as “Mr. Reagan,” created a satirical AI-generated video parodying Vice President Kamala Harris. The video included AI narration mimicking Harris and was shared on social media by various public figures, further fueling debate about AI’s role in political discourse.
Broader Legal Challenges
The Minnesota case, Christopher Kohls et al. v. Keith Ellison et al., is part of a growing body of litigation grappling with the intersection of AI, free speech, and election integrity. Kohls and others are also challenging similar AI regulations in California, intensifying the legal scrutiny of deepfake laws across the United States.