By Sara Merken
(Reuters) – U.S. personal injury law firm Morgan & Morgan sent an urgent email this month to its more than 1,000 lawyers: Artificial intelligence can invent fake case law, and using made-up information in a court filing could get you fired.
A federal judge in Wyoming had just threatened to sanction two lawyers at the firm who included fictitious case citations in a lawsuit against Walmart. One of the lawyers admitted in court filings last week that he used an AI program that “hallucinated” the cases and apologized for what he called an inadvertent mistake.
AI’s penchant for generating legal fiction in case filings has led courts around the country to question or discipline lawyers in at least seven cases over the last two years, and created a new high-tech headache for litigants and judges, Reuters found.
The Walmart case stands out because it involves a well-known law firm and a big corporate defendant. But examples like it have cropped up in all kinds of lawsuits since chatbots like ChatGPT ushered in the AI era, highlighting a new litigation risk.
A Morgan & Morgan spokesperson did not respond to a request for comment. Walmart declined to comment. The judge has not yet ruled whether to discipline the lawyers in the Walmart case, which involved an allegedly defective hoverboard toy.
Advances in generative AI are helping reduce the time lawyers need to research and draft legal briefs, leading many law firms to contract with AI vendors or build their own AI tools. Sixty-three percent of lawyers surveyed by Reuters’ parent company Thomson Reuters last year said they have used AI for work, and 12% said they use it regularly.
Generative AI, however, is known to confidently make up facts, and lawyers who use it must exercise caution, legal experts said. AI sometimes produces false information, known as “hallucinations” in the industry, because the models generate responses based on statistical patterns learned from large datasets rather than by verifying facts in those datasets.
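To illustrate the mechanism in miniature, consider a hypothetical sketch: a generator picks each next word by statistical likelihood alone, so it can assemble a citation-shaped string without ever consulting a case-law database. The tiny word-probability table below is invented for this illustration and stands in for a trained model; no real system works from a table this small.

```python
import random

# A toy illustration of why generated text can "hallucinate": each next
# word is chosen by statistical likelihood, and no step checks whether
# the finished statement is true. This probability table is invented
# for the sketch; it stands in for a trained model.
NEXT_WORD_PROBS = {
    "Smith": [("v.", 1.0)],
    "v.": [("Jones,", 0.6), ("Acme,", 0.4)],
    "Jones,": [("123", 1.0)],
    "Acme,": [("456", 1.0)],
    "123": [("F.3d", 1.0)],
    "456": [("F.3d", 1.0)],
    "F.3d": [("789", 0.5), ("101", 0.5)],
    "789": [("(2021)", 1.0)],
    "101": [("(2019)", 1.0)],
}

def generate(start: str, max_tokens: int = 8) -> str:
    """Build a citation-shaped string by sampling likely next words."""
    tokens = [start]
    for _ in range(max_tokens):
        options = NEXT_WORD_PROBS.get(tokens[-1])
        if not options:
            break
        words, weights = zip(*options)
        tokens.append(random.choices(words, weights=weights)[0])
    return " ".join(tokens)

# The output looks like a plausible case citation, e.g.
# "Smith v. Jones, 123 F.3d 789 (2021)", but nothing in the
# generation loop ever verified that such a case exists.
print(generate("Smith"))
```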
Attorney ethics rules require lawyers to vet and stand by their court filings or risk being disciplined. The American Bar Association told its 400,000 members last year that those obligations extend to “even an unintentional misstatement” produced through AI.
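The verification step those rules demand can be mechanical. Below is a minimal, hypothetical sketch of the kind of pre-filing check a firm's tooling might run, assuming a locally maintained set of verified citations; the citation pattern and case names are invented for illustration, and a real system would query an actual case-law database.

```python
import re

# Hypothetical sketch of the vetting obligation: before filing, flag any
# citation-shaped string that cannot be matched to a verified source.
# VERIFIED_CITATIONS stands in for a real case-law database lookup.
VERIFIED_CITATIONS = {
    "Smith v. Jones, 123 F.3d 789 (2021)",  # invented example entry
}

# Rough pattern for a federal reporter citation; real citation formats
# vary widely, so this regex is illustrative only.
CITATION_RE = re.compile(
    r"[A-Z][\w.]* v\. [A-Z][\w.]*, \d+ F\.\d?d \d+ \(\d{4}\)"
)

def unverified_citations(brief_text: str) -> list[str]:
    """Return citation-like strings in the brief that are not verified."""
    found = CITATION_RE.findall(brief_text)
    return [c for c in found if c not in VERIFIED_CITATIONS]

draft = ("As held in Smith v. Jones, 123 F.3d 789 (2021) and "
         "Doe v. Roe, 99 F.3d 1 (2020), the claim fails.")
for citation in unverified_citations(draft):
    print(f"UNVERIFIED: {citation} -- confirm before filing")
```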
The consequences have not changed just because legal research tools have evolved, said Andrew Perlman, dean of Suffolk University’s law school and an advocate of using AI to enhance legal work.
“When lawyers are caught using ChatGPT or any generative AI tool to create citations without checking them, that’s incompetence, just pure and simple,” Perlman said.
‘LACK OF AI LITERACY’
In one of the earliest court rebukes over attorneys’ use of AI, a federal judge in Manhattan in June 2023 fined two New York lawyers $5,000 for citing cases that were invented by AI in a personal injury case against an airline.
A different New York federal judge last year considered imposing sanctions in a case involving Michael Cohen, the former lawyer and fixer for Donald Trump, who said he mistakenly gave his own attorney fake case citations that the attorney submitted in Cohen’s criminal tax and campaign finance case.
Cohen, who used Google’s AI chatbot Bard, and his lawyer were not sanctioned, but the judge called the episode “embarrassing.”
In November, a Texas federal judge ordered a lawyer who cited nonexistent cases and quotations in a wrongful termination lawsuit to pay a $2,000 penalty and attend a course about generative AI in the legal field.
A federal judge in Minnesota last month said a misinformation expert had destroyed his credibility with the court after he admitted to unintentionally including fake, AI-generated citations in a case involving a “deepfake” parody of Vice President Kamala Harris.
Harry Surden, a law professor at the University of Colorado who studies AI and the law, said he recommends lawyers spend time learning “the strengths and weaknesses of the tools.” He said the mounting examples show a “lack of AI literacy” in the profession, but the technology itself is not the problem.
“Lawyers have always made mistakes in their filings before AI,” he said. “This is not new.”
(Reporting by Sara Merken in New York; Editing by David Bario and Aurora Ellis)