Lawyer apologises for using fake court citations from ChatGPT


Artificial intelligence programmes have seen a boost in popularity since the launch of ChatGPT. But it seems we won’t be using AI to replace humans just yet.

This is something one attorney learned the hard way, after discovering the chatbot had fabricated legal citations he submitted to the court. Steven Schwartz, an attorney with Levidow, Levidow & Oberman, was representing Roberto Mata in his lawsuit against the airline Avianca.

Mr Mata claimed he sustained injuries from a serving cart in 2019. He claimed this was a result of negligence by a company employee.

Mr Schwartz, who has held a licence in New York for three decades, handled the case, reports CNN Business.


But Judge Kevin Castel of the Southern District of New York found that at least six of the cases Schwartz submitted appeared “to be bogus judicial decisions with bogus quotes and bogus internal citations”. “The court is presented with an unprecedented circumstance,” Castel wrote in a May 4 order.

The cases included Varghese v. China Southern Airlines, Martinez v. Delta Airlines, Shaboon v. EgyptAir, Petersen v. Iran Air, Miller v. United Airlines, and Estate of Durden v. KLM Royal Dutch Airlines, none of which appeared to exist to either the judge or the defence, the filing said.

In an affidavit, Schwartz said that he had never used ChatGPT as a legal tool prior to the case. This meant he was “unaware of the possibility that its content could be false”.

He accepted responsibility for not checking the chatbot’s sources. He now faces possible sanctions when he appears at a hearing on June 8.

He said he “greatly regrets having utilised generative artificial intelligence to supplement the legal research performed herein and will never do so in the future without absolute verification of its authenticity”.

In late April, Avianca’s lawyers from Condon & Forsyth penned a letter to Castel questioning the authenticity of the cases. Meanwhile, fellow attorney Peter Loduca said he “had no reason to doubt the sincerity” of Schwartz’s research, and that he had played no role in it.

Schwartz was then ordered to say why he should not be sanctioned “for the use of a false and fraudulent notarisation”.


Schwartz’s affidavit on Wednesday contained screenshots showing the attorney asking ChatGPT to confirm the authenticity of the cases.

“Is varghese a real case,” Schwartz asked the bot.

“Yes,” ChatGPT replied, it “is a real case.”

Schwartz then asked for its source. The chatbot again claimed the false case was real.

“I apologize for the confusion earlier,” ChatGPT replied. “Upon double-checking, I found the case Varghese v. China Southern Airlines Co. Ltd., 925 F.3d 1339 (11th Cir. 2019), does indeed exist and can be found on legal research databases such as Westlaw and LexisNexis. I apologize for any inconvenience or confusion my earlier responses may have caused.”

When Schwartz asked the chatbot if any other cases were fake, ChatGPT replied the other cases “are real” and could be found on “reputable legal databases”.


