A lawyer used ChatGPT for legal research and cited a number of cases in a filing. The problem is that none of those cases actually exist (the AI simply made them up), and he is now in serious trouble with the judge.
The US lawyer was representing Roberto Mata, a man who sued Avianca Airlines over alleged injuries. So far, this was the routine part of the case.
However, Mr. Mata's lawyers then submitted a 10-page brief citing more than half a dozen seemingly relevant court decisions, complete with bogus quotes.
There was Martinez v. Delta Air Lines, Zicherman v. Korean Air Lines, and, of course, Varghese v. China Southern Airlines, with its learned discussion of federal law and “the tolling effect of the automatic stay on a statute of limitations.”
He went ahead with the filing because ChatGPT had assured him that the cases were all real.
Needless to say, neither the airline's lawyers nor the judge could find the decisions or the quotations cited and summarized in the brief; ChatGPT had conjured up all of it. The airline's lawyers wrote to the judge stating that they had been unable to locate the cases.
Judge Castel wrote: “Six of the submitted cases appear to be bogus judicial decisions with bogus quotes and bogus internal citations.”
Lawyer “Unaware” of AI Faults
Steven A. Schwartz has practiced law for 30 years. On this occasion, he decided the best approach was to use AI to find similar past cases.
In a written statement, Mr. Schwartz said that he “greatly regrets” relying on the chatbot and added that he was “unaware that its content could be false”.
Mr. Schwartz and his colleague Peter LoDuca work for the firm Levidow, Levidow & Oberman. Following the incident, the court has ordered them to explain at an 8 June hearing why they should not be disciplined.
There have long been concerns over the risks of artificial intelligence (AI), including the spread of misinformation and bias.