28 May, 2023 15:18

Lawyer duped by ChatGPT facing legal sanctions

A New York attorney admitted to using the AI model for research, insisting he didn’t realize it could lie

New York aviation lawyer Steven Schwartz may face professional sanctions after a legal brief he submitted was discovered to be full of “bogus judicial decisions” and fake quotes authored by AI language model ChatGPT, according to court records published last week.  

Schwartz told the court in an affidavit on Thursday that he was using ChatGPT for legal research for the first time when he put it to work drafting the ten-page brief he hoped would convince Manhattan Federal Judge P. Kevin Castel not to dismiss a case he was advocating. He explained that he “therefore was unaware of the possibility that its content could be false.”  

When asked, ChatGPT even told Schwartz – a lawyer with 30 years of experience – that the half dozen cases it cited in the legal submission were real, he insisted. Declaring he “greatly regrets” putting his faith in the large language model, he promised to “never do so in the future” – at least, not “without absolute verification of its authenticity.”   

Schwartz’s law firm, Levidow, Levidow & Oberman, was representing passenger Roberto Mata in a personal injury lawsuit against the airline Avianca over an incident on a 2019 flight. When Avianca responded to the suit by filing for dismissal, arguing the statute of limitations had expired, Schwartz and his firm answered with the ChatGPT-addled brief.  

Avianca’s lawyers complained to the judge that the cases cited didn’t exist, but when Judge Castel ordered Mata’s lawyers to provide copies of the questionable opinions, they promptly did so – only for Avianca’s attorneys to retort that no such cases appeared in real-life court dockets or legal databases. 

Judge Castel responded earlier this month with an order demanding that Schwartz and his colleagues show cause as to why they should not face disciplinary sanctions for using a chatbot to write the legal brief. The hearing is scheduled for June 8. 

In response, Schwartz insisted in an affidavit filed on Thursday that he had performed all of the legal research found in the questionable brief, merely deploying ChatGPT to “supplement” his own work. “In consultation” with the AI software model, he came upon the bogus cases, and “ChatGPT assured the reliability of its content,” he explained.

He attached a transcript of his conversation with the chatbot, which apparently outwitted him by answering questions such as “are the other cases you provided fake” with “no, the other cases I provided are real and can be found in reputable legal databases.” 

While ChatGPT’s responses to user queries often appear factual, the large language model functions as a probability engine: it extends a string of text one token at a time, choosing continuations that are statistically plausible given the patterns in its vast training data. Nothing in that process checks the output against real-world facts, which is why it can produce convincing but entirely invented case citations.
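The mechanism can be sketched with a toy next-token sampler. This is illustrative only: the word table and probabilities below are invented stand-ins for a trained model's learned distributions, not anything from ChatGPT itself.

```python
import random

# Hand-written probability table standing in for an LLM's learned
# next-token distributions. Real models use neural networks over
# tens of thousands of tokens; the principle is the same.
MODEL = {
    "the": {"case": 0.5, "court": 0.3, "ruling": 0.2},
    "case": {"was": 0.6, "cited": 0.4},
    "was": {"decided": 0.7, "overturned": 0.3},
}

def generate(prompt, steps, rng=None):
    """Extend the prompt one token at a time by sampling from the
    probability table. The model picks *plausible* continuations;
    nothing here verifies them against facts."""
    rng = rng or random.Random(0)
    tokens = prompt.split()
    for _ in range(steps):
        dist = MODEL.get(tokens[-1])
        if dist is None:  # no continuation known for this token
            break
        words = list(dist)
        weights = [dist[w] for w in words]
        tokens.append(rng.choices(words, weights=weights)[0])
    return " ".join(tokens)

print(generate("the", 3))
```

Each output reads like a fragment of legal English because every step is individually probable, yet no step consults a database of real cases. That gap between fluency and truth is exactly what caught Schwartz out.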