ChatGPT and the Case of the Fictitious Legal Research
Artificial intelligence is rapidly making its way into the legal industry, but not without peril. Attorneys Steven A. Schwartz and Peter LoDuca learned this the hard way: they face possible sanctions for a court filing that cited fake past court cases invented by the AI-powered chatbot ChatGPT.
The Misconception of ChatGPT
Schwartz used ChatGPT to search for legal precedents supporting his client’s case against Avianca over an injury suffered on a 2019 flight. The chatbot suggested several aviation-mishap cases that Schwartz had been unable to find through the standard research methods used at his law firm. The problem: several of those cases either weren’t real or involved airlines that don’t exist.
Schwartz said he did not understand that ChatGPT could fabricate cases, and he did no follow-up research to verify that the citations were genuine. During his testimony, Schwartz stated that he was “operating under a misconception … that this website was obtaining these cases from some source I did not have access to.”
The Judge’s Disappointment
Judge P. Kevin Castel said he was baffled and disturbed by the episode, and disappointed that the lawyers did not act quickly to correct the bogus citations once Avianca’s lawyers and the court first flagged the problem. Avianca had pointed out the fake case law in a March filing.
The Dangers of AI Technologies
The case suggests the lawyers did not understand how ChatGPT works: the model can “hallucinate,” presenting fictional information in a manner that sounds realistic but is not. Daniel Shin, assistant director of research at the Center for Legal and Court Technology at William & Mary Law School, said the case highlights the dangers of adopting promising AI technologies without understanding the risks.
It is no surprise that hundreds of industry leaders signed a letter warning that mitigating the risk of extinction from AI should be a global priority alongside other societal-scale risks. ChatGPT’s success in demonstrating how artificial intelligence could change the way humans work and learn has certainly fueled those fears.
Possible Sanctions
The judge will rule on sanctions at a later date.
Editor Notes
AI-powered chatbots could transform legal practice, but integrating any new technology into the legal industry demands careful consideration of the risks involved. This story underscores the importance of using such tools responsibly, especially in sensitive fields like the legal sector. To keep up with the latest news on AI, head over to GPT News Room.