In a recent court case, a lawyer relied on ChatGPT for legal research, resulting in a brief that cited cases that do not exist. The incident highlights the risks of using AI in the legal field, including the propagation of misinformation.
The case revolved around a man suing an airline over an alleged personal injury. The plaintiff's legal team submitted a brief citing several previous court cases to support their argument, seeking to establish a legal precedent for their claim. However, the airline's lawyers discovered that some of the referenced cases did not exist and promptly alerted the presiding judge.
Judge Kevin Castel expressed astonishment at the situation, calling it an "unprecedented circumstance." In an order, he demanded an explanation from the plaintiff's legal team.
Steven Schwartz, a colleague of the lead attorney, confessed to using ChatGPT to search for similar legal precedents. In a written statement, Schwartz expressed deep regret, explaining that he had never previously used AI for legal research and was unaware that its content could be false.
Screenshots attached to the filing showed a conversation between Schwartz and ChatGPT. In the prompt, Schwartz asked if a specific case, Varghese v. China Southern Airlines Co Ltd, was genuine.
ChatGPT affirmed its authenticity, indicating that the case could be found in legal reference databases such as LexisNexis and Westlaw. However, subsequent investigation revealed that the case did not exist, casting doubt on the other cases ChatGPT had supplied.
In light of this incident, both lawyers involved in the case, Peter LoDuca and Steven Schwartz of the law firm Levidow, Levidow & Oberman, have been ordered to appear at a disciplinary hearing on June 8 to explain their actions.
This event has prompted discussions within the legal community regarding the appropriate use of AI tools in legal research and the need for comprehensive guidelines to prevent similar occurrences.
Source: NYT