US lawyer apologises after using fake cases made by ChatGPT


By Emily Hinkley


AI limitations exposed

A US lawyer has apologised after reportedly using ChatGPT for legal research, which led to non-existent cases being submitted to the court.

The personal injury case, in which a man was suing an airline in New York, was disrupted by the revelation that a member of the claimant’s legal team had submitted a legal document citing several bogus cases.

It then emerged that the lawyer for the claimant, Peter LoDuca, had not prepared his own legal research but instead allowed his colleague, Steven A Schwartz, to prepare it for him, BBC News reports.

Despite ChatGPT’s user interface warnings that it can “produce inaccurate information”, Schwartz reportedly told the court he was “unaware that its content could be false”.


In a written statement, Schwartz vowed never again to attempt to “supplement” his legal research using AI unless he had “absolute verification of its authenticity”. He exonerated his colleague LoDuca, saying LoDuca had no knowledge of how the research was carried out.

Screenshots attached to a further filing appear to show Schwartz asking the AI whether a case it had provided was real and what its sources were. The bot falsely responded that the case was real and could be found on legal reference databases such as LexisNexis and Westlaw.

Both lawyers are from NY firm Levidow, Levidow & Oberman, and have been called to a hearing on 8 June to establish whether they will face disciplinary action.
