The use of ChatGPT by solicitors to assist with a case backfired catastrophically.

By Creative Media News

  • New York solicitors fined for submitting legal brief with fabricated case citations
  • Judge criticizes attorneys’ use of AI-generated content in court filing
  • Court imposes fine on law firm for “acts of deliberate avoidance and false and misleading statements”

OpenAI’s ChatGPT can write essays and speeches, read computer code, and compose poetry, but it also fabricates information.

Two New York solicitors were fined after submitting a legal brief with ChatGPT-generated bogus case citations.

Steven Schwartz of Levidow, Levidow & Oberman used the chatbot while preparing a brief in a client’s personal injury case against the airline Avianca.

He had used it to find legal precedents supporting the case, but attorneys for the Colombian airline told the court they were unable to locate some of the cases cited, which was unsurprising given that they were largely fictitious.

Several of them were wholly fabricated, while others involved misidentified judges or nonexistent airlines.

District Judge Peter Kevin Castel stated that Schwartz and colleague Peter LoDuca, who was listed on Schwartz’s brief, acted in bad faith and made “acts of deliberate avoidance and false and misleading statements to the court.”

The judge added that portions of the brief were “gibberish” and “nonsensical” and contained fabricated quotations.

While frequently impressive, generative AI systems such as OpenAI’s ChatGPT and Google’s Bard tend to “hallucinate” in their responses, as they have no true understanding of the information they produce.

Disinformation is among the risks cited by those worried about the potential of AI.

When asked to help write a legal brief, ChatGPT stated, “While I can provide general information and assistance, it is important to note that I am an AI language model and not a qualified legal professional.”

Judge Castel said there was nothing “inherently improper” about using AI “for assistance,” but that attorneys must ensure their filings are accurate.

The judge asserted that, even after being questioned by the court and the airline, the attorneys “continued to uphold the false opinions.”

The court fined Schwartz, LoDuca, and their law firm a total of $5,000 (£3,926).

Levidow, Levidow & Oberman is contemplating whether to file an appeal, stating that they “made a mistake in good faith by failing to believe that a piece of technology could fabricate cases out of thin air.”
