A lawyer used ChatGPT to prepare a court filing. It went horribly awry.

A lawyer who relied on ChatGPT to prepare a court filing on behalf of a man suing an airline is now all too familiar with the artificial intelligence tool's shortcomings — including its propensity to invent facts.

Roberto Mata sued Colombian airline Avianca last year, alleging that a metal food and beverage cart injured his knee on a flight to Kennedy International Airport in New York. When Avianca asked a Manhattan judge to dismiss the lawsuit based on the statute of limitations, Mata's lawyer, Steven A. Schwartz of the law firm Levidow, Levidow & Oberman, submitted a brief based on research done by ChatGPT, Schwartz said in an affidavit.

While ChatGPT can be useful to professionals in numerous industries, including the legal profession, it has proved itself to be both limited and unreliable. In this case, the AI invented court cases that didn't exist, and asserted that they were real.

The fabrications were revealed when Avianca's lawyers approached the case's judge, Kevin Castel of the Southern District of New York, saying they couldn't locate the cases cited in Mata's lawyers' brief in legal databases.

The made-up decisions included cases titled Martinez v. Delta Air Lines, Zicherman v. Korean Air Lines and Varghese v. China Southern Airlines.

"It seemed clear when we didn't recognize any of the cases in their opposition brief that something was amiss," Avianca's lawyer Bart Banino, of Condon & Forsyth, told CBS MoneyWatch. "We figured it was some sort of chatbot of some kind."

Schwartz responded in an affidavit last week, saying he had "consulted" ChatGPT to "supplement" his legal research, and that the AI tool was "a source that has revealed itself to be unreliable." He added that it was the first time he'd used ChatGPT for work and "therefore was unaware of the possibility that its content could be false."

He said he even pressed the AI to confirm that the cases it cited were real. ChatGPT confirmed they were. Schwartz then asked the AI for its source.

ChatGPT's response? "I apologize for the confusion earlier," it said. The AI then said the Varghese case could be located in the Westlaw and LexisNexis databases.

Judge Castel has set a hearing regarding the legal snafu for June 8 and has ordered Schwartz and the law firm Levidow, Levidow & Oberman to argue why they should not be sanctioned.

Levidow, Levidow & Oberman could not immediately be reached for comment.

A lawyer used ChatGPT to cite bogus cases. What are the ethics?

ChatGPT logo and AI Artificial Intelligence words are seen in this illustration taken May 4, 2023.

  • A New York lawyer submitted a brief citing six non-existent judicial decisions produced by ChatGPT
  • Lawyers must ensure competency and confidentiality when using AI, experts warn
May 30 (Reuters) - A New York lawyer is facing potential sanctions over an error-riddled brief he drafted with help from ChatGPT.

It's a scenario legal ethics experts have warned about since ChatGPT burst onto the scene in November, marking a new era for AI that can produce human-like responses based on vast amounts of data.

Steven Schwartz of Levidow, Levidow & Oberman faces a June 8 sanctions hearing before U.S. District Judge P. Kevin Castel after he admitted to using ChatGPT for a brief in his client's personal injury case against Avianca Airlines. The brief cited six non-existent court decisions.

Schwartz said in a court filing that he "greatly regrets" his reliance on the technology and was "unaware of the possibility that its contents could be false."

Lawyers representing Avianca alerted the court to the non-existent cases cited by Schwartz, who did not respond to a request for comment Tuesday.

The American Bar Association’s Model Rules of Professional Conduct do not explicitly address artificial intelligence. But several existing ethics rules apply, experts say.

“You are ultimately responsible for the representations you make,” said Daniel Martin Katz, a professor at Chicago-Kent College of Law who teaches professional responsibility and studies artificial intelligence in the law. "It’s your bar card."

DUTY OF COMPETENCE

This rule requires lawyers to provide competent representation and to keep up with current technology. They must ensure that the technology they use provides accurate information, a major concern given that tools such as ChatGPT have been found to make things up. And lawyers must not lean so heavily on such tools that errors slip into their work.

“Blindly relying on generative AI to give you the text you use to provide services to your client is not going to pass muster,” said Suffolk University law dean Andrew Perlman, a leader in legal technology and ethics.

Perlman envisions duty of competence rules eventually requiring some level of proficiency in artificial intelligence technology. AI could revolutionize legal practice so significantly that someday not using it could be akin to not using computers for research, he said.

DUTY OF CONFIDENTIALITY

This rule requires lawyers to “make reasonable efforts to prevent the inadvertent or unauthorized disclosure of, or unauthorized access to, information relating to the representation of a client.” Lawyers who use programs like ChatGPT or Bing Chat risk giving AI companies their clients' data to train and improve their models, potentially violating confidentiality rules.

That’s one reason why some law firms have explicitly told lawyers not to use ChatGPT and similar programs on client matters, said Holland & Knight partner Josias Dewey, who has been working on developing internal artificial intelligence programs at his firm.

Some law-specific artificial intelligence programs, including Casetext's CoCounsel and Harvey, address the confidentiality issue by keeping their data walled off from outside AI providers.

RESPONSIBILITIES REGARDING NONLAWYER ASSISTANCE

Under this rule, lawyers must supervise lawyers and nonlawyers who assist them to ensure that their conduct complies with professional conduct rules. The ABA in 2012 clarified that the rule also applies to non-human assistance.

That means lawyers must supervise the work of AI programs and understand the technology well enough to make sure it meets the ethics standards that attorneys must uphold.

“You have to make reasonable efforts to ensure the technology you are using is consistent with your own ethical responsibility to your clients,” Perlman said.