


U.S. District Court for Maryland Addresses Hallucinated Cases


RECENT RULINGS FROM the U.S. District Court for the District of Maryland have spotlighted a serious concern in legal practice: the submission of court filings permeated with fabricated, or “hallucinated,” cases and citations produced by generative artificial intelligence (AI). In two separate matters involving the same lawyer, the court confronted pleadings that relied on non-existent authorities, raising serious questions about diligence, ethics, and the integration of AI technologies into legal research and drafting. These incidents underscore that lawyers continue to rely on AI output without independently verifying it. Lawyers and legal teams navigating AI technology should take note.

Hallucinated cases are “inaccurate depictions of information from AI models that suffer from incomplete, biased, or otherwise flawed training data.”1 These “hallucinations” are a serious problem for the legal profession. They mislead the court and opposing counsel, waste judicial resources, and undermine the credibility of the bar. Across the nation, lawyers have faced sanctions for submitting pleadings containing fabricated cases and citations. For instance, in the widely publicized New York case Mata v. Avianca, attorneys and their law firm were sanctioned after filing pleadings containing AI-generated hallucinated cases. Others have followed. These incidents highlight the risks of uncritical reliance on AI tools in legal practice.

Two Cases at Issue

Lafferty v. Theiss

In Lafferty v. Theiss,2 the plaintiffs sued a police officer and other government entities, alleging civil rights violations through excessive force and battery. The defendants filed a motion to dismiss or, in the alternative, for summary judgment. The plaintiffs submitted a memorandum in opposition, to which the defendants replied.

In their reply memorandum, the defendants raised concerns about misquotations and mischaracterizations of case law in the plaintiffs’ response. On May 12, 2025, the court issued an order requiring plaintiffs’ counsel to address these allegations. In compliance, plaintiffs’ counsel filed a response to the order on May 26, 2025, acknowledging the citation errors, apologizing to both the court and opposing counsel, and formally withdrawing the inaccurate quotations and mischaracterizations.

Furthermore, plaintiffs’ counsel disclosed that the opposition memorandum had been prepared exclusively with the assistance of AI tools, without the independent review of its accuracy that Fed. R. Civ. P. 11(b)(2) requires before a pleading is signed.

Plaintiffs’ counsel further represented that both counsel and the firm had taken concrete steps to prevent overreliance on AI in future pleadings. To avoid a recurrence of the errors, the firm instituted several protocols. First, every quotation or parenthetical citation must now be substantiated by a PDF copy of the underlying judicial opinion, downloaded and thoroughly reviewed by the attorney responsible for the pleading. Additional measures include having a cite-checker independently verify all citations and quotations before filing, confirming the accuracy of citations through Shepardizing or KeyCite, retaining the verified authorities for potential court review, and mandating a secondary audit of all filings by another member of the firm to ensure citation accuracy.

2 Lafferty v. Theiss et al, No. 1:2024cv02642 - Document 29 (D. Md. 2025).

3 Neal et al v. Frayer et al, No. 8:2024cv00778 - Document 31 (D. Md. 2025).

Finally, counsel expressed a willingness to submit sworn declarations, participate in a Rule 11 conference, or take any further action deemed appropriate by the court.

The court entered an order on August 4, 2025, acknowledging plaintiffs’ counsel’s response regarding the inaccurate legal citations. Without commenting further on the citations or counsel’s explanations, the court issued an opinion granting the defendants’ motion for summary judgment.

Christopher Neal v. Brian Frayer

In Neal v. Frayer,3 the plaintiffs sued Officer Frayer and other defendants for multiple torts, including excessive force in violation of the Fourth Amendment. Frayer and another defendant filed a motion to dismiss or, in the alternative, for summary judgment, and the plaintiffs filed a response in opposition on March 26, 2025. In its written opinion of November 17, 2025, granting the defendants’ motions for summary judgment, the court addressed whether Fed. R. Civ. P. 11 sanctions were warranted against plaintiffs’ counsel for submitting an opposition containing inaccurate legal citations. The court identified several instances in which plaintiffs’ counsel cited cases that either did not exist or did not support the propositions for which they were cited. For example, plaintiffs’ counsel cited “Brown v. Daniel Realty Co., 922 A.2d 1146, 1155-56 (Md. Ct. Spec. App. 2007).” The citation given for Brown is incorrect, and a similar Appellate Court of Maryland opinion does not stand for the proposition it was cited to support. The court suspected that AI technology had produced the erroneous citations, as it found similar errors when testing several AI tools.

The court noted that plaintiffs’ counsel had made similar errors in a separate case, Lafferty v. Theiss. Despite the errors in counsel’s response in Neal, the court decided not to issue a show cause order or impose sanctions, citing counsel’s prior assurances in Lafferty that corrective measures had been implemented at his firm to prevent future errors. Notably, the court observed that the plaintiffs’ opposition to one of the defendants’ motions for summary judgment was filed on March 26, 2025, the very date on which plaintiffs’ counsel filed his response to Judge Gallagher’s order in Lafferty representing that his firm had implemented protocols to ensure that Rule 11 standards were met before filing future pleadings.

The court commented:

Though, unfortunately, counsel did not take the opportunity to revisit filings in other cases, to ensure that the same errors that plagued the filings in Lafferty were not present in other cases, the court assumes that the erroneous citations offered in Plaintiffs’ opposition here were a result of the prior practices detailed in Plaintiffs’ counsel’s response to Judge Gallagher’s order, and thus have been properly addressed by the steps noted in the response filed in Lafferty.

While the court in Neal acknowledged the seriousness of the errors and the potential harm caused by inaccurate citations, it refrained from further action in light of counsel’s prior corrective steps and the assurances given to Judge Gallagher in the Lafferty case. The court warned, however, that similar mistakes in future filings could result in sanctions under Fed. R. Civ. P. 11.

Conclusion

The Lafferty and Neal decisions serve as critical reminders of the pitfalls of relying on AI technology for legal research and drafting. While generative AI can enhance efficiency, it is not infallible. It can and will produce “hallucinated” cases and inaccurate citations that undermine the credibility of legal arguments and of the lawyers who file AI output without independently verifying its accuracy. The rigorous verification and due diligence that Fed. R. Civ. P. 11 requires must be observed, and lawyers must understand that the accuracy of legal filings cannot be delegated to AI or to junior associates without proper oversight. By implementing robust protocols and exercising caution when using AI technology, legal professionals can harness its benefits while safeguarding the integrity of their practice and avoiding sanctions. The lessons from these decisions underscore the need for a balanced approach to integrating AI into the practice of law, one that prioritizes verification, accountability, and ethical conduct. Misuse of AI can greatly impair a lawyer’s reputation and career.

U.S. District Court for MD Addresses Hallucinated Cases by Maryland Bar Journal - Issuu