MTC: AI Hallucinated Cases Are Now Shaping Court Decisions - What Every Lawyer, Legal Professional and Judge Must Know in 2025!

AI hallucinated cases are now shaping court decisions - what every lawyer and judge needs to know in 2025.

Artificial intelligence has transformed legal research, but a new threat is emerging from within the courtroom itself: hallucinated case law. On June 30, 2025, the Georgia Court of Appeals delivered a landmark ruling in Shahid v. Esaam that should serve as a wake-up call to every member of the legal profession: AI hallucinations are no longer just embarrassing mistakes - they are actively influencing court decisions and undermining the integrity of our judicial system.

The Georgia Court of Appeals Ruling: A Watershed Moment

The Shahid v. Esaam decision represents the first documented instance of a trial court order based entirely on non-existent case law, likely generated by AI tools. The Georgia Court of Appeals found that the trial court's order denying a motion to reopen a divorce case relied on two fictitious cases, and that the appellee's brief contained an astounding 11 bogus citations out of 15 total. The court imposed a $2,500 penalty on attorney Diana Lynch - the maximum allowed under GA Court of Appeals Rule 7(e)(2) - and vacated the trial court's order entirely.

What makes this case particularly alarming is not just the volume of fabricated citations but the fact that the trial court adopted these AI-generated hallucinations wholesale, without verification. The court specifically referenced Chief Justice John Roberts' 2023 warning that "any use of AI requires caution and humility."

The Explosive Growth of AI Hallucination Cases

The Shahid case is far from isolated. Legal researcher Damien Charlotin has compiled a comprehensive database tracking over 120 cases worldwide where courts have identified AI-generated hallucinations in legal filings. The data reveals an alarming acceleration: while there were only 10 cases documented in 2023, that number jumped to 37 in 2024, and an astounding 73 cases have already been reported in just the first five months of 2025.

Perhaps most concerning is the shift in responsibility. In 2023, seven of the ten documented hallucination cases were filed by pro se litigants, with only three attributed to lawyers. By May 2025, however, legal professionals were at fault in at least 13 of the 23 cases in which AI errors were discovered. The trend indicates that trained attorneys - who should know better - are increasingly falling victim to AI's deceptive capabilities.
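
To put that acceleration in perspective, the figures quoted above work out as follows. This is a back-of-the-envelope sketch using only the numbers reported in Charlotin's database; the annualized figure is a simple linear extrapolation, not a forecast:

```python
# Documented AI-hallucination cases, as reported in Charlotin's database
# (2025 figure covers only January through May).
cases_2023 = 10
cases_2024 = 37
cases_2025_jan_may = 73

# Year-over-year growth from 2023 to 2024
growth_2024 = cases_2024 / cases_2023            # 3.7x

# If the first five months of 2025 continue at the same pace
annualized_2025 = cases_2025_jan_may * 12 / 5    # 175.2 cases/year

print(f"2023 -> 2024 growth: {growth_2024:.1f}x")
print(f"2025 annualized pace: {annualized_2025:.0f} cases")
```

In other words, if the pace of the first five months holds, 2025 would see well over four times as many documented hallucination cases as all of 2024.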

High-Profile Cases and Escalating Sanctions

Always check your research - you don’t want to get in trouble with your client, the judge or the bar!

The crisis has intensified with high-profile sanctions. In May 2025, a special master in California imposed a staggering $31,100 sanction against law firms K&L Gates and Ellis George for what was termed a "collective debacle" involving AI-generated research. The attorneys had used multiple AI tools - including CoCounsel, Westlaw Precision, and Google Gemini - to generate a brief, and roughly nine of its 27 legal citations proved to be incorrect.

Even more concerning was the February 2025 case involving Morgan & Morgan—the largest personal injury firm in the United States—where attorneys were sanctioned for a motion citing eight nonexistent cases. The firm subsequently issued an urgent warning to its more than 1,000 lawyers that using fabricated AI information could result in termination.

The Tech-Savvy Lawyer.Page: Years of Warnings

The risks of AI hallucinations in legal practice have been extensively documented. I have been sounding the alarm about these issues for years at The Tech-Savvy Lawyer.Page Blog and Podcast. In a post titled "Why Are Lawyers Still Failing at AI Legal Research? The Alarming Rise of AI Hallucinations in Courtrooms," I detailed how even advanced legal AI platforms can generate plausible but fake authorities.

That coverage has included reviews of specific platforms, such as the November 2024 analysis "Lexis+ AI™️ Falls Short for Legal Research," which documented how even purpose-built legal AI tools can cite non-existent legislation. My consistent message has been clear: AI is a collaborator, not an infallible expert.

International Recognition of the Crisis

The problem has gained international attention. In June 2025, the London High Court issued a stark warning that attorneys who use AI to cite non-existent cases could face contempt of court charges or even criminal prosecution. Justice Victoria Sharp cautioned that "in the most severe instances, intentionally submitting false information to the court with the aim of obstructing the course of justice constitutes the common law criminal offense of perverting the course of justice."

The Path Forward: Critical Safeguards

Based on extensive research and mounting evidence, several key recommendations emerge for legal professionals:

For Individual Lawyers:

Lawyers need to be diligent and make sure their case citations are not only accurate but real!

  • Never use general-purpose AI tools like ChatGPT for legal research without extensive verification

  • Implement mandatory verification protocols for all AI-generated content

  • Obtain specialized training on AI limitations and best practices

  • Consider using only specialized legal AI platforms with built-in verification mechanisms
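
Part of a verification protocol can even be automated. The sketch below is a minimal illustration, not a substitute for reading the cases: it uses a regular expression to pull reporter-style citations (e.g., "123 Ga. App. 456") out of a draft so each one can be checked by hand against Westlaw, Lexis, or a free source such as CourtListener. The regex pattern is my own simplified assumption covering only common volume/reporter/page formats; real citation grammar is far more complex.

```python
import re

# Simplified, illustrative pattern for common reporter citations:
# volume, reporter abbreviation, page - e.g. "123 Ga. App. 456"
# or "598 U.S. 617". Not a complete citation grammar.
CITE_RE = re.compile(r"\b(\d{1,4})\s+([A-Z][A-Za-z.\s]{1,20}?)\s+(\d{1,5})\b")

def extract_citations(text: str) -> list[str]:
    """Return every reporter-style citation found in a draft."""
    return [" ".join(m.groups()) for m in CITE_RE.finditer(text)]

draft = ("As held in Smith v. Jones, 123 Ga. App. 456, and "
         "Doe v. Roe, 598 U.S. 617, the motion should be granted.")

for cite in extract_citations(draft):
    print(f"VERIFY BEFORE FILING: {cite}")
```

The point of a script like this is to produce a checklist a human must clear, item by item, before anything is filed - automation can find the citations, but only a lawyer can confirm the cases are real and say what the brief claims they say.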

For Courts:

  • Implement consistent disclosure requirements for AI use in court filings

  • Develop verification procedures for detecting potential AI hallucinations

  • Provide training for judges and court staff on AI technology recognition

FINAL THOUGHTS

The legal profession is at a crossroads. AI can enhance efficiency, but unchecked use can undermine the integrity of the justice system. The solution is not to abandon AI, but to use it wisely with appropriate oversight and verification. The warnings from The Tech-Savvy Lawyer.Page and other experts have proven prescient—the question now is whether the profession will heed these warnings before the crisis deepens further.

MTC

Happy Lawyering!

🚨 BOLO: Android Ad Fraud Malware and Your ABA Ethical Duties – What Every Lawyer Must Know in 2025 🚨

Defend Client Data from Malware!

The discovery of the “Kaleidoscope” ad fraud malware targeting Android devices is a wake-up call for legal professionals. This threat, which bombards users with unskippable ads and exploits app permissions, is not just an annoyance - it is a direct risk to client confidentiality, law firm operations, and compliance with the ABA Model Rules of Professional Conduct. Lawyers must recognize that cybersecurity is not optional; it is an ethical mandate under the ABA Model Rules, including Rules 1.1, 1.3, 1.4, 1.6, 5.1, and 5.3.

Why the ABA Model Rules Matter

  • Rule 1.6 (Confidentiality): Lawyers must make reasonable efforts to prevent unauthorized disclosure of client information. A compromised device can leak confidential data, violating this core duty.

  • Rule 1.1 (Competence): Competence now includes understanding and managing technological risks. Lawyers must stay abreast of threats like Kaleidoscope and take appropriate precautions.

  • Rule 1.3 (Diligence): Prompt action is required to investigate and remediate breaches, protecting client interests.

  • Rule 1.4 (Communication): Lawyers must communicate risks and safeguards to clients, including the potential for data breaches and the steps being taken to secure information.

  • Rules 5.1 & 5.3 (Supervision): Law firm leaders must ensure all personnel, including non-lawyers, adhere to cybersecurity protocols.

Practical Steps for Lawyers – Backed by Ethics and The Tech-Savvy Lawyer.Page

Lawyers: Secure Your Practice Now!

  • Download Only from Trusted Sources: Only install apps from the Google Play Store, leveraging its built-in protections. Avoid third-party stores, the main source of Kaleidoscope infections.

  • Review App Permissions: Be vigilant about apps requesting broad permissions, such as “Display over other apps.” These can enable malware to hijack your device.

  • Secure Devices: Use strong, unique passwords, enable multi-factor authentication, and encrypt devices - simple but essential steps emphasized by our blog posts on VPNs and ABA guidance.

  • Update Regularly: Keep your operating system and apps up to date to patch vulnerabilities.

  • Educate and Audit: Train your team about mobile threats and run regular security audits, as highlighted in Cybersecurity Awareness Month posts on The Tech-Savvy Lawyer.Page.

  • Incident Response: Have a plan for responding to breaches, as required by ABA Formal Opinion 483 and best practices.

  • Communicate with Clients: Discuss with clients how their information is protected and notify them promptly in the event of a breach, as required by Rule 1.4 and ABA opinions.

  • Label Confidential Communications: Mark sensitive communications as “privileged” or “confidential,” per ABA guidance.

Advanced Strategies

Lawyers need to have security measures in place to protect client data!

  • Leverage AI-Powered Security: Use advanced tools for real-time threat detection, as recommended by The Tech-Savvy Lawyer.Page.

  • VPN and Secure Networks: Avoid public Wi-Fi. If you must connect, use a VPN (see The Tech-Savvy Lawyer.Page articles on VPNs) to protect data in transit.

  • Regular Backups: Back up data to mitigate ransomware and other attacks.

By following these steps, lawyers fulfill their ethical duties, protect client data, and safeguard their practice against evolving threats like Kaleidoscope.

🎙️ Ep. 109: Building Trust in Legal AI - Clearbrief's Jacqueline Schafer on Security, Citations, and the Future of Law.

I'm joined by Jacqueline Schafer, Founder and CEO of Clearbrief.ai. Jacqueline shares key insights into how legal professionals can effectively leverage AI. She outlines three essential expectations from legal AI assistants: robust security, accurate and verifiable citations, and seamless integration into legal workflows. Jacqueline addresses common misconceptions about AI, encourages responsible use, and highlights Clearbrief's unique features, including its seamless integration with Microsoft Word and AI-driven document analysis. With a focus on ethics, usability, and innovation, Jacqueline also provides a clear vision for the future of AI in legal practice.

Join Jacqueline and me as we discuss the following three questions and more!

  1. What are the top three things lawyers should expect from their legal AI assistant?

  2. What are the top three ways clearbrief.ai differentiates from its competitors?

  3. Regardless of what a lawyer uses, what are the top three things lawyers need to be mindful of regarding their legal and ethical responsibilities?

In our conversation, we cover the following:

[01:09] Jacqueline's Tech Setup

[02:33] Top Three Expectations from Legal AI Assistants

[06:58] Top Three Things Lawyers Should Not Expect from Legal AI Assistants

[08:36] Clearbrief's Unique Features and Differentiators

[17:37] Ethical Responsibilities and Training Staff
