Why Are Lawyers Still Failing at AI Legal Research? The Alarming Rise of AI Hallucinations in Courtrooms 🚨⚖️

Lawyers, avoid sanctions: check your work!

The legal profession stands at a crossroads: Artificial intelligence (AI) offers unprecedented speed and efficiency in legal research, yet lawyers across the country (and even around the world, like our neighbor to the north) continue to make costly mistakes by over-relying on these tools. Despite years of warnings and mounting evidence, courts are now sanctioning attorneys for submitting briefs filled with fake citations and non-existent case law. Let’s examine where we are today:

The Latest AI Legal Research Failures: A Pattern, Not a Fluke

Within the last month, the legal world has witnessed a series of embarrassing AI-driven blunders:

  • $31,000 Sanction in California: Two major law firms, Ellis George LLP and K&L Gates LLP, were hit with a $31,000 penalty after submitting a brief with at least nine incorrect citations, including two to cases that do not exist. The attorneys used Google Gemini and Westlaw’s AI features but failed to verify the output, a mistake that Judge Michael Wilner called “inexcusable” for any competent attorney.

  • Morgan & Morgan’s AI Crackdown: After a Wyoming federal judge threatened sanctions over AI-generated, fictitious case law, the nation’s largest personal injury firm issued a warning: use AI without verification, and you risk termination.

  • Nationwide Trend: From Minnesota to Texas, courts are tossing filings and sanctioning lawyers for AI-induced “hallucinations”: the confident generation of plausible but fake legal authorities.

These are not isolated incidents. As covered in our recent blog post, “Generative AI vs. Traditional Legal Research Platforms: What Modern Lawyers Need to Know in 2025,” the risks of AI hallucinations are well-documented, and the consequences for ignoring them are severe.

The Tech-Savvy Lawyer.Page: Prior Warnings and Deep Dives

Lawyers need to confirm all of their citations, generative AI or not!

I’ve been sounding the alarm on these issues for some time. In our November 2024 review, “Lexis+ AI™️ Falls Short for Legal Research,” I detailed how even the most advanced legal AI platforms can cite non-existent legislation, misinterpret legal concepts, and confidently provide incorrect information. The post emphasized the need for human oversight and verification, a theme echoed in every major AI research failure since.

Our “Word of the Week” feature explained the phenomenon of AI “Hallucinations” in plain language: “The AI is making stuff up.” We warned attorneys that AI tools are not ready to write briefs without review and that those who fail to learn how to use AI properly will be replaced by those who do.

For a more in-depth discussion, listen to our podcast episode "From Chatbots to Generative AI – Tom Martin explores LawDroid's legal tech advancements with AI", where we explore how leading legal tech companies are addressing the reliability and security concerns of AI-driven research. Tom’s advice? Treat AI as a collaborator, not an infallible expert, and always manage your expectations about its capabilities.

Why Do These Mistakes Keep Happening? 🤔

  1. Overtrust in AI Tools
    Despite repeated warnings, lawyers continue to treat AI outputs as authoritative. As detailed in our November 2024 editorial, “MTC/🚨BOLO🚨: Lexis+ AI™️ Falls Short for Legal Research!,” and our January 2025 roundup of AI legal research platforms, “Shout Out to Robert Ambrogi: AI Legal Research Platforms - A Double-Edged Sword for Tech-Savvy Lawyers 🔍⚖️,” even the best tools (e.g., Lexis+ AI, Westlaw Precision AI, vLex’s Vincent AI) produce inconsistent results and are prone to hallucinations. The myth of AI infallibility persists, leading to dangerous shortcuts.

  2. Lack of AI Literacy and Verification
    Many attorneys lack the technical skills to critically assess AI-generated research (yet have the legal research tools to check their work, i.e., legal citations). Our blog’s ongoing coverage stresses that AI tools are supplements, not replacements, for professional judgment. As we discussed in “Generative AI vs. Traditional Legal Research Platforms,” traditional platforms still offer higher reliability, especially for complex or high-stakes matters.

  3. Inadequate Disclosure and Collaboration
    Lawyers often share AI-generated drafts without disclosing their origin, allowing errors to propagate. This lack of transparency was a key factor in several recent sanctions and is a recurring theme in our blog postings and podcast interviews with legal tech innovators.

  4. AI’s Inability to Grasp Legal Nuance
    AI can mimic legal language but cannot truly understand doctrine or context. Our review of Lexis+ AI, “MTC/🚨BOLO🚨: Lexis+ AI™️ Falls Short for Legal Research!,” highlighted how the platform confused criminal and tort law concepts and cited non-existent statutes, clear evidence that human expertise remains essential.

The Real-World Consequences

Lawyers, don’t find yourselves sanctioned (or worse) because you used unverified generative AI research!

  • Judicial Sanctions and Fines: Increasingly severe penalties, including the $31,000 sanction in California, are becoming the norm.

  • Professional Embarrassment: Lawyers risk public censure and reputational harm, outcomes we’ve chronicled repeatedly on The Tech-Savvy Lawyer.Page.

  • Client Harm: Submitting briefs with fake law can jeopardize client interests and lead to malpractice claims.

  • Loss of Trust: Repeated failures erode public confidence in the legal system.

What Needs to Change Now

  1. Mandatory AI Verification Protocols
    Every AI-generated citation must be independently checked using trusted, primary sources. Our blog and podcast guests have consistently advocated for checklists and certifications to ensure research integrity.
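A verification protocol like this can be made concrete as a simple pre-filing checklist. The sketch below is purely illustrative: the `Citation` class, its field names, and the `filing_ready` helper are hypothetical, and the "verified" flags stand in for a human check against Westlaw, Lexis, or an official reporter, not an automated lookup.

```python
from dataclasses import dataclass

# Hypothetical pre-filing checklist: every AI-generated citation must
# clear two independent human checks before the brief can go out.

@dataclass
class Citation:
    cite: str                              # e.g., "123 F.3d 456 (9th Cir. 1997)"
    verified_in_primary_source: bool = False  # confirmed in Westlaw/Lexis/reporter
    quote_checked: bool = False               # quoted language matches the opinion

def filing_ready(citations: list[Citation]) -> tuple[bool, list[str]]:
    """Return (ok, problems): ok only if every citation passed both checks."""
    problems = []
    for c in citations:
        if not c.verified_in_primary_source:
            problems.append(f"{c.cite}: not confirmed in a primary source")
        elif not c.quote_checked:
            problems.append(f"{c.cite}: quotations not checked against the opinion")
    return (not problems, problems)

# Example: one fully verified citation, one that was never confirmed
# (and so may be a hallucination) -- the unverified one blocks filing.
brief = [
    Citation("123 F.3d 456 (9th Cir. 1997)",
             verified_in_primary_source=True, quote_checked=True),
    Citation("999 U.S. 1 (2030)"),
]
ok, problems = filing_ready(brief)
print(ok)        # False -- the brief is not ready to file
print(problems)
```

The point of the sketch is the workflow, not the code: nothing marked unverified should ever reach the court, no matter how plausible the AI’s output looks.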

  2. AI Literacy Training
    Ongoing education is essential. As we’ve reported, understanding AI’s strengths and weaknesses is now a core competency for all legal professionals.

  3. Transparent Disclosure
    Attorneys should disclose when AI tools are used in research or drafting. This simple step can prevent many of the cascading errors seen in recent cases.

  4. Responsible Adoption
    Firms must demand transparency from AI vendors and insist on evidence of reliability before integrating new tools. Our coverage of the “AI smackdown” comparison made clear that no platform is perfect-critical thinking is irreplaceable.

Final Thoughts 🧐: AI Is a Tool, Not a Substitute for Judgment

Lawyers, balance your legal research using generative AI with known, reliable legal resources!

Artificial intelligence can enhance legal research, but it cannot replace diligence, competence, or ethical responsibility. The recent wave of AI-induced legal blunders is a wake-up call: Technology is only as good as the professional who wields it. As we’ve said before on The Tech-Savvy Lawyer.Page, lawyers must lead with skepticism, verify every fact, and never outsource their judgment to a machine. The future of the profession-and the trust of the public-depends on it.

MTC: ⚖️ ChatGPT and the Supreme Court: Two Years of Progress in Legal AI ⚖️

What can we learn about the evolution of generative AI from its ever-growing analysis of the Supreme Court?

Ed Bershitskiy’s recent SCOTUSblog article, “We’re not there to provide entertainment. We’re there to decide cases,” offers a compelling analysis of how ChatGPT has evolved since its launch in 2023, particularly in its application to Supreme Court-related questions. The article highlights both the successes and shortcomings of AI models, providing valuable insights for legal professionals navigating this rapidly advancing technology.

In 2023, the original ChatGPT model answered only 42% of Supreme Court-related questions correctly, often producing fabricated facts, aka “hallucinations,” and errors. Fast forward to 2025: newer models like GPT-4o, o3-mini, and o1 have demonstrated significant improvements. For instance, o1 answered an impressive 90% of questions correctly, showcasing enhanced accuracy and a nuanced understanding of complex legal concepts such as non-justiciability and the counter-majoritarian difficulty. The analysis also underscores the importance of verifying AI outputs, as even advanced models occasionally produce mistakes or hallucinations.

Always Check Your Work When Using Generative AI - It Can Create Hallucinations! 🚨

The article compares three distinct AI models: GPT-4o is detail-oriented but prone to overreach; o3-mini is concise but often incomplete; and o1 strikes a balance between depth and precision. This comparison is particularly relevant for legal professionals seeking tools tailored to their needs. For example, GPT-4o excels at generating detailed narratives and tables, while o1 is ideal for concise yet accurate responses.

Lawyers are not going to be replaced by AI, but those lawyers who do not know how to use AI in their practice and stay mindful of its constant changes will be left behind!

The article also explores how the line between search engines and AI-powered tools is blurring. Unlike traditional search engines, these AI models analyze queries contextually, offering more comprehensive answers. However, legal practitioners must exercise caution when relying on AI for research or drafting to ensure ethical compliance and factual accuracy - in other words, always check your work when using AI!

As AI continues to evolve, its role in legal practice is becoming indispensable. By understanding its strengths and limitations, lawyers can leverage these tools effectively while safeguarding against potential risks. The article provides a detailed roadmap for navigating this technological transformation in law.

PS: I can’t stress enough to always check your work when using AI!

Happy Lawyering!

MTC

Shout Out to Robert Ambrogi: AI Legal Research Platforms - A Double-Edged Sword for Tech-Savvy Lawyers 🔍⚖️

The use of AI is a great starting point - but always check your work (especially your citations)!

Robert Ambrogi's recent article on LawNext sheds light on a crucial development in legal tech: the comparison of AI-driven legal research platforms. This "AI smackdown" reveals both the potential and pitfalls of these tools, echoing concerns raised in our previous editorial about Lexis AI's shortcomings.

The Southern California Association of Law Libraries' panel, featuring expert librarians, put Lexis+AI, Westlaw Precision AI, and vLex's Vincent AI to the test. Their findings? While these platforms show promise in answering basic legal questions, they're not without flaws.

Each platform demonstrated unique strengths: Lexis+AI's integration with Shepard's, Westlaw Precision AI's KeyCite features, and Vincent AI's user control options. However, inconsistencies in responses to complex queries and recent legislation underscore a critical point: AI tools are supplements, not replacements, for thorough legal research.

This evaluation aligns with our earlier critique of Lexis AI, reinforcing the need for cautious adoption of AI in legal practice. As the technology evolves, so must our approach to using it.

Mark Gediman's wise words from Bob’s article serve as a fitting conclusion:

Whenever I give the results to an attorney, I always include a disclaimer that this should be the beginning of your research, and you should review the results for relevance and applicability prior to using it, but you should not rely on it as is.
— Mark Gediman

For tech-savvy lawyers, the message is clear: Embrace AI's potential, but never forget the irreplaceable value of human expertise and critical thinking in legal research. 🧠💼

MTC

MTC: AI in Legal Email - Balancing Innovation and Ethics 💼🤖

Lawyers have an ethical duty when using AI in their work!

The integration of AI into lawyers' email systems presents both exciting opportunities and significant challenges. As legal professionals navigate this technological frontier, we must carefully weigh the benefits against potential ethical pitfalls.

Advantages of AI in Legal Email 📈

AI-powered email tools offer numerous benefits for law firms:

  • Enhanced efficiency through automation of routine tasks

  • Improved client service and satisfaction

  • Assistance in drafting responses and suggesting relevant case law

  • Flagging important deadlines

  • Improved accuracy in document review and contract analysis

These capabilities allow lawyers to focus on high-value work, potentially improving outcomes for clients and minimizing liabilities for law firms.

AI Email Assistants 🖥️

Several AI email assistants are available for popular email platforms:

  1. Microsoft Outlook:

    • Copilot for Outlook: Enhances email drafting, replying, and management using ChatGPT.

  2. Apple Mail:

  3. Gmail:

    • Gemini 1.5 Pro: Offers email summarization, contextual Q&A, and suggested replies.

  4. Multi-platform:

Always Proofread Your Work and Confirm Citations! 🚨

Ethical Considerations and Challenges 🚧

Confidentiality and Data Privacy

The use of AI in legal email raises several ethical concerns, primarily regarding the duty of confidentiality outlined in ABA Model Rule 1.6. Lawyers must ensure that AI systems do not compromise client information or inadvertently disclose sensitive data to unauthorized parties.

To address this:

Lawyers should always check their work, especially when using AI!

  1. Implement robust data security measures

  2. Understand AI providers' data handling practices

  3. Review and retain copies of AI system privacy policies

  4. Make reasonable efforts to prevent unauthorized disclosure

Competence (ABA Model Rule 1.1)

ABA Model Rule 1.1, particularly Comment 8, emphasizes the need for lawyers to understand the benefits and risks associated with relevant technology. This includes:

  • Understanding AI capabilities and limitations

  • Appropriate verification of AI outputs (Check Your Work!)

  • Staying informed about changes in AI technology

  • Considering the potential duty to use AI when benefits outweigh risks

The ABA's Formal Opinion 512 further emphasizes the need for lawyers to understand the AI tools they use to maintain competence.

Client Communication

Maintaining the personal touch in client communications is crucial. While AI can streamline processes, it should not replace nuanced, empathetic interactions. Lawyers should:

  1. Disclose AI use to clients

  2. Address any concerns about privacy and security

  3. Consider including AI use disclosure in fee agreements or retention letters

  4. Read your AI-generated/assisted drafts

Striking the Right Balance ⚖️

To ethically integrate AI into legal email systems, firms should:

  1. Implement robust data security measures to protect client confidentiality

  2. Provide comprehensive training on AI tools to ensure competent use

  3. Establish clear policies on when and how AI should be used in client communications

  4. Regularly review and audit AI systems for accuracy and potential biases

  5. Maintain transparency with clients about the use of AI in their matters

  6. Verify that AI tools are not using email content to train or improve their algorithms

AI is a tool for work - not a replacement for final judgment!

By carefully navigating ⛵️ these considerations, lawyers can harness the power of AI to enhance their practice while upholding their ethical obligations. The key lies in viewing AI as a tool to augment 🤖 human expertise, not replace it.

As the legal profession evolves, embracing AI in email and other systems will likely become essential for remaining competitive. However, this adoption must always be balanced against the core ethical principles that define the practice of law.

And Remember: Always Proofread Your Work and Confirm Citations BEFORE Sending Your E-mail (with Use of AI or Not)!

AI in Government 🇺🇸/🇨🇳: A Wake-Up Call for Lawyers on Client Data Protection 🚨

Lawyers need to be Tech-savvy and analyze AI risks, cybersecurity, and data protection!

The rapid advancement of artificial intelligence (AI) in government sectors, particularly in China🇨🇳 and the United States🇺🇸, raises critical concerns for lawyers regarding their responsibilities to protect client data. As The Tech-Savvy Lawyer.Page has long maintained, these developments underscore the urgent need for legal professionals to reassess their data protection strategies.

The AI Landscape: A Double-Edged Sword 🔪

China's DeepSeek and the U.S. government's adoption of ChatGPT across federal agencies have emerged as formidable forces in the AI arena[1]. These advancements offer unprecedented opportunities for efficiency and innovation. However, they also present significant risks, particularly in terms of data security and privacy.

The Perils of Government-Controlled AI 🕵️‍♂️

The involvement of government entities in AI development and deployment raises red flags for client data protection. As discussed in The Tech-Savvy Lawyer.Page Podcast 🎙️ Episode "67: Ethical considerations of AI integration with Irwin Kramer," lawyers have an ethical obligation to protect client information when using AI tools.

* Remember, as a lawyer, you personally do not need to be an expert on this topic - ask/hire someone who is! MRPC 1.1 and 1.1, Comment 8 💡

Lawyers' Responsibilities in the AI Era 📚

Legal professionals must recognize that the use of AI tools, particularly those with government connections, could inadvertently expose client information to unauthorized access or use. This risk is amplified when dealing with Personally Identifiable Information (PII), which requires stringent protection under various legal and ethical frameworks.

Key Concerns for Lawyers:

  • Data Privacy: Ensure that client PII is not inadvertently shared or stored on AI platforms that may have government oversight or vulnerabilities.

  • Ethical Obligations: Maintain compliance with ethical duties of confidentiality and competence when utilizing AI tools in legal practice, as emphasized in ABA Model Rule of Professional Conduct 1.6.

  • Due Diligence: Thoroughly vet AI platforms and their data handling practices before incorporating them into legal workflows.

  • Informed Consent: Obtain explicit client consent for the use of AI tools, especially those with potential government connections.

  • Data Localization: Consider the implications of data being processed or stored in jurisdictions with different privacy laws or government access policies.

Proactive Measures for Legal Professionals 🛡️

Lawyers need to be discussing their firm’s AI, cybersecurity, and client data protection strategies!

To address these concerns, The Tech-Savvy Lawyer.Page suggests that lawyers should:

  1. Implement robust data encryption and access control measures.

  2. Regularly audit and update data protection policies and practices.

  3. Invest in secure, private AI solutions specifically designed for legal use.

  4. Educate staff on the risks associated with AI and government-controlled platforms.

  5. Stay informed about evolving AI technologies and their implications for client data protection.

Final Thoughts 🧐

The rise of government-controlled AI presents a critical juncture for legal professionals, demanding a reevaluation of data protection strategies and ethical obligations. As The Tech-Savvy Lawyer.Page has consistently emphasized, lawyers must strike a delicate balance between embracing AI's benefits and safeguarding client confidentiality, in line with ABA Model Rules of Professional Conduct and evolving technological landscapes. By staying informed (including following The Tech-Savvy Lawyer.Page Blog and Podcast! 🤗), implementing robust security measures and maintaining a critical eye on these issues, legal professionals can navigate the AI revolution while upholding our paramount duty to protect client interests.

MTC

🎙️ Ep. 102: From Chatbots to Generative AI – Tom Martin explores LawDroid's legal tech advancements with AI.

Welcome back previous podcast guest Tom Martin, the CEO and Founder of LawDroid, a legal tech pioneer revolutionizing law firms with AI-driven solutions!

Today, Tom explains how LawDroid has evolved from classical AI to incorporating natural language and generative models. He highlights its hybrid platform, AI receptionists, and automation features. He discusses AI-driven legal research and document management, stressing accuracy through retrieval-augmented generation. Tom advises lawyers to see AI as a collaborator, not an infallible tool, and to manage expectations about its capabilities.

Join Tom and me as we discuss the following three questions and more!

  1. What are the top three ways generative AI has transformed LawDroid's offerings and operations?

  2. What are the three most critical security concerns legal professionals should consider when using AI-integrated products like LawDroid? For each situation, provide strategies to address these concerns.

  3. What are the top three things lawyers should not expect from products like LawDroid?

In our conversation, we cover the following:

[01:31] Tom's Current Tech Setup

[05:59] LawDroid's Evolution and AI Integration

[08:36] AI-Driven Features in LawDroid

[09:47] Security Concerns in AI-Integrated Legal Products

[12:45] Addressing Security and Reliability in LawDroid

[16:33] LawDroid's Legal Research and Document Management

[18:21] Expectations and Limitations of Legal AI

[20:51] Contact Information

Resources:

Connect with Tom:

Software & Cloud Services mentioned in the conversation:

🚨BOLO: AI Malpractice🚨: Texas Lawyer Fined for AI-Generated Fake Citations! 😮

We’ve been reporting on lawyers incorrectly using AI in their work, but the lesson has not yet reached all practicing lawyers. Here is another cautionary tale for legal professionals!

No lawyer wants to be disciplined for using generative AI incorrectly - check your work!

A Texas lawyer, Brandon Monk, has been fined $2,000 for using AI to generate fake case citations in a court filing. U.S. District Judge Marcia Crone of the Eastern District of Texas imposed the penalty and ordered Monk to complete a continuing legal education course on generative AI. The incident occurred in a wrongful termination case against Goodyear Tire & Rubber Co., in which Monk submitted a brief containing non-existent cases and fabricated quotes. Of particular concern, he was using Lexis’s AI function in his work - check out the report card a Canadian law professor gave Lexis+ AI in my editorial here. The case highlights the ethical challenges and potential pitfalls of using AI in legal practice.

The judge's ruling emphasizes that attorneys remain accountable for the accuracy of their submissions, regardless of the tools used.

Read the full article on Reuters for an in-depth look at this landmark case and its implications for the legal profession.

Be careful out there!

MTC/🚨BOLO🚨: Lexis+ AI™️ Falls Short for Legal Research!

As artificial intelligence rapidly transforms various industries, the legal profession is no exception. However, a recent evaluation of Lexis+ AI™️, a new "generative AI-powered legal assistant" from LexisNexis, raises serious concerns about its reliability and effectiveness for legal research and drafting.

Lexis+ AI™️ gets a failing grade!

In a comprehensive review, Professor Benjamin Perrin of the University of British Columbia’s Peter A. Allard School of Law put Lexis+ AI™️ through its paces, testing its capabilities across multiple rounds. The results were disappointing, revealing significant limitations that should give legal professionals pause before incorporating this tool into their workflow.

Key issues identified include:

  1. Citing non-existent legislation

  2. Verbatim reproduction of case headnotes presented as "summaries"

  3. Inaccurate responses to basic legal questions

  4. Inconsistent performance and inability to complete requested tasks

Perhaps most concerning was the AI's tendency to confidently provide incorrect information, a phenomenon known as "hallucination" that poses serious risks in the legal context. For example, when asked to draft a motion, Lexis+ AI™️ referenced a non-existent section of Canadian legislation. In another instance, it confused criminal and tort law concepts when explaining causation.

These shortcomings highlight the critical need for human oversight and verification when using AI tools in legal practice. While AI promises increased efficiency, the potential for errors and misinformation underscores that these technologies are not yet ready to replace traditional legal research methods or professional judgment.

For lawyers considering integrating AI into their practice, several best practices emerge:

Lawyers need to be wary when using generative AI! 😮

  1. Understand the technology's limitations

  2. Verify all AI-generated outputs against authoritative sources

  3. Maintain client confidentiality by avoiding sharing sensitive information with AI tools

  4. Stay informed about AI developments and ethical guidelines

  5. Use AI as a supplement to, not a replacement for, human expertise

Canadian law societies and bar associations, mirroring their U.S. counterparts, are actively addressing the ethical implications of AI in legal practice. The Law Society of British Columbia has issued comprehensive guidelines that underscore the critical importance of understanding AI technology, safeguarding client confidentiality, and cautioning against excessive reliance on AI tools. Similarly, the Law Society of Ontario has established its own set of guidelines, reflecting a growing consensus on the need for ethical AI use in the legal profession.

While the structure of Canadian bar ethics codes may differ from the ABA Model Rules of Ethics, and specific provisions may vary between jurisdictions, the overarching themes regarding the use of generative AI in legal practice are strikingly similar. These common principles include:

  1. Maintaining competence in AI technologies

  2. Ensuring client confidentiality when using AI tools

  3. Exercising professional judgment and avoiding over-reliance on AI

  4. Upholding the duty of supervision when delegating tasks to AI systems

  5. Addressing potential biases in AI-generated content

Hallucinations can end a lawyer’s career!

This alignment in ethical considerations across North American jurisdictions underscores the universal challenges and responsibilities that AI integration poses for the legal profession. As AI continues to evolve, ongoing collaboration between Canadian and American legal bodies will likely play a crucial role in shaping coherent, cross-border approaches to AI ethics in law.

It is crucial for legal professionals to approach these tools with a critical eye. AI has the potential to streamline certain aspects of legal work. But Professor Perrin’s review of Lexis+ AI™️ serves as a stark reminder that the technology is not yet sophisticated enough to be trusted without significant human oversight.

Ultimately, the successful integration of AI in legal practice will require a delicate balance – leveraging the efficiency gains offered by technology while upholding the profession's core values of accuracy, ethics, and client service. As we navigate this new terrain, ongoing evaluation and open dialogue within the legal community will be essential to ensure AI enhances, rather than compromises, the quality of legal services.

MTC

🎙️ Ep. 100: Guest Host Carolyn Elefant Catching Up with Your Tech-Savvy Lawyer Blogger And Podcaster!

In this special 100th Episode, guest host Carolyn Elefant catches up with your tech-savvy lawyer, blogger, and podcaster. We discuss my current tech setup, how technology is changing legal practice, and the impact of AI on client communication and law work. We also discuss practical tips for using tech tools effectively to improve efficiency and strengthen client relationships.

This milestone episode is full of insights for lawyers, judges and legal practitioners looking to stay ahead in legal tech!

Join Carolyn and me as we discuss the following three questions and more!

  1. How have some of the other legal tech tools Michael uses transformed the way he works with clients and delivers service to them since the 50th episode?

  2. What challenges and opportunities are legal professionals encountering when they use technologies like generative AI, and what implications do these have for client confidentiality and data security?

  3. What are the most significant challenges and opportunities Michael has observed for legal professionals using technology to enhance client confidentiality and data security?

In our conversation, we cover the following:

[01:03] Michael's Current Tech Setup

[04:13] Legal Tech Tools and Client Communication

[07:04] Evolution of AI in Legal Practice

[09:50] Challenges and Opportunities with AI

[15:40] Practical Advice for Tech Use in Legal Practice

[21:08] Connect with Carolyn

Connect with Carolyn:

Linkedin: linkedin.com/in/carolynelefant/

Website: myshingle.com/

Resources:

Hardware mentioned in the conversation:

Software & Cloud Services mentioned in the conversation:

MTC: Can Lawyers Ethically Use Generative AI with Public Documents? 🤔 Navigating Competence, Confidentiality, and Caution! ⚖️✨

Lawyers need to be concerned with their legal ethics requirements when using AI in their work!

After my recent interview with Jayne Reardon on The Tech-Savvy Lawyer.Page Podcast 🎙️ Episode 99, it made me think: “Can we, or can we not, use public generative AI in our legal work for clients by only using publicly filed documents?” This question has become increasingly relevant as tools like ChatGPT, Google's Gemini, and Perplexity AI gain popularity and sophistication. While these technologies offer tantalizing possibilities for improving efficiency and analysis in legal practice, they also raise significant ethical concerns that lawyers must carefully navigate.

The American Bar Association (ABA) Model Rules of Professional Conduct (MRPC) provide a framework for considering the ethical implications of using generative AI in legal practice. Rule 1.1 on competence is particularly relevant, as it requires lawyers to provide competent representation to clients. Many state bar associations provide that lawyers should keep abreast of the benefits and risks associated with relevant technology. This scrutiny highlights AI’s growing importance in the legal profession.

However, the application of this rule to generative AI is not straightforward. On one hand, using AI tools to analyze publicly filed documents and assist in brief writing could be seen as enhancing a lawyer's competence by leveraging advanced technology to improve research and analysis. On the other hand, relying too heavily on AI without understanding its limitations and potential biases could be seen as a failure to provide competent representation.

The use of generative AI can involve complex ethics requirements.

The duty of confidentiality, outlined in Rule 1.6, presents another significant challenge when considering the use of public generative AI tools. Lawyers must ensure that client information remains confidential, which can be difficult when using public AI platforms that may store or learn from the data input into them. As discussed in our October 29th editorial, The AI Revolution in Law: Adapt or Be Left Behind (& where the bar associations are on the topic), state bar associations are beginning (if they have not already begun) to scrutinize lawyers' use of generative AI. Furthermore, as Jayne Reardon astutely pointed out in our recent interview, even if a lawyer anonymizes the client's personally identifiable information (PII), inputting the client's facts into a public generative AI tool may still violate the rule of confidentiality. This is because the public may be able to deduce that the entry pertains to a specific client based on the context and details provided, even if they are "whitewashed." This raises important questions about the extent to which lawyers can use public AI tools without compromising client confidentiality, even when taking precautions to remove identifying information.

State bar associations have taken varying approaches to these issues. For example, the Colorado Supreme Court has formed a subcommittee to consider recommendations for amendments to their Rules of Professional Conduct to address attorney use of AI tools. Meanwhile, the Iowa State Bar Association has published resources on AI for lawyers, emphasizing the need for safeguards and human oversight.

The potential benefits of using generative AI in legal practice are significant. As Troy Doucet discussed in 🎙️Episode 92 of The Tech-Savvy Lawyer.Page Podcast, AI-driven document drafting systems can empower attorneys to efficiently create complex legal documents without needing advanced technical skills. Similarly, Mathew Kerbis highlighted in 🎙️ Episode 85 how AI can be leveraged to provide more accessible legal services through subscription models.

Do you know what your generative AI program is sharing with the public?

However, the risks are equally significant. AI hallucinations, where the AI generates false or misleading information, have led to disciplinary actions against lawyers who relied on AI-generated content without proper verification. See my editorial post My Two Cents: If you are going to use ChatGPT and its cousins to write a brief, Shepardize!!! Chief Justice John Roberts warned in his 2023 Year-End Report on the Federal Judiciary that "any use of AI requires caution and humility."

Given these considerations, a balanced approach to using generative AI in legal practice is necessary. Lawyers can potentially use these tools to analyze publicly filed documents and assist in brief writing, but with several important caveats:

1. Verification: All AI-generated content must be thoroughly verified for accuracy. Lawyers cannot abdicate their professional responsibility to ensure the correctness of legal arguments and citations.

2. Confidentiality: Extreme caution must be exercised to ensure that no confidential client information is input into public AI platforms.

3. Transparency: Lawyers should consider disclosing their use of AI tools to clients and courts, as appropriate.

The convergence of AI, its use in the practice of law, and legal ethics is here now!

4. Understanding limitations: Lawyers must have a solid understanding of the capabilities and limitations of the AI tools they use.

5. Human oversight: AI should be used as a tool to augment human expertise, not replace it.

This blog and podcast have consistently emphasized the importance of these principles. In our discussion with Katherine Porter in 🎙️ Episode 88, we explored how to maximize legal tech while avoiding common pitfalls. My various postings have always emphasized the need for critical thinking and careful consideration before adopting new AI tools.

It's worth noting that the legal industry is still in the early stages of grappling with these issues. As Jayne Reardon explored in 🎙️ Episode 99 of our podcast, the ethical concerns surrounding lawyers' use of AI are complex and evolving. The legal profession will need to continue to adapt its ethical guidelines as AI technology advances.

While generative AI tools offer exciting possibilities for enhancing legal practice, their use must be carefully balanced against ethical obligations. Lawyers can potentially use these tools to analyze publicly filed documents and assist in brief writing, but they must do so with a clear understanding of the risks and limitations involved. As the technology evolves, so too must our approach to using it ethically and effectively in legal practice.

MTC