šŸ“– Word of the Week: RAG (Retrieval-Augmented Generation) - The Legal AI Breakthrough Reducing Hallucinations. šŸ“šāš–ļø

What is RAG?

Used responsibly, RAG can be a great tool for lawyers!

Retrieval-Augmented Generation (RAG) is a groundbreaking artificial intelligence technique that combines information retrieval with text generation. Unlike traditional AI systems that rely solely on pre-trained data, RAG dynamically retrieves relevant information from external legal databases before generating responses.
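For readers curious about the mechanics, here is a minimal Python sketch of the retrieve-then-generate pattern. It is illustrative only: the naive keyword scoring stands in for the embedding-based retrieval real systems use, and the `legal_index` and `llm` client in the usage note are hypothetical placeholders, not any vendor's implementation.

```python
def retrieve(query: str, index: list[dict], top_k: int = 3) -> list[dict]:
    """Score each indexed document by naive keyword overlap and return the
    top_k hits. Real RAG systems use embedding similarity; keyword counting
    stands in here for brevity."""
    words = query.lower().split()
    scored = [(sum(w in doc["text"].lower() for w in words), doc) for doc in index]
    scored.sort(key=lambda pair: pair[0], reverse=True)
    return [doc for score, doc in scored[:top_k] if score > 0]

def build_prompt(query: str, sources: list[dict]) -> str:
    """Ground the model in the retrieved sources and require citations,
    which is what curbs fabricated case law."""
    context = "\n\n".join(f"[{d['citation']}] {d['text']}" for d in sources)
    return (
        "Answer using ONLY the sources below, and cite each source you rely on.\n\n"
        f"Sources:\n{context}\n\nQuestion: {query}"
    )

# Usage with hypothetical pieces:
#   sources = retrieve(question, legal_index)  # index entries: {"citation", "text"}
#   answer = llm.complete(build_prompt(question, sources))  # hypothetical LLM client
```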

Why RAG Matters for Legal Practice

RAG addresses the most significant concern with legal AI: fabricated citations and "hallucinations." By grounding AI responses in verified legal sources, RAG systems dramatically reduce the risk of generating fictional case law. Recent studies suggest RAG-powered legal tools can produce hallucination rates comparable to human-only work, though performance varies by tool.

Key Benefits

RAG technology offers several advantages for legal professionals:

Enhanced Accuracy: RAG systems pull from authoritative legal databases, ensuring responses are based on actual statutes, cases, and regulations rather than statistical patterns.

Real-Time Updates: Unlike static AI models, RAG can access current legal information, making it valuable for rapidly evolving areas of law.

Source Attribution: RAG provides clear citations and references, enabling attorneys to verify and build upon AI-generated research.

Practical Applications

Lawyers who don’t use AI technology like RAG will be replaced by those who do!

Law firms are implementing RAG for case law research, contract analysis, and legal memo drafting. The technology excels at tasks requiring specific legal authorities and performs best when presented with clearly defined legal issues.

Professional Responsibility Under ABA Model Rules

ABA Model Rule 1.1 (Competence): Comment 8 requires lawyers to "keep abreast of changes in the law and its practice, including the benefits and risks associated with relevant technology." This mandates understanding RAG capabilities and limitations before use.

ABA Model Rule 1.6 (Confidentiality): Lawyers must "make reasonable efforts to prevent the inadvertent or unauthorized disclosure of, or unauthorized access to, information relating to the representation of a client." When using RAG systems, attorneys must verify data security measures and understand how client information is processed and stored.

ABA Model Rule 5.3 (Supervision of Nonlawyer Assistants): ABA Formal Opinion 512 clarifies that AI tools may be considered "nonlawyer assistants" requiring supervision. Lawyers must establish clear policies for RAG usage and ensure proper training on ethical obligations.

ABA Formal Opinion 512: This 2024 guidance emphasizes that lawyers cannot abdicate professional judgment to AI systems. While RAG systems offer improved reliability over general AI tools, attorneys remain responsible for verifying outputs and maintaining competent oversight.

Final Thoughts: Implementation Considerations

Lawyers must consider their ethical responsibilities when using generative AI, large language models, and RAG.

While RAG significantly improves AI reliability, attorneys must still verify outputs and exercise professional judgment. The technology enhances rather than replaces legal expertise. Lawyers should understand terms of service, consult technical experts when needed, and maintain "human-in-the-loop" oversight consistent with professional responsibility requirements.

RAG represents a crucial step toward trustworthy legal AI, offering attorneys powerful research capabilities while maintaining the accuracy standards essential to legal practice and compliance with ABA Model Rules. Just make sure you use it correctly and check your work!

ILTACON 2025: Legal AI Revolution Accelerates as Major Providers Unveil Next-Generation Platforms

Lexis, vLex, and Westlaw highlight their newest AI functions!

The International Legal Technology Association’s 2025 annual conference (#ILTACON2025), held at National Harbor just outside Washington, DC, became the epicenter of legal AI innovation as Thomson Reuters, LexisNexis, and vLex/Fastcase showcased their most advanced artificial intelligence platforms. Each provider demonstrated distinct approaches to solving the legal profession's technology challenges, with announcements that signal a fundamental shift from experimental AI tools to enterprise-ready systems capable of autonomous legal workflows.

Thomson Reuters Launches CoCounsel Legal with Groundbreaking Deep Research

Thomson Reuters made headlines with the launch of CoCounsel Legal, featuring what the company positions as industry-leading Agentic AI capabilities. This launch represents a fundamental evolution from AI assistants that respond to prompts toward intelligent systems that can plan, reason, and execute complex multi-step workflows autonomously.

The platform's flagship innovation is Deep Research, an AI feature that conducts comprehensive legal research by leveraging Westlaw Advantage’s proprietary research tools and expert legal content. According to Thomson Reuters, CoCounsel Legal combines advanced generative models with the exclusive resources of Westlaw and Practical Law, aiming to deliver trusted, up-to-date, and relevant legal analysis for practitioners. The company emphasizes that its Agentic AI operates directly within Westlaw, making use of the platform’s curated research toolset and authoritative content to enhance accuracy and reliability in legal workflows.


Key capabilities include guided workflows for drafting privacy policies, employee policies, complaints, and discovery requests, with Thomson Reuters planning incremental releases of new workflows. The platform addresses the critical challenge of document management system integration through federated search technology, which leverages existing Document Management System (DMS) search systems while applying AI for re-ranking and summarization.
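As an illustration of the general federated-search pattern (and emphatically not Thomson Reuters' actual code), the sketch below fans a query out to assumed DMS connector objects, merges the hits, and re-ranks the combined pool with a single scoring function.

```python
from concurrent.futures import ThreadPoolExecutor

def federated_search(query, connectors, rerank_score, top_k=10):
    """Fan the query out to each existing DMS search system, merge the
    results, then re-rank the combined pool with one relevance model.
    Each connector is assumed to expose search(query) -> list of hits."""
    with ThreadPoolExecutor() as pool:
        result_lists = list(pool.map(lambda c: c.search(query), connectors))
    merged = [hit for hits in result_lists for hit in hits]
    # In practice rerank_score(query, hit) would be an AI relevance model;
    # for this sketch, any callable returning a number works.
    merged.sort(key=lambda hit: rerank_score(query, hit), reverse=True)
    return merged[:top_k]
```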

The company also introduced Westlaw Advantage on August 13, 2025, positioned as the final versioned release of Westlaw, with future improvements delivered through continuous updates rather than new license agreements. This shift to a traditional Software-as-a-Service (aka SaaS) delivery model includes multi-year subscriptions with automatic upgrades at no additional cost.

Thomson Reuters has invested $10 billion in transforming legal technology foundations, with over $200 million annually dedicated specifically to integrating AI into flagship products. The platform already serves over 20,000 law firms and corporate legal departments, including the majority of AmLaw 100 firms.

LexisNexis Introduces ProtƩgƩ General AI with Industry-First Voice Capabilities

LexisNexis announced on August 11, 2025, the preview launch of ProtƩgƩ General AI, expanding its personalized AI assistant to include secure access to general-purpose AI models alongside legal-specific tools. This development builds on the company's March 2025 launch of the legal industry's first voice-enabled AI assistant for complex legal work. This voice feature allows users to interact naturally with the platform, guiding legal research and drafting by issuing spoken requests. The tool is designed to help legal practitioners streamline routine workflows, surface key insights, and perform drafting and search tasks hands-free, all within a secure and integrated environment.


ProtƩgƩ's key differentiator lies in its toggle functionality, allowing users to switch between authoritative legal AI (grounded in LexisNexis content) and general-purpose AI models including GPT-5*, GPT-4o, GPT-o3, and Claude Sonnet 4. This eliminates the need to switch between different AI tools while maintaining enterprise-grade security.

The platform processes documents up to 300 pages long (a 250% increase over previous limits) and offers unprecedented personalization capabilities. It learns individual user workflows, preferences, writing styles, and jurisdictions to deliver customized responses. The system integrates with document management systems to ground responses in firm-specific knowledge while maintaining strict security controls.

Approximately 200 law firms, corporate legal departments, and law schools are participating in the customer preview program, with general availability expected later in 2025.

vLex Showcases Vincent AI Spring '25 with Studio Workflow Creation

vLex presented its Vincent AI Spring '25 Release at ILTACON 2025, highlighting enhanced agentic capabilities and the introduction of Studio, a platform allowing users to create custom workflows without coding. The company emphasized its data-centric approach, leveraging its billion-document global legal database spanning over 100 countries.


vLex’s Spring ’25 release also emphasizes its Vincent Tables feature, which allows users to extract and compare key data points across large sets of documents and generate structured outputs like memos. Their General Assist capability supports drafting tasks—such as composing emails and summarizing meeting notes—within Vincent’s secure, enterprise-grade environment. Overall, vLex positions Vincent AI as a comprehensive workflow platform that delivers consistent, authoritative legal insights powered by a global database of over one billion documents from more than 100 jurisdictions.
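To make the tabular-extraction idea concrete, here is a conceptual Python sketch of the general pattern (an illustration only, not vLex's Vincent Tables implementation). The field names and the `extract_field` helper are hypothetical; in practice the helper would be an LLM-backed extractor that returns each data point with a citation.

```python
import csv

# Hypothetical data points to pull from each document for comparison
FIELDS = ["governing_law", "termination_notice", "liability_cap"]

def build_table(documents: list[dict], extract_field) -> list[dict]:
    """Pull the same set of data points from every document so the results
    can be compared side by side or turned into a structured memo."""
    rows = []
    for doc in documents:
        row = {"document": doc["name"]}
        for field in FIELDS:
            # extract_field(text, field) is an assumed LLM-backed helper
            row[field] = extract_field(doc["text"], field)
        rows.append(row)
    return rows

def export_csv(rows: list[dict], path: str) -> None:
    """Write the comparison table to CSV for human review."""
    with open(path, "w", newline="") as f:
        writer = csv.DictWriter(f, fieldnames=["document", *FIELDS])
        writer.writeheader()
        writer.writerows(rows)
```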

During ILTACON, vLex also announced the 2025 Fastcase 50 awards, recognizing legal innovation leaders who are "engineering the future of legal practice." The company positioned itself as serving the "engineering minds and visionary leaders driving the legal profession's transformation."

šŸ”Ž Feature Comparison: How the Big Three Actually Stack Up

Market Positioning and Strategic Differentiation

The three providers have established distinct market positions based on their 2025 announcements. Thomson Reuters targets enterprise-level implementations, evidenced by multi-year contracts with the U.S. Federal Courts system, including the U.S. Supreme Court, and a focus on consistent, reliable workflows for large-scale legal operations.

LexisNexis emphasizes user experience and personalization, with ProtƩgƩ designed to understand individual lawyer preferences and adapt to different work styles. The voice interface represents a significant advancement in accessibility and usability, particularly valuable for lawyers with physical accessibility needs or those who prefer natural language interaction.

vLex positions itself as serving both mid-size firms and AmLaw 100 practices, emphasizing comprehensive workflow solutions and global legal coverage. The Studio platform addresses the growing demand for customizable AI workflows tailored to specific practice requirements.

Final Thoughts: Industry Impact and Measurable Results

ILTACON was a great experience - I learned a lot and hope to share it with you!

These ILTACON 2025 announcements demonstrate the maturation of legal AI from experimental tools to platforms delivering measurable business value. Case studies reveal significant cost savings, with startups like OMNIUX reporting monthly savings of $15,000 to $20,000 in legal fees using CoCounsel.

Independent analysis shows that contract review tasks, which previously required two to two and a half hours, can now be completed in 10 minutes, representing productivity improvements of over 90%. Legal professionals report that document analysis tasks requiring days of manual work can now be completed in under an hour.

The competitive landscape now features three mature approaches: Thomson Reuters' enterprise-focused agentic workflows with deep legal research integration, LexisNexis's personalized voice-enabled AI with comprehensive model flexibility, and vLex's comprehensive workflow platform with global legal intelligence.

As legal professionals evaluate these platforms, selection criteria should include firm size, practice areas, existing technology infrastructure, required customization levels, and specific workflow requirements. The legal profession's digital transformation has clearly accelerated beyond the experimental phase, with AI becoming essential infrastructure for competitive legal practice.

But what does this mean for solo, small-, and medium-size law firms? Stay tuned - my analysis on that will be posted soon!

Happy Lawyering!

* (Note: the original launch was supposed to include GPT-5, but it has been pulled pending resolution of issues with the model - see MTC: Why "Newer" AI Models Aren't Always Better: The ChatGPT-5 and Apple Intelligence Reality Check for Legal Professionals! for reference.)

MTC: AI Governance Crisis - What Every Law Firm Must Learn from 1Password's Eye-Opening Security Research

The legal profession stands at a crossroads. Recent research commissioned by 1Password reveals four critical security challenges that should serve as a wake-up call for every law firm embracing artificial intelligence. With 79% of legal professionals now using AI tools in some capacity while only 10% of law firms have formal AI governance policies, the disconnect between adoption and oversight has created unprecedented vulnerabilities that could compromise client confidentiality and professional liability.

The Invisible AI Problem in Law Firms

The 1Password study's most alarming finding mirrors what law firms are experiencing daily: only 21% of security leaders have full visibility into AI tools used in their organizations. This visibility gap is particularly dangerous for law firms, where attorneys and staff may be uploading sensitive client information to unauthorized AI platforms without proper oversight.

Dave Lewis, Global Advisory CISO at 1Password, captured the essence of this challenge perfectly: "We have closed the door to AI tools and projects, but they keep coming through the window!" This sentiment resonates strongly with legal technology experts who observe attorneys gravitating toward consumer AI tools like ChatGPT for legal research and document drafting, often without understanding the data security implications.

The parallel to law firm experiences is striking. Recent Stanford HAI research revealed that even professional legal AI tools produce concerning hallucination rates—Westlaw AI-Assisted Research showed a 34% error rate, while Lexis+ AI exceeded 17%. (Remember my editorial/BOLO MTC/🚨BOLO🚨: Lexis+ AIā„¢ Falls Short for Legal Research!) These aren't consumer chatbots but professional tools marketed to law firms as reliable research platforms.

Four Critical Lessons for Legal Professionals

First, establish comprehensive visibility protocols. The 1Password research shows that 54% of security leaders admit their AI governance enforcement is weak, with 32% believing up to half of employees continue using unauthorized AI applications. Law firms must implement SaaS governance tools to identify AI usage across their organization and document how employees are actually using AI in their workflows.

Second, recognize that good intentions create dangerous exposures. The study found that 63% of security leaders believe the biggest internal threat is employees unknowingly giving AI access to sensitive data. For law firms handling privileged attorney-client communications, this risk is exponentially greater. Staff may innocently paste confidential case details into AI tools, potentially violating client confidentiality rules and creating malpractice liability.

Third, address the unmanaged AI crisis immediately. More than half of security leaders estimate that 26-50% of their AI tools and agents are unmanaged. In legal practice, this could mean AI agents are interacting with case management systems, client databases, or billing platforms without proper access controls or audit trails—a compliance nightmare waiting to happen.

Fourth, understand that traditional security models are inadequate. The research emphasizes that conventional identity and access management systems weren't designed for AI agents. Law firms must evolve their access governance strategies to include AI tools and create clear guidelines for how these systems should be provisioned, tracked, and audited.

Beyond Compliance: Strategic Imperatives

The American Bar Association's Formal Opinion 512 established clear ethical frameworks for AI use, but compliance requires more than policy documents. Law firms need proactive strategies that enable AI benefits while protecting client interests.

Effective AI governance starts with education. Most legal professionals aren't thinking about AI security risks in these terms. Firms should conduct workshops and tabletop exercises to walk through potential scenarios and develop incident response protocols before problems arise.

The path forward doesn't require abandoning AI innovation. Instead, it demands extending trust-based security frameworks to cover both human and machine identities. Law firms must implement guardrails that protect confidential information without slowing productivity—user-friendly systems that attorneys will actually follow.

Final Thoughts: The Competitive Advantage of Responsible AI Adoption

Firms that proactively address these challenges will gain significant competitive advantages. Clients increasingly expect their legal counsel to use technology responsibly while maintaining the highest security standards. Demonstrating comprehensive AI governance builds trust and differentiates firms in a crowded marketplace.

The research makes clear that security leaders are aware of AI risks but under-equipped to address them. For law firms, this awareness gap represents both a challenge and an opportunity. Practices that invest in proper AI governance now will be positioned to leverage these powerful tools confidently while their competitors struggle with ad hoc approaches.

The legal profession's relationship with AI has fundamentally shifted from experimental adoption to enterprise-wide transformation. The 1Password research provides a roadmap for navigating this transition securely. Law firms that heed these lessons will thrive in the AI-augmented future of legal practice.

MTC

MTC: Why Courts Hesitate to Adopt AI - A Crisis of Trust in Legal Technology

Despite facing severe staffing shortages and mounting operational pressures, America's courts remain cautious about embracing artificial intelligence technologies that could provide significant relief. While 68% of state courts report staff shortages and 48% of court professionals lack sufficient time to complete their work, only 17% currently use generative AI tools. This cautious approach reflects deeper concerns about AI reliability, particularly in light of recent, and regrettably continuing, high-profile errors by attorneys using AI-generated content in court documents.

The Growing Evidence of AI Failures in Legal Practice

Recent cases demonstrate why courts' hesitation may be justified. In Colorado, two attorneys representing MyPillow CEO Mike Lindell were fined $3,000 each after submitting a court filing containing nearly 30 AI-generated errors, including citations to nonexistent cases and misquoted legal authorities. The attorneys admitted to using artificial intelligence without properly verifying the output, violating Federal Rule of Civil Procedure 11.

Similarly, a federal judge in California sanctioned attorneys from Ellis George LLP and K&L Gates LLP $31,000 after they submitted briefs containing fabricated citations generated by AI tools including CoCounsel, Westlaw Precision, and Google Gemini. The attorneys had used AI to create an outline that was shared with colleagues who incorporated the fabricated authorities into their final brief without verification.

These incidents are part of a broader pattern of AI hallucinations in legal documents. The June 16, 2025, Order to Show Cause in Sullivan v. Wisnovsky, No. 1:21-cv-00157-CL (D. Or.), demonstrates another instance where plaintiffs cited "fifteen non-existent cases and misrepresented quotations from seven real cases" after relying on what they claimed was "an automated legal citation tool." The court found this explanation insufficient to avoid sanctions.

The Operational Dilemma Facing Courts

Lawyers need to balance legal tradition with ethical AI innovation.

The irony is stark: courts desperately need technological solutions to address their operational challenges, yet recent AI failures have reinforced their cautious approach. Court professionals predict that generative AI could save them an average of three hours per week initially, growing to nearly nine hours within five years. These time savings could be transformative for courts struggling with increased caseloads and staff shortages.

However, the profession's experience with AI-generated hallucinations has created significant trust issues. Currently, 70% of courts prohibit employees from using AI-based tools for court business, and 75% have not provided any AI training to their staff. This reluctance stems from legitimate concerns about accuracy, bias, and the potential for AI to undermine the integrity of judicial proceedings.

The Technology Adoption Paradox

Courts have successfully adopted other technologies, with 86% implementing case management systems, 85% using e-filing, and 88% conducting virtual hearings. This suggests that courts are not inherently resistant to technology. But they are specifically cautious about AI due to its propensity for generating false information.

The legal profession's relationship with AI reflects broader challenges in implementing emerging technologies. While 55% of court professionals recognize AI as having transformational potential over the next five years, the gap between recognition and adoption remains significant. This disconnect highlights the need for more reliable AI systems and better training for legal professionals.

The Path Forward: Measured Implementation

The solution is not to abandon AI but to implement it more carefully. Legal professionals must develop better verification protocols. As one expert noted, "AI verification isn't optional—it's a professional obligation." This means implementing systematic citation checking, mandatory human review, and clear documentation of AI use in legal documents. Lawyers must stay up to date on the technology available to them, as required by American Bar Association Model Rule of Professional Conduct 1.1, Comment 8, including the expectation that they use the best technology currently accessible. Courts, too, need comprehensive governance frameworks that address data handling, disclosure requirements, and decision-making oversight before evaluating AI tools. The American Bar Association's Formal Opinion 512 on Generative Artificial Intelligence Tools provides essential guidance, emphasizing that lawyers must fully consider their ethical obligations when using AI.
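As a concrete example of what systematic citation checking can look like, here is a minimal Python sketch that pulls reporter-style citations out of a draft and flags any that are absent from a verified list. The regular expression and the `verified` set are simplified assumptions; a real workflow would check candidates against an authoritative citator and still route every flag to a human reviewer.

```python
import re

# Matches common reporter-style citations such as "575 U.S. 320",
# "123 F.3d 456", or "45 F. Supp. 2d 678" (a deliberately simplified pattern)
CITATION_RE = re.compile(
    r"\b\d{1,4}\s+(?:U\.S\.|S\. Ct\.|F\. Supp\.(?: 2d| 3d)?|F\.(?:2d|3d|4th)?)\s+\d{1,4}\b"
)

def flag_unverified_citations(draft: str, verified: set[str]) -> list[str]:
    """Return every citation in the draft that is absent from the verified
    set; each flagged citation still needs human confirmation."""
    found = set(CITATION_RE.findall(draft))
    return sorted(found - verified)

# Usage: flag_unverified_citations(brief_text, {"575 U.S. 320"})
```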

Final Thoughts

The Future of Law: AI and Justice in Harmony!

Despite the risks, courts and legal professionals cannot afford to ignore AI indefinitely. The technology's potential to address staffing shortages, reduce administrative burdens, and improve access to justice makes it essential for the future of the legal system. However, successful implementation requires acknowledging AI's limitations while developing robust safeguards to prevent the types of errors that have already damaged trust in the technology.

The current hesitation reflects a profession learning to balance innovation with reliability. As AI systems improve and legal professionals develop better practices for using them, courts will likely become more willing to embrace these tools. Until then, the cautious approach may be prudent, even if it means forgoing potential efficiency gains.

The legal profession's experience with AI serves as a reminder that technological adoption in critical systems requires more than just recognizing potential benefits—it demands building the infrastructure, training, and governance necessary to use these powerful tools responsibly.

MTC

šŸŽ™ļø Ep. 114: Unlocking Legal Innovation: AI And IP With Matthew Veale of Patsnap

Our next guest is Matthew Veale, a European patent attorney and a member of Patsnap's Professional Systems team. He introduces Patsnap, an AI-powered innovation intelligence platform, and explains how it supports IP and R&D professionals through tools for patent analytics, prior art searches, and strategic innovation mapping.

Furthermore, Matthew highlights Patsnap's AI-driven capabilities, including semantic search and patent drafting support, while emphasizing its adherence to strict data security and ISO standards. He outlines three key ways lawyers can leverage AI—note-taking, document drafting, and creative ideation—while warning of risks like data quality, security, and transparency.

Join Matthew and me as we discuss the following three questions and more!

  1. What are the top three ways IP and R&D lawyers can use Patsnap's AI to help them with their work?

  2. What are the top three ways lawyers can use AI in their day-to-day work, regardless of the practice area?

  3. What are the top three issues lawyers should be wary of when using AI?

In our conversation, we covered the following:

[01:07] Matthew's Tech Setup

[04:43] Introduction to Patsnap and Its Features

[13:17] Top Three Ways Lawyers Can Use AI in Their Work

[17:29] Ensuring Confidentiality and Security in AI Tools

[19:24] Transparency and Ethical Use of AI in Legal Practice

[22:13] Contact Information

Resources:

Connect with Matthew:

Hardware mentioned in the conversation:

Software & Cloud Services mentioned in the conversation: