ILTACON 2025: Legal AI Revolution Accelerates as Major Providers Unveil Next-Generation Platforms

Lexis, vLex, and Westlaw highlight their newest AI functions!

The International Legal Technology Association's 2025 annual conference (#ILTACON2025) at National Harbor, just outside Washington, DC, became the epicenter of legal AI innovation as Thomson Reuters, LexisNexis, and vLex/Fastcase showcased their most advanced artificial intelligence platforms. Each provider demonstrated a distinct approach to solving the legal profession's technology challenges, with announcements that signal a fundamental shift from experimental AI tools to enterprise-ready systems capable of autonomous legal workflows.

Thomson Reuters Launches CoCounsel Legal with Groundbreaking Deep Research

Thomson Reuters made headlines with the launch of CoCounsel Legal, featuring what the company positions as industry-leading Agentic AI capabilities. This launch represents a fundamental evolution from AI assistants that respond to prompts toward intelligent systems that can plan, reason, and execute complex multi-step workflows autonomously.

The platform's flagship innovation is Deep Research, an AI feature that conducts comprehensive legal research by leveraging Westlaw Advantage’s proprietary research tools and expert legal content. According to Thomson Reuters, CoCounsel Legal combines advanced generative models with the exclusive resources of Westlaw and Practical Law, aiming to deliver trusted, up-to-date, and relevant legal analysis for practitioners. The company emphasizes that its Agentic AI operates directly within Westlaw, making use of the platform’s curated research toolset and authoritative content to enhance accuracy and reliability in legal workflows.

Key capabilities include guided workflows for drafting privacy policies, employee policies, complaints, and discovery requests, with Thomson Reuters planning incremental releases of new workflows. The platform addresses the critical challenge of document management system integration through federated search technology, which leverages existing Document Management System (DMS) search systems while applying AI for re-ranking and summarization.
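To make the federated search idea concrete, here is a minimal sketch in Python. The document sources, scoring function, and result format are my own illustrative assumptions, not Thomson Reuters' actual implementation; a production system would call each DMS's own search API and use an AI model for re-ranking rather than simple keyword counts.

```python
# Hypothetical sketch of federated DMS search with centralized re-ranking.
# Each "source" stands in for a separate document management system.

def search_source(documents, query):
    """Use the source's own keyword search (stands in for a DMS search API)."""
    terms = query.lower().split()
    return [doc for doc in documents if any(t in doc["text"].lower() for t in terms)]

def rerank(results, query):
    """Re-rank merged results by a simple relevance score
    (a real system would apply an ML model at this step)."""
    terms = query.lower().split()
    def score(doc):
        text = doc["text"].lower()
        return sum(text.count(t) for t in terms)
    return sorted(results, key=score, reverse=True)

def federated_search(sources, query):
    """Query each DMS separately, merge the hits, then re-rank centrally."""
    merged = []
    for docs in sources:
        merged.extend(search_source(docs, query))
    return rerank(merged, query)

# Toy data for demonstration only.
dms_a = [{"id": "A1", "text": "Privacy policy draft for client intake"}]
dms_b = [{"id": "B1", "text": "Discovery requests and privacy policy review notes on privacy"},
         {"id": "B2", "text": "Employee handbook"}]

results = federated_search([dms_a, dms_b], "privacy policy")
print([doc["id"] for doc in results])  # → ['B1', 'A1']
```

The key design point is that documents never leave their home system: only the query goes out, and only ranked result metadata comes back.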

The company also introduced Westlaw Advantage on August 13, 2025, positioned as the final versioned release of Westlaw, with future improvements delivered through continuous updates rather than new license agreements. This shift to a Software-as-a-Service (SaaS) delivery model includes multi-year subscriptions with automatic upgrades at no additional cost.

Thomson Reuters has invested $10 billion in transforming legal technology foundations, with over $200 million annually dedicated specifically to integrating AI into flagship products. The platform already serves over 20,000 law firms and corporate legal departments, including the majority of AmLaw 100 firms.

LexisNexis Introduces Protégé General AI with Industry-First Voice Capabilities

LexisNexis announced on August 11, 2025, the preview launch of Protégé General AI, expanding its personalized AI assistant to include secure access to general-purpose AI models alongside legal-specific tools. This development builds on the company's March 2025 launch of the legal industry's first voice-enabled AI assistant for complex legal work. This voice feature allows users to interact naturally with the platform, guiding legal research and drafting by issuing spoken requests. The tool is designed to help legal practitioners streamline routine workflows, surface key insights, and perform drafting and search tasks hands-free, all within a secure and integrated environment.

Protégé's key differentiator lies in its toggle functionality, allowing users to switch between authoritative legal AI (grounded in LexisNexis content) and general-purpose AI models including GPT-5*, GPT-4o, GPT-o3, and Claude Sonnet 4. This eliminates the need to switch between different AI tools while maintaining enterprise-grade security.

The platform processes documents up to 300 pages long (a 250% increase over previous limits) and offers unprecedented personalization capabilities. It learns individual user workflows, preferences, writing styles, and jurisdictions to deliver customized responses. The system integrates with document management systems to ground responses in firm-specific knowledge while maintaining strict security controls.

Approximately 200 law firms, corporate legal departments, and law schools are participating in the customer preview program, with general availability expected later in 2025.

vLex Showcases Vincent AI Spring '25 with Studio Workflow Creation

vLex presented its Vincent AI Spring '25 Release at ILTACON 2025, highlighting enhanced agentic capabilities and the introduction of Studio, a platform allowing users to create custom workflows without coding. The company emphasized its data-centric approach, leveraging its billion-document global legal database spanning over 100 countries.

vLex’s Spring ’25 release also emphasizes its Vincent Tables feature, which allows users to extract and compare key data points across large sets of documents and generate structured outputs like memos. Their General Assist capability supports drafting tasks—such as composing emails and summarizing meeting notes—within Vincent’s secure, enterprise-grade environment. Overall, vLex positions Vincent AI as a comprehensive workflow platform that delivers consistent, authoritative legal insights powered by a global database of over one billion documents from more than 100 jurisdictions.
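The underlying pattern, extracting the same fields from many documents into one structured table, can be sketched in a few lines of Python. The sample documents, field names, and regular expressions below are hypothetical stand-ins, not vLex's actual mechanics; Vincent presumably uses AI extraction rather than fixed patterns.

```python
import re

# Toy stand-in documents; a real run would pull text from uploaded files.
documents = {
    "NDA_AcmeCo.txt": "Term: 2 years. Governing law: Delaware. Auto-renewal: yes.",
    "NDA_BetaLLC.txt": "Term: 1 year. Governing law: New York. Auto-renewal: no.",
}

# Illustrative extraction patterns, one per table column.
FIELDS = {
    "term": re.compile(r"Term: ([^.]+)\."),
    "governing_law": re.compile(r"Governing law: ([^.]+)\."),
}

def extract_table(docs):
    """Build one row per document with the same extracted columns,
    so the documents can be compared side by side."""
    rows = []
    for name, text in docs.items():
        row = {"document": name}
        for field, pattern in FIELDS.items():
            m = pattern.search(text)
            row[field] = m.group(1) if m else None
        rows.append(row)
    return rows

for row in extract_table(documents):
    print(row)
```

The structured rows can then feed a memo, a spreadsheet, or a downstream comparison step.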

During ILTACON, vLex also announced the 2025 Fastcase 50 awards, recognizing legal innovation leaders who are "engineering the future of legal practice." The company positioned itself as serving the "engineering minds and visionary leaders driving the legal profession's transformation."

🔎 Feature Comparison: How the Big Three Actually Stack Up

Market Positioning and Strategic Differentiation

The three providers have established distinct market positions based on their 2025 announcements. Thomson Reuters targets enterprise-level implementations, evidenced by multi-year contracts with the U.S. Federal Courts system, including the U.S. Supreme Court, and a focus on consistent, reliable workflows for large-scale legal operations.

LexisNexis emphasizes user experience and personalization, with Protégé designed to understand individual lawyer preferences and adapt to different work styles. The voice interface represents a significant advancement in accessibility and usability, particularly valuable for lawyers with physical accessibility needs or those who prefer natural language interaction.

vLex positions itself as serving both mid-size firms and AmLaw 100 practices, emphasizing comprehensive workflow solutions and global legal coverage. The Studio platform addresses the growing demand for customizable AI workflows tailored to specific practice requirements.

Final Thoughts: Industry Impact and Measurable Results

ILTACON was a great experience. I learned a lot and hope to share it!

These ILTACON 2025 announcements demonstrate the maturation of legal AI from experimental tools to platforms delivering measurable business value. Case studies reveal significant cost savings, with startups like OMNIUX reporting monthly savings of $15,000 to $20,000 in legal fees using CoCounsel.

Independent analysis shows that contract review tasks, which previously required two to two and a half hours, can now be completed in 10 minutes, representing productivity improvements of over 90%. Legal professionals report that document analysis tasks requiring days of manual work can now be completed in under an hour.

The competitive landscape now features three mature approaches: Thomson Reuters' enterprise-focused agentic workflows with deep legal research integration, LexisNexis's personalized voice-enabled AI with broad model flexibility, and vLex's comprehensive workflow platform with global legal intelligence.

As legal professionals evaluate these platforms, selection criteria should include firm size, practice areas, existing technology infrastructure, required customization levels, and specific workflow requirements. The legal profession's digital transformation has clearly accelerated beyond the experimental phase, with AI becoming essential infrastructure for competitive legal practice.

But what does this mean for solo, small-, and medium-size law firms? Stay tuned: my analysis on that will be posted soon!

Happy Lawyering!

* (Note: the original launch was supposed to include GPT-5, but it has been pulled pending resolution of issues; see MTC: Why "Newer" AI Models Aren't Always Better: The ChatGPT-5 and Apple Intelligence Reality Check for Legal Professionals! for reference.)

Word of the Week: Synthetic Data 🧑‍💻⚖️

What Is Synthetic Data?

Synthetic data is information that is generated by algorithms to mimic the statistical properties of real-world data, but it contains no actual client or case details. For lawyers, this means you can test software, train AI models, or simulate legal scenarios without risking confidential information or breaching privacy regulations. Synthetic data is not “fake” in the sense of being random or useless—it is engineered to be realistic and valuable for analysis.

How Synthetic Data Applies to Lawyers

  • Privacy Protection: Synthetic data allows law firms to comply with strict privacy laws like GDPR and CCPA by removing any real personal identifiers from the datasets used in legal tech projects.

  • AI Training: Legal AI tools need large, high-quality datasets to learn and improve. Synthetic data fills gaps when real data is scarce, sensitive, or restricted by regulation.

  • Software Testing: When developing or testing new legal software, synthetic data lets you simulate real-world scenarios without exposing client secrets or sensitive case details.

  • Cost and Efficiency: It is often faster and less expensive to generate synthetic data than to collect, clean, and anonymize real legal data.
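As a concrete (and deliberately toy) illustration, the sketch below generates synthetic matter records in Python. All field names, categories, and distributions are made-up assumptions for demonstration; real synthetic-data pipelines fit statistical models to the source data's properties so the output mimics them faithfully.

```python
import random

random.seed(42)  # reproducible demo

# Illustrative categories only; not drawn from any real dataset.
MATTER_TYPES = ["contract dispute", "employment", "IP licensing", "real estate"]

def synthetic_matter(matter_id):
    """Generate one synthetic matter record that looks plausible
    but contains no real client or case information."""
    return {
        "matter_id": f"M-{matter_id:05d}",                   # synthetic identifier
        "matter_type": random.choice(MATTER_TYPES),          # categorical field
        "billable_hours": round(random.gauss(120, 40), 1),   # assumed distribution
        "client_name": f"Client-{random.randint(1000, 9999)}",  # placeholder, never a real client
    }

dataset = [synthetic_matter(i) for i in range(100)]
print(len(dataset), dataset[0]["matter_id"])  # → 100 M-00000
```

A dataset like this can safely be handed to a vendor for software testing, because every value was generated rather than collected.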

Lawyers, know your data source; your license could depend on it!

Synthetic Data vs. Hallucinations

  • Synthetic Data: Created on purpose, following strict rules to reflect real-world patterns. Used for training, testing, and developing legal tech tools. It is transparent and traceable; you know how and why it was generated.

  • AI Hallucinations: Occur when an AI system generates information that appears plausible but is factually incorrect or entirely fabricated. In law, this can mean made-up case citations, statutes, or legal arguments. Hallucinations are unpredictable and can lead to serious professional risks if not caught.

Key Difference: Synthetic data is intentionally crafted for safe, ethical, and lawful use. Hallucinations are unintentional errors that can mislead and cause harm.

Why Lawyers Should Care

  • Compliance: Using synthetic data helps you stay on the right side of privacy and data protection laws.

  • Risk Management: It reduces the risk of data breaches and regulatory penalties.

  • Innovation: Enables law firms to innovate and improve processes without risking client trust or confidentiality.

  • Professional Responsibility: Helps lawyers avoid the dangers of relying on unverified AI outputs, which can lead to sanctions or reputational damage.

Lawyers, know your data source; your license could depend on it!

ILTACON 2025 Opening: Navigating the Legal Tech Treasure Trove ⚓

Get your legal tech plunder at #ILTACON2025

Ahoy, legal tech voyagers! ⛵ ILTACON 2025 has officially set sail at the magnificent Gaylord National Resort & Convention Center in National Harbor, Maryland, and what a spectacular opening it's been. From August 10-14, over 4,000 legal professionals are charting their course through the most comprehensive bounty of legal tech innovations ever assembled.

This year's pirate theme couldn't be more fitting. Legal professionals have truly become modern-day treasure hunters, seeking out the digital gold that will transform their practices. The opening reception on Monday morning perfectly captured this spirit, with maritime merriment setting the tone for what promises to be an extraordinary week of discovery.

Among the distinguished crew of attendees, we spotted previous podcast guest Stephen Embry, the brilliant mind behind the TechLaw Crossroads blog and former chair of the American Bar Association's Law Practice Division. His insights on artificial intelligence adoption and legal technology competency continue to guide practitioners navigating the choppy waters of digital transformation. Also making waves is Brett Burney, Vice President of NextPoint Law Group, whose expertise in bridging legal practice and technology has made him a sought-after guide for firms embracing eDiscovery solutions.

The exhibit hall, themed as the "Pirate's Bounty," features over 225 vendors displaying their technological treasures. From AI-powered legal research tools to advanced case management systems, the bounty available to legal professionals has never been more abundant. The challenge isn't finding technology—it's selecting the right tools that will genuinely enhance practice efficiency without overwhelming existing workflows.

What makes ILTACON unique is its peer-driven approach to education. Unlike vendor-heavy conferences, ILTACON sessions are crafted by practitioners who have firsthand experience with the challenges facing legal technology professionals. This year's 80+ educational sessions span eight focus areas, ensuring every legal professional finds relevant insights to take back to their firm.

For firms with limited to moderate technology skills, ILTACON provides the perfect environment to learn from peers who have successfully navigated similar challenges. The networking opportunities alone justify the investment, as connections made here often lead to solutions for specific practice challenges.

The pirate theme extends beyond mere decoration—it represents the adventurous spirit required to succeed in today's legal technology landscape. Legal professionals must be willing to explore uncharted territories, test new solutions, and occasionally take calculated risks to discover the innovations that will give their practices a competitive edge.

As we sail through this week of discovery, remember that the real treasure isn't the technology itself—it's the enhanced client service, improved efficiency, and competitive advantages these tools provide when properly implemented.

May fair winds fill your sails as you navigate this legal tech treasure trove! ⚓

#ILTACON2025

MTC: Why Courts Hesitate to Adopt AI - A Crisis of Trust in Legal Technology

Despite facing severe staffing shortages and mounting operational pressures, America's courts remain cautious about embracing artificial intelligence technologies that could provide significant relief. While 68% of state courts report staff shortages and 48% of court professionals lack sufficient time to complete their work, only 17% currently use generative AI tools. This cautious approach reflects deeper concerns about AI reliability, particularly in light of recent (and, regrettably, continuing) high-profile errors by attorneys using AI-generated content in court documents.

The Growing Evidence of AI Failures in Legal Practice

Recent cases demonstrate why courts' hesitation may be justified. In Colorado, two attorneys representing MyPillow CEO Mike Lindell were fined $3,000 each after submitting a court filing containing nearly 30 AI-generated errors, including citations to nonexistent cases and misquoted legal authorities. The attorneys admitted to using artificial intelligence without properly verifying the output, violating Federal Rule of Civil Procedure 11.

Similarly, a federal judge in California sanctioned attorneys from Ellis George LLP and K&L Gates LLP $31,000 after they submitted briefs containing fabricated citations generated by AI tools including CoCounsel, Westlaw Precision, and Google Gemini. The attorneys had used AI to create an outline that was shared with colleagues who incorporated the fabricated authorities into their final brief without verification.

These incidents are part of a broader pattern of AI hallucinations in legal documents. A June 16, 2025, Order to Show Cause in Sullivan v. Wisnovsky, No. 1:21-cv-00157-CL (D. Or. June 16, 2025), demonstrates another instance in which plaintiffs cited "fifteen non-existent cases and misrepresented quotations from seven real cases" after relying on what they claimed was "an automated legal citation tool." The court found this explanation insufficient to avoid sanctions.

The Operational Dilemma Facing Courts

Lawyers Need to Balance Legal Tradition with Ethical AI Innovation

The irony is stark: courts desperately need technological solutions to address their operational challenges, yet recent AI failures have reinforced their cautious approach. Court professionals predict that generative AI could save them an average of three hours per week initially, growing to nearly nine hours within five years. These time savings could be transformative for courts struggling with increased caseloads and staff shortages.

However, the profession's experience with AI-generated hallucinations has created significant trust issues. Currently, 70% of courts prohibit employees from using AI-based tools for court business, and 75% have not provided any AI training to their staff. This reluctance stems from legitimate concerns about accuracy, bias, and the potential for AI to undermine the integrity of judicial proceedings.

The Technology Adoption Paradox

Courts have successfully adopted other technologies, with 86% implementing case management systems, 85% using e-filing, and 88% conducting virtual hearings. This suggests that courts are not inherently resistant to technology. But they are specifically cautious about AI due to its propensity for generating false information.

The legal profession's relationship with AI reflects broader challenges in implementing emerging technologies. While 55% of court professionals recognize AI as having transformational potential over the next five years, the gap between recognition and adoption remains significant. This disconnect highlights the need for more reliable AI systems and better training for legal professionals.

The Path Forward: Measured Implementation

The solution is not to abandon AI but to implement it more carefully. Legal professionals must develop better verification protocols. As one expert noted, "AI verification isn't optional—it's a professional obligation." This means systematic citation checking, mandatory human review, and clear documentation of AI use in legal documents. Lawyers must also stay current on the technology available to them, as required by Comment 8 to American Bar Association Model Rule of Professional Conduct 1.1, which expects lawyers to understand the benefits and risks of relevant technology. Courts, in turn, need comprehensive governance frameworks that address data handling, disclosure requirements, and decision-making oversight when evaluating AI tools. The American Bar Association's Formal Opinion 512 on Generative Artificial Intelligence Tools provides essential guidance, emphasizing that lawyers must fully consider their ethical obligations when using AI.
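A minimal sketch of one such protocol, systematic citation checking, might look like the following. The citation pattern and the "verified" set below are illustrative assumptions; a real workflow would check extracted citations against an authoritative citator, not a hard-coded list.

```python
import re

# Stand-in for an authoritative citation database (assumption for this sketch).
VERIFIED_CITATIONS = {
    "Sullivan v. Wisnovsky, No. 1:21-cv-00157-CL",
}

# Simplified pattern for "Party v. Party, No. docket-number" style citations.
# Real citation formats are far more varied than this toy regex covers.
CITATION_PATTERN = re.compile(r"\b[A-Z][\w.]+ v\. [A-Z][\w.]+, No\. [\w:.-]+")

def flag_unverified(draft_text):
    """Return citation-like strings in the draft that are absent
    from the verified set, so a human can review each one."""
    found = CITATION_PATTERN.findall(draft_text)
    return [c for c in found if c not in VERIFIED_CITATIONS]

draft = ("As held in Sullivan v. Wisnovsky, No. 1:21-cv-00157-CL, the court denied relief; "
         "see also Smith v. Jones, No. 9:99-cv-00001-XX (a fabricated citation).")
print(flag_unverified(draft))  # → ['Smith v. Jones, No. 9:99-cv-00001-XX']
```

The point is not the regex but the workflow: every citation an AI produces gets mechanically extracted and checked before a human signs the filing.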

Final Thoughts

The Future of Law: AI and Justice in Harmony!

Despite the risks, courts and legal professionals cannot afford to ignore AI indefinitely. The technology's potential to address staffing shortages, reduce administrative burdens, and improve access to justice makes it essential for the future of the legal system. However, successful implementation requires acknowledging AI's limitations while developing robust safeguards to prevent the types of errors that have already damaged trust in the technology.

The current hesitation reflects a profession learning to balance innovation with reliability. As AI systems improve and legal professionals develop better practices for using them, courts will likely become more willing to embrace these tools. Until then, the cautious approach may be prudent, even if it means forgoing potential efficiency gains.

The legal profession's experience with AI serves as a reminder that technological adoption in critical systems requires more than just recognizing potential benefits—it demands building the infrastructure, training, and governance necessary to use these powerful tools responsibly.

MTC