MTC: Why Courts Hesitate to Adopt AI - A Crisis of Trust in Legal Technology

Despite facing severe staffing shortages and mounting operational pressures, America's courts remain cautious about embracing artificial intelligence technologies that could provide significant relief. While 68% of state courts report staff shortages and 48% of court professionals lack sufficient time to complete their work, only 17% currently use generative AI tools. This cautious approach reflects deeper concerns about AI reliability, particularly in light of recent, and regrettably continuing, high-profile errors by attorneys using AI-generated content in court documents.

The Growing Evidence of AI Failures in Legal Practice

Recent cases demonstrate why courts' hesitation may be justified. In Colorado, two attorneys representing MyPillow CEO Mike Lindell were fined $3,000 each after submitting a court filing containing nearly 30 AI-generated errors, including citations to nonexistent cases and misquoted legal authorities. The attorneys admitted to using artificial intelligence without properly verifying the output, violating Federal Rule of Civil Procedure 11.

Similarly, a federal judge in California sanctioned attorneys from Ellis George LLP and K&L Gates LLP $31,000 after they submitted briefs containing fabricated citations generated by AI tools including CoCounsel, Westlaw Precision, and Google Gemini. The attorneys had used AI to create an outline that was shared with colleagues who incorporated the fabricated authorities into their final brief without verification.

These incidents are part of a broader pattern of AI hallucinations in legal documents. In Sullivan v. Wisnovsky, No. 1:21-cv-00157-CL (D. Or.), a June 16, 2025, Order to Show Cause describes another instance: plaintiffs cited "fifteen non-existent cases and misrepresented quotations from seven real cases" after relying on what they claimed was "an automated legal citation tool." The court found this explanation insufficient to avoid sanctions.

The Operational Dilemma Facing Courts

Lawyers need to balance legal tradition with ethical AI innovation.

The irony is stark: courts desperately need technological solutions to address their operational challenges, yet recent AI failures have reinforced their cautious approach. Court professionals predict that generative AI could save them an average of three hours per week initially, growing to nearly nine hours within five years. These time savings could be transformative for courts struggling with increased caseloads and staff shortages.

However, the profession's experience with AI-generated hallucinations has created significant trust issues. Currently, 70% of courts prohibit employees from using AI-based tools for court business, and 75% have not provided any AI training to their staff. This reluctance stems from legitimate concerns about accuracy, bias, and the potential for AI to undermine the integrity of judicial proceedings.

The Technology Adoption Paradox

Courts have successfully adopted other technologies, with 86% implementing case management systems, 85% using e-filing, and 88% conducting virtual hearings. This suggests that courts are not inherently resistant to technology; rather, their caution is specific to AI and its propensity for generating false information.

The legal profession's relationship with AI reflects broader challenges in implementing emerging technologies. While 55% of court professionals recognize AI as having transformational potential over the next five years, the gap between recognition and adoption remains significant. This disconnect highlights the need for more reliable AI systems and better training for legal professionals.

The Path Forward: Measured Implementation

The solution is not to abandon AI but to implement it more carefully. Legal professionals must develop better verification protocols. As one expert noted, "AI verification isn't optional—it's a professional obligation." That means systematic citation checking, mandatory human review, and clear documentation of AI use in legal documents. Lawyers must also stay current on the technology available to them: American Bar Association Model Rule of Professional Conduct 1.1, Comment 8, expects lawyers to keep abreast of the benefits and risks associated with relevant technology. Courts, in turn, need comprehensive governance frameworks that address data handling, disclosure requirements, and decision-making oversight before adopting AI tools. The American Bar Association's Formal Opinion 512 on Generative Artificial Intelligence Tools provides essential guidance, emphasizing that lawyers must fully consider their ethical obligations when using AI.
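To make "systematic citation checking" concrete, here is a minimal sketch of what an automated pre-filing pass might look like. It is an illustration only, not a tool any court or bar association has endorsed: the citation regex, the extract_citations and audit_draft helpers, and the hand-maintained set of verified citations are all hypothetical placeholders, and nothing here replaces an attorney actually pulling and reading each cited authority.

```python
# Minimal sketch of a pre-filing citation audit (illustrative only).
# The pattern and helper names are hypothetical; a firm would substitute
# its own research workflow for the "verified" set below.
import re
from datetime import date

# Rough pattern for reporter citations such as "598 U.S. 594" or "123 F.3d 456".
CITATION_PATTERN = re.compile(r"\b\d{1,4}\s+[A-Z][A-Za-z.\s]{1,15}\d?[a-z]{0,2}\s+\d{1,5}\b")

def extract_citations(draft_text: str) -> list[str]:
    """Pull citation-like strings out of an AI-assisted draft."""
    return [m.group(0).strip() for m in CITATION_PATTERN.finditer(draft_text)]

def audit_draft(draft_text: str, verified_citations: set[str]) -> dict:
    """Flag every citation a human has not yet confirmed against a primary source."""
    found = extract_citations(draft_text)
    unverified = [c for c in found if c not in verified_citations]
    return {
        "date": date.today().isoformat(),
        "citations_found": found,
        "needs_human_review": unverified,
        "cleared_for_filing": not unverified,
    }

if __name__ == "__main__":
    draft = "As held in 598 U.S. 594 and 141 S. Ct. 1970, the standard applies."
    # Only citations an attorney has personally pulled and read belong in this set.
    confirmed = {"598 U.S. 594"}
    print(audit_draft(draft, confirmed))
```

A check like this only catches citations that no one has verified at all; it cannot confirm that a real case actually says what the brief claims, which is why mandatory human review remains the core of any protocol.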

Final Thoughts

The Future of Law: AI and Justice in Harmony!

Despite the risks, courts and legal professionals cannot afford to ignore AI indefinitely. The technology's potential to address staffing shortages, reduce administrative burdens, and improve access to justice makes it essential for the future of the legal system. However, successful implementation requires acknowledging AI's limitations while developing robust safeguards to prevent the types of errors that have already damaged trust in the technology.

The current hesitation reflects a profession learning to balance innovation with reliability. As AI systems improve and legal professionals develop better practices for using them, courts will likely become more willing to embrace these tools. Until then, the cautious approach may be prudent, even if it means forgoing potential efficiency gains.

The legal profession's experience with AI serves as a reminder that technological adoption in critical systems requires more than just recognizing potential benefits—it demands building the infrastructure, training, and governance necessary to use these powerful tools responsibly.

MTC

🎙️ TSL Labs: Listen to two AI-generated podcast hosts turn the June 30, 2025, TSL editorial into an engaging discussion for busy legal professionals!

🎧 Can't find time to read lengthy legal tech editorials? We've got you covered.

As part of our Tech Savvy Lawyer Labs initiative, I've been experimenting with cutting-edge AI to make legal content more accessible. This bonus episode showcases how Notebook.AI can transform written editorials into engaging podcast discussions.

Our latest experiment takes the editorial "AI and Legal Research: The Existential Threat to Lexis, Westlaw, and Fastcase" and converts it into a compelling conversation between two AI hosts who discuss the content as if they've thoroughly analyzed the piece.

This Labs experiment demonstrates how AI can serve as a time-saving alternative for legal professionals who prefer audio learning or lack time for extensive reading. The AI hosts engage with the material authentically, providing insights and analysis that make complex legal tech topics accessible to practitioners at all technology skill levels.

🚀 Perfect for commutes, workouts, or multitasking—get the full editorial insights without the reading time.

Enjoy!

MTC: AI and Legal Research: The Existential Threat to Lexis, Westlaw, and Fastcase.

How does this ruling for Anthropic change the business models under which legal information providers operate?

MTC: The legal profession faces unprecedented disruption as artificial intelligence reshapes how attorneys access and analyze legal information. A landmark federal ruling combined with mounting evidence of AI's devastating impact on content providers signals an existential crisis for traditional legal databases.

The Anthropic Breakthrough

Judge William Alsup's June 25, 2025, ruling in Bartz v. Anthropic fundamentally changed the AI landscape. The court found that training large language models on legally acquired copyrighted books constitutes "exceedingly transformative" fair use under copyright law. This decision provides crucial legal clarity for AI companies, effectively creating a roadmap for developing sophisticated legal AI tools using legitimately purchased content.

The ruling draws a clear distinction: while training on legally acquired materials is permissible, downloading pirated content remains copyright infringement. This clarity removes a significant barrier that had constrained AI development in the legal sector.

Google's AI Devastates Publishers: A Warning for Legal Databases

The news industry's experience with Google's AI features provides a sobering preview of what awaits legal databases. Traffic to the world's 500 most visited publishers has plummeted 27% year-over-year since February 2024, a loss averaging 64 million visits per month. Google's AI Overviews and AI Mode have created what industry experts call "zero-click searches," where users receive information without visiting original sources.

The New York Times saw its share of organic search traffic fall from 44% in 2022 to just 36.5% in April 2025. Business Insider experienced devastating 55% traffic declines and subsequently laid off 21% of its workforce. Major outlets like HuffPost and The Washington Post have lost more than half their search traffic.

This pattern directly threatens legal databases operating on similar information-access models. If AI tools can synthesize legal information from multiple sources without requiring expensive database subscriptions, the fundamental value proposition of Lexis, Westlaw, and Fastcase erodes dramatically.

The Rise of Vincent AI and Legal Database Alternatives

The threat is no longer theoretical. Vincent AI, integrated into vLex Fastcase, represents the emergence of sophisticated legal AI that challenges traditional database dominance. The platform offers comprehensive legal research across 50 states and 17 countries, with capabilities including contract analysis, argument building, and multi-jurisdictional comparisons—all often available free through bar association memberships.

Vincent AI recently won the 2024 New Product Award from the American Association of Law Libraries. The platform leverages vLex's database of over one billion legal documents, providing multimodal capabilities that can analyze audio and video files while generating transcripts of court proceedings. Unlike traditional databases that added AI as supplementary features, Vincent AI integrates artificial intelligence throughout its core functionality.

Stanford University studies reveal the current performance gaps: Lexis+ AI achieved 65% accuracy with 17% hallucination rates, while Westlaw's AI-Assisted Research managed only 42% accuracy with 33% hallucination rates. However, AI systems improve rapidly, and these quality gaps are narrowing.

Economic Pressures Intensify

Can traditional legal resources protect their proprietary information from AI?

Goldman Sachs research indicates 44% of legal work could be automated by emerging AI tools, targeting exactly the functions that justify expensive database subscriptions. The legal research market, worth $68 billion globally, faces dramatic cost disruption as AI platforms provide similar capabilities at fractions of traditional pricing.

The democratization effect is already visible. Vincent AI's availability through over 80 bar associations provides enterprise-level capabilities to solo practitioners and small firms previously unable to afford comprehensive legal research tools. This accessibility threatens the pricing power that has sustained traditional legal database business models.

The Information Ecosystem Transformation

The parallel between news publishers and legal databases extends beyond surface similarities. Both industries built their success on controlling access to information and charging premium prices for that access. AI fundamentally challenges this model by providing synthesized information that reduces the need to visit original sources.

AI chatbots have provided only 5.5 million additional referrals per month to publishers, less than a tenth of the 64 million monthly visits lost to AI-powered search features. This stark imbalance demonstrates that AI tools are net destroyers of traffic to content providers, a dynamic that threatens any business model dependent on information access.

Publishers describe feeling "betrayed" by Google's shift toward AI-powered search results that keep users within Google's ecosystem rather than sending them to external sites. Legal databases face identical risks as AI tools become more capable of providing comprehensive legal analysis without requiring expensive subscriptions.

Quality and Professional Responsibility Challenges

Despite AI's advancing capabilities, significant concerns remain around accuracy and professional responsibility. Legal practice demands extremely high reliability standards, and current AI tools still produce errors that could have serious professional consequences. Several high-profile cases involving lawyers submitting AI-generated briefs with fabricated case citations have heightened awareness of these risks.

However, platforms like Vincent AI address many concerns through transparent citation practices and hybrid AI pipelines that combine generative and rules-based AI to increase reliability. The platform provides direct links to primary legal sources and employs expert legal editors to track judicial treatment and citations.

Adaptation Strategies and Market Response

Is AI the beginning of the end for traditional legal resources?

Traditional legal database providers have begun integrating AI capabilities, but this strategy faces inherent limitations. By incorporating AI into existing platforms, these companies risk commoditizing their own products. If AI can provide similar insights using publicly available information, proprietary databases lose their exclusivity advantage regardless of AI integration.

The more fundamental challenge is that AI's disruptive potential extends beyond individual products to entire business models. The emergence of comprehensive AI platforms like Vincent AI demonstrates this disruption is already underway and accelerating.

Looking Forward: Scenarios and Implications

Several scenarios could emerge from this convergence of technological and economic pressures. Traditional databases might successfully maintain market position through superior curation and reliability, though the news industry's experience suggests this is challenging without fundamental business model changes.

Alternatively, AI-powered platforms could continue gaining ground by providing comparable functionality at significantly lower costs, forcing traditional providers to cut prices dramatically or lose market share. The rapid adoption of vLex Fastcase by bar associations suggests this disruption is already underway.

A hybrid market might develop where different tools serve different needs, though economic pressures favor comprehensive, cost-effective solutions over specialized, expensive ones.

Preparing for Transformation

The confluence of the Anthropic ruling, advancing AI capabilities, evidence from news industry disruption, and sophisticated legal AI platforms creates a perfect storm for the legal information industry. Legal professionals must develop AI literacy while implementing robust quality control processes and maintaining ethical obligations.

For legal database providers, the challenge is existential. The news industry's experience shows traffic declines of 50% or more would be catastrophic for subscription-dependent businesses. The rapid development of comprehensive AI legal research platforms suggests this disruption may occur faster than traditional providers anticipate.

The legal profession's relationship with information is fundamentally changing. The Anthropic ruling removed barriers to AI development, news industry data shows the potential scale of disruption, and platforms like Vincent AI demonstrate achievable sophistication. The race is now on to determine who will control the future of legal information access.

MTC