MTC (Bonus): The Critical Importance of Source Verification When Using AI in Legal Practice 📚⚖️

The Fact-Checking Lawyer vs. AI Errors!

Legal professionals face an escalating verification crisis as AI tools proliferate throughout the profession. A recent conversation I had with an AI research assistant about AOL's dial-up internet shutdown perfectly illustrates why lawyers must rigorously fact-check AI outputs. While preparing my editorial earlier today (see here), I came across a glaring error: the AI repeatedly cited 2024 instead of 2025 for AOL's September 30 shutdown. Correcting it highlighted the dangerous gap between AI confidence and AI accuracy, a gap reflected in the more than 410 documented AI hallucination cases worldwide. (You can also see my previous discussions on the topic here.)

This verification imperative extends beyond simple date corrections. Stanford University research reveals troubling accuracy rates across legal AI tools, with some systems producing incorrect information over 34% of the time, while even the best-performing specialized legal AI platforms still generate false information approximately 17% of the time. These statistics underscore a fundamental truth: AI tools are powerful research assistants, not infallible oracles.

AI Hallucinations in the Courtroom are not a good thing!

Editor's Note: The irony was not lost on me that while writing this editorial about AI accuracy problems, I had to correct the AI assistant multiple times for contradictory statements about the error rates cited in the preceding paragraph. The AI initially claimed Westlaw had 34% errors while specialized legal platforms had only 17%, ignoring that Westlaw IS a specialized legal platform. This real-time experience of catching AI logical inconsistencies while drafting an article about AI verification perfectly demonstrates the critical need for human oversight that this editorial advocates.

The consequences of inadequate verification are severe and mounting. Courts have imposed sanctions ranging from $2,500 to $30,000 on attorneys who submitted AI-generated fake cases. Recent examples include Morgan & Morgan lawyers sanctioned $5,000 for citing eight nonexistent cases and a California attorney fined $10,000 for submitting briefs in which "nearly all legal quotations ... [were] fabricated." These sanctions reflect judicial frustration with attorneys who fail to fulfill their gatekeeping responsibilities.

Legal professionals face implicit ethical obligations that demand rigorous source verification when using AI tools. ABA Model Rule 1.1 (Competence) requires attorneys to understand "the benefits and risks associated with relevant technology," including AI's propensity for hallucinations. Rule 3.3 (Candor Toward the Tribunal) prohibits knowingly making false statements of fact or law to courts. Rule 5.3 (Responsibilities Regarding Nonlawyer Assistance) extends supervisory duties to AI tools, requiring lawyers to ensure AI work product meets professional standards. Courts consistently emphasize that "existing rules impose a gatekeeping role on attorneys to ensure the accuracy of their filings."

The Tech-Savvy Lawyer should have AI Verification Protocols.

The legal profession must establish verification protocols that treat AI as sophisticated but fallible technology requiring human oversight (perhaps through a new comment to Rule 1.1, along the lines of Comment 8). This includes cross-referencing AI citations against authoritative databases, validating factual claims through independent sources, and maintaining detailed records of verification processes. Resources like The Tech-Savvy Lawyer blog and podcast provide valuable guidance for implementing these best practices. As one federal judge warned, "the duty to check their sources and make a reasonable inquiry into existing law remains unchanged" in the age of AI.
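To make the cross-referencing step concrete, here is a minimal Python sketch of what an automated first-pass citation check might look like. It assumes CourtListener's public REST search API; the endpoint, parameters, response fields, and the `found_in_courtlistener` helper are illustrative assumptions, not a vetted tool.

```python
import requests

# A minimal sketch of cross-referencing an AI-supplied citation against a free
# authoritative database. The CourtListener endpoint, query parameters, and
# response fields below are assumptions -- confirm them against the current
# API documentation (an API token may also be required).
SEARCH_URL = "https://www.courtlistener.com/api/rest/v4/search/"

def found_in_courtlistener(citation: str) -> bool:
    """Return True if a case-law search for the citation returns any results."""
    resp = requests.get(
        SEARCH_URL,
        params={"q": citation, "type": "o"},  # "o" = opinions (case law)
        timeout=30,
    )
    resp.raise_for_status()
    return resp.json().get("count", 0) > 0

# Example: screen a citation before it goes anywhere near a filing.
citation = "Varghese v. China Southern Airlines"  # a famously AI-fabricated case
if not found_in_courtlistener(citation):
    print(f"No match for {citation!r} -- verify against Westlaw/Lexis before filing.")
```

A database hit is only a screening step: the attorney still must pull the opinion and confirm that any quoted language actually appears in it.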

Attorneys who embrace AI without implementing robust verification systems risk professional sanctions, client harm, and reputational damage that could have been prevented through diligent fact-checking practices. Simply put: check your work when using AI.

MTC

How to Ask AI "Are You Sure?" for Better Legal Research Accuracy!

Lawyers need to be “sure” their AI use is accurate

Legal professionals increasingly rely on AI tools like ChatGPT, Claude, and Google Gemini for research and document preparation. However, these powerful tools can produce inaccurate information or "hallucinations" — fabricated facts, citations, or legal precedents that appear credible but don't exist. A simple yet effective technique is asking AI systems "Are you sure?" or requesting verification of their responses.

The "Are You Sure?" Technique:

When you ask ChatGPT, Claude, or similar AI tools "Are you sure about this information?" they often engage in a second review process. This prompt triggers the AI to:

  • Re-examine the original question more carefully

  • Cross-reference information internally

  • Flag potential uncertainties in their responses

  • Provide additional context about confidence levels

For example, after receiving an AI response about case law, follow up with: "Are you sure this case citation is accurate? Please double-check the details." This often reveals when the AI is uncertain or has potentially fabricated information.
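For readers who reach these models through an API rather than a chat window, the same follow-up can be scripted as a second turn in one conversation. Below is a minimal sketch using OpenAI's Python SDK; the model name and prompts are illustrative, and any chat-style API follows the same pattern.

```python
from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment
MODEL = "gpt-4o"   # illustrative model name

# Turn 1: the original research question.
messages = [{
    "role": "user",
    "content": "Which U.S. Supreme Court cases set the modern summary-judgment standard?",
}]
first = client.chat.completions.create(model=MODEL, messages=messages)
answer = first.choices[0].message.content

# Turn 2: the "Are you sure?" follow-up. Appending the assistant's own answer
# back into the conversation makes this a genuine self-review of the prior
# response rather than a fresh, unrelated query.
messages += [
    {"role": "assistant", "content": answer},
    {"role": "user", "content": (
        "Are you sure these case citations are accurate? "
        "Please double-check the details and flag anything you are uncertain about."
    )},
]
second = client.chat.completions.create(model=MODEL, messages=messages)
print(second.choices[0].message.content)
```

Whatever the second answer says, it only flags risk; every citation still gets verified against a primary source before it reaches a court.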

Other AI Verification Features

Google Gemini offers a built-in "double-check" feature that uses Google Search to verify responses against web sources. However, this feature can make mistakes and may show contradictory information.

Claude AI focuses on thorough reasoning and can be prompted to verify complex legal analysis through step-by-step breakdowns.

ChatGPT can be instructed to provide sources and verify information when specifically requested, though it requires explicit prompting for verification.

Essential Legal Practice Reminders 

While AI verification techniques help identify potential inaccuracies, they never replace the fundamental duty of legal professionals to verify all citations, case law, and factual claims. Recent court cases have imposed sanctions on attorneys who submitted AI-generated content without proper verification. Failing to verify risks running afoul of the ABA Model Rules of Professional Conduct, including Rule 1.1 (Competence), which requires the legal knowledge, skill, and thoroughness reasonably necessary for representation; Rule 1.1, Comment 8, which stresses that competent representation includes keeping abreast of the benefits and risks associated with relevant technology; Rule 1.3 (Diligence), which obligates attorneys to act with commitment and promptness; and Rule 3.3 (Candor Toward the Tribunal), which prohibits attorneys from knowingly making false statements or failing to correct false statements of material fact or law made to the court.

Best practices for legal AI use include:

  • Always verify AI-generated citations against primary sources

  • Never submit AI content without human review

  • Maintain clear policies about AI use in your practice

  • Understand that professional responsibility remains with the attorney, not the AI tool

The "Are you sure?" technique serves as a helpful first-line check when you notice something seems off in AI responses, but thorough legal research and verification remain your professional responsibility. Your reputation and bar license could depend on it.