MTC: Even Though AI Hallucinations Are Down, Lawyers STILL MUST Verify AI, Guard PII, and Follow ABA Ethics Rules ⚖️🤖

A Tech-Savvy Lawyer MUST REVIEW AI-Generated Legal Documents

AI hallucinations are reportedly down across many domains. Still, previous podcast guest Dorna Moini is right to warn that legal remains the unnerving exception—and that is where our professional duties truly begin, not end. Her article, “AI hallucinations are down 96%. Legal is the exception,” helpfully shifts the conversation from “AI is bad at law” to “lawyers must change how they use AI,” yet from the perspective of ethics and risk management, we need to push her three recommendations much further. This is not only a product‑design problem; it is a competence, confidentiality, and candor problem under the ABA Model Rules. ⚖️🤖

Her first point—“give AI your actual documents”—is directionally sound. When we anchor AI in contracts, playbooks, and internal standards, we move from free‑floating prediction to something closer to reading comprehension, and hallucinations usually fall. That is a genuine improvement, and Moini is right to emphasize it. But as soon as we start uploading real matter files, we are squarely inside Model Rule 1.6 territory: confidential information, privileged communications, trade secrets, and dense pockets of personally identifiable information. The article treats document‑grounding primarily as an accuracy-and-reliability upgrade, but lawyers and the legal profession must insist that it is first and foremost a data‑governance decision.

Before a single contract is uploaded, a lawyer must know where that data is stored, who can access it, how long it is retained, whether it is used to train shared models, and whether any cross‑border transfers could complicate privilege or regulatory compliance. That analysis should involve not just IT, but also risk management and, in many cases, outside vendors. “Give AI your actual documents” is safe only if your chosen platform offers strict access controls, clear no‑training guarantees, encryption in transit and at rest, and, ideally, firm‑controlled or on‑premise storage. Otherwise, you may be trading a marginal reduction in hallucinations for a major confidentiality incident or regulatory investigation. In other words, feeding AI your documents can be a smart move, but only after you read the terms of service, negotiate data protections, and strip or tokenize unnecessary PII. 🔐
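
To make the “strip or tokenize” step concrete, here is a minimal, hypothetical Python sketch of pattern-based PII tokenization. The patterns and the tokenize_pii helper are illustrative assumptions, not a vetted redaction product, and nothing below replaces human review before anything leaves the firm:

```python
import re

# Illustrative, U.S.-centric patterns only; a real matter needs a vetted
# redaction tool plus human review before any upload.
PATTERNS = {
    "SSN": re.compile(r"\b\d{3}-\d{2}-\d{4}\b"),
    "EMAIL": re.compile(r"\b[\w.+-]+@[\w-]+\.[\w.]+\b"),
    "PHONE": re.compile(r"\(?\b\d{3}\)?[ .-]?\d{3}[ .-]?\d{4}\b"),
}

def tokenize_pii(text: str) -> tuple[str, dict[str, str]]:
    """Swap obvious PII for stable tokens and return the token map,
    so the underlying values never leave the firm's systems."""
    vault: dict[str, str] = {}
    for label, pattern in PATTERNS.items():
        def _swap(match: re.Match, label: str = label) -> str:
            token = f"[{label}-{len(vault) + 1}]"
            vault[token] = match.group(0)
            return token
        text = pattern.sub(_swap, text)
    return text, vault

redacted, vault = tokenize_pii(
    "Client Jane Roe, SSN 123-45-6789, jroe@example.com, (555) 123-4567."
)
print(redacted)  # Client Jane Roe, SSN [SSN-1], [EMAIL-2], [PHONE-3].
# `vault` stays inside the firm; it is never uploaded with the document.
```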

LAWYERS NEED TO MONITOR THE AI DATA SECURITY AND PII COMPLIANCE POLICIES OF THE AI PLATFORMS THEY USE IN THEIR LEGAL WORK.

Moini’s second point—“know which tasks your tool handles reliably”—is also excellent as far as it goes. Document‑grounded summarization, clause extraction, and playbook‑based redlines are indeed safer than open‑ended legal research, and she correctly notes that open‑ended research still demands heavy human verification. Reliability, however, cannot be left to vendor assurances, product marketing, or a single eye‑opening demo. For purposes of Model Rule 1.1 (competence) and 1.3 (diligence), the relevant question is not “Does this tool look impressive?” but “Have we independently tested it, in our own environment, on tasks that reflect our real matters?”

The corollary is that reliability has to be measured, not assumed. Firms should sandbox these tools on closed matters, compare AI outputs with known correct answers, and have experienced lawyers systematically review where the system fails. Certain categories of work—final cites in court filings, complex choice‑of‑law questions, nuanced procedural traps—should remain categorically off‑limits to unsupervised AI, because a hallucinated case there is not just an internal mistake; it can rise to misrepresentation to the court under Model Rule 3.3. Knowing what your tool does well is only half of the equation; you must also draw bright, documented lines around what it may never do without human review. 🧪
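
To illustrate what measuring rather than assuming can look like, here is a deliberately tiny Python sketch of a gold-set evaluation harness. The tasks, expected answers, and run_tool placeholder are all hypothetical; the pattern is what matters: score the tool against known correct answers from closed matters and route every failure to an experienced lawyer:

```python
# Hypothetical gold set drawn from closed matters with known correct answers.
GOLD_SET = [
    {"task": "Extract the governing law from the MSA", "expected": "Delaware"},
    {"task": "Summarize the clause 7.2 survival period", "expected": "2-year survival"},
]

def run_tool(task: str) -> str:
    """Placeholder for the vendor tool under test; wire the real product in here."""
    return "Delaware" if "governing law" in task else "indefinite survival"

def evaluate(gold_set: list[dict]) -> float:
    """Score the tool on the gold set and flag misses for human review."""
    failures = [g for g in gold_set
                if g["expected"].lower() not in run_tool(g["task"]).lower()]
    for g in failures:
        print("HUMAN REVIEW NEEDED:", g["task"])
    return 1 - len(failures) / len(gold_set)

print(f"Accuracy on gold set: {evaluate(GOLD_SET):.0%}")  # 50% in this toy run
```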

Her third point—“build verification into the workflow”—is where the article most clearly aligns with emerging ethics guidance from courts and bars, and it deserves strong validation. Judges are already sanctioning lawyers who submit AI‑fabricated authorities, and bar regulators are openly signaling that “the AI did it” will not excuse a lack of diligence. Verification, though, cannot remain an informal suggestion reserved for conscientious partners. It has to become a systematic, auditable process that satisfies the supervisory expectations in Model Rules 5.1 and 5.3.

That means written policies, checklists, training sessions, and oversight. Associates and staff should receive simple, non‑negotiable rules:

✅ Every citation generated with AI must be independently confirmed in a trusted legal research system;

✅ Every quoted passage must be checked against the original source; 

✅ Every factual assertion must be tied back to the record.

Supervising attorneys must periodically spot‑check AI‑assisted work for compliance with those rules. Moini is right that verification matters; the editorial extension is that verification must be embedded into the culture and procedures of the firm. It should be as routine as a conflict check.
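
As a small illustration of how the citation rule can be made systematic, the following hypothetical Python sketch pulls candidate reporter citations out of a draft and flags each one for human confirmation. The regex is deliberately rough (production-grade extraction is much harder; the open-source eyecite library is one serious option), and nothing here confirms that a cited case exists or says what the draft claims; only a lawyer in a trusted research system can do that:

```python
import re

# Deliberately rough pattern for reporter citations such as
# "410 U.S. 113" or "123 F.3d 456"; real Bluebook parsing is far messier.
CITATION = re.compile(r"\b\d{1,4}\s+[A-Z][A-Za-z0-9.' ]{0,15}?\s+\d{1,5}\b")

def citations_to_verify(draft: str) -> list[str]:
    """List candidate citations in an AI-assisted draft so a person can
    confirm each one in a trusted legal research system."""
    return sorted({m.group(0) for m in CITATION.finditer(draft)})

draft = "See Roe v. Wade, 410 U.S. 113 (1973); accord Smith, 123 F.3d 456."
for cite in citations_to_verify(draft):
    print("VERIFY INDEPENDENTLY:", cite)
```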

Stepping back from her three‑point framework, the broader thesis—that legal hallucinations can be tamed by better tooling and smarter usage—is persuasive, but incomplete. Even as hallucination rates fall, our exposure is rising because more lawyers are quietly experimenting with AI on live matters. Model Rule 1.4 on communication reminds us that, in some contexts, clients may be entitled to know when significant aspects of their work product are generated or heavily assisted by AI, especially when it impacts cost, speed, or risk. Model Rule 1.2 on scope of representation looms in the background as we redesign workflows: shifting routine drafting to machines does not narrow the lawyer’s ultimate responsibility for the outcome.

Attorneys Must Verify AI-Generated Case Law

For practitioners with limited to moderate technology skills, the practical takeaway should be both empowering and sobering. Moini’s article offers a pragmatic starting structure—ground AI in your documents, match tasks to tools, and verify diligently. But you must layer ABA‑informed safeguards on top: treat every AI term of service as a potential ethics document; never drop client names, medical histories, addresses, Social Security numbers, or other PII into systems whose data‑handling you do not fully understand; and assume that regulators may someday scrutinize how your firm uses AI. Every AI‑assisted output must be reviewed line by line.

Legal AI is no longer optional, and neither are ethics and PII protection. The right stance is both appreciative and skeptical: appreciative of Moini’s clear, practitioner‑friendly guidance, and skeptical enough to insist that we overlay her three points with robust, documented safeguards rooted in the ABA Model Rules. Use AI, ground it in your documents, and choose tasks wisely—but do so as a lawyer first and a technologist second. Above all, review your work, stay relentlessly wary of the terms that govern your tools, and treat PII and client confidences as if a bar investigator were reading over your shoulder. In this era, one might be. ⚖️🤖🔐

MTC

🎙️ Ep. #131, Supercharging Litigation With AI: How StrongSuit Helps Lawyers Transform Research, Doc Review, and Drafting 💼⚖️

My next guest is Justin McCallan, founder of StrongSuit, an AI-powered litigation platform built to transform how litigators handle legal research, document review, and drafting while keeping lawyers firmly in control. In this episode, Justin and I dig into practical, real-world workflows that solos, small firms, and big-firm litigators can use today and over the next few years to change the economics, pace, and strategy of litigation—without sacrificing accuracy, ethics, or the quality of advocacy.

Join Justin and me as we discuss the following three questions and more!

  1. What are the top three ways litigators should be using AI tools like StrongSuit right now to change the economics and pace of litigation without sacrificing accuracy, ethics, or quality of advocacy?

  2. What are the top three mistakes lawyers make when adopting AI for litigation, and what practical workflows help lawyers stay in the loop and use AI as a force multiplier instead of a risk? 

  3. Looking ahead to 2026 and beyond, what are the top three AI-driven workflows every litigator should master to stay competitive, and how can platforms like StrongSuit help build those capabilities into day-to-day practice? 

In our conversation, we cover the following:

  • 00:00 – Welcome and guest introduction

    • Justin joins the show and shares his current tech setup at his desk. 

  • 00:00–01:00 – Justin’s current tech stack

    • Lenovo laptop, ultra-wide monitor, and regular use of StrongSuit, ChatGPT, and Gemini for different AI tasks.

    • Everyday tools: Microsoft Word and Power BI for analytics and fast decision-making.

  • 01:00–02:00 – Android vs. iPhone for AI use

    • Why Justin has been on Android for 17 years and how UI/UX familiarity often drives device choice more than AI capability.

  • 02:00–05:30 – Q1: Top three ways litigators should be using AI right now

    • Using AI for end-to-end legal research across 11 million precedential U.S. cases to build litigation outlines and identify key authorities.

    • Scaling document review so AI surfaces relevant documents and synthesizes insights while lawyers focus on strategy and judgment.

    • Leveraging AI for drafting and editing—improving style, clarity, and consistency beyond traditional spelling and grammar checks.

  • 05:30–07:30 – StrongSuit vs. basic tools like Word grammar check

    • How StrongSuit aims to “up-level” a lawyer’s writing, not just catch typos.

    • Stylistic improvements, clarity enhancements, and catching subtle inconsistencies in legal documents.

  • 06:00–08:00 – AI context limits and scaling doc review

    • Constraints of large models’ context windows (roughly 1M tokens, or about 750 pages).

    • How StrongSuit runs multiple AI agents in parallel, each handling small page sets with heuristics to maintain cohesion and share insights (a generic sketch of this chunk-and-parallelize pattern appears after these show notes).

  • 08:00–09:00 – Handling tens of thousands of documents

    • How StrongSuit can handle roughly 10,000 to 50,000 pages at a time, with the ability to scale further for enterprise matters.

  • 09:00–11:30 – Origin story of StrongSuit

    • Why Justin saw a once-in-a-generation opportunity when large language models emerged and how law, with its precedent and text-heavy nature, is especially suited to AI.

    • StrongSuit’s focus on litigators: supporting lawyers from intake through trial while keeping them in the loop at every step.

  • 11:30–13:30 – From intake to brief drafting in minutes

    • Generating full litigation outlines, research, and analysis in about ten minutes, then moving directly into drafting memos, briefs, complaints, and motions.

    • StrongSuit’s long-term goal: automating 50–99% of major litigation workflows by the end of 2026 while preserving lawyer control and judgment.

  • 12:00–14:30 – How StrongSuit tackles hallucinations

    • Building a full database of all precedential U.S. cases enriched with metadata: parties, summaries, holdings, and more.

    • Validating citations by checking whether the Bluebook citation actually exists in StrongSuit’s case database before surfacing it to the user.

    • Why lawyers should still review cases on-platform before filing, even when AI has filtered out hallucinations.

  • 14:30–16:30 – Coverage and jurisdictions

    • Coverage of all U.S. jurisdictions, federal and state, focused on precedential cases.

    • Handling most regulations from administrative agencies, with limits around local ordinances.

    • Uploading your own case files and using complaints and prior research as inputs into StrongSuit workflows.

  • 15:00–17:00 – Security and confidentiality for litigators

    • SOC 2 compliance and industry-standard encryption at rest and in transit.

    • No model training on user data.

    • Optional end-to-end encryption that can even prevent developers from accessing case content, using local encryption keys.

  • 16:30–20:30 – Q2: Top mistakes lawyers make when adopting AI for litigation

    • Mistake #1: Talking about AI instead of diving in with structured experiments and sanitized documents.

    • Using a framework to identify high-impact tasks: high volume, repetitive work, and heavy data/analysis (e.g., doc review, research, contract drafting).

    • How to shortlist tools: look for SOC 2, real product depth, awards, and a focus on your specific workflows.

    • Mistake #2: Expecting immediate mastery instead of moving through predictable adoption stages—from learning the tool, to daily use, to stringing workflows together.

  • 20:30–22:30 – Building firm-wide AI workflows over time

    • Moving from isolated experiments to integrated, low-friction workflows, such as automatic intake-to-research pipelines.

    • Using client intake audio or transcripts to automatically extract facts, issues, and research paths.

  • 22:30–24:30 – Time constraints and “no-time” lawyers

    • Why lawyers don’t need to be “technical” to use StrongSuit.

    • Reframing AI as text-based tools where lawyers’ writing skills and analytical thinking are assets, not obstacles. 

  • 24:00–26:00 – Practical workflows beyond intake

    • Using AI to prepare for expert depositions, including reviewing valuation analyses, flagging departures from market consensus, and generating targeted questions.

    • Reinforcing the value of AI-enhanced legal research and drafting as core litigation workflows.

  • 26:00–29:30 – Q3: 2026 and beyond – AI-driven workflows every litigator should master

    • Rapid improvement of baseline models (e.g., jumping from single-digit to high double-digit performance on difficult benchmarks year over year). 

    • The idea of “tipping points,” where small performance gains turn AI from marginally useful to essential in specific tasks.

    • Why legal research is a great training ground for understanding where AI excels, where it falls short, and how to divide labor between human and machine.

    • The value of learning basic prompting skills to get more from AI systems, even when platforms offer visual workflows.

  • 29:30–32:30 – Will workflows actually change—or just get better?

    • Why Justin expects familiar litigation workflows (doc review, research, drafting) to remain structurally similar, but become far faster and more sophisticated.

    • AI agents handling the grind work while lawyers focus on synthesis, judgment, and strategy.

    • A future where “AI + lawyer vs. AI + lawyer” resembles high-level chess: same rules, but much deeper thinking on both sides.

  • 32:30–End – Where to find Justin and StrongSuit

    • How to connect with Justin and learn more about StrongSuit’s litigation tools.
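
As a footnote to the context-window discussion above (around the 06:00 mark), here is a generic, hypothetical Python sketch of the chunk-and-parallelize pattern: split an oversized document set into small page sets, review each set concurrently, and merge the per-chunk findings afterward. It illustrates the general technique only and is not StrongSuit’s actual implementation:

```python
from concurrent.futures import ThreadPoolExecutor

CHUNK_PAGES = 50  # assumed per-agent page budget, well under any context window

def review_chunk(chunk_id: int, pages: list[str]) -> dict:
    """Stand-in for one agent's pass over a small page set; a real system
    would call a language model here and return structured findings."""
    hits = [p for p in pages if "indemnif" in p.lower()]
    return {"chunk": chunk_id, "relevant_pages": len(hits)}

def parallel_review(pages: list[str]) -> list[dict]:
    """Fan pages out across workers, then collect per-chunk summaries."""
    chunks = [pages[i:i + CHUNK_PAGES] for i in range(0, len(pages), CHUNK_PAGES)]
    with ThreadPoolExecutor(max_workers=8) as pool:
        futures = [pool.submit(review_chunk, i, c) for i, c in enumerate(chunks)]
        return [f.result() for f in futures]

# 10,000 toy "pages" become 200 chunks of 50; a merge step (not shown) is
# where cohesion heuristics would reconcile overlapping findings.
results = parallel_review([f"page {n}: indemnification clause..." for n in range(10_000)])
print(len(results), "chunk summaries ready to merge")
```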

Resources

Connect with Justin

Hardware mentioned in the conversation

Software & Cloud Services mentioned in the conversation

Word of the Week: Deepfakes: How Lawyers Can Spot Fake Digital Evidence and Avoid ABA Model Rule Violations ⚖️

A Tech-Savvy Lawyer Needs to Be Able to Spot Deepfakes Before They Become Courtroom Ethics Violations!

“Deepfakes” are AI‑generated or heavily manipulated audio, video, or images that convincingly depict people saying or doing things that never happened. 🧠 They are moving from internet novelty to everyday litigation risk, especially as parties try to slip fabricated “evidence” into the record. 📹

Recent cases and commentary show courts will not treat deepfakes as harmless tech problems. Judges have dismissed actions outright and imposed severe sanctions when parties submit AI‑generated or altered media, because such evidence attacks the integrity of the judicial process itself. ⚖️ At the same time, courts are wary of lawyers who cry “deepfake” without real support, since baseless challenges can look like gamesmanship rather than genuine concern about authenticity.

For practicing lawyers, deepfakes are first and foremost a professional responsibility issue. ABA Model Rule 1.1 (Competence) now clearly includes a duty to understand the benefits and risks of relevant technology, which includes generative AI tools that create or detect deepfakes. You do not need to be an engineer, but you should recognize common red flags, know when to request native files or metadata, and understand when to bring in a qualified forensic expert.

Deepfakes in Litigation: Detect Fake Evidence, Protect Your License!

Deepfakes also implicate Model Rule 3.3 (Candor to the tribunal) and Model Rule 3.4 (Fairness to opposing party and counsel). If you knowingly offer manipulated media, or ignore obvious signs of fabrication in your client’s “evidence,” you risk presenting false material to the court and obstructing access to truthful proof. Courts have made clear that submitting fake digital evidence can justify terminating sanctions, fee shifting, and referrals for disciplinary action.

Model Rule 8.4(c), which prohibits conduct involving dishonesty, fraud, deceit, or misrepresentation, sits in the background of every deepfake decision. A lawyer who helps create, weaponize, or strategically “look away” from deepfake evidence is not just making a discovery mistake; they may be engaging in professional misconduct. Likewise, a lawyer who recklessly accuses an opponent of using deepfakes without factual grounding risks violating duties of candor and professionalism.

Practically, you can start protecting your clients with a few repeatable steps. Ask early in the case what digital media exists, how it was created, and who controlled the devices or accounts. 🔍 Build authentication into your discovery plan, including requests for original files, device logs, and platform records that can help confirm provenance. When the stakes justify it, consult a forensic expert rather than relying on “gut feel” about whether a recording “looks real.”
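
For lawyers who want a quick first-pass screen before calling an expert, here is a minimal Python sketch that reads basic EXIF metadata from an image using the Pillow imaging library. The file name is hypothetical, and the output is a conversation starter, not a forensic conclusion: missing or inconsistent fields are a reason to demand native files and expert analysis, never proof of fabrication on their own:

```python
from PIL import Image, ExifTags  # Pillow: pip install Pillow

def exif_snapshot(path: str) -> dict[str, str]:
    """Return basic EXIF fields from an image for a quick plausibility check."""
    with Image.open(path) as img:
        exif = img.getexif()
        return {ExifTags.TAGS.get(tag_id, str(tag_id)): str(value)
                for tag_id, value in exif.items()}

# Hypothetical file; camera make/model, timestamps, and editing software
# are the fields most often missing or inconsistent in manipulated media.
meta = exif_snapshot("disputed_photo.jpg")
for field in ("Make", "Model", "DateTime", "Software"):
    print(field, "->", meta.get(field, "MISSING"))
```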

Lawyers Need to Know Deepfakes, Metadata, and ABA Ethics Rules!

Finally, talk to clients about deepfakes before they become a problem. Explain that altering media or using AI to “clean up” evidence is dangerous, even if they believe they are only fixing quality. 📲 Remind them that courts are increasingly sophisticated about AI and that discovery misconduct in this area can destroy otherwise strong cases. Treat deepfakes as another routine topic in your litigation checklist, alongside spoliation and privilege, and you will be better prepared for the next “too good to be true” video that lands in your inbox.