MTC: Even Though AI Hallucinations Are Down, Lawyers STILL MUST Verify AI, Guard PII, and Follow ABA Ethics Rules ⚖️🤖

A Tech-Savvy Lawyer MUST REVIEW AI-Generated Legal Documents

AI hallucinations are reportedly down across many domains. Still, previous podcast guest Dorna Moini is right to warn that legal remains the unnerving exception—and that is where our professional duties truly begin, not end. Her article, “AI hallucinations are down 96%. Legal is the exception,” helpfully shifts the conversation from “AI is bad at law” to “lawyers must change how they use AI,” yet from the perspective of ethics and risk management, we need to push her three recommendations much further. This is not only a product‑design problem; it is a competence, confidentiality, and candor problem under the ABA Model Rules. ⚖️🤖

Her first point—“give AI your actual documents”—is directionally sound. When we anchor AI in contracts, playbooks, and internal standards, we move from free‑floating prediction to something closer to reading comprehension, and hallucinations usually fall. That is a genuine improvement, and Moini is right to emphasize it. But as soon as we start uploading real matter files, we are squarely inside Model Rule 1.6 territory: confidential information, privileged communications, trade secrets, and dense pockets of personally identifiable information. The article treats document‑grounding primarily as an accuracy-and-reliability upgrade, but lawyers and the legal profession must insist that it is first and foremost a data‑governance decision.

Before a single contract is uploaded, a lawyer must know where that data is stored, who can access it, how long it is retained, whether it is used to train shared models, and whether any cross‑border transfers could complicate privilege or regulatory compliance. That analysis should involve not just IT, but also risk management and, in many cases, outside vendors. “Give AI your actual documents” is safe only if your chosen platform offers strict access controls, clear no‑training guarantees, encryption in transit and at rest, and, ideally, firm‑controlled or on‑premise storage. Otherwise, you may be trading a marginal reduction in hallucinations for a major confidentiality incident or regulatory investigation. In other words, feeding AI your documents can be a smart move, but only after you read the terms, negotiate the data protection, and strip or tokenize unnecessary PII. 🔐
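
For readers who want to see what “strip or tokenize unnecessary PII” can look like in practice, here is a minimal, illustrative Python sketch. Everything in it is an assumption for demonstration: the patterns cover only a few identifier types, and any real redaction workflow should be selected and vetted with your IT and risk-management teams before a client document goes near an AI platform.

```python
import re

# Illustrative patterns only. A real workflow would cover many more
# identifier types (names, addresses, account numbers, medical details)
# and would be validated before any client data is processed.
PII_PATTERNS = {
    "SSN": re.compile(r"\b\d{3}-\d{2}-\d{4}\b"),
    "EMAIL": re.compile(r"\b[\w.+-]+@[\w-]+\.[\w.]+\b"),
    "PHONE": re.compile(r"\(?\b\d{3}\)?[ .-]?\d{3}[ .-]?\d{4}\b"),
}

def redact(text: str) -> str:
    """Replace recognizable PII with labeled tokens before any AI upload."""
    for label, pattern in PII_PATTERNS.items():
        text = pattern.sub(f"[REDACTED-{label}]", text)
    return text

# Hypothetical example: scrub a passage before grounding an AI tool on it.
sample = "Contact J. Roe at jroe@example.com or (555) 867-5309; SSN 123-45-6789."
print(redact(sample))
# -> "Contact J. Roe at [REDACTED-EMAIL] or [REDACTED-PHONE]; SSN [REDACTED-SSN]."
```

Note that pattern-based scrubbing is a floor, not a ceiling: client names and narrative facts will slip through, which is why the platform-level safeguards described above still matter.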

Lawyers need to monitor the AI data security and PII compliance policies of the AI platforms they use in their legal work.

Moini’s second point—“know which tasks your tool handles reliably”—is also excellent as far as it goes. Document‑grounded summarization, clause extraction, and playbook‑based redlines are indeed safer than open‑ended legal research, and she correctly notes that open‑ended research still demands heavy human verification. Reliability, however, cannot be left to vendor assurances, product marketing, or a single eye‑opening demo. For purposes of Model Rule 1.1 (competence) and 1.3 (diligence), the relevant question is not “Does this tool look impressive?” but “Have we independently tested it, in our own environment, on tasks that reflect our real matters?”

The corollary is that reliability has to be measured, not assumed. Firms should sandbox these tools on closed matters, compare AI outputs with known correct answers, and have experienced lawyers systematically review where the system fails. Certain categories of work—final cites in court filings, complex choice‑of‑law questions, nuanced procedural traps—should remain categorically off‑limits to unsupervised AI, because a hallucinated case there is not just an internal mistake; it can rise to misrepresentation to the court under Model Rule 3.3. Knowing what your tool does well is only half of the equation; you must also draw bright, documented lines around what it may never do without human review. 🧪
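
As a purely illustrative sketch of what “measured, not assumed” could look like, consider a tiny benchmark harness built on closed matters: questions with answers experienced lawyers have already confirmed, scored against whichever tool is under evaluation. The “ask_tool” stub, the sample questions, and the crude substring scoring are all assumptions for demonstration; a real firm benchmark would rely on experienced human reviewers and far richer scoring than exact match.

```python
# Toy benchmark: compare a tool's answers on closed-matter questions
# against answers experienced lawyers have already confirmed.

def ask_tool(question: str) -> str:
    """Stand-in for the vendor tool under test; replace with a real call."""
    return "30 days"  # canned response so the sketch runs end to end

GOLD_SET = [
    # (question, human-confirmed answer) -- hypothetical examples
    ("What is the notice period under Section 4.2 of the MSA?", "30 days"),
    ("Which party bears indemnification costs for IP claims?", "Vendor"),
]

def run_benchmark() -> float:
    """Score the tool, printing each pass/fail so reviewers see where it breaks."""
    passed = 0
    for question, expected in GOLD_SET:
        answer = ask_tool(question)
        ok = expected.lower() in answer.lower()  # crude scoring; real review is human
        passed += ok
        print(f"{'PASS' if ok else 'FAIL'}: {question}")
    return passed / len(GOLD_SET)

print(f"Accuracy on known answers: {run_benchmark():.0%}")
```

Even a toy harness like this produces what vendor marketing cannot: a documented failure rate on the kinds of tasks your firm actually handles.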

Her third point—“build verification into the workflow”—is where the article most clearly aligns with emerging ethics guidance from courts and bars, and it deserves strong validation. Judges are already sanctioning lawyers who submit AI‑fabricated authorities, and bar regulators are openly signaling that “the AI did it” will not excuse a lack of diligence. Verification, though, cannot remain an informal suggestion reserved for conscientious partners. It has to become a systematic, auditable process that satisfies the supervisory expectations in Model Rules 5.1 and 5.3.

That means written policies, checklists, training sessions, and oversight. Associates and staff should receive simple, non‑negotiable rules:

✅ Every citation generated with AI must be independently confirmed in a trusted legal research system;

✅ Every quoted passage must be checked against the original source; 

✅ Every factual assertion must be tied back to the record.

Supervising attorneys must periodically spot‑check AI‑assisted work for compliance with those rules. Moini is right that verification matters; the editorial extension is that verification must be embedded into the culture and procedures of the firm. It should be as routine as a conflict check.
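
To make “systematic and auditable” concrete, here is a minimal, hypothetical sketch of a verification sign‑off log. Every name, field, and file path is an illustrative assumption rather than a prescribed system; the point is only that each AI‑assisted citation ends up with a named human verifier, the trusted source used to confirm it, and a timestamp a supervising attorney can later spot‑check.

```python
import csv
from datetime import datetime, timezone
from pathlib import Path

# Hypothetical log location -- adapt to your firm's document management system.
LOG_FILE = Path("ai_verification_log.csv")

def log_verification(matter: str, citation: str, source: str, verifier: str) -> None:
    """Append one human sign-off: who confirmed which citation, where, and when."""
    is_new = not LOG_FILE.exists()
    with LOG_FILE.open("a", newline="") as f:
        writer = csv.writer(f)
        if is_new:
            writer.writerow(["timestamp_utc", "matter", "citation", "verified_in", "verified_by"])
        writer.writerow(
            [datetime.now(timezone.utc).isoformat(), matter, citation, source, verifier]
        )

# Hypothetical example: an associate signs off on an AI-suggested case.
log_verification(
    matter="2026-CV-0123",
    citation="Example v. Placeholder, 123 F.4th 456 (2d Cir. 2025)",
    source="trusted legal research system",
    verifier="A. Associate",
)
```

A plain CSV appears here only because it is the simplest auditable artifact; a practice‑management field or workflow tool would serve the same purpose.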

Stepping back from her three‑point framework, the broader thesis—that legal hallucinations can be tamed by better tooling and smarter usage—is persuasive, but incomplete. Even as hallucination rates fall, our exposure is rising because more lawyers are quietly experimenting with AI on live matters. Model Rule 1.4 on communication reminds us that, in some contexts, clients may be entitled to know when significant aspects of their work product are generated or heavily assisted by AI, especially when it impacts cost, speed, or risk. Model Rule 1.2 on scope of representation looms in the background as we redesign workflows: shifting routine drafting to machines does not narrow the lawyer’s ultimate responsibility for the outcome.

Attorneys must verify AI-generated case law.

For practitioners with limited to moderate technology skills, the practical takeaway should be both empowering and sobering. Moini’s article offers a pragmatic starting structure—ground AI in your documents, match tasks to tools, and verify diligently. But you must layer ABA‑informed safeguards on top: treat every AI tool’s terms of service as a potential ethics document; never drop client names, medical histories, addresses, Social Security numbers, or other PII into systems whose data‑handling you do not fully understand; and assume that regulators may someday scrutinize how your firm uses AI. Every AI‑assisted output must be reviewed line by line.

Legal AI is no longer optional; neither are ethics and PII protection. The right stance is both appreciative and skeptical: appreciative of Moini’s clear, practitioner‑friendly guidance, and skeptical enough to insist that we overlay her three points with robust, documented safeguards rooted in the ABA Model Rules. Use AI, ground it in your documents, and choose tasks wisely—but do so as a lawyer first and a technologist second. Above all, review your work, stay relentlessly wary of the terms that govern your tools, and treat PII and client confidences as if a bar investigator were reading over your shoulder. In this era, one might be. ⚖️🤖🔐

MTC

TSL Labs 🧪 Initiative: Attorney-Client Privilege vs. Public AI: The Hoeppner Decision Lawyers Need to Understand in 2026 ⚖️🤖

Join us for an AI-powered deep dive into the ethical challenges facing legal professionals in the age of generative AI. 🤖 We unpack the February 23, 2026, editorial “AI may not be your co‑counsel—and a recent SDNY decision just made that painfully clear. ⚖️🤖” Our Google NotebookLM hosts break down why a single click on a public AI tool’s Terms of Use can trigger a privilege waiver, and what “tech competence” really means in 2026—especially after United States v. Hoeppner and Judge Jed Rakoff’s wake-up-call analysis of confidentiality and third-party disclosure risk.

🔗 Read the full editorial on The Tech-Savvy Lawyer.Page and share this episode with a colleague who is experimenting with AI in client matters.

In our conversation, we cover the following:

  • 00:00 — The “superhuman assistant” promise, and the procedural nightmare risk. 🧠⚖️

  • 00:01 — The core warning: AI use can “blow a hole” in privilege.

  • 00:02 — Editorial overview: “The AI Privilege Trap” by Michael D.J. Eisenberg.

  • 00:02 — The case: United States v. Hoeppner (SDNY) and why it matters.

  • 00:03 — Why Judge Jed Rakoff’s opinion gets attention (tech-literate, influential).

  • 00:03 — The facts: defendant drafts with a public AI tool, then sends outputs to counsel.

  • 00:04 — The court’s conclusion: no attorney-client privilege, no work product protection.

  • 00:05 — Privilege basics applied to AI: “confidential + lawyer” and why AI fails that test.

  • 00:06 — The Terms-of-Use problem: inputs/outputs may be collected and shared. 🧾

  • 00:07 — The “stranger on the street” analogy: you can’t retroactively make it confidential.

  • 00:08 — PII and client facts: why pasting sensitive data into public AI is high-risk.

  • 00:08 — ABA Model Rule 1.1: competence includes understanding tech risks.

  • 00:09 — ABA Model Rule 1.6: confidentiality and waiver risk with public AI.

  • 00:10 — “Reasonable safeguards”: read policies, adjust settings, and know training/logging.

  • 00:11 — Public vs. enterprise AI: why contracts and “walled gardens” matter.

  • 00:11 — Legal research AI examples discussed: Lexis/Westlaw-style AI offerings.

  • 00:12 — ABA Model Rules 5.1 & 5.3: supervise AI like a nonlawyer assistant/vendor.

  • 00:13 — Redefining “tech-savvy lawyer” in 2026: judgment and restraint. 🧭

  • 00:14 — The “straight-face test”: could you defend confidentiality after a judge reads the policy?

  • 00:15 — Client-side risk: clients can sabotage privilege before contacting counsel.

  • 00:16 — Practical takeaway: check settings, read the fine print, keep true secrets offline (for now). 🔒

⭐ First Five-Star Amazon Review for “The Lawyer’s Guide to Podcasting” – Why Tech-Savvy Lawyers Should Care About ABA Ethics, Client Trust, and Smart Marketing 🎙️⚖️

“The Lawyer’s Guide to Podcasting” by your favorite blogger/podcaster just earned its first five-star Amazon review, and it’s a milestone worth your attention. 🎉📘 The reviewer highlights what many of us in legal tech have been saying: podcasting is no longer a fringe hobby; it is a strategic, ethics-aware marketing channel for modern law practice. 🎙️

For lawyers with limited to moderate tech skills, this book demystifies microphones, workflows, and publishing tools without assuming you want to become an engineer. Instead, it walks you through practical steps to share your expertise in a format today’s clients already trust—long-form, authentic audio. 🔊

From a professional responsibility perspective, the guidance aligns with ABA Model Rule 1.1 on technology competence and Model Rule 1.6 on confidentiality by emphasizing the use of secure platforms, thoughtful content planning, and careful handling of client-identifying details. The book reinforces that podcasting can showcase your substantive knowledge while staying within the guardrails of Model Rule 7.1, avoiding misleading claims about your services. ⚖️

QR Code for Amazon book link

The first five-star review underlines two themes: listeners want real conversations, and they quickly recognize when a lawyer respects both the audience’s time and the profession’s ethical duties. That is exactly the posture this book encourages—credible, compliant, and client-centered. 🌟

If you are ready to build authority, differentiate your practice, and satisfy your tech-competence obligations without drowning in jargon, now is the perfect time to get your copy of “The Lawyer’s Guide to Podcasting” on Amazon and start planning your first ethically sound episode. 🚀

MTC: AI may not be your co‑counsel—and a recent SDNY decision just made that painfully clear. ⚖️🤖

SDNY Hoeppner Ruling: Public AI Use Breaks Attorney-Client Privilege!

In United States v. Hoeppner, Judge Jed Rakoff of the Southern District of New York ruled that documents a criminal defendant generated with a publicly accessible AI tool and later sent to his lawyers were not protected by either attorney‑client privilege or the work‑product doctrine. That decision should be a wake‑up call for every lawyer who has ever dropped client facts into a public chatbot.

The court’s analysis followed traditional privilege principles rather than futuristic AI theory. Privilege requires confidential communication between a client and a lawyer made for the purpose of obtaining legal advice. In Hoeppner, the AI tool was “obviously not an attorney,” and there was no “trusting human relationship” with a licensed professional who owed duties of loyalty and confidentiality. Moreover, the platform’s privacy policy disclosed that user inputs and outputs could be collected and shared with third parties, undermining any reasonable expectation of confidentiality. In short, the defendant’s AI‑generated drafts looked less like protected client notes and more like research entrusted to a third‑party service.

For some time now, I have warned practitioners on The Tech‑Savvy Lawyer.Page not to paste client PII or case‑specific facts into generative AI tools, particularly public models whose terms of use and training practices erode confidentiality. We have consistently framed AI as an extension of a lawyer’s existing ethical duties, not a shortcut around them. I have encouraged readers to treat these systems like any other non‑lawyer vendor that must be vetted, contractually constrained, and configured before use. That perspective aligns squarely with Hoeppner’s outcome: once you treat a public AI as a casual brainstorming partner, you risk treating your client’s confidences as discoverable data.

A Tech-Savvy Lawyer Avoids AI Privilege Waiver With Confidentiality Safeguards!

For lawyers, this has immediate implications under the ABA Model Rules. Model Rule 1.1 on competence now explicitly includes understanding the “benefits and risks associated” with relevant technology, and recent ABA guidance on generative AI emphasizes that uncritical reliance on these tools can breach the duty of competence. A lawyer who casually uses public AI tools with client facts—without reading the terms of use, configuring privacy, or warning the client—may fail the competence test in both technology and privilege preservation. The Tech‑Savvy Lawyer.Page repeatedly underscores this point, translating dense ethics opinions into practical checklists and workflows so that even lawyers with only moderate tech literacy can implement safer practices.

Model Rule 1.6 on confidentiality is equally implicated. If a lawyer discloses client confidential information to a public AI platform that uses data for training or reserves broad rights to disclose to third parties, that disclosure can be treated like sharing with any non‑necessary third party, risking waiver of privilege. Ethical guidance stresses that lawyers must understand whether an AI provider logs, trains on, or shares client data and must adopt reasonable safeguards before using such tools. That means reading privacy policies, toggling enterprise settings, and, in many cases, avoiding consumer tools altogether for client‑specific prompts.

Does a private, paid AI make a difference? Possibly, but only if it is structured like other trusted legal technology. Enterprise or legal‑industry tools that contractually commit not to train on user data and to maintain strict confidentiality can better support privilege claims, because confidentiality and reasonable expectations are preserved. Tools like Lexis‑style or Westlaw‑style AI offerings, deployed under robust business associate and security agreements, look more like traditional research platforms or litigation support vendors within Model Rules 5.1 and 5.3, which govern supervisory duties over non‑lawyer assistants. The Tech‑Savvy Lawyer.Page has emphasized this distinction, encouraging lawyers to favor vetted, enterprise‑grade solutions over consumer chatbots when client information is involved.

Enterprise AI Vetting Checklist for Lawyers: Contracts, NDA, No Training

The tech‑savvy lawyer in 2026 is not the one who uses the most AI; it is the one who knows when not to use it. Before entering client facts into any generative AI, lawyers should ask: Is this tool configured to protect client confidentiality? Have I satisfied my duties of competence and communication by explaining the risks to my client (Model Rules 1.1 and 1.4)? And if a court reads this platform’s privacy policy the way Judge Rakoff did, will I be able to defend my privilege claims with a straight face to a court or to a disciplinary bar?

AI may be a powerful drafting partner, but it is not your co‑counsel and not your client’s confidant. The tech‑savvy lawyer—of the sort championed by The Tech‑Savvy Lawyer.Page—treats it as a tool: carefully vetted, contractually constrained, and ethically supervised, or not used at all. 🔒🤖

📌 Too Busy to Read This Week’s Editorial: “Lawyers and AI Oversight: What the VA’s Patient Safety Warning Teaches About Ethical Law Firm Technology Use!” ⚖️🤖

Join us for an AI-powered deep dive into the ethical challenges facing legal professionals in the age of generative AI. 🤖 In this episode, we discuss our February 16, 2026, editorial, “Lawyers and AI Oversight: What the VA’s Patient Safety Warning Teaches About Ethical Law Firm Technology Use! ⚖️🤖” and explore why treating AI-generated drafts as hypotheses—not answers—is quickly becoming a survival skill for law firms of every size. We connect a real-world AI failure risk at the Department of Veterans Affairs to the everyday ways lawyers are using tools like chatbots, and we translate ABA Model Rules into practical oversight steps any practitioner can implement without becoming a programmer.

In our conversation, we cover the following:

  • 00:00:00 – Why conversations about the future of law default to Silicon Valley, and why that’s a problem ⚖️

  • 00:01:00 – How a crisis at the U.S. Department of Veterans Affairs became a “mirror” for the legal profession 🩺➡️⚖️

  • 00:03:00 – “Speed without governance”: what the VA Inspector General actually warned about, and why it matters to your practice

  • 00:04:00 – From patient safety risk to client safety and justice risk: the shared AI failure pattern in healthcare and law

  • 00:06:00 – Shadow AI in law firms: staff “just trying out” public chatbots on live matters and the unseen risk this creates

  • 00:07:00 – Why not tracking hallucinations, data leakage, or bias turns risk management into wishful thinking

  • 00:08:00 – Applying existing ABA Model Rules (1.1, 1.6, 5.1, 5.2, and 5.3) directly to AI use in legal practice

  • 00:09:00 – Competence in the age of AI: why “I’m not a tech person” is no longer a safe answer 🧠

  • 00:09:30 – Confidentiality and public chatbots: how you can silently lose privilege by pasting client data into a text box

  • 00:10:30 – Supervision duties: why partners cannot safely claim ignorance of how their teams use AI

  • 00:11:00 – Candor to tribunals: the real ethics problem behind AI-generated fake cases and citations

  • 00:12:00 – From slogan to system: why “meaningful human engagement” must be operationalized, not just admired 

  • 00:12:30 – The key mindset shift: treating AI-assisted drafts as hypotheses, not answers 🧪

  • 00:13:00 – What reasonable human oversight looks like in practice: citations, quotes, and legal conclusions under stress test

  • 00:14:00 – You don’t need to be a computer scientist: the essential due diligence questions every lawyer can ask about AI 

  • 00:15:00 – Risk mapping: distinguishing administrative AI use from “safety-critical” lawyering tasks

  • 00:16:00 – High-stakes matters (freedom, immigration, finances, benefits, licenses) and heightened AI safeguards

  • 00:16:45 – Practical guardrails: access controls, narrow scoping, and periodic quality audits for AI use

  • 00:17:00 – Why governance is not “just for BigLaw” and how solos can implement checklists and simple documentation 📋

  • 00:17:45 – Updating engagement letters and talking to clients about AI use in their matters

  • 00:18:00 – Redefining the “human touch” as the safety mechanism that makes AI ethically usable at all 🤝

  • 00:19:00 – AI as power tool: why lawyers must remain the “captain of the ship” even when AI drafts at lightning speed 🚢

  • 00:20:00 – Rethinking value: if AI creates the first draft, what exactly are clients paying lawyers for?

  • 00:20:30 – Are we ready to bill for judgment, oversight, and safety instead of pure production time?

  • 00:21:00 – Final takeaways: building a practice where human judgment still has the final word over AI

🎙️ My Law School Library Adds The Lawyer’s Guide to Podcasting to Empower Ethical, Tech-Savvy Attorneys ⚖️

https://law-capital.libguides.com/SpecialCollections/NewBooks

I’m thrilled to share that my alma mater, Capital University Law School, has added my book, The Lawyer’s Guide to Podcasting, to its Law Library Special Collections. 🎉📚 Seeing this guide on the same shelves where I learned to think like a lawyer underscores how central ethical technology use has become to modern advocacy. 🎙️ Written for attorneys with limited to moderate tech skills, it walks readers through planning, recording, and promoting a law‑firm podcast while honoring ABA Model Rules on technology competence, confidentiality, and attorney advertising, helping you communicate confidently, credibly, and compliantly. ⚖️🚀

You can pick up your copy on Amazon today!

🎙️ Ep. #131, Supercharging Litigation With AI: How StrongSuit Helps Lawyers Transform Research, Doc Review, and Drafting 💼⚖️

My next guest is Justin McCallan, founder of StrongSuit, an AI-powered litigation platform built to transform how litigators handle legal research, document review, and drafting while keeping lawyers firmly in control. In this episode, Justin and I dig into practical, real-world workflows that solos, small firms, and big-firm litigators can use today and over the next few years to change the economics, pace, and strategy of litigation—without sacrificing accuracy, ethics, or the quality of advocacy.

Join Justin and me as we discuss the following three questions and more!

  1. What are the top three ways litigators should be using AI tools like StrongSuit right now to change the economics and pace of litigation without sacrificing accuracy, ethics, or quality of advocacy?

  2. What are the top three mistakes lawyers make when adopting AI for litigation, and what practical workflows help lawyers stay in the loop and use AI as a force multiplier instead of a risk? 

  3. Looking ahead to 2026 and beyond, what are the top three AI-driven workflows every litigator should master to stay competitive, and how can platforms like StrongSuit help build those capabilities into day-to-day practice? 

In our conversation, we cover the following:

  • 00:00 – Welcome and guest introduction

    • Justin joins the show and shares his current tech setup at his desk. 

  • 00:00–01:00 – Justin’s current tech stack

    • Lenovo laptop, ultra-wide monitor, and regular use of StrongSuit, ChatGPT, and Gemini for different AI tasks.

    • Everyday tools: Microsoft Word and Power BI for analytics and fast decision-making.

  • 01:00–02:00 – Android vs. iPhone for AI use

    • Why Justin has been on Android for 17 years and how UI/UX familiarity often drives device choice more than AI capability.

  • 02:00–05:30 – Q1: Top three ways litigators should be using AI right now

    • Using AI for end-to-end legal research across 11 million precedential U.S. cases to build litigation outlines and identify key authorities.

    • Scaling document review so AI surfaces relevant documents and synthesizes insights while lawyers focus on strategy and judgment.

    • Leveraging AI for drafting and editing—improving style, clarity, and consistency beyond traditional spelling and grammar checks.

  • 05:30–07:30 – StrongSuit vs. basic tools like Word grammar check

    • How StrongSuit aims to “up-level” a lawyer’s writing, not just catch typos.

    • Stylistic improvements, clarity enhancements, and catching subtle inconsistencies in legal documents.

  • 06:00–08:00 – AI context limits and scaling doc review

    • Constraints of large models’ context windows (roughly 1M tokens, or about 750 pages).

    • How StrongSuit runs multiple AI agents in parallel, each handling small page sets with heuristics to maintain cohesion and share insights.

  • 08:00–09:00 – Handling tens of thousands of documents

    • How StrongSuit can handle roughly 10,000 to 50,000 pages at a time, with the ability to scale further for enterprise matters.

  • 09:00–11:30 – Origin story of StrongSuit

    • Why Justin saw a once-in-a-generation opportunity when large language models emerged and how law, with its precedent and text-heavy nature, is especially suited to AI.

    • StrongSuit’s focus on litigators: supporting lawyers from intake through trial while keeping them in the loop at every step.

  • 11:30–13:30 – From intake to brief drafting in minutes

    • Generating full litigation outlines, research, and analysis in about ten minutes, then moving directly into drafting memos, briefs, complaints, and motions.

    • StrongSuit’s long-term goal: automating 50–99% of major litigation workflows by the end of 2026 while preserving lawyer control and judgment.

  • 12:00–14:30 – How StrongSuit tackles hallucinations

    • Building a full database of all precedential U.S. cases enriched with metadata: parties, summaries, holdings, and more.

    • Validating citations by checking whether the Bluebook citation actually exists in StrongSuit’s case database before surfacing it to the user.

    • Why lawyers should still review cases on-platform before filing, even when AI has filtered out hallucinations.

  • 14:30–16:30 – Coverage and jurisdictions

    • Coverage of all U.S. jurisdictions, federal and state, focused on precedential cases.

    • Handling most regulations from administrative agencies, and limits around local ordinances.

    • Uploading your own case files and using complaints and prior research as inputs into StrongSuit workflows.

  • 15:00–17:00 – Security and confidentiality for litigators

    • SOC 2 compliance and industry-standard encryption at rest and in transit.

    • No model training on user data.

    • Optional end-to-end encryption that can even prevent developers from accessing case content, using local encryption keys.

  • 16:30–20:30 – Q2: Top mistakes lawyers make when adopting AI for litigation

    • Mistake #1: Talking about AI instead of diving in with structured experiments and sanitized documents.

    • Using a framework to identify high-impact tasks: high volume, repetitive work, and heavy data/analysis (e.g., doc review, research, contract drafting).

    • How to shortlist tools: look for SOC 2, real product depth, awards, and a focus on your specific workflows.

    • Mistake #2: Expecting immediate mastery instead of moving through predictable adoption stages—from learning the tool, to daily use, to stringing workflows together.

  • 20:30–22:30 – Building firm-wide AI workflows over time

    • Moving from isolated experiments to integrated, low-friction workflows, such as automatic intake-to-research pipelines.

    • Using client intake audio or transcripts to automatically extract facts, issues, and research paths.

  • 22:30–24:30 – Time constraints and “no-time” lawyers

    • Why lawyers don’t need to be “technical” to use StrongSuit.

    • Reframing AI as text-based tools where lawyers’ writing skills and analytical thinking are assets, not obstacles. 

  • 24:00–26:00 – Practical workflows beyond intake

    • Using AI to prepare for expert depositions, including reviewing valuation analyses, flagging departures from market consensus, and generating targeted questions.

    • Reinforcing the value of AI-enhanced legal research and drafting as core litigation workflows.

  • 26:00–29:30 – Q3: 2026 and beyond – AI-driven workflows every litigator should master

    • Rapid improvement of baseline models (e.g., jumping from single-digit to high double-digit performance on difficult benchmarks year over year). 

    • The idea of “tipping points,” where small performance gains turn AI from marginally useful to essential in specific tasks.

    • Why legal research is a great training ground for understanding where AI excels, where it falls short, and how to divide labor between human and machine.

    • The value of learning basic prompting skills to get more from AI systems, even when platforms offer visual workflows.

  • 29:30–32:30 – Will workflows actually change—or just get better?

    • Why Justin expects familiar litigation workflows (doc review, research, drafting) to remain structurally similar, but become far faster and more sophisticated.

    • AI agents handling the grind work while lawyers focus on synthesis, judgment, and strategy.

    • A future where “AI + lawyer vs. AI + lawyer” resembles high-level chess: same rules, but much deeper thinking on both sides.

  • 32:30–End – Where to find Justin and StrongSuit

    • How to connect with Justin and learn more about StrongSuit’s litigation tools.

MTC: Lawyers and AI Oversight: What the VA’s Patient Safety Warning Teaches About Ethical Law Firm Technology Use! ⚖️🤖

Human-in-the-loop is the point: Effective oversight happens where AI meets care—aligning clinical judgment, privacy, and compliance with real-world workflows.

The Department of Veterans Affairs’ experience with generative AI is not a distant government problem; it is a mirror held up to every law firm experimenting with AI tools for drafting, research, and client communication. I recently listened to Terry Gerton of the Federal News Network interview Charyl Mason, Inspector General of the Department of Veterans Affairs, in “VA rolled out new AI tools quickly, but without a system to catch mistakes, patient safety is on the line,” and gained some insights into what lawyers can learn from this perhaps hastily implemented AI program. VA clinicians are using AI chatbots to document visits and support clinical decisions, yet a federal watchdog has warned that there is no formal mechanism to identify, track, or resolve AI‑related risks—a “potential patient safety risk” created by speed without governance. In law, that same pattern translates into “potential client safety and justice risk,” because the core failure is identical: deploying powerful systems without a structured way to catch and correct their mistakes.

The oversight gap at the VA is striking. There is no standardized process for reporting AI‑related concerns, no feedback loop to detect patterns, and no clearly assigned responsibility for coordinating safety responses across the organization. Clinicians may have helpful tools, but the institution lacks the governance architecture that turns “helpful” into “reliably safe.” When law firms license AI research platforms, enable generative tools in email and document systems, or encourage staff to “try out” chatbots on live matters without written policies, risk registers, or escalation paths, they recreate that same governance vacuum. If no one measures hallucinations, data leakage, or embedded bias in outputs, risk management has given way to wishful thinking.

Existing ethics rules already tell us why that is unacceptable. Under ABA Model Rule 1.1, competence now includes understanding the capabilities and limitations of AI tools used in practice, or associating with someone who does. Model Rule 1.6 requires lawyers to critically evaluate what client information is fed into self‑learning systems and whether informed consent is required, particularly when providers reuse inputs for training. Model Rules 5.1, 5.2, and 5.3 extend these obligations across partners, supervising lawyers, and non‑lawyer staff: if a supervised lawyer or paraprofessional relies on AI in a way that undermines client protection, firm leadership cannot plausibly claim ignorance. And rules on candor to tribunals make clear that “the AI drafted it” is never a defense to filing inaccurate or fictitious authority.

Explaining the algorithm to decision-makers: Oversight means making AI risks understandable to judges, boards, and the public—clearly and credibly.

What the VA story adds is a vivid reminder that effective AI oversight is a system, not a slogan. The inspector general emphasized that AI can be “a helpful tool” only if it is paired with meaningful human engagement: defined review processes, clear routes for reporting concerns, and institutional learning from near misses. For law practice, that points directly toward structured workflows. AI‑assisted drafts should be treated as hypotheses, not answers. Reasonable human oversight includes verifying citations, checking quotations against original sources, stress‑testing legal conclusions, and documenting that review—especially in high‑stakes matters involving liberty, benefits, regulatory exposure, or professional discipline.

For lawyers with limited to moderate tech skills, this should not be discouraging; done correctly, AI governance actually makes technology more approachable. You do not need to understand model weights or training architectures to ask practical questions: What data does this tool see? When has it been wrong in the past? Who is responsible for catching those errors before they reach a client, a court, or an opposing party? Thoughtful prompts, standardized checklists for reviewing AI output, and clear sign‑off requirements are all well within reach of every practitioner.

The VA’s experience also highlights the importance of mapping AI uses and classifying their risk. In health care, certain AI use cases are obviously safety‑critical; in law, the parallel category includes anything that could affect a person’s freedom, immigration status, financial security, public benefits, or professional license. Those use cases merit heightened safeguards: tighter access control, narrower scoping of AI tasks, periodic sampling of outputs for quality, and specific training for the lawyers who use them. Importantly, this is not a “big‑law only” discipline. Solo and small‑firm lawyers can implement proportionate governance with simple written policies, matter‑level notes showing how AI was used, and explicit conversations with clients where appropriate.

Critically, AI does not dilute core professional responsibility. If a generative system inserts fictitious cases into a brief or subtly mischaracterizes a statute, the duty of candor and competence still rests squarely on the attorney who signs the work product. The VA continues to hold clinicians responsible for patient care decisions, even when AI is used as a support tool; the law should be no different. That reality should inform how lawyers describe AI use in engagement letters, how they supervise junior lawyers and staff, and how they respond when AI‑related concerns arise. In some situations, meeting ethical duties may require forthright client communication, corrective filings, and revisions to internal policies.

AI oversight starts at the desk: Lawyers must be able to interrogate model outputs, data quality, and risk signals—before technology impacts patient care.

The practical lesson from the VA’s AI warning is straightforward. The “human touch” in legal technology is not a nostalgic ideal; it is the safety mechanism that makes AI ethically usable at all. Lawyers who embrace AI while investing in governance—policies, training, and oversight calibrated to risk—will be best positioned to align with the ABA’s evolving guidance, satisfy courts and regulators, and preserve hard‑earned client trust. Those who treat AI as a magic upgrade and skip the hard work of oversight are, knowingly or not, accepting that their clients may become the test cases that reveal where the system fails. In a profession grounded in judgment, the real innovation is not adopting AI; it is designing a practice where human judgment still has the final word.

MTC

There Is an App for That: How the Transit App Helps Lawyers Meet ABA Tech Competence and Protect Client Service 🚆⚖️

Smart Transit Planning Helps Lawyers Stay Punctual, Professional, and ABA-Compliant!

For many lawyers, the most stressful part of the day is not the hearing itself but getting to the courthouse, client site, or arbitration on time. 🚇 In transit‑dependent cities, delays can threaten punctuality, strain client relationships, and create avoidable risk, yet the tools to manage this are already in your pocket.

The Transit app is built on a mix of official agency feeds, crowdsourced rider reports, and its own ETA prediction engine, which gives it unusually accurate real‑time arrival data across many cities worldwide. It is designed to be a “one‑stop” view of buses, subways, trains, ferries, bikeshare, scooters, and even some rideshare options, so you are not juggling multiple agency apps or websites to understand how to get from your office to court. 🚌 For a practitioner trying to manage hearings in different venues, this unified view reduces friction and the risk of missing a critical transfer.

Key features go well beyond static schedules. You can plan the fastest route from A to B, compare options that mix modes (for example, bus + rail or scooter + metro), save frequent destinations such as courthouses or jails, and receive disruption alerts when there are delays, detours, or service changes. The GO navigation feature adds departure alarms, “time to get off” alerts, and real‑time progress so you are less likely to overshoot your stop while reviewing notes or answering emails. For lawyers new to a region, live crowding reports and rider tips can also help you choose safer, more predictable paths, which ties back to your duty to reasonably safeguard your own security and your clients’ matters under the ABA Model Rules. ⚖️

Reliability is strengthened by the app’s business model and data philosophy. Transit is free to use, supported by a paid option called Royale and by partnerships with transit agencies, rather than by selling riders’ personal data. The developer states that it does not link your location history to identifiable personal data and does not sell your data, which reduces some privacy concerns lawyers may have about location tracking. From an ABA Model Rule 1.6 perspective, this kind of transparent, limited‑use data practice is easier to justify than tools that depend heavily on advertising and profiling. 🔐

Be the hero! Use the Transit app to help manage delays, deadlines, and ethical duties!

From a practical standpoint, Transit runs on iPhone, Apple Watch, and Android, making it accessible for most modern devices in a law office. The core app is free, which means solo and small‑firm lawyers can test it with no up‑front cost; optional Royale subscriptions are available on a monthly or annual basis, adding cosmetic perks and advanced features while leaving the essential planning tools available to everyone. This combination of broad city coverage, accurate multi‑modal data, and a privacy‑conscious, low‑cost model makes Transit a defensible choice when you explain how you are using accessible technology to manage foreseeable transit risks in line with ABA Model Rules 1.1, 1.3, and 1.6.

For solo and small‑firm lawyers, where every hour is critical, this kind of lightweight technology can be as impactful as more expensive practice‑management platforms. It improves reliability, supports ABA Model Rule compliance, and signals professionalism to clients and courts alike. In an era where judges, clients, and opposing counsel expect you to manage foreseeable risks, there truly is an app for that—and it may keep your practice on track in more ways than one. 🚍