TSL Labs 🧪 Bonus: Deep Dive on our April 27, 2026, Editorial, MTC: Smart Recording, Client Secrets, and HeyPocket: What Every Lawyer Needs to Know in 2026 📱⚖️

📌 Too Busy to Read This Week’s Editorial?

Join us for an AI-powered deep dive into the ethical challenges facing legal professionals in the age of generative AI. 🤖 In this episode, we unpack how AI note takers and “always-listening” devices can quietly route client secrets to third-party vendors, why that matters under the ABA Model Rules, and how a 2026 federal decision out of the Southern District of New York turned one defendant’s AI chats into discoverable evidence. Whether you are a solo practitioner, in-house counsel, or a tech-curious professional in another field, this conversation will help you balance convenience with confidentiality and avoid turning your favorite AI assistant into your biggest evidentiary risk.

👉 Before your next client meeting, listen to this episode, check out our editorial, and run your current AI tools through the checklist we outline—then subscribe and share with a colleague who is still “just trusting the app.” 🎧

In our conversation, we cover the following:

  • 00:00 – The “ambient microphone” problem: phones, smart speakers, wearables, and connected cars as a continuous surveillance layer around client conversations.

  • 01:00 – How technology competence has shifted from locking file cabinets to understanding data custody, cloud routing, and API-driven services.

  • 02:30 – What makes AI note takers like HeyPocket different from passive telemetry and why capturing the spoken “payload” changes the threat model.

  • 04:00 – The invisible “third party in the room”: routing privileged audio through external AI models and the malpractice risk of default “Allow” clicks.

  • 05:30 – Applying ABA Model Rules 1.1 and 1.6 to AI workflows: competence, confidentiality, and “reasonable efforts” in a world of automated transcription.

  • 07:00 – Risk-based analysis from ABA Formal Opinions 477R and 498: weighing sensitivity, likelihood of disclosure, and available safeguards before using AI.

  • 08:30 – Why secretly recording clients or opponents with AI tools can implicate Rule 8.4(c), even in one‑party consent jurisdictions.

  • 10:00 – Inside United States v. Heppner (SDNY 2026): how public generative AI platforms destroyed privilege and work-product protections for a criminal defendant.

  • 12:00 – How AI training and tokenization work, and why “military‑grade encryption” does not save privilege if terms of service allow internal data use.

  • 14:00 – Treating every AI note taker like an outsourced e‑discovery vendor: NDAs, retention policies, security audits, and data destruction timelines.

  • 16:00 – Practical minimization strategies: defaulting to no recording, segmenting AI-generated content by matter, and restricting access via role‑based controls.

  • 17:30 – Establishing bright-line “no‑AI” categories (criminal defense, internal investigations, sensitive family/immigration, high‑value trade secrets).

  • 18:30 – Counseling clients not to “prep their case” with public chatbots after Heppner and why this is now part of competent representation.

  • 19:30 – Building a simple vendor-vetting checklist for law firms and professional practices adopting AI note takers.

  • 20:00 – Looking ahead: when failure to use secure, vetted AI may itself become a competence issue due to inefficiency and overbilling.

  • 21:00 – Rethinking privilege in a world where an algorithmic “third party” is always in the room and devices are never truly off.

RESOURCES

Mentioned in the episode

When AI Falls Short - What Legal Professionals Must Know Before Relying on Microsoft Copilot and Similar Embedded AIs.

AI Errors in Legal Practice Demand Vigilant Attorney Oversight!

Any reader of my blog should realize by now that artificial intelligence is no longer a novelty in law practice; it is embedded in research platforms, document automation, e‑discovery, and now in tools like Microsoft Copilot that appear inside the same Microsoft 365 ecosystem lawyers already live in. Yet Copilot’s own terms of use long described it as being “for entertainment purposes only,” while Microsoft has simultaneously marketed it as an enterprise‑grade productivity assistant and is now backing away from prominent Copilot buttons in several Windows 11 apps. For lawyers who must live under the ABA Model Rules of Professional Conduct, this tension is not an amusing footnote; it is an ethics problem waiting to happen. 

Microsoft’s Copilot terms have advised that the service “can make mistakes,” “may not work as intended,” and should not be relied on for important advice. At the same time, Microsoft has begun removing or rebranding Copilot buttons from Notepad, Snipping Tool, Photos, and Widgets in Windows 11, framing this move as an effort to reduce “unnecessary Copilot entry points” and be “more intentional” about where AI shows up. The features, or at least the underlying AI, are not disappearing entirely; they are simply becoming less conspicuous. For the practicing lawyer, the message is clear: powerful AI is being woven into everyday tools, but its creators still do not want you to rely on it the way you rely on a human associate. 🤖

When AI falls short, it is the lawyer—not the software vendor—who will have to answer to clients, courts, and regulators. ⚠️

That is precisely where the ABA Model Rules step in. Model Rule 1.1 requires competent representation and, through Comment 8, includes a duty to keep abreast of the benefits and risks of relevant technology. Using AI in law practice is increasingly seen as part of that competence obligation, but competence does not mean blind trust in unvetted outputs from a system whose own terms warn you not to rely on it. A lawyer who treats Copilot’s draft as a finished research memo, brief, or contract without independent verification risks violating the duty of competence every bit as much as a lawyer who never learned to use electronic research tools in the first place.

Model Rule 1.6 on confidentiality presents a second, and in many ways more pressing, concern. Generative AI systems may store, log, or otherwise use prompt content for analysis and improvement, which means uncritical copying and pasting of confidential client information into Copilot can create a non‑trivial risk of exposure. The ABA and commentators have emphasized that before entering client data into a generative AI tool, lawyers must assess whether that data could be disclosed or accessed by others, including through unintended re‑use in future outputs to different users. That risk analysis is not optional; it is part of your obligation to make reasonable efforts to prevent unauthorized access or disclosure.

Fake Citations from AI Tools Can Threaten Accuracy and Legal Ethics!

Model Rules 5.1 and 5.3, which govern the responsibilities of partners, managers, supervisory lawyers, and non‑lawyer assistants, also apply to AI use. When you deploy Copilot in your firm, you are functionally introducing a new category of “assistant” whose work product must be supervised like that of a junior lawyer or paralegal. Policies, training, and review procedures are needed so that AI‑drafted content is consistently checked for accuracy, bias, hallucinations, and improper legal conclusions before it ever reaches a client, court, or counterparty. Ignoring Copilot’s disclaimers and Microsoft’s own hedging around reliability is, in effect, ignoring red flags that any reasonable supervising attorney would address.

Model Rule 1.4 on communication adds yet another dimension: transparency with clients about how you are using AI in their matters. Authorities interpreting the Model Rules have stressed that lawyers should keep clients reasonably informed, which includes explaining when and how AI tools are utilized to assist in their cases. This is particularly important where AI may affect cost, turnaround time, or the nature of the work performed, such as using Copilot to generate a first draft instead of assigning that task to an associate. Engagement letters and fee agreements are increasingly incorporating language about AI use, both to set expectations and to align with evolving ethical guidance.

The “for entertainment purposes only” language is more than a curiosity; it is a signal about allocation of risk. Microsoft’s disclaimer mirrors language historically used by psychic hotlines and other services seeking to avoid responsibility for inaccurate advice. When such a disclaimer is attached to a tool you might be tempted to use for legal analysis, the tool is telling you that you assume the risks of errors. Under the Model Rules, those risks ultimately translate into potential malpractice, sanctions, or disciplinary action if AI‑generated errors make their way into filed documents or client counseling.

Recent real‑world incidents involving lawyers who submitted briefs containing AI‑fabricated citations demonstrate how quickly misuse of generative AI can cross ethical lines. In those cases, the core problem was not that AI was used; it was that the lawyers failed to verify the content and then misrepresented fictitious cases as genuine authority to the court. That behavior implicates Model Rules 3.3 (candor toward the tribunal) and 8.4 (misconduct) along with competence. Copilot’s warnings about possible mistakes do not excuse a lawyer from the duty to check every citation, quote, and legal conclusion that AI produces before relying on it.
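For readers who want to make that verification habit concrete, here is a minimal Python sketch of a pre‑filing safeguard: pull citation‑shaped strings out of an AI‑assisted draft so each one can be checked by hand in a trusted research database. The regex, case names, and sample draft are simplified illustrations of the idea, not a complete Bluebook parser or a substitute for reading the cases.

```python
import re

# Rough pattern for "volume REPORTER page" citations, e.g., "598 U.S. 471".
# Deliberately loose; it exists to surface candidates, not to validate them.
CITATION_PATTERN = re.compile(r"\b\d{1,4}\s+[A-Z][\w.\s]{1,25}?\s\d{1,5}\b")

draft = (
    "Plaintiff relies on Smith v. Jones, 598 U.S. 471 (2023), and "
    "Doe v. Roe, 123 F.4th 456 (2d Cir. 2025)."
)

# Every hit goes on a human checklist before the document is filed.
for cite in CITATION_PATTERN.findall(draft):
    print(f"VERIFY BEFORE FILING: {cite}")
```

Nothing like this replaces pulling and reading the authorities; it only ensures that no citation slips past human review.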

Lawyers must assess whether that data could be disclosed or accessed by others. ⚠️

For practitioners with limited to moderate technology skills, the answer is not to abandon AI entirely, but to approach it with structured safeguards. A practical workflow might involve using Copilot to outline a research plan or draft a first pass at a contract clause, followed by standard legal research in trusted databases and rigorous review by a human lawyer before anything is finalized. Firms should configure Copilot and other AI tools in ways that minimize data exposure, such as disabling cross‑tenant learning (a feature that lets the system learn from patterns across multiple organizations’ environments) where possible, and restricting which matters and users can access certain features. Training sessions can focus less on technical jargon and more on concrete do’s and don’ts tied directly to the Model Rules, which is the language most lawyers already speak. 🧠

Always Protect Client Confidentiality When Using AI in Modern Law Practice!

Governance is also essential. Written AI policies should address acceptable use cases, prohibited content for prompts, mandatory review standards, logging and auditing of AI‑assisted work, and incident response if an AI‑related error is discovered. These policies should be backed by regular training and by leadership that models appropriate use, rather than quietly delegating AI experimentation to the most tech‑savvy associates. Vendors’ evolving terms of use—including Microsoft’s move to revise its “entertainment purposes” language and adjust Copilot integration in Windows—should be monitored and incorporated into risk assessments over time.

In short, when AI falls short, it is the lawyer—not the software vendor—who will have to answer to clients, courts, and regulators. Copilot and similar tools can be valuable allies in a modern legal practice, but only if they are treated as fallible assistants whose work must be checked, not as oracles. The ABA Model Rules already provide the framework: competence, confidentiality, supervision, and honest communication. The task for today’s legal professionals is to apply that framework thoughtfully to AI, recognizing both its promise and its very real limitations before letting it anywhere near client work or court filings. ⚖️🤖

📢 Your Tech-Savvy Lawyer Blogger and Podcaster, Michael D.J. Eisenberg, Announces His Upcoming Talk on Ethical AI Use in Legal Practice at the 2026 AI Legal Practice Summit!

Saturday, April 18, 2026 | Capital University Law School

As technology continues to transform legal practice, I’m honored to announce that I’ll be speaking at the 2026 AI Legal Practice Summit, hosted by my alma mater, Capital University Law School, in Columbus, Ohio. This event brings together attorneys, educators, and technologists to explore how artificial intelligence is reshaping the legal field — not just operationally, but ethically and professionally as well.

My presentation, “Smart Practice, Smarter Ethics: Navigating AI Tools Under the ABA Model Rules,” focuses on a topic that’s both timely and critically important: how lawyers can use emerging AI technologies responsibly while meeting their professional obligations under the ABA Model Rules of Professional Conduct.

👉 Learn more and view the full schedule at law-capital.libguides.com/2026_AI_Legal_Practice_Summit.
🎟️ Register today through Eventbrite: eventbrite.com/e/ai-legal-practice-summit-tickets-1986544900273.

Through my work on The Tech-Savvy Lawyer.Page blog and podcast, I’ve had countless conversations with practitioners who want to use AI to streamline tasks such as research, document drafting, and client management — yet remain uncertain about compliance, bias, and confidentiality. Law practice is evolving rapidly, but our ethical foundations must remain strong.

In my session, I’ll walk through key aspects of how the ABA Model Rules, including Rules 1.1 (Competence), 1.6 (Confidentiality of Information), and 5.3 (Responsibilities Regarding Nonlawyer Assistance), apply in an age of intelligent automation. These rules guide us in assessing not just what technology can do, but how and when it should be used.

Your faculty!

We’ll discuss:

  • Reviewing the tech stack you already own;

  • How to vet and implement AI-powered tools while maintaining confidentiality;

  • Questions to ask vendors about data handling and bias;

  • How to document best practices for firm-wide ethical compliance;

  • Ways to blend human legal judgment with algorithmic assistance; and

  • Managing client expectations about AI-enabled legal work.

My goal is to help attorneys approach technology with confidence — to experiment, adopt, and adapt responsibly. Being a “tech‑savvy lawyer” isn’t about mastering every gadget or platform; it’s about understanding how technology fits within the ethical framework of our profession.

The conversation around technological competence has matured since Comment 8 to Rule 1.1 was introduced. It’s no longer optional. Attorneys must understand the benefits, risks, and limitations of relevant technology to provide competent representation. Artificial intelligence highlights that reality better than any emerging tool before it.

Whether you’re a solo practitioner looking to automate administrative tasks, working for a government agency, or part of a large firm implementing AI-assisted legal research or document review, I’ll share specific practices you can adopt immediately.

If you’re attending and seeking Ohio CLE credit, please contact Jenny Wondracek at jwondracek@law.capital.edu for details.

Program description of my presentation.

The 2026 AI Legal Practice Summit will feature leading scholars, ethics experts, and seasoned practitioners. I’m looking forward to exchanging ideas, testing assumptions, and continuing a dialogue that helps ensure AI becomes a responsible partner—never a replacement—in the practice of law.

Let’s move forward together, with competence, curiosity, and care.

Learn more about the Summit at law-capital.libguides.com/2026_AI_Legal_Practice_Summit.
Register today: eventbrite.com/e/ai-legal-practice-summit-tickets-1986544900273.

I look forward to seeing you there! ⚖️

Word(s) of the Week: Understanding the Evolution of Artificial Intelligence: From AI to Generative AI to AI LLMs — and Why It Matters for Today’s Legal Professionals ⚖️🤖

Lawyers need to understand what AI LLMs can and can’t do!

Artificial Intelligence (AI) is transforming the legal industry, yet confusion still exists about what different terms mean — and why they matter. Terms like AI, Generative AI, and AI LLM (Large Language Model) are often used interchangeably, but they describe very different levels of capability. Understanding these distinctions is essential for attorneys navigating new professional responsibilities and compliance expectations under the ABA Model Rules. Let’s break down what each term means, why the progression matters, and what the next step—AI LLMs—means for legal practice.

AI: The Foundation of Machine Intelligence

Traditional AI refers to systems designed to perform tasks that require human-like intelligence. These tasks include pattern recognition, data sorting, predictive analytics, and document classification. For example, early e-discovery tools that identify relevant documents in large datasets use AI algorithms to flag patterns.

In legal practice, this type of AI boosted efficiency but remained narrow in function. Lawyers controlled the inputs and closely supervised the outcomes. Under ABA Model Rule 1.1 (Competence), using such tools responsibly required understanding their purpose and reliability, not their coding. Attorneys had to ensure that outputs were accurate and ethically sound.
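To make “narrow” AI concrete, here is a toy Python sketch of the kind of supervised text classifier behind early e‑discovery review tools. The documents and labels are invented for illustration; real review platforms train on thousands of attorney‑coded examples.

```python
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.pipeline import make_pipeline

# Tiny, hypothetical training set: 1 = potentially responsive, 0 = not.
docs = [
    "merger agreement between the parties dated January 2024",
    "lunch menu for the office holiday party",
    "indemnification clause covering third-party claims",
    "parking validation instructions for visitors",
]
labels = [1, 0, 1, 0]

# Classic pattern-recognition pipeline: word statistics in, label out.
model = make_pipeline(TfidfVectorizer(), LogisticRegression())
model.fit(docs, labels)

# The tool flags; the lawyer still reviews everything it surfaces.
print(model.predict(["limitation of liability and indemnity provisions"]))
```

Note what this system cannot do: it only sorts text into categories it was trained on. It never writes anything new, which is exactly where the next step comes in.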

Generative AI: Creating, Not Just Sorting

As technology evolved, so did AI’s capabilities. Generative AI differs from basic AI because it creates content instead of just classifying it. These models generate text, images, code, and even legal-style drafts based on training data. Tools like ChatGPT, which fall under this category, can draft letters, summarize cases, or brainstorm argument strategies.

Generative AI introduces profound efficiency benefits. A solo practitioner, for example, can use AI to prepare first drafts of client letters or marketing content quickly. The risk, however, is accuracy. Because these models generate content probabilistically, they can “hallucinate” — producing incorrect or fabricated information that sounds authoritative.

Generative AI is great at creating content - just watch out for hallucinations!

Under ABA Model Rule 5.3 (Supervision of Nonlawyer Assistants), lawyers must exercise oversight over tools like these since they function similarly to an assistant. Lawyers must verify all AI-generated output before use, maintaining professional independence and ethical standards.

AI LLMs: The Next Step in Practice Intelligence

AI LLMs — large language models — represent the next and most transformative step. Unlike earlier forms of AI, LLMs process massive datasets and can understand nuance, intent, and context in human language. This allows them to perform legal research, summarize filings, analyze contracts, and even simulate case strategies.

The key difference is scale and sophistication. LLMs learn not only from pre-set instructions but also by understanding the relationships between words and concepts. This contextual learning enables attorneys to interact with these systems conversationally. For example, an LLM-based research assistant can respond to a query such as, “Find Illinois cases interpreting non-compete clauses after 2023,” and then produce accurate summaries or citations.

Yet with great capability comes heightened responsibility. ABA Model Rule 1.6 (Confidentiality) applies when attorneys input client data into online tools. If the platform is public or cloud-based, lawyers must assess data handling, encryption, and privacy policies. Additionally, per Model Rule 1.1, competence now includes understanding how LLMs generate and manage information.

Why the Distinction Matters

The distinction between AI, Generative AI, and AI LLMs matters because it affects how attorneys use the technology within ethical, secure boundaries. A misstep in understanding can result in breached confidentiality, inaccurate filings, or ethical violations.

✅ AI assists.
✅ Generative AI creates.
✅ AI LLMs reason and interact.

In practical terms, lawyers need to update policies, train staff, and disclose use of these tools when appropriate. Law firms that adopt LLM-based platforms responsibly will gain a competitive advantage through increased efficiency and improved client service — without compromising professional duties.

Looking Ahead

Lawyers who use AI LLMs can save hours of menial work - always check your work!

AI LLMs are not replacing lawyers; they are amplifying their insight and reach. Attorneys who stay informed and practice technological competence will thrive in this next phase of digital legal service. The evolution from AI to Generative AI to LLMs represents not just a technological shift, but a professional one — requiring careful balance between innovation, ethics, and human judgment. ⚖️

🎙️ Ep. #134 — AI-Powered Legal Writing: How BriefCatch Helps Lawyers Write Smarter, Not Harder with Ross Guberman.

My next guest is Ross Guberman — founder of BriefCatch, nationally recognized legal writing trainer, and author of several acclaimed books on persuasive legal writing. Ross has trained thousands of lawyers and judges across the country. After years of teaching the craft of legal writing, he channeled that expertise into building BriefCatch — a purpose-built AI writing tool that lives right inside Microsoft Word and Outlook, scanning your legal documents using roughly 17,000 rules to help you write cleaner, sharper, and more persuasive work product. Whether you're a solo practitioner or part of a large firm, Ross brings insights that are immediately practical — no matter your tech comfort level. 🚀
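To give a flavor of how a rules‑based writing checker operates under the hood, here is a toy Python sketch with three invented style rules. BriefCatch’s actual rule set is vastly larger and more nuanced; this only illustrates the pattern‑plus‑advice mechanic.

```python
import re

# Each rule pairs a pattern with editing advice (rules invented for the demo).
RULES = [
    (re.compile(r"\bpursuant to\b", re.I), "Consider 'under'."),
    (re.compile(r"\bin order to\b", re.I), "Consider 'to'."),
    (re.compile(r"\bclearly\b", re.I), "Hyperbole? Let the facts argue."),
]

draft = ("In order to prevail, Plaintiff must clearly show damages "
         "pursuant to the contract.")

for pattern, advice in RULES:
    for match in pattern.finditer(draft):
        print(f"{match.group()!r}: {advice}")
```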

Join Ross Guberman and me as we discuss the following three questions and more!

  1. 🏆 From your vantage point — having trained thousands of lawyers and judges and now running BriefCatch — what are the top three ways lawyers can leverage AI-driven writing tools like BriefCatch inside Word and Outlook to measurably improve the quality and persuasiveness of their briefs without sacrificing their own voice or judgment?

  2. ⚖️ For a tech-curious but time-strapped practitioner, what are the top three everyday workflows beyond traditional brief writing where lawyers are leaving the most value on the table by not using tools like BriefCatch and other legal tech?

  3. 🔮 Looking ahead five years, what are the top three technology competencies every lawyer must develop — not just "nice to have" skills — to collaborate effectively with AI, stay ethically compliant, and turn technology into a genuine competitive advantage rather than a source of risk?

In our conversation, we cover the following:

  • [00:30] 💻 Ross's current tech setup — MacBook Pro M4 Max, macOS, and iPhone 16

  • [01:30] 🔄 Why keeping your OS updated matters — security and performance

  • [03:00] 🖥️ External monitors, portable screens, and traveling with tech

  • [07:00] 📱 Using your iPad as an external monitor via Apple Sidecar

  • [08:30] 🎪 Bonus Question #1 - Ross’s experience in the ABA TECHSHOW Startup Alley

  • [11:00] ✍️ Question #1 — Top 3 ways to use AI writing tools to improve briefs without losing your voice

  • [12:00] 🧑‍⚖️ Using AI to role-play as a skeptical judge or opposing counsel to pressure-test your brief

  • [13:00] 📊 Transforming fact sections into timelines and case law into comparison charts

  • [14:00] 📝 Using AI as a self-check for hyperbole, redundancy, and tone

  • [15:30] 📲 How judges now read briefs on iPads — and what that means for your writing style

  • [17:00] 📂 Using Text Expander to store and deploy your best prompts

  • [18:30] 🎙️ Google NotebookLM as a learning and podcast creation tool

  • [20:00] 🧩 Bonus Question #2 — What is BriefCatch and why use purpose-built legal AI over general tools?

  • [21:00] 🚀 The origin story of BriefCatch — from side hustle in 2018 to funded legal tech startup

  • [22:30] ⚙️ Workflow, ethics rules, and attorney-specific conventions — why legal-specific AI wins

  • [24:30] 📋 Question #2 — Top 3 underused everyday workflows for lawyers using AI

  • [25:00] 📧 Using AI with your email to surface unanswered messages and unresolved threads

  • [25:45] 📁 Mining your past work product for patterns, style, and reusable language

  • [26:30] 📅 Having AI review your calendar and correspondence for efficiency insights

  • [27:00] 🔒 Data privacy, security settings, and the risks of default AI configurations

  • [28:30] 🏛️ New York State's data protection approach and what more states should do

  • [29:30] 🤖 Question #3 — Top 3 technology competencies every lawyer must master in the next five years

  • [30:00] 🧠 Understanding how LLMs actually "think" — reading the AI's reasoning chain

  • [30:45] 🖊️ Making AI output sound like you — the human voice in an AI-generated world

  • [31:30] 🔧 Integrating AI into your daily workflow while preserving human judgment

  • [32:00] 👏 Closing thoughts and where to find Ross and BriefCatch

RESOURCES

🔗 Connect with Ross Guberman

  • 📧 Email: ross@briefcatch.com

  • 🌐 Website: https://www.briefcatch.com

  • 💼 LinkedIn: Search "Ross Guberman" on LinkedIn at https://www.linkedin.com

📌 Mentioned in the Episode

🖥️ Hardware Mentioned in the Conversation

☁️ Software & Cloud Services Mentioned in the Conversation

TSL.P Labs 🧪 Initiative: Why 96% AI Accuracy Still Fails Lawyers: Ethics, Hallucinations, and the Future of the Billable Hour ⚖️🤖

📌 Too Busy to Read This Week’s Editorial?

Welcome to the TSL Labs Initiative. 🤖 This week’s episode builds on my March 3, 2026, editorial “Even Though AI Hallucinations Are Down: Lawyers STILL MUST Verify AI, Guard PII, and Follow ABA Ethics Rules ⚖️🤖,” exploring why a falling hallucination rate is a misleading comfort blanket for lawyers, and how ABA Model Rules on confidentiality, competence, diligence, candor, supervision, and client communication must govern every AI prompt you run. Our Google NotebookLM hosts translate the theory into practical workflows you can implement today—from document grounding and tokenization to vendor due diligence and line‑by‑line verification—so you can leverage AI confidently without sacrificing ethics, privilege, or your professional license.

You will hear how document grounding changes what LLMs actually do, why uploading active case files to cloud AI tools can quietly trigger Rule 1.6 problems, and how cross‑border data flows, vendor training rights, and retention policies can erode privilege if you do not negotiate them carefully. 🔐 We also unpack practical safeguards like tokenization, internal sandbox testing, and bright‑line “danger zones” where AI must never operate unsupervised—especially on open‑ended research, choice of law, and any task that turns statistical text into real‑world legal risk.

Finally, we confront the economic paradox: when AI can compress 100 hours of document review into seconds, but partners must still verify every line to protect their licenses, what exactly are clients paying for—and how does the billable hour survive? 💼

In our conversation, we cover the following:

  • 00:00 – Why “96% fewer hallucinations” is still not good enough in law ⚖️

  • 01:00 – How the remaining 4% error rate can trigger malpractice, sanctions, and ethics violations

  • 02:00 – From IT issue to ethics issue: ABA Model Rules as the real constraint on AI adoption

  • 03:00 – Document grounding 101: turning a free‑floating LLM into a reading‑comprehension engine

  • 04:00 – The hidden danger of “just upload the file”: how Rule 1.6 confidentiality is instantly implicated

  • 05:00 – Cloud AI architecture, cross‑border data transfers, GDPR, and privilege risk 🌐

  • 06:00 – Model training nightmares: when your client’s trade secrets leak back out through someone else’s prompt

  • 07:00 – Negotiating no‑training clauses and ring‑fencing vendor data use (before you upload anything)

  • 08:00 – Tokenization explained: turning John Doe into “Plaintiff 01” without losing legal meaning (see the sketch after this list) 🔐

  • 09:00 – What AI does well today: grounded summarization, clause extraction, and playbook‑based redlines

  • 10:00 – The “danger zone” of tasks: open‑ended research, choice of law, and abstract legal reasoning

  • 11:00 – Phantom case law: how LLMs manufacture perfect‑looking but fake citations (and Rule 3.3 candor)

  • 12:00 – Sandboxing AI tools internally and measuring real‑world failure rates against known outcomes 🧪

  • 13:00 – Building bright‑line firm policies around forbidden AI use cases

  • 14:00 – Verification as a workflow, not a suggestion: what Model Rules 5.1 and 5.3 demand from supervisors

  • 15:00 – The efficiency paradox: when partner‑level verification erases associate‑level time savings ⏱️

  • 16:00 – Making AI verification as routine as a conflict check in your practice

  • 17:00 – Falling hallucination rates, rising risk: why better AI can still make lawyers more vulnerable

  • 18:00 – Client communication under Rule 1.4: when and why clients may be entitled to know you used AI

  • 19:00 – “You can delegate the task, not the liability”: Rule 1.2 and ultimate responsibility for AI‑assisted work

  • 20:00 – Treating every AI prompt and ToS as a potential ethics document

  • 21:00 – The existential question: if AI drafts in seconds, what exactly are clients paying lawyers for? 📝
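As flagged in the 08:00 segment above, here is a minimal sketch of the “Plaintiff 01” tokenization idea: replace identifying names with neutral placeholders before a prompt leaves your systems, and keep the re‑identification map local. The names, placeholder scheme, and function are hypothetical simplifications; production tools handle many more PII categories.

```python
import re

def tokenize_pii(text: str, names: list[str]) -> tuple[str, dict[str, str]]:
    """Swap each listed name for a placeholder; return text plus the key."""
    mapping: dict[str, str] = {}
    for i, name in enumerate(names, start=1):
        placeholder = f"Plaintiff {i:02d}"
        mapping[placeholder] = name
        text = re.sub(re.escape(name), placeholder, text)
    return text, mapping

prompt, key = tokenize_pii(
    "John Doe alleges that Acme Corp breached the 2024 supply agreement.",
    names=["John Doe"],
)
print(prompt)  # "Plaintiff 01 alleges that Acme Corp breached ..."
print(key)     # {'Plaintiff 01': 'John Doe'}; this map never leaves the firm
```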

👉 Tune in now to learn how to stay tech‑forward without becoming the next ethics cautionary tale, and start designing AI policies that actually protect your clients, your firm, and your bar license.

MTC: AI may not be your co‑counsel—and a recent SDNY decision just made that painfully clear. ⚖️🤖

SDNY Heppner Ruling: Public AI Use Breaks Attorney-Client Privilege!

In United States v. Heppner, Judge Jed Rakoff of the Southern District of New York ruled that documents a criminal defendant generated with a publicly accessible AI tool and later sent to his lawyers were not protected by either attorney‑client privilege or the work‑product doctrine. That decision should be a wake‑up call for every lawyer who has ever dropped client facts into a public chatbot.

The court’s analysis followed traditional privilege principles rather than futuristic AI theory. Privilege requires confidential communication between a client and a lawyer made for the purpose of obtaining legal advice. In Heppner, the AI tool was “obviously not an attorney,” and there was no “trusting human relationship” with a licensed professional who owed duties of loyalty and confidentiality. Moreover, the platform’s privacy policy disclosed that user inputs and outputs could be collected and shared with third parties, undermining any reasonable expectation of confidentiality. In short, the defendant’s AI‑generated drafts looked less like protected client notes and more like research entrusted to a third‑party service.

For some time now, I have warned practitioners on The Tech‑Savvy Lawyer.Page not to paste client PII or case‑specific facts into generative AI tools, particularly public models whose terms of use and training practices erode confidentiality. I have consistently framed AI as an extension of a lawyer’s existing ethical duties, not a shortcut around them, and have encouraged readers to treat these systems like any other non‑lawyer vendor that must be vetted, contractually constrained, and configured before use. That perspective aligns squarely with Heppner’s outcome: once you treat a public AI as a casual brainstorming partner, you risk treating your client’s confidences as discoverable data.

A Tech-Savvy Lawyer Avoids AI Privilege Waiver With Confidentiality Safeguards!

For lawyers, this has immediate implications under the ABA Model Rules. Model Rule 1.1 on competence now explicitly includes understanding the “benefits and risks associated” with relevant technology, and recent ABA guidance on generative AI emphasizes that uncritical reliance on these tools can breach the duty of competence. A lawyer who casually uses public AI tools with client facts—without reading the terms of use, configuring privacy, or warning the client—may fail the competence test in both technology and privilege preservation. The Tech‑Savvy Lawyer.Page repeatedly underscores this point, translating dense ethics opinions into practical checklists and workflows so that even lawyers with only moderate tech literacy can implement safer practices.

Model Rule 1.6 on confidentiality is equally implicated. If a lawyer discloses client confidential information to a public AI platform that uses data for training or reserves broad rights to disclose to third parties, that disclosure can be treated like sharing with any non‑necessary third party, risking waiver of privilege. Ethical guidance stresses that lawyers must understand whether an AI provider logs, trains on, or shares client data and must adopt reasonable safeguards before using such tools. That means reading privacy policies, toggling enterprise settings, and, in many cases, avoiding consumer tools altogether for client‑specific prompts.

Does a private, paid AI make a difference? Possibly, but only if it is structured like other trusted legal technology. Enterprise or legal‑industry tools that contractually commit not to train on user data and to maintain strict confidentiality can better support privilege claims, because confidentiality and reasonable expectations are preserved. Tools like Lexis‑style or Westlaw‑style AI offerings, deployed under robust business associate and security agreements, look more like traditional research platforms or litigation support vendors within Model Rules 5.1 and 5.3, which govern supervisory duties over non‑lawyer assistants. The Tech‑Savvy Lawyer.Page has emphasized this distinction, encouraging lawyers to favor vetted, enterprise‑grade solutions over consumer chatbots when client information is involved.

Enterprise AI Vetting Checklist for Lawyers: Contracts, NDA, No Training

The tech‑savvy lawyer in 2026 is not the one who uses the most AI; it is the one who knows when not to use it. Before entering client facts into any generative AI, lawyers should ask: Is this tool configured to protect client confidentiality? Have I satisfied my duties of competence and communication by explaining the risks to my client (Model Rules 1.1 and 1.4)? And if a court reads this platform’s privacy policy the way Judge Rakoff did, will I be able to defend my privilege claims with a straight face to a court or to a disciplinary bar?

AI may be a powerful drafting partner, but it is not your co‑counsel and not your client’s confidant. The tech‑savvy lawyer—of the sort championed by The Tech‑Savvy Lawyer.Page—treats it as a tool: carefully vetted, contractually constrained, and ethically supervised, or not used at all. 🔒🤖

📌 Too Busy to Read This Week’s Editorial: “Lawyers and AI Oversight: What the VA’s Patient Safety Warning Teaches About Ethical Law Firm Technology Use!” ⚖️🤖

Join us for an AI-powered deep dive into the ethical challenges facing legal professionals in the age of generative AI. 🤖 In this episode, we discuss our February 16, 2026, editorial, “Lawyers and AI Oversight: What the VA’s Patient Safety Warning Teaches About Ethical Law Firm Technology Use! ⚖️🤖” and explore why treating AI-generated drafts as hypotheses—not answers—is quickly becoming a survival skill for law firms of every size. We connect a real-world AI failure risk at the Department of Veterans Affairs to the everyday ways lawyers are using tools like chatbots, and we translate ABA Model Rules into practical oversight steps any practitioner can implement without becoming a programmer.

In our conversation, we cover the following:

  • 00:00:00 – Why conversations about the future of law default to Silicon Valley, and why that’s a problem ⚖️

  • 00:01:00 – How a crisis at the U.S. Department of Veterans Affairs became a “mirror” for the legal profession 🩺➡️⚖️

  • 00:03:00 – “Speed without governance”: what the VA Inspector General actually warned about, and why it matters to your practice

  • 00:04:00 – From patient safety risk to client safety and justice risk: the shared AI failure pattern in healthcare and law

  • 00:06:00 – Shadow AI in law firms: staff “just trying out” public chatbots on live matters and the unseen risk this creates

  • 00:07:00 – Why not tracking hallucinations, data leakage, or bias turns risk management into wishful thinking

  • 00:08:00 – Applying existing ABA Model Rules (1.1, 1.6, 5.1, 5.2, and 5.3) directly to AI use in legal practice

  • 00:09:00 – Competence in the age of AI: why “I’m not a tech person” is no longer a safe answer 🧠

  • 00:09:30 – Confidentiality and public chatbots: how you can silently lose privilege by pasting client data into a text box

  • 00:10:30 – Supervision duties: why partners cannot safely claim ignorance of how their teams use AI

  • 00:11:00 – Candor to tribunals: the real ethics problem behind AI-generated fake cases and citations

  • 00:12:00 – From slogan to system: why “meaningful human engagement” must be operationalized, not just admired 

  • 00:12:30 – The key mindset shift: treating AI-assisted drafts as hypotheses, not answers 🧪

  • 00:13:00 – What reasonable human oversight looks like in practice: citations, quotes, and legal conclusions under stress test

  • 00:14:00 – You don’t need to be a computer scientist: the essential due diligence questions every lawyer can ask about AI 

  • 00:15:00 – Risk mapping: distinguishing administrative AI use from “safety-critical” lawyering tasks

  • 00:16:00 – High-stakes matters (freedom, immigration, finances, benefits, licenses) and heightened AI safeguards

  • 00:16:45 – Practical guardrails: access controls, narrow scoping, and periodic quality audits for AI use

  • 00:17:00 – Why governance is not “just for BigLaw” and how solos can implement checklists and simple documentation 📋

  • 00:17:45 – Updating engagement letters and talking to clients about AI use in their matters

  • 00:18:00 – Redefining the “human touch” as the safety mechanism that makes AI ethically usable at all 🤝

  • 00:19:00 – AI as power tool: why lawyers must remain the “captain of the ship” even when AI drafts at lightning speed 🚢

  • 00:20:00 – Rethinking value: if AI creates the first draft, what exactly are clients paying lawyers for?

  • 00:20:30 – Are we ready to bill for judgment, oversight, and safety instead of pure production time?

  • 00:21:00 – Final takeaways: building a practice where human judgment still has the final word over AI

RESOURCES

Mentioned in the episode

Software & Cloud Services mentioned in the conversation

Words of the Week: “ANTHROPIC” VS. “AGENTIC”: UNDERSTANDING THE DISTINCTION IN LEGAL TECHNOLOGY 🔍

Lawyers need to know the difference: Anthropic v. agentic!

The terms "Anthropic" and "agentic" circulate frequently in legal technology discussions. They sound similar. They appear in the same articles. Yet they represent fundamentally different concepts. Understanding the distinction matters deeply for legal practitioners seeking to leverage artificial intelligence effectively.

Anthropic is a company—specifically, an AI safety-focused organization that develops large language models, most notably Claude. Think of Anthropic as a technology provider. The company pioneered "Constitutional AI," a training methodology that embeds explicit principles into AI systems to guide their behavior toward helpfulness, harmlessness, and honesty. When you use Claude for legal research or document drafting, you are using a product built by Anthropic.

Agentic describes a category of AI system architecture and capability—not a company or product. Agentic systems operate autonomously, plan multi-step tasks, make decisions dynamically, and execute workflows with minimal human intervention. An agentic system can break down complex assignments, gather information, refine outputs, and adjust its approach based on changing circumstances. It exercises judgment about which tools to deploy and when to escalate matters to human oversight.

"Constitutional AI" is an ai training methodology promoting helpfulness, harmlessness, and honesty in ai programing

The relationship between these concepts becomes clearer through a practical scenario. Imagine you task an AI system with analyzing merger agreements from a target company. A non-agentic approach requires you to provide explicit instructions for each step: search the database, extract key clauses, compare terms against templates, and prepare a summary. You guide the process throughout. An agentic approach allows you to assign a goal—“Review these contracts, flag risks, and prepare a risk summary”—and the AI system formulates its own research plan, prioritizes which documents to examine first, identifies gaps requiring additional information, and works through the analysis independently, pausing only when human judgment becomes necessary.
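A toy sketch of that agentic loop, with every function, step, and escalation rule invented purely for illustration, might look like this:

```python
# Hypothetical plan-act-review loop; a real agent would call search,
# extraction, and drafting tools instead of returning canned strings.
def plan(goal: str) -> list[str]:
    return [
        "collect contracts from the data room",
        "extract indemnification and change-of-control clauses",
        "compare terms against the firm playbook",
        "draft risk summary",
    ]

def execute(step: str) -> dict:
    needs_review = "draft" in step  # escalate anything client-facing
    return {"step": step, "needs_review": needs_review}

def run_agent(goal: str) -> None:
    for step in plan(goal):
        result = execute(step)
        if result["needs_review"]:
            print(f"PAUSED for human judgment: {result['step']}")
            return
        print(f"done: {result['step']}")

run_agent("Review these contracts, flag risks, and prepare a risk summary")
```

The essential design point is the pause: the system decides the order of work, but a human still signs off before anything client‑facing leaves the loop.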

Anthropic builds AI models capable of agentic behavior. Claude, Anthropic's flagship model, can function as an agentic system when configured appropriately. However, Anthropic's models can also operate in simpler, non-agentic modes. You might use Claude to answer a direct question or draft a memo without any agentic capability coming into play. The capability exists within Anthropic's models, but agentic functionality remains optional depending on your implementation.

They work together as follows: Anthropic provides the underlying AI model and the training methodology emphasizing constitutional principles. That foundation becomes the engine powering agentic systems. The Constitutional AI approach matters specifically for agentic applications because autonomous systems require robust safeguards. As AI systems operate more independently, explicit principles embedded during training help ensure they remain aligned with human values and institutional requirements. Legal professionals cannot simply deploy an autonomous AI agent without trust in its underlying decision-making framework.

Agentic vs. Anthropic: Know the Difference. Shape the Future of Law!

For legal practitioners, the distinction carries practical implications. You evaluate Anthropic as a vendor when selecting which AI provider's tools to adopt. You evaluate agentic architecture when deciding whether your specific use case requires autonomous task execution or whether simpler, more directed AI assistance suffices. Many legal workflows benefit from direct AI support without requiring full autonomy. Others—such as high-volume contract analysis during due diligence—leverage agentic capabilities to move work forward rapidly.

Both elements represent genuine advances in legal technology. Recognizing the difference positions you to make informed decisions about tool adoption and appropriate implementation for your practice. ✅