📖 Word of the Week: “Cross‑Tenant” Learning in Legal Practice

Cross-tenant learning helps law firms improve AI tools without exposing data

If your firm uses cloud‑based tools, you are already living in a multi‑tenant world. In that world, cross‑tenant learning is quickly becoming a key concept that every lawyer and legal operations professional should understand. 🧠⚖️

In simple terms, a “tenant” is your firm’s logically separate space inside a cloud platform: your own users, matters, documents, and settings, isolated from everyone else’s. Cross‑tenant learning refers to techniques in which a vendor’s system learns from patterns across multiple tenants (for example, many law firms) to improve its features—such as search, drafting suggestions, or document classification—without exposing any other firm’s confidential data to you or yours to them.

Why cross‑tenant learning matters for law firms

Cross‑tenant learning is especially relevant as generative AI and machine‑learning tools become embedded in e‑discovery platforms, contract review tools, legal research systems, and practice‑management software. Vendors may use aggregated and anonymized usage data to:

  • Improve relevance of search results and recommendations.

  • Enhance clause and issue spotting in contracts and briefs.

  • Reduce false positives in e‑discovery or compliance alerts.

  • Optimize workflows based on how similar firms use the product.

For lawyers, the value proposition is straightforward: your tools can become “smarter” faster, based on lessons learned across many organizations, not just your own firm’s experience. Done properly, cross‑tenant learning can raise the baseline quality and efficiency of technology available to your practice. ⚙️📈
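For readers who want a concrete picture of what "aggregated and anonymized" can mean in practice, here is a minimal, purely illustrative Python sketch of the kind of pseudonymization and k-anonymity-style thresholding a vendor might apply before any cross-tenant signal reaches a shared model. Every name, threshold, and function here is an invented assumption for illustration, not any vendor's actual pipeline:

```python
import hashlib
from collections import defaultdict

# Assumed policy: an aggregate is shared only if at least this many
# distinct tenants contribute to it (a k-anonymity-style threshold).
K_ANONYMITY_THRESHOLD = 3

def pseudonymize(tenant_id: str, salt: str = "vendor-secret-salt") -> str:
    """Replace a tenant identifier with a one-way hash so shared
    aggregates cannot be traced back to a specific firm."""
    return hashlib.sha256((salt + tenant_id).encode()).hexdigest()[:12]

def aggregate_clause_flags(events):
    """events: (tenant_id, clause_type) usage-log entries -- note no
    document contents, only categorical signals. Returns clause types
    flagged by enough distinct tenants to be shared with the model."""
    tenants_per_clause = defaultdict(set)
    for tenant_id, clause_type in events:
        tenants_per_clause[clause_type].add(pseudonymize(tenant_id))
    return {
        clause: len(tenants)
        for clause, tenants in tenants_per_clause.items()
        if len(tenants) >= K_ANONYMITY_THRESHOLD
    }

events = [
    ("firm-a", "unlimited-indemnity"), ("firm-b", "unlimited-indemnity"),
    ("firm-c", "unlimited-indemnity"), ("firm-a", "auto-renewal"),
    ("firm-b", "auto-renewal"),  # only 2 tenants: suppressed, not shared
]
print(aggregate_clause_flags(events))  # {'unlimited-indemnity': 3}
```

The design point to notice is that the shared signal is a count of how many firms flagged a clause type, never which firm or which document; signals seen by too few tenants are dropped entirely because low counts can themselves identify a firm. Real systems layer far more on top (contractual limits, encryption, tenant isolation), but this is the shape of the question to put to your vendor.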

ABA Model Rules: Confidentiality and Competence

Any discussion of cross‑tenant learning for law firms must start with confidentiality and competence.

  • Model Rule 1.6 (Confidentiality of Information) requires lawyers to safeguard information relating to the representation of a client. That obligation extends to how your vendors collect, store, and use your data. You must understand whether and how client data may be used for cross‑tenant learning and ensure that any such use preserves confidentiality through anonymization, aggregation, and strong technical and contractual controls. 🔐

  • Model Rule 1.1 (Competence), including Comment 8, emphasizes that lawyers should keep abreast of the benefits and risks associated with relevant technology. Understanding cross‑tenant learning is now part of that duty. You do not need to become a data scientist, but you should be comfortable asking vendors precise questions and recognizing red flags.

  • Model Rule 5.3 (Responsibilities Regarding Nonlawyer Assistance) applies when you rely on vendors as nonlawyer assistants. You must make reasonable efforts to ensure that their conduct is compatible with your professional obligations, including how they use your data for cross‑tenant learning. 🧾

Key questions to ask your vendors

ABA Model Rules guide ethical use of cross-tenant learning technologies

When evaluating a product that relies on cross‑tenant learning, consider asking:

  1. What data is used?

    • Is it only metadata or usage logs, or are actual document contents included?

    • Is the data aggregated and anonymized before it is used to train shared models?

  2. How is confidentiality protected?

    • Can other tenants ever see prompts, documents, or client‑identifying information from our firm?

    • What technical measures (encryption, access controls, tenant isolation) are in place?

  3. Can cross‑tenant learning be limited or disabled?

    • Do we have opt‑out or configuration controls?

    • Is there a dedicated model or environment for our firm if needed?

  4. What do the contract and policies say?

    • Does the MSA or DPA clearly limit use of client data to defined purposes?

    • How long is data retained, and how is it deleted if we leave?

These questions are not merely IT concerns; they go directly to your obligations under the ABA Model Rules and your firm’s risk profile.

Practical examples in law practice

Consider a cloud‑based contract‑analysis platform used by hundreds of firms. Over time, the provider can see which clauses lawyers routinely flag as risky, which edits are typically made, and what becomes the “preferred” language for certain issues. Through cross‑tenant learning, the system can use that aggregated knowledge to highlight problematic clauses and suggest alternatives more accurately for everyone.

Another example is an e‑discovery platform that uses cross‑tenant learning to distinguish between truly relevant documents and common “noise” such as automatically generated emails. The more matters the system processes across different tenants, the better it gets at ranking documents and reducing review burdens. This can be a material efficiency gain for litigation teams. ⚖️💼

In both scenarios, your ethical comfort depends on whether underlying data is appropriately anonymized, compartmentalized, and contractually protected.

Governance steps for your firm

To align cross‑tenant learning with professional obligations, firms can:

  • Update vendor‑due‑diligence checklists to include explicit questions about cross‑tenant learning, training data use, and model isolation.

  • Involve a cross‑functional team—lawyers, IT, information security, and risk management—in vendor selection and review.

  • Document your analysis of vendor practices and how they satisfy confidentiality, competence, and supervision obligations under the ABA Model Rules.

  • Educate lawyers and staff about how AI‑enabled tools work, what kinds of data they send into the system, and how to avoid unnecessary exposure of client‑identifying details.

Takeaway for busy practitioners

Smart vendor questions reduce risk in cross-tenant legal technology adoption

You do not need to reject cross‑tenant learning to protect your clients. Instead, you should approach it as a powerful capability that demands informed oversight. When well‑implemented, cross‑tenant learning can help your firm deliver faster, more consistent, and more cost‑effective legal services, while still honoring confidentiality and ethical duties. When poorly explained or loosely governed, it becomes an unnecessary and avoidable risk.

Understanding how your tools learn—and from whom—is now part of competent, modern legal practice. ⚖️💡

📢 Your Tech-Savvy Lawyer Blogger and Podcaster, Michael D.J. Eisenberg, Announces His Upcoming Talk on Ethical AI Use in Legal Practice at the 2026 AI Legal Practice Summit!

Saturday, April 18, 2026 | Capital University Law School

As technology continues to transform legal practice, I’m honored to announce that I’ll be speaking at the 2026 AI Legal Practice Summit, hosted by my alma mater, Capital University Law School, in Columbus, Ohio. This event brings together attorneys, educators, and technologists to explore how artificial intelligence is reshaping the legal field — not just operationally, but ethically and professionally as well.

My presentation, “Smart Practice, Smarter Ethics: Navigating AI Tools Under the ABA Model Rules,” focuses on a topic that’s both timely and critically important: how lawyers can use emerging AI technologies responsibly while meeting their professional obligations under the ABA Model Rules of Professional Conduct.

👉 Learn more and view the full schedule at law-capital.libguides.com/2026_AI_Legal_Practice_Summit.
🎟️ Register today through Eventbrite: eventbrite.com/e/ai-legal-practice-summit-tickets-1986544900273.

Through my work on The Tech-Savvy Lawyer.Page blog and podcast, I’ve had countless conversations with practitioners who want to use AI to streamline tasks such as research, document drafting, and client management — yet remain uncertain about compliance, bias, and confidentiality. Law practice is evolving rapidly, but our ethical foundations must remain strong.

In my session, I’ll walk through key aspects of how the ABA Model Rules, including Rules 1.1 (Competence), 1.6 (Confidentiality of Information), and 5.3 (Responsibilities Regarding Nonlawyer Assistance), apply in an age of intelligent automation. These rules guide us in assessing not just what technology can do, but how and when it should be used.

Your faculty!

We’ll discuss:

  • Reviewing the tech stack you already own;

  • How to vet and implement AI-powered tools while maintaining confidentiality;

  • Questions to ask vendors about data handling and bias;

  • How to document best practices for firm-wide ethical compliance;

  • Ways to blend human legal judgment with algorithmic assistance; and

  • Managing client expectations about AI-enabled legal work.

My goal is to help attorneys approach technology with confidence — to experiment, adopt, and adapt responsibly. Being a “tech‑savvy lawyer” isn’t about mastering every gadget or platform; it’s about understanding how technology fits within the ethical framework of our profession.

The conversation around technological competence has matured since Comment 8 to Rule 1.1 was introduced. It’s no longer optional. Attorneys must understand the benefits, risks, and limitations of relevant technology to provide competent representation. Artificial intelligence highlights that reality better than any emerging tool before it.

Whether you’re a solo practitioner looking to automate administrative tasks, working for a government agency, or part of a large firm implementing AI-assisted legal research or document review, I’ll share specific practices you can adopt immediately.

If you’re attending and seeking Ohio CLE credit, please contact Jenny Wondracek at jwondracek@law.capital.edu for details.

Program description of my presentation.

The 2026 AI Legal Practice Summit will feature leading scholars, ethics experts, and seasoned practitioners. I’m looking forward to exchanging ideas, testing assumptions, and continuing a dialogue that helps ensure AI becomes a responsible partner—never a replacement—in the practice of law.

Let’s move forward together, with competence, curiosity, and care.

Learn more about the Summit at law-capital.libguides.com/2026_AI_Legal_Practice_Summit.
Register today: eventbrite.com/e/ai-legal-practice-summit-tickets-1986544900273.

I look forward to seeing you there! ⚖️

MTC: Are Lawyers Really Ready for a Wallet‑Free Future? Digital Wallets, ABA Ethics, and the Reality of Going Fully Cashless 💳⚖️

Tech-savvy lawyers should not leave their physical wallets at home, but you can probably pare them down some.

When previous podcast guest David Sparks over at MacSparky shared his recent post about accidentally going out without his physical wallet—and still making it through the day just fine on his iPhone and Apple Wallet—it captured a quiet shift many of us in the legal profession are grappling with. He walked into his appointment armed only with a digital ID, digital insurance card, and Apple Pay, and everything worked. For a growing number of professionals, that is the new normal. The question for lawyers is more specific: not can we go wallet‑free, but should we—ethically, practically, and professionally—given our obligations under the ABA Model Rules?

Digital wallets are no longer niche tools reserved for tech enthusiasts. Apple Wallet and similar platforms have matured into robust ecosystems that can store payment cards, IDs, insurance cards, transit passes, and even car keys. They sit at the intersection of convenience, security, and risk. As attorneys, we have to examine that intersection with greater rigor than the average consumer, because our technology choices are framed by duties of competence, confidentiality, and client service.

The promise of a wallet‑free practice

On paper, the case for a full digital wallet is compelling. Digital payments can reduce friction at the courthouse café, client lunches, and bar events. Digital IDs eliminate worries about misplacing a physical card. Many platforms add layers of biometric security that traditional wallets can’t match. David notes that Apple Wallet has “been quietly getting better for years,” allowing storage of physical card numbers behind Face ID and making peer‑to‑peer payments a tap away. For a solo or small‑firm lawyer, that friction reduction compounds over time into real efficiency.

From a malpractice‑avoidance standpoint, a digital wallet can be safer than a billfold. Losing a traditional wallet means scrambling to cancel credit cards, monitoring for identity theft, and possibly dealing with unauthorized use of your bar ID or access cards. A lost phone, by contrast, can be located, remotely wiped, or locked with strong authentication. Properly configured, it can reduce risk rather than increase it.

This is where ABA Model Rule 1.1 on competence, particularly Comment 8, becomes relevant. The Comment notes that competent representation includes understanding “the benefits and risks associated with relevant technology.” A digital wallet is very much “relevant technology” for a modern practitioner. Choosing not to understand or use it, especially when it offers better security and traceability than analog methods, may itself become a competence question as the bar’s expectations evolve.

The gaps: cash, IDs, and access to justice

There are plenty of reasons not to go “cashless” when leaving home or the office.

Still, David’s hesitation—“there’s a part of me that still feels compelled to carry a small wallet with my driver’s license in it”—should resonate with lawyers. There are pockets of our professional lives where the ecosystem is not ready, and those pockets matter.

First, cash. Many lawyers still tip courthouse staff, parking attendants, baristas near the courthouse, and others in cash—including, in my case, with $2 bills (yes, they are still produced and accepted, and can be obtained at many banks across the U.S., at least as of this posting; I almost always get an excited smile when I tip my barista with a $2 bill). Cash remains the lowest‑friction, most universally accepted “protocol” for small‑scale human interactions. Refusing to carry any cash at all can put you in awkward social and professional situations, especially in older courthouses or local establishments that either do not take cards or resent micro‑transactions by card. For those committed to cash tipping as a personal or professional habit, a purely digital wallet is not yet a substitute.

Second, physical IDs. While TSA and some states are piloting and accepting digital IDs, acceptance is not universal, and the rules are in flux. David notes he has a state digital ID that “shows up nicely” in Apple Wallet. That is great—until you encounter an agency, judge, clerk, or officer who simply will not accept it. Not all jurisdictions recognize mobile driver’s licenses or digital IDs, and some procedures (e.g., certain filings or in‑person notarizations) still presume a physical, inspectable card. The risk is not hypothetical: show up with the wrong form of ID for a flight or a court security checkpoint, and you may face delay, additional fees, or outright denial of entry.

FROM TSA WEBSITE - “If you are unable to provide the required acceptable ID, such as a passport or REAL ID, you can pay a $45 fee to use TSA ConfirmID. TSA will then attempt to verify your identity so you can go through security; however, there is no guarantee TSA can do so.”

✈️ 🌎 ‼️


For lawyers, this is not just an inconvenience—it is a competence and diligence issue under Model Rules 1.1 and 1.3. If your failure to carry an accepted ID means you miss a hearing, delay a filing, or cannot visit a client, you have a professional problem, not just a tech annoyance. Likewise, local court rules and security policies may require a specific bar card or government‑issued ID to enter restricted areas. A digital ID on your phone will not help if the sheriff’s deputy at the door has not been trained or authorized to accept it.

Third, connectivity. A digital wallet that is fully dependent on live internet access is a fragile tool in old courthouses with thick stone walls, in rural jurisdictions, or during emergencies. Many modern digital wallets do allow offline transactions at NFC terminals using stored tokens, but not all. If your payment method, ID, or membership pass depends on a cloud verification step and you are in a dead zone—or your battery dies—you effectively have no wallet. Lawyers who rely on public transit, rideshares, or mobile office setups need to consider this in contingency planning, particularly when punctuality is essential.

Digital wallets and legal ethics

From an ethics perspective, digital wallets intersect with several core duties.

Under Model Rule 1.6, protecting client confidentiality extends to how you pay for and manage client‑related expenses. If you are using peer‑to‑peer payment apps or storing client‑related account details in a digital wallet, you must understand their privacy and data‑sharing practices. Some services expose transaction histories, social feeds, or metadata that could inadvertently reveal client relationships or matter details. Configuring strict privacy settings and separating personal from firm accounts is not optional; it is part of your duty of confidentiality.

Model Rule 1.15 on safekeeping property also comes into play if you ever use digital tools to handle client funds, reimbursements, or settlement distributions. While most bars still require traditional trust accounts and closely regulate payment processors, the trend toward digital payments will continue. Using any digital payment or wallet solution around client funds requires careful vetting, written policies, and—ideally—consultation with your malpractice carrier and bar ethics guidance.

Finally, Model Rule 5.3 on responsibilities regarding nonlawyer assistance extends to IT providers and wallet platforms. If your firm relies on third‑party providers to manage mobile device management (MDM), security, or payment integrations, you must make reasonable efforts to ensure their conduct aligns with your professional obligations. Managing digital wallets on firm‑owned or BYOD devices should be governed by a clear policy that addresses encryption, remote wipe, lock‑screen settings, and acceptable use.

Practical guidance: a hybrid, not a cliff

As advanced as our digital wallets are, the legal professional should carry a combination of digital and physical identification, means of payment, and cash!

Given these realities, are we “truly there” yet for lawyers to go fully wallet‑free? Not quite. For most practitioners, the prudent path is a hybrid approach:

  • Carry a slim physical wallet with a government‑issued ID, bar card (if used locally), a minimal backup payment card, and a small amount of cash for tipping and edge cases.

  • Use a digital wallet as your primary payment and convenience layer, especially in environments where it is well‑supported and secure.

  • Confirm, in advance, what IDs your courthouse, correctional facilities, and agencies accept, and do not assume your digital ID will suffice.

  • Harden your digital wallet: enable strong biometrics, ensure a reputable MDM or security solution manages any firm devices, and separate personal from professional payment flows where possible.

This hybrid approach aligns with Model Rule 1.1’s requirement to understand and responsibly adopt relevant technology while honoring the practical demands of courtroom work and client service. It allows you to benefit from the security and efficiency of digital wallets without betting your professional obligations on the most fragile parts of the ecosystem: universal acceptance and ubiquitous connectivity.

David ends his reflection by asking whether he will ever “truly go out knowingly wallet‑free” and whether he is alone in his hesitation. Lawyers should feel no pressure to be first in line to abandon physical wallets entirely. Our job is to advocate, counsel, and appear—on time, properly identified, and fully prepared. That may mean, for the foreseeable future, living comfortably in both worlds: with a well‑tuned digital wallet in your hand and a minimal, carefully curated physical wallet in your pocket.

MTC

⭐ First Five-Star Amazon Review for “The Lawyer’s Guide to Podcasting” – Why Tech-Savvy Lawyers Should Care About ABA Ethics, Client Trust, and Smart Marketing 🎙️⚖️

“The Lawyer’s Guide to Podcasting” by your favorite blogger/podcaster just earned its first five-star Amazon review, and it’s a milestone worth your attention. 🎉📘 The reviewer highlights what many of us in legal tech have been saying: podcasting is no longer a fringe hobby; it is a strategic, ethics-aware marketing channel for modern law practice. 🎙️

For lawyers with limited to moderate tech skills, this book demystifies microphones, workflows, and publishing tools without assuming you want to become an engineer. Instead, it walks you through practical steps to share your expertise in a format today’s clients already trust—long-form, authentic audio. 🔊

From a professional responsibility perspective, the guidance aligns with ABA Model Rule 1.1 on technology competence and Model Rule 1.6 on confidentiality by emphasizing the use of secure platforms, thoughtful content planning, and careful handling of client-identifying details. The book reinforces that podcasting can showcase your substantive knowledge while staying within the guardrails of Model Rule 7.1, avoiding misleading claims about your services. ⚖️

QR Code for Amazon book link

The first five-star review underlines two themes: listeners want real conversations, and they quickly recognize when a lawyer respects both the audience’s time and the profession’s ethical duties. That is exactly the posture this book encourages—credible, compliant, and client-centered. 🌟

If you are ready to build authority, differentiate your practice, and satisfy your tech-competence obligations without drowning in jargon, now is the perfect time to get your copy of “The Lawyer’s Guide to Podcasting” on Amazon and start planning your first ethically sound episode. 🚀

📌 Too Busy to Read This Week’s Editorial: “Lawyers and AI Oversight: What the VA’s Patient Safety Warning Teaches About Ethical Law Firm Technology Use!” ⚖️🤖

Join us for an AI-powered deep dive into the ethical challenges facing legal professionals in the age of generative AI. 🤖 In this episode, we discuss our February 16, 2026, editorial, “Lawyers and AI Oversight: What the VA’s Patient Safety Warning Teaches About Ethical Law Firm Technology Use! ⚖️🤖” and explore why treating AI-generated drafts as hypotheses—not answers—is quickly becoming a survival skill for law firms of every size. We connect a real-world AI failure risk at the Department of Veterans Affairs to the everyday ways lawyers are using tools like chatbots, and we translate ABA Model Rules into practical oversight steps any practitioner can implement without becoming a programmer.

In our conversation, we cover the following:

  • 00:00:00 – Why conversations about the future of law default to Silicon Valley, and why that’s a problem ⚖️

  • 00:01:00 – How a crisis at the U.S. Department of Veterans Affairs became a “mirror” for the legal profession 🩺➡️⚖️

  • 00:03:00 – “Speed without governance”: what the VA Inspector General actually warned about, and why it matters to your practice

  • 00:04:00 – From patient safety risk to client safety and justice risk: the shared AI failure pattern in healthcare and law

  • 00:06:00 – Shadow AI in law firms: staff “just trying out” public chatbots on live matters and the unseen risk this creates

  • 00:07:00 – Why not tracking hallucinations, data leakage, or bias turns risk management into wishful thinking

  • 00:08:00 – Applying existing ABA Model Rules (1.1, 1.6, 5.1, 5.2, and 5.3) directly to AI use in legal practice

  • 00:09:00 – Competence in the age of AI: why “I’m not a tech person” is no longer a safe answer 🧠

  • 00:09:30 – Confidentiality and public chatbots: how you can silently lose privilege by pasting client data into a text box

  • 00:10:30 – Supervision duties: why partners cannot safely claim ignorance of how their teams use AI

  • 00:11:00 – Candor to tribunals: the real ethics problem behind AI-generated fake cases and citations

  • 00:12:00 – From slogan to system: why “meaningful human engagement” must be operationalized, not just admired 

  • 00:12:30 – The key mindset shift: treating AI-assisted drafts as hypotheses, not answers 🧪

  • 00:13:00 – What reasonable human oversight looks like in practice: citations, quotes, and legal conclusions under stress test

  • 00:14:00 – You don’t need to be a computer scientist: the essential due diligence questions every lawyer can ask about AI 

  • 00:15:00 – Risk mapping: distinguishing administrative AI use from “safety-critical” lawyering tasks

  • 00:16:00 – High-stakes matters (freedom, immigration, finances, benefits, licenses) and heightened AI safeguards

  • 00:16:45 – Practical guardrails: access controls, narrow scoping, and periodic quality audits for AI use

  • 00:17:00 – Why governance is not “just for BigLaw” and how solos can implement checklists and simple documentation 📋

  • 00:17:45 – Updating engagement letters and talking to clients about AI use in their matters

  • 00:18:00 – Redefining the “human touch” as the safety mechanism that makes AI ethically usable at all 🤝

  • 00:19:00 – AI as power tool: why lawyers must remain the “captain of the ship” even when AI drafts at lightning speed 🚢

  • 00:20:00 – Rethinking value: if AI creates the first draft, what exactly are clients paying lawyers for?

  • 00:20:30 – Are we ready to bill for judgment, oversight, and safety instead of pure production time?

  • 00:21:00 – Final takeaways: building a practice where human judgment still has the final word over AI


🎙️ My Law School Library Adds The Lawyer’s Guide to Podcasting to Empower Ethical, Tech-Savvy Attorneys ⚖️

https://law-capital.libguides.com/SpecialCollections/NewBooks

I’m thrilled to share that my alma mater, Capital University Law School, has added my book, The Lawyer’s Guide to Podcasting, to its Law Library Special Collections. 🎉📚 Seeing this guide on the same shelves where I learned to think like a lawyer underscores how central ethical technology use has become to modern advocacy. 🎙️ Written for attorneys with limited to moderate tech skills, it walks readers through planning, recording, and promoting a law‑firm podcast while honoring ABA Model Rules on technology competence, confidentiality, and attorney advertising, helping you communicate confidently, credibly, and compliantly. ⚖️🚀

You can pick up your copy on Amazon today!

MTC: Lawyers and AI Oversight: What the VA’s Patient Safety Warning Teaches About Ethical Law Firm Technology Use! ⚖️🤖

Human-in-the-loop is the point: Effective oversight happens where AI meets care—aligning clinical judgment, privacy, and compliance with real-world workflows.

The Department of Veterans Affairs’ experience with generative AI is not a distant government problem; it is a mirror held up to every law firm experimenting with AI tools for drafting, research, and client communication. I recently listened to Terry Gerton of the Federal News Network interview Charyl Mason, Inspector General of the Department of Veterans Affairs, in “VA rolled out new AI tools quickly, but without a system to catch mistakes, patient safety is on the line,” and came away with insights into what lawyers can learn from this perhaps hastily implemented AI program. VA clinicians are using AI chatbots to document visits and support clinical decisions, yet a federal watchdog has warned that there is no formal mechanism to identify, track, or resolve AI‑related risks—a “potential patient safety risk” created by speed without governance. In law, that same pattern translates into “potential client safety and justice risk,” because the core failure is identical: deploying powerful systems without a structured way to catch and correct their mistakes.

The oversight gap at the VA is striking. There is no standardized process for reporting AI‑related concerns, no feedback loop to detect patterns, and no clearly assigned responsibility for coordinating safety responses across the organization. Clinicians may have helpful tools, but the institution lacks the governance architecture that turns “helpful” into “reliably safe.” When law firms license AI research platforms, enable generative tools in email and document systems, or encourage staff to “try out” chatbots on live matters without written policies, risk registers, or escalation paths, they recreate that same governance vacuum. If no one measures hallucinations, data leakage, or embedded bias in outputs, risk management has given way to wishful thinking.

Existing ethics rules already tell us why that is unacceptable. Under ABA Model Rule 1.1, competence now includes understanding the capabilities and limitations of AI tools used in practice, or associating with someone who does. Model Rule 1.6 requires lawyers to critically evaluate what client information is fed into self‑learning systems and whether informed consent is required, particularly when providers reuse inputs for training. Model Rules 5.1, 5.2, and 5.3 extend these obligations across partners, supervising lawyers, and non‑lawyer staff: if a supervised lawyer or paraprofessional relies on AI in a way that undermines client protection, firm leadership cannot plausibly claim ignorance. And rules on candor to tribunals make clear that “the AI drafted it” is never a defense to filing inaccurate or fictitious authority.

Explaining the algorithm to decision-makers: Oversight means making AI risks understandable to judges, boards, and the public—clearly and credibly.

What the VA story adds is a vivid reminder that effective AI oversight is a system, not a slogan. The inspector general emphasized that AI can be “a helpful tool” only if it is paired with meaningful human engagement: defined review processes, clear routes for reporting concerns, and institutional learning from near misses. For law practice, that points directly toward structured workflows. AI‑assisted drafts should be treated as hypotheses, not answers. Reasonable human oversight includes verifying citations, checking quotations against original sources, stress‑testing legal conclusions, and documenting that review—especially in high‑stakes matters involving liberty, benefits, regulatory exposure, or professional discipline.

For lawyers with limited to moderate tech skills, this should not be discouraging; done correctly, AI governance actually makes technology more approachable. You do not need to understand model weights or training architectures to ask practical questions: What data does this tool see? When has it been wrong in the past? Who is responsible for catching those errors before they reach a client, a court, or an opposing party? Thoughtful prompts, standardized checklists for reviewing AI output, and clear sign‑off requirements are all well within reach of every practitioner.

The VA’s experience also highlights the importance of mapping AI uses and classifying their risk. In health care, certain AI use cases are obviously safety‑critical; in law, the parallel category includes anything that could affect a person’s freedom, immigration status, financial security, public benefits, or professional license. Those use cases merit heightened safeguards: tighter access control, narrower scoping of AI tasks, periodic sampling of outputs for quality, and specific training for the lawyers who use them. Importantly, this is not a “big‑law only” discipline. Solo and small‑firm lawyers can implement proportionate governance with simple written policies, matter‑level notes showing how AI was used, and explicit conversations with clients where appropriate.

Critically, AI does not dilute core professional responsibility. If a generative system inserts fictitious cases into a brief or subtly mischaracterizes a statute, the duty of candor and competence still rests squarely on the attorney who signs the work product. The VA continues to hold clinicians responsible for patient care decisions, even when AI is used as a support tool; the law should be no different. That reality should inform how lawyers describe AI use in engagement letters, how they supervise junior lawyers and staff, and how they respond when AI‑related concerns arise. In some situations, meeting ethical duties may require forthright client communication, corrective filings, and revisions to internal policies.

AI oversight starts at the desk: Lawyers must be able to interrogate model outputs, data quality, and risk signals before technology impacts client outcomes.

The practical lesson from the VA’s AI warning is straightforward. The “human touch” in legal technology is not a nostalgic ideal; it is the safety mechanism that makes AI ethically usable at all. Lawyers who embrace AI while investing in governance—policies, training, and oversight calibrated to risk—will be best positioned to align with the ABA’s evolving guidance, satisfy courts and regulators, and preserve hard‑earned client trust. Those who treat AI as a magic upgrade and skip the hard work of oversight are, knowingly or not, accepting that their clients may become the test cases that reveal where the system fails. In a profession grounded in judgment, the real innovation is not adopting AI; it is designing a practice where human judgment still has the final word.

MTC

Word of the Week: Deepfakes: How Lawyers Can Spot Fake Digital Evidence and Avoid ABA Model Rule Violations ⚖️

A tech-savvy lawyer needs to be able to spot deepfakes before they become courtroom ethics violations!

“Deepfakes” are AI‑generated or heavily manipulated audio, video, or images that convincingly depict people saying or doing things that never happened. 🧠 They are moving from internet novelty to everyday litigation risk, especially as parties try to slip fabricated “evidence” into the record. 📹

Recent cases and commentary show courts will not treat deepfakes as harmless tech problems. Judges have dismissed actions outright and imposed severe sanctions when parties submit AI‑generated or altered media, because such evidence attacks the integrity of the judicial process itself.⚖️ At the same time, courts are wary of lawyers who cry “deepfake” without real support, since baseless challenges can look like gamesmanship rather than genuine concern about authenticity.

For practicing lawyers, deepfakes are first and foremost a professional responsibility issue. ABA Model Rule 1.1 (Competence) now clearly includes a duty to understand the benefits and risks of relevant technology, which includes generative AI tools that create or detect deepfakes. You do not need to be an engineer, but you should recognize common red flags, know when to request native files or metadata, and understand when to bring in a qualified forensic expert.

Deepfakes in Litigation: Detect Fake Evidence, Protect Your License!

Deepfakes also implicate Model Rule 3.3 (Candor to the tribunal) and Model Rule 3.4 (Fairness to opposing party and counsel). If you knowingly offer manipulated media, or ignore obvious signs of fabrication in your client’s “evidence,” you risk presenting false material to the court and obstructing access to truthful proof. Courts have made clear that submitting fake digital evidence can justify terminating sanctions, fee shifting, and referrals for disciplinary action.

Model Rule 8.4(c), which prohibits conduct involving dishonesty, fraud, deceit, or misrepresentation, sits in the background of every deepfake decision. A lawyer who helps create, weaponize, or strategically “look away” from deepfake evidence is not just making a discovery mistake; they may be engaging in professional misconduct. Likewise, a lawyer who recklessly accuses an opponent of using deepfakes without factual grounding risks violating duties of candor and professionalism.

Practically, you can start protecting your clients with a few repeatable steps. Ask early in the case what digital media exists, how it was created, and who controlled the devices or accounts. 🔍 Build authentication into your discovery plan, including requests for original files, device logs, and platform records that can help confirm provenance. When the stakes justify it, consult a forensic expert rather than relying on “gut feel” about whether a recording “looks real.”
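For lawyers (or litigation support staff) comfortable running a short script, one piece of the provenance workflow above can be sketched in plain Python using only the standard library: computing a cryptographic hash of a media file so you can later confirm that the copy you reviewed is byte‑for‑byte identical to the one produced in discovery. This is a minimal illustration, not a forensic tool; the function name and sample file are hypothetical, and a hash match proves integrity of the copy, not authenticity of the recording itself.

```python
import hashlib
import os
import tempfile
from datetime import datetime, timezone

def media_fingerprint(path: str) -> dict:
    """Compute a SHA-256 hash and basic filesystem facts for a media file.

    The hash lets you verify later that a file is unchanged since you
    received it; the timestamp is a starting point for provenance
    questions, not proof of when the media was actually created.
    """
    sha256 = hashlib.sha256()
    with open(path, "rb") as f:
        # Read in chunks so large video files don't exhaust memory.
        for chunk in iter(lambda: f.read(65536), b""):
            sha256.update(chunk)
    stat = os.stat(path)
    return {
        "sha256": sha256.hexdigest(),
        "size_bytes": stat.st_size,
        "modified_utc": datetime.fromtimestamp(
            stat.st_mtime, tz=timezone.utc
        ).isoformat(),
    }

if __name__ == "__main__":
    # Demonstration with a throwaway temp file standing in for
    # a (hypothetical) video produced in discovery.
    with tempfile.NamedTemporaryFile(delete=False, suffix=".mp4") as tmp:
        tmp.write(b"example media bytes")
        sample = tmp.name
    print(media_fingerprint(sample))
    os.unlink(sample)
```

Recording the hash in your matter file at the time of production gives you a simple, documented baseline if authenticity is later challenged; anything beyond that (EXIF analysis, device logs, deepfake detection) belongs with a qualified forensic expert.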

Lawyers need to know about deepfakes, metadata, and ABA ethics rules!

Finally, talk to clients about deepfakes before they become a problem. Explain that altering media or using AI to “clean up” evidence is dangerous, even if they believe they are only fixing quality. 📲 Remind them that courts are increasingly sophisticated about AI and that discovery misconduct in this area can destroy otherwise strong cases. Treat deepfakes as another routine topic in your litigation checklist, alongside spoliation and privilege, and you will be better prepared for the next “too good to be true” video that lands in your inbox.

ANNOUNCEMENT: My Book, “The Lawyer’s Guide to Podcasting,” is Amazon #1 New Release (Law Office Technology)

I’m excited to report that The Lawyer’s Guide to Podcasting ranked #1 as a New Release in Amazon’s Law Office Technology category for the week of February 07, 2026, and sales have already doubled since last month. 🎙️📈

For lawyers with limited-to-moderate tech skills, the book focuses on practical, repeatable workflows for launching and sustaining a compliant podcast presence. ⚖️💡

As you plan content, remember ABA Model Rule 1.1 (technology competence) and the related duties of confidentiality (Rule 1.6) and communications about services (Rule 7.1): use secure tools, avoid accidental client disclosures, and ensure marketing statements are accurate. 🔐✅

Get your copy today! 📘🚀