📖 Word of the Week: “Cross‑Tenant” Learning in Legal Practice

Cross-tenant learning helps law firms improve AI tools without exposing data

If your firm uses cloud‑based tools, you are already living in a multi‑tenant world. In that world, cross‑tenant learning is quickly becoming a key concept that every lawyer and legal operations professional should understand. 🧠⚖️

In simple terms, a “tenant” is your firm’s logically separate space inside a cloud platform: your own users, matters, documents, and settings, isolated from everyone else’s. Cross‑tenant learning refers to techniques in which a vendor’s system learns from patterns across multiple tenants (for example, many law firms) to improve its features—such as search, drafting suggestions, or document classification—without exposing any other firm’s confidential data to you or yours to them.
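To make the idea concrete, here is a deliberately simplified, hypothetical sketch (not any vendor's actual implementation) of how usage signals from several tenants might be stripped of identifiers and aggregated before any shared model sees them. The tenant names, event fields, and the three-tenant threshold are all illustrative assumptions:

```python
from collections import Counter

# Hypothetical illustration only -- real vendors apply far more rigorous
# anonymization, aggregation thresholds, and contractual controls.

def aggregate(events: list[dict], min_tenants: int = 3) -> dict:
    """Count how often each (feature, label) signal occurs across tenants,
    but publish a signal only if it was observed at AT LEAST `min_tenants`
    distinct tenants; tenant identity is used for the threshold check and
    then dropped from the output entirely."""
    tenants_per_signal: dict[tuple, set] = {}
    counts: Counter = Counter()
    for e in events:
        key = (e["feature"], e["label"])  # no tenant, user, matter, or text
        tenants_per_signal.setdefault(key, set()).add(e["tenant"])
        counts[key] += 1
    return {k: counts[k] for k in counts
            if len(tenants_per_signal[k]) >= min_tenants}

events = [
    {"tenant": "firm_a", "feature": "clause_flagged", "label": "indemnity"},
    {"tenant": "firm_b", "feature": "clause_flagged", "label": "indemnity"},
    {"tenant": "firm_c", "feature": "clause_flagged", "label": "indemnity"},
    {"tenant": "firm_a", "feature": "clause_flagged", "label": "venue"},
]

# Only the indemnity signal crosses the three-tenant floor; the single-firm
# "venue" signal is suppressed, so no one firm's pattern leaks through.
print(aggregate(events))
```

The point of the sketch is the threshold: a well-governed system should refuse to learn from any pattern that could be traced back to a single firm.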

Why cross‑tenant learning matters for law firms

Cross‑tenant learning is especially relevant as generative AI and machine‑learning tools become embedded in e‑discovery platforms, contract review tools, legal research systems, and practice‑management software. Vendors may use aggregated and anonymized usage data to:

  • Improve relevance of search results and recommendations.

  • Enhance clause and issue spotting in contracts and briefs.

  • Reduce false positives in e‑discovery or compliance alerts.

  • Optimize workflows based on how similar firms use the product.

For lawyers, the value proposition is straightforward: your tools can become “smarter” faster, based on lessons learned across many organizations, not just your own firm’s experience. Done properly, cross‑tenant learning can raise the baseline quality and efficiency of technology available to your practice. ⚙️📈

ABA Model Rules: Confidentiality and Competence

Any discussion of cross‑tenant learning for law firms must start with confidentiality and competence.

  • Model Rule 1.6 (Confidentiality of Information) requires lawyers to safeguard information relating to the representation of a client. That obligation extends to how your vendors collect, store, and use your data. You must understand whether and how client data may be used for cross‑tenant learning and ensure that any such use preserves confidentiality through anonymization, aggregation, and strong technical and contractual controls. 🔐

  • Model Rule 1.1 (Competence), including Comment 8, emphasizes that lawyers should keep abreast of the benefits and risks associated with relevant technology. Understanding cross‑tenant learning is now part of that duty. You do not need to become a data scientist, but you should be comfortable asking vendors precise questions and recognizing red flags.

  • Model Rule 5.3 (Responsibilities Regarding Nonlawyer Assistance) applies when you rely on vendors as nonlawyer assistants. You must make reasonable efforts to ensure that their conduct is compatible with your professional obligations, including how they use your data for cross‑tenant learning. 🧾

Key questions to ask your vendors

ABA Model Rules guide ethical use of cross-tenant learning technologies

When evaluating a product that relies on cross‑tenant learning, consider asking:

  1. What data is used?

    • Is it only metadata or usage logs, or are actual document contents included?

    • Is the data aggregated and anonymized before it is used to train shared models?

  2. How is confidentiality protected?

    • Can other tenants ever see prompts, documents, or client‑identifying information from our firm?

    • What technical measures (encryption, access controls, tenant isolation) are in place?

  3. Can cross‑tenant learning be limited or disabled?

    • Do we have opt‑out or configuration controls?

    • Is there a dedicated model or environment for our firm if needed?

  4. What do the contract and policies say?

    • Does the master services agreement (MSA) or data processing agreement (DPA) clearly limit use of client data to defined purposes?

    • How long is data retained, and how is it deleted if we leave?

These questions are not merely IT concerns; they go directly to your obligations under the ABA Model Rules and your firm’s risk profile.

Practical examples in law practice

Consider a cloud‑based contract‑analysis platform used by hundreds of firms. Over time, the provider can see which clauses lawyers routinely flag as risky, which edits are typically made, and what becomes the “preferred” language for certain issues. Through cross‑tenant learning, the system can use that aggregated knowledge to highlight problematic clauses and suggest alternatives more accurately for everyone.

Another example is an e‑discovery platform that uses cross‑tenant learning to distinguish between truly relevant documents and common “noise” such as automatically generated emails. The more matters the system processes across different tenants, the better it gets at ranking documents and reducing review burdens. This can be a material efficiency gain for litigation teams. ⚖️💼

In both scenarios, your ethical comfort depends on whether underlying data is appropriately anonymized, compartmentalized, and contractually protected.

Governance steps for your firm

To align cross‑tenant learning with professional obligations, firms can:

  • Update vendor‑due‑diligence checklists to include explicit questions about cross‑tenant learning, training data use, and model isolation.

  • Involve a cross‑functional team—lawyers, IT, information security, and risk management—in vendor selection and review.

  • Document your analysis of vendor practices and how they satisfy confidentiality, competence, and supervision obligations under the ABA Model Rules.

  • Educate lawyers and staff about how AI‑enabled tools work, what kinds of data they send into the system, and how to avoid unnecessary exposure of client‑identifying details.

Takeaway for busy practitioners

Smart vendor questions reduce risk in cross-tenant legal technology adoption

You do not need to reject cross‑tenant learning to protect your clients. Instead, you should approach it as a powerful capability that demands informed oversight. When well‑implemented, cross‑tenant learning can help your firm deliver faster, more consistent, and more cost‑effective legal services, while still honoring confidentiality and ethical duties. When poorly explained or loosely governed, it becomes an unnecessary and avoidable risk.

Understanding how your tools learn—and from whom—is now part of competent, modern legal practice. ⚖️💡

BOLO: Gone (Almost) Phishin’: What a Sophisticated Apple Scam Teaches Lawyers About Cybersecurity, Client Confidentiality, and ABA Ethical Duties 🚨📱

Lawyers face real cybersecurity risks from a sophisticated Apple phishing scam!

A recent real‑world phishing attempt against a well‑known technology CEO offers an important warning for lawyers and law firms about how modern scams now convincingly mimic “legitimate” security workflows. This attack did not rely on laughable grammar, obvious fake domains, or clumsy social engineering; instead, it weaponized Apple’s genuine password‑reset system, real support case IDs, and realistic phone support to try to compromise the victim’s Apple ID. For lawyers who increasingly rely on mobile devices, cloud services, and multi‑factor authentication for client communications, this kind of scam is not hypothetical—it's a direct threat to client confidentiality and professional responsibility.

In the incident, the victim’s Apple Watch, iPhone, and Mac all began displaying unexpected prompts to reset the Apple ID password, despite the user running Apple’s Lockdown Mode on all devices. The prompts were not generated by malware on the devices, but by an attacker repeatedly triggering Apple’s legitimate password reset flow, thereby flooding the user with authentic-looking notifications. From the perspective of a busy lawyer, such prompts might be dismissed as an annoyance or, worse, acted upon in haste. Either reaction, without careful verification, can create risk. 📲

The scam escalated when the attacker called, posing as “Alexander from Apple Support,” referencing a real Apple support case that they had opened themselves by impersonating the victim. Because Apple’s own systems generated a valid case ID and corresponding emails, the communications appeared fully authentic; no spam filter or “phishing awareness” toolbar would have flagged them as suspicious. The caller began with correct, even prudent, security advice—check your account, verify nothing has changed, consider updating your password—which is precisely the kind of guidance many lawyers expect from legitimate support channels. This blend of real security language with a fraudulent goal is what makes the scam so dangerous. 🧠

Phishing Lessons for Lawyers Using Apple Devices and Cloud Tools!

The critical moment came when “Alexander” sent a text with a link to “audit-apple.com,” a pixel‑perfect imitation of Apple’s site that displayed the real case ID and even a fake transcript of the attackers’ prior “chat” with Apple. At the bottom of the page sat a “Sign in with Apple” button, intended to harvest the victim’s credentials under the guise of closing a fraudulent request. Only after poking at the site and noticing that any case ID produced the same result did the victim confirm it was a scam and confront the attacker. Many lawyers, particularly those with only moderate comfort with technology, might not test the site this way and could be persuaded by the case ID and realistic presentation. 🕵️‍♂️

For legal professionals, the ethical implications are significant. ABA Model Rule 1.1 on competence requires lawyers to understand the benefits and risks associated with relevant technology, including the ability to recognize and respond to sophisticated phishing. The duty of confidentiality under Rule 1.6 requires taking reasonable steps to prevent unauthorized access to client information, which includes protecting accounts and devices that store or access client files, email, and messaging. If a lawyer’s Apple ID or similar account is compromised, attackers may gain access to privileged communications, document repositories, calendar entries, and even secure messaging apps that sync via the device.

Model Rule 5.3 extends these obligations to nonlawyer assistants, including staff and outside vendors who may handle client data or access firm systems. If partners and associates are vulnerable to such scams, staff and contractors are as well; firm leadership must implement policies, training, and incident‑response procedures that recognize the new generation of phishing where everything “looks right” until you inspect the URL or underlying flow. This aligns with recognized best practices: anti‑phishing training, simulated phishing exercises, and clear escalation paths for suspicious security communications.

Key practical lessons for lawyers from this incident include:

  • Do not approve unexpected password‑reset prompts; instead, go directly to your device or account settings via a known‑good path (e.g., Settings → Apple ID on your device).

  • Treat unsolicited “support” calls with extreme skepticism, even when they reference real case IDs or recent activity; major vendors like Apple will not call you out of the blue to fix a security issue.

  • Always verify the URL before entering credentials; for Apple, support should live on apple.com or getsupport.apple.com, not look‑alike domains.

  • Establish a firm‑wide rule: no one—IT, vendors, or support—will ever ask for passwords, one‑time codes, or sign‑in via a link sent in an unsolicited message; any such request must be verified through a separate, trusted channel.
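For the technically inclined, the "verify the URL" habit above can even be roughed out in code. The sketch below is illustrative only: the allow-list of Apple domains comes from this article's discussion, not from any official Apple source, and real-world protection also requires handling tricks like homoglyph and punycode domains. The key idea is matching on dot boundaries, which catches look-alikes such as "audit-apple.com":

```python
from urllib.parse import urlparse

# Illustrative allow-list; confirm official support domains with the vendor.
TRUSTED_DOMAINS = {"apple.com", "getsupport.apple.com"}

def host_is_trusted(url: str, trusted: set = frozenset(TRUSTED_DOMAINS)) -> bool:
    """Return True only if the URL's host IS a trusted domain or a true
    subdomain of one (suffix match on a dot boundary, not a substring
    match -- "audit-apple.com" must NOT pass just because it contains
    "apple.com")."""
    host = (urlparse(url).hostname or "").lower()
    return any(host == d or host.endswith("." + d) for d in trusted)

print(host_is_trusted("https://getsupport.apple.com/case/123"))  # True
print(host_is_trusted("https://audit-apple.com/case/123"))       # False
print(host_is_trusted("https://apple.com.evil.example/login"))   # False
```

Note how the last example fails: a scammer can put "apple.com" at the *front* of a hostname, which is why the check anchors the trusted domain at the end of the host.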

Apple Scam Warning for Lawyers Protecting Client Confidentiality

From an ethical‑risk perspective, a successful attack of this kind could trigger duties to notify clients, insurers, and regulators, depending on your jurisdiction’s breach‑notification regime and professional‑conduct rules. Even an “almost‑breach,” like the one described in this article, is a valuable opportunity for firms to revisit incident‑response plans, document what would happen if a lawyer’s Apple ID or smartphone were compromised, and rehearse the steps for containing damage. Doing so not only supports compliance with Model Rules 1.1 and 1.6 but also demonstrates to clients and courts that the firm takes cybersecurity governance seriously. ✅

The story also underscores that even highly technical users can be momentarily convinced by a well‑crafted scam, which should encourage humility rather than embarrassment among lawyers who worry they are “not technical enough.” The correct response is not shame, but systems: layered security controls, clear verification procedures, and regular training that turn individual vigilance into institutional resilience. Ultimately, as phishing attacks become more sophisticated and exploit real security workflows, lawyers must elevate their cybersecurity awareness to meet their ethical obligations and preserve the trust at the core of the attorney‑client relationship. 💼

📰 ABA TECHSHOW 2026 Recap: From AI Hype to LLM Reality, Google Workspace, and Ethical Lawyering in the Age of Bots ⚖️🤖

The Real Story Behind ABA TECHSHOW 2026

ABA TECHSHOW is the conference to attend to keep your pulse on the technology lawyers should be using every day!

Walking into ABA TECHSHOW 2026 this year, I wasn’t thinking about shiny gadgets; I was thinking about competence, client service, and what it will mean to practice law in an era dominated not just by “AI,” but by large language models (LLMs) quietly shaping almost everything we see and share online. During my work on The Tech-Savvy Lawyer.Page blog and podcast, I keep running into the same pattern: lawyers know they should understand legal technology, yet they worry they’ll break something, breach a rule, or look foolish in front of their staff. TECHSHOW 2026 aimed directly at that anxiety — but this year, the conversation needs to go beyond what AI and generative AI can do and toward how LLMs and search bots are already shaping our professional identities online and offline. ⚖️💻

Keynotes: The “AI Dividend” and Your Time

The keynote lineup captured the tension between promise and risk. Legal market analysts highlighted what some called the “AI Dividend”: when machines take over routine drafting and research, lawyers gain time to think, advise, and advocate at a higher level. The real question — one I’ve been hammering on The Tech-Savvy Lawyer.Page for years — is what you will do with the time technology gives back (some of that time should include reviewing your work, e.g., your case citations). Tech-savvy speakers pushed attendees to look past vendor hype and focus on the broader digital environment, where consumer-facing tools, search engines, and recommendation algorithms are setting new expectations for speed, transparency, and availability.

Practical AI in the Sessions

Inside the conference rooms, the “Taming the Machines” and related AI tracks addressed baseline concerns (some with hands-on workshops) and focused on realistic use cases: assisted drafting, pattern spotting in discovery, and summarizing voluminous documents. These sessions were built for lawyers who live in Word, Outlook, Google Workspace, and practice management systems and who simply want to stop retyping the same paragraphs. The faculty hammered home a critical point: generative AI is an assistant, not a decision-maker; you remain the lawyer, responsible for accuracy, judgment, and ethics under the ABA Model Rules. 🤖📄

Google Workspace, Microsoft 365, and Using What You Already Own

Mathew Krebis’ session on Google Workspace drove that message home in very practical terms. He showed how many firms are only scratching the surface of tools they already pay for: shared Drives with well-structured permissions, real-time collaboration in Google Docs, Gmail automation for intake and follow-up, and Google Calendar combined with Tasks to keep matter timelines under control. When you layer in emerging AI features in Workspace — smart replies, document summaries, suggested outlines — you see how even modest use of these tools can dramatically reduce friction in daily practice, and the tools Mathew discussed are not limited to “law practice management” systems.

The takeaway was powerful: before you chase a new platform, fully exploit the ecosystem you already have. For many firms, “being more tech-savvy” starts with properly configuring their Google Workspace, Microsoft 365, or other SaaS platform, rather than buying yet another service.

Podcasting, Social Media, and LLM-Driven Visibility

Meanwhile, another equally important frontier — one that still feels underexplored — is what happens when LLMs and search bots become the primary lens through which clients, colleagues, and even opposing counsel discover you. That’s where my panel, 🎧 Podcasting for Lawyers: The Truth Behind the Mic, came in.

Ruby L. Powers, Gyi Tsakalakis, Stephanie Everett, and I discussed podcasting and social media not just as marketing channels, but as structured signals fed into LLM-driven engines that are constantly indexing, ranking, and inferring who is an authority on a given topic. Whether you talk about appellate practice, family law, or even a hobby outside the law, your content becomes training data for Generative Engine Optimization/LLM bots that decide which voices surface first when someone types a question into an AI chatbox. 🎙️🌐

In other words, your digital footprint is no longer static. It is being interpreted, reassembled, and presented as answers — often without you ever seeing the intermediate steps. That reality raises a new layer of ethical questions under the ABA Model Rules. Model Rule 7.1’s prohibition on false or misleading communications about the lawyer or the lawyer’s services takes on a new twist when LLMs remix snippets of your posts, podcasts, Google Workspace–hosted client alerts, and blog articles into composite “advice.”

You might be scrupulously accurate in your content, but if an LLM mischaracterizes it or presents it out of context, what then? TECHSHOW 2026 addressed traditional risks like hallucinated case citations, but there is room for a deeper, explicit conversation about how LLM-driven discovery intersects with advertising, communication, and competence duties.

EXPO Hall: Tools, Timekeeping, and Vendor Reality Checks

The EXPO Hall, as always, served as a laboratory of possibilities. Practice management platforms, billing tools, document automation, and a wave of AI-enhanced products competed for attention. Timekeeping tools that automatically capture activity across devices and applications and then propose draft time entries have grown dramatically since last year. For lawyers still reconstructing their days from memory and sticky notes, this is more than a marginal upgrade; it directly affects revenue, work-life balance, and accuracy.

But fair warning: make sure vendors are showing you what their product can do today, not what they hope it will do someday. In the LLM era, marketing decks are often several steps ahead of deployed reality. 🧾⏱️

Remember, you have an obligation under Model Rule 1.1 (competence) and Model Rule 5.3 (responsibilities regarding non-lawyer assistance) to understand the capabilities and limitations of any tech you “delegate” work to. Asking hard questions about current functionality, data handling, and audit trails is not being difficult; it is part of your duty of care.

Cybersecurity, Confidentiality, and LLM Risk

Networking opportunities like the “Taste of TECHSHOW” are a great way to talk with and learn from other lawyers about using tech in the practice of law.

The sessions on cybersecurity and confidentiality continued to do vital work. Under Model Rule 1.6, our obligation to protect client information extends to cloud storage, email, video conferencing, and the mobile devices we casually use in airport lounges. The “Guardians of the Data” track walked through practical checklists rather than abstract fearmongering: password managers, multi-factor authentication, properly configured backups, and vendor due diligence.

For firms running on Google Workspace, that translated into concrete steps: enforcing two-step verification, tightening Drive sharing settings, using client-specific shared Drives instead of ad hoc personal folders, and monitoring admin logs for suspicious access. The move from generic “AI” to LLM-powered services on any platform increases data risk, because many tools rely on ingesting your content — sometimes including client information — to improve their models. If you don’t understand where your data is going and how it is used, you cannot credibly say you are meeting confidentiality obligations. 🔐☁️

Competence, Human-in-the-Loop, and Everyday Workflows

As noted above, Model Rule 1.1 (competence) and Model Rule 5.3 (responsibilities regarding non-lawyer assistance) require you to understand the capabilities and limitations of any tech you “delegate” work to, and to keep asking hard questions about current functionality, data handling, and audit trails.

Balancing this skepticism, though, is an equally important truth: becoming proficient with AI and LLM-based tools is not a spectator sport. You cannot satisfy your duty of technological competence from the sidelines. You have to use the tools first on a small scale, then progressively in more critical workflows, always with appropriate supervision and verification.

That might mean piloting an AI drafting feature in Google Docs and Microsoft Word for internal templates, or testing structured intake forms and automations inside Google Workspace or Microsoft 365 before rolling them out firm-wide. Ignoring AI because it feels uncomfortable is no longer the safer option. In some practices, failing to integrate it intelligently — while peers and opposing counsel do — may itself raise competence concerns as expectations evolve in courts and among clients. 🧩📈

Saturday Sessions: From “Use AI” to “Use AI Responsibly”

On Saturday, the 9 a.m. conversation among ABA President Michelle A. Behnke, Immediate Past President William R. “Bill” Bay, and President-Elect Barbara J. Howard underscored how all of this ties into the rule of law and access to justice, framing AI as something lawyers now have a responsibility to actually use, not simply watch from the sidelines. The 10 a.m. session with Judge Timothy S. Driscoll then shifted the focus from “use AI or be left behind” to “use AI responsibly,” making it clear that judges, too, are integrating AI into their work and that they are not immune from mistakes when they rely on it.

The message for everyone in the courtroom ecosystem was simple and blunt: “Review, review, and review” any work touched by AI, because AI is a non‑infallible tool that does make errors and can mislead the unwary. Together, these sessions acknowledged the growing digital divide: lawyers and clients who can’t or won’t adopt technology risk falling out of the mainstream of legal services, while those who adopt it recklessly risk eroding confidence in both their own work and the justice system as a whole.

We are not merely debating convenience; we are deciding who gets effective representation and who is left out because the lawyer they might have hired never appeared in their LLM‑driven search results — or appeared with AI‑boosted visibility but poor ethical judgment. Technology, in this sense, is not optional; it is one of the few levers we have to expand meaningful access to legal help, provided we wield it with intent, humility, and rigorous human review. ⚖️🧠

LLM Literacy: The Next Core Competency

That balance — between caution and experimentation — is where TECHSHOW 2026 both excelled and showed its next frontier. Many sessions made AI approachable, breaking down concepts for lawyers with limited to moderate tech skills and providing concrete workflows they could apply on Monday. What I would like to see more explicitly next year is programming that treats LLM literacy as a core competency: understanding how LLMs are built, how they index and surface information, how your content feeds into them, and how that affects everything from client intake to reputation, whether you are working in Microsoft 365, Google Workspace, or a specialized legal platform.

From my vantage point as a legal tech ambassador at The Tech-Savvy Lawyer, the most successful sessions respected that many lawyers are highly capable professionals who simply haven’t had the time or guidance to modernize their workflows. They don’t need to become prompt engineers. They need guardrails, roadmaps, and clear examples of how to align AI, LLM tools, and mainstream platforms like Microsoft 365 and Google Workspace with the ABA Model Rules and local bar guidance. When faculty focused on incremental steps — tightening cybersecurity configurations, adding a layer of AI-assisted drafting under strict human review, building a consistent content strategy that LLMs can reliably recognize — the room leaned in.

A Tough-Love Takeaway for Lawyers

If you are a lawyer who still feels behind, here’s the core message I took away from TECHSHOW 2026, with a bit of tough love: you don’t need to chase every new tool, but you can’t afford to ignore LLM-driven AI and the platforms you already live in, like Microsoft 365 and Google Workspace, any longer. Understand the basics; pilot one or two well-vetted tools to start improving your efficiency without sacrificing the need for a true human-in-the-loop.

SEE YOU IN CHICAGO FOR ABA TECHSHOW 2027!!!

Read your jurisdiction’s ethics opinions on AI and technology. Build habits that protect client data by default. Use your own content — whether blog posts, newsletters, or podcasts — to train the bots to see you as a trusted authority rather than a digital afterthought. Ultimately, your bar license may be at more risk from not engaging with AI than from engaging with it carefully and intelligently.

The future of legal practice will not wait until we are all comfortable; it is here now, embedded in the search boxes, recommendation engines, and tools your clients already use. TECHSHOW 2026 made that clear. The next move is yours. 🚀⚖️


MTC: Are Lawyers Really Ready for a Wallet‑Free Future? Digital Wallets, ABA Ethics, and the Reality of Going Fully Cashless 💳⚖️

Tech-savvy lawyers should not leave their physical wallets at home, but they can probably pare them down some.

When previous podcast guest David Sparks over at MacSparky shared his recent post about accidentally going out without his physical wallet—and still making it through the day just fine on his iPhone and Apple Wallet—it captured a quiet shift many of us in the legal profession are grappling with. He walked into his appointment armed only with a digital ID, digital insurance card, and Apple Pay, and everything worked. For a growing number of professionals, that is the new normal. The question for lawyers is more specific: not can we go wallet‑free, but should we—ethically, practically, and professionally—given our obligations under the ABA Model Rules?

Digital wallets are no longer niche tools reserved for tech enthusiasts. Apple Wallet and similar platforms have matured into robust ecosystems that can store payment cards, IDs, insurance cards, transit passes, and even car keys. They sit at the intersection of convenience, security, and risk. As attorneys, we have to examine that intersection with greater rigor than the average consumer, because our technology choices are framed by duties of competence, confidentiality, and client service.

The promise of a wallet‑free practice

On paper, the case for a full digital wallet is compelling. Digital payments can reduce friction at the courthouse café, client lunches, and bar events. Digital IDs eliminate worries about misplacing a physical card. Many platforms add layers of biometric security that traditional wallets can’t match. David notes that Apple Wallet has “been quietly getting better for years,” allowing storage of physical card numbers behind Face ID and making peer‑to‑peer payments a tap away. For a solo or small‑firm lawyer, that friction reduction compounds over time into real efficiency.

From a malpractice‑avoidance standpoint, a digital wallet can be safer than a billfold. Losing a traditional wallet means scrambling to cancel credit cards, monitoring for identity theft, and possibly dealing with unauthorized use of your bar ID or access cards. A lost phone, by contrast, can be located, remotely wiped, or locked with strong authentication. Properly configured, it can reduce risk rather than increase it.

This is where ABA Model Rule 1.1 on competence, particularly Comment 8, becomes relevant. The Comment notes that competent representation includes understanding “the benefits and risks associated with relevant technology.” A digital wallet is very much “relevant technology” for a modern practitioner. Choosing not to understand or use it, especially when it offers better security and traceability than analog methods, may itself become a competence question as the bar’s expectations evolve.

The gaps: cash, IDs, and access to justice

There are plenty of reasons not to go “cashless” when leaving home or the office.

Still, David’s hesitation—“there’s a part of me that still feels compelled to carry a small wallet with my driver’s license in it”—should resonate with lawyers. There are pockets of our professional lives where the ecosystem is not ready, and those pockets matter.

First, cash. Many lawyers still tip courthouse staff, parking attendants, baristas near the courthouse, and others in cash—including, in my case, with $2 bills (yes, they are still produced, still accepted, and available at many banks across the U.S., at least as of this posting; I almost always get an excited smile when I tip my barista with a $2 bill). Cash remains the lowest‑friction, most universally accepted “protocol” for small-scale human interactions. Refusing to carry any cash at all can put you in awkward social and professional situations, especially in older courthouses or local establishments that either do not take cards or resent micro‑transactions by card. For those committed to cash tipping as a personal or professional habit, a purely digital wallet is not yet a substitute.

Second, physical IDs. While TSA and some states are piloting and accepting digital IDs, acceptance is not universal, and the rules are in flux. David notes he has a state digital ID that “shows up nicely” in Apple Wallet. That is great—until you encounter an agency, judge, clerk, or officer who simply will not accept it. Not all jurisdictions recognize mobile driver’s licenses or digital IDs, and some procedures (e.g., certain filings or in‑person notarizations) still presume a physical, inspectable card. The risk is not hypothetical: show up with the wrong form of ID for a flight or a court security checkpoint, and you may face delay, additional fees, or outright denial of entry.

FROM TSA WEBSITE - “If you are unable to provide the required acceptable ID, such as a passport or REAL ID, you can pay a $45 fee to use TSA ConfirmID. TSA will then attempt to verify your identity so you can go through security; however, there is no guarantee TSA can do so.”

✈️ 🌎 ‼️


For lawyers, this is not just an inconvenience—it is a competence and diligence issue under Model Rules 1.1 and 1.3. If your failure to carry an accepted ID means you miss a hearing, delay a filing, or cannot visit a client, you have a professional problem, not just a tech annoyance. Likewise, local court rules and security policies may require a specific bar card or government‑issued ID to enter restricted areas. A digital ID on your phone will not help if the sheriff’s deputy at the door has not been trained or authorized to accept it.

Third, connectivity. A digital wallet that is fully dependent on live internet access is a fragile tool in old courthouses with thick stone walls, in rural jurisdictions, or during emergencies. Many modern digital wallets do allow offline transactions at NFC terminals using stored tokens, but not all. If your payment method, ID, or membership pass depends on a cloud verification step and you are in a dead zone—or your battery dies—you effectively have no wallet. Lawyers who rely on public transit, rideshares, or mobile office setups need to consider this in contingency planning, particularly when punctuality is essential.

Digital wallets and legal ethics

From an ethics perspective, digital wallets intersect with several core duties.

Under Model Rule 1.6, protecting client confidentiality extends to how you pay for and manage client‑related expenses. If you are using peer‑to‑peer payment apps or storing client‑related account details in a digital wallet, you must understand their privacy and data‑sharing practices. Some services expose transaction histories, social feeds, or metadata that could inadvertently reveal client relationships or matter details. Configuring strict privacy settings and separating personal from firm accounts is not optional; it is part of your duty of confidentiality.

Model Rule 1.15 on safekeeping property also comes into play if you ever use digital tools to handle client funds, reimbursements, or settlement distributions. While most bars still require traditional trust accounts and closely regulate payment processors, the trend toward digital payments will continue. Using any digital payment or wallet solution around client funds requires careful vetting, written policies, and—ideally—consultation with your malpractice carrier and bar ethics guidance.

Finally, Model Rule 5.3 on responsibilities regarding nonlawyer assistance extends to IT providers and wallet platforms. If your firm relies on third‑party providers to manage mobile device management (MDM), security, or payment integrations, you must make reasonable efforts to ensure their conduct aligns with your professional obligations. Managing digital wallets on firm‑owned or BYOD devices should be governed by a clear policy that addresses encryption, remote wipe, lock‑screen settings, and acceptable use.

Practical guidance: a hybrid, not a cliff

As advanced as our digital wallets have become, legal professionals should still carry a combination of digital and physical identification, payment methods, and cash!

Given these realities, are we “truly there” yet for lawyers to go fully wallet‑free? Not quite. For most practitioners, the prudent path is a hybrid approach:

  • Carry a slim physical wallet with a government‑issued ID, bar card (if used locally), a minimal backup payment card, and a small amount of cash for tipping and edge cases.

  • Use a digital wallet as your primary payment and convenience layer, especially in environments where it is well‑supported and secure.

  • Confirm, in advance, what IDs your courthouse, correctional facilities, and agencies accept, and do not assume your digital ID will suffice.

  • Harden your digital wallet: enable strong biometrics, ensure a reputable MDM or security solution manages any firm devices, and separate personal from professional payment flows where possible.

This hybrid approach aligns with Model Rule 1.1’s requirement to understand and responsibly adopt relevant technology while honoring the practical demands of courtroom work and client service. It allows you to benefit from the security and efficiency of digital wallets without betting your professional obligations on the most fragile parts of the ecosystem: universal acceptance and ubiquitous connectivity.

David ends his reflection by asking whether he will ever “truly go out knowingly wallet‑free” and whether he is alone in his hesitation. Lawyers should feel no pressure to be first in line to abandon physical wallets entirely. Our job is to advocate, counsel, and appear—on time, properly identified, and fully prepared. That may mean, for the foreseeable future, living comfortably in both worlds: with a well‑tuned digital wallet in your hand and a minimal, carefully curated physical wallet in your pocket.

MTC

MTC: Even Though AI Hallucinations Are Down: Lawyers STILL MUST Verify AI, Guard PII, and Follow ABA Ethics Rules ⚖️🤖

A Tech-Savvy Lawyer MUST REVIEW AI-Generated Legal Documents

AI hallucinations are reportedly down across many domains. Still, previous podcast guest Dorna Moini is right to warn that legal remains the unnerving exception—and that is where our professional duties truly begin, not end. Her article, “AI hallucinations are down 96%. Legal is the exception,” helpfully shifts the conversation from “AI is bad at law” to “lawyers must change how they use AI,” yet from the perspective of ethics and risk management, we need to push her three recommendations much further. This is not only a product‑design problem; it is a competence, confidentiality, and candor problem under the ABA Model Rules. ⚖️🤖

Her first point—“give AI your actual documents”—is directionally sound. When we anchor AI in contracts, playbooks, and internal standards, we move from free‑floating prediction to something closer to reading comprehension, and hallucinations usually fall. That is a genuine improvement, and Moini is right to emphasize it. But as soon as we start uploading real matter files, we are squarely inside Model Rule 1.6 territory: confidential information, privileged communications, trade secrets, and dense pockets of personally identifiable information. The article treats document‑grounding primarily as an accuracy-and-reliability upgrade, but lawyers and the legal profession must insist that it is first and foremost a data‑governance decision.

Before a single contract is uploaded, a lawyer must know where that data is stored, who can access it, how long it is retained, whether it is used to train shared models, and whether any cross‑border transfers could complicate privilege or regulatory compliance. That analysis should involve not just IT, but also risk management and, in many cases, outside vendors. “Give AI your actual documents” is safe only if your chosen platform offers strict access controls, clear no‑training guarantees, encryption in transit and at rest, and, ideally, firm‑controlled or on‑premise storage. Otherwise, you may be trading a marginal reduction in hallucinations for a major confidentiality incident or regulatory investigation. In other words, feeding AI your documents can be a smart move, but only after you read the terms, negotiate the data protection, and strip or tokenize unnecessary PII. 🔐

Lawyers need to monitor the AI data-security and PII-compliance policies of the AI platforms they use in their legal work.

Moini’s second point—“know which tasks your tool handles reliably”—is also excellent as far as it goes. Document‑grounded summarization, clause extraction, and playbook‑based redlines are indeed safer than open‑ended legal research, and she correctly notes that open‑ended research still demands heavy human verification. Reliability, however, cannot be left to vendor assurances, product marketing, or a single eye‑opening demo. For purposes of Model Rule 1.1 (competence) and 1.3 (diligence), the relevant question is not “Does this tool look impressive?” but “Have we independently tested it, in our own environment, on tasks that reflect our real matters?”

The caveat is that reliability has to be measured, not assumed. Firms should sandbox these tools on closed matters, compare AI outputs with known correct answers, and have experienced lawyers systematically review where the system fails. Certain categories of work—final cites in court filings, complex choice‑of‑law questions, nuanced procedural traps—should remain categorically off‑limits to unsupervised AI, because a hallucinated case there is not just an internal mistake; it can rise to misrepresentation to the court under Model Rule 3.3. Knowing what your tool does well is only half of the equation; you must also draw bright, documented lines around what it may never do without human review. 🧪

Her third point—“build verification into the workflow”—is where the article most clearly aligns with emerging ethics guidance from courts and bars, and it deserves strong validation. Judges are already sanctioning lawyers who submit AI‑fabricated authorities, and bar regulators are openly signaling that “the AI did it” will not excuse a lack of diligence. Verification, though, cannot remain an informal suggestion reserved for conscientious partners. It has to become a systematic, auditable process that satisfies the supervisory expectations in Model Rules 5.1 and 5.3.

That means written policies, checklists, training sessions, and oversight. Associates and staff should receive simple, non‑negotiable rules:

✅ Every citation generated with AI must be independently confirmed in a trusted legal research system;

✅ Every quoted passage must be checked against the original source; 

✅ Every factual assertion must be tied back to the record.

Supervising attorneys must periodically spot‑check AI‑assisted work for compliance with those rules. Moini is right that verification matters; the editorial extension is that verification must be embedded into the culture and procedures of the firm. It should be as routine as a conflict check.

Stepping back from her three‑point framework, the broader thesis—that legal hallucinations can be tamed by better tooling and smarter usage—is persuasive, but incomplete. Even as hallucination rates fall, our exposure is rising because more lawyers are quietly experimenting with AI on live matters. Model Rule 1.4 on communication reminds us that, in some contexts, clients may be entitled to know when significant aspects of their work product are generated or heavily assisted by AI, especially when it impacts cost, speed, or risk. Model Rule 1.2 on scope of representation looms in the background as we redesign workflows: shifting routine drafting to machines does not narrow the lawyer’s ultimate responsibility for the outcome.

Attorneys must verify AI-generated case law

For practitioners with limited to moderate technology skills, the practical takeaway should be both empowering and sobering. Moini’s article offers a pragmatic starting structure—ground AI in your documents, match tasks to tools, and verify diligently. But you must layer ABA‑informed safeguards on top: treat every AI term of service as a potential ethics document; never drop client names, medical histories, addresses, Social Security numbers, or other PII into systems whose data‑handling you do not fully understand; and assume that regulators may someday scrutinize how your firm uses AI. Every AI‑assisted output must be reviewed line by line.

Legal AI is no longer optional, and neither are ethics and PII protection. The right stance is both appreciative and skeptical: appreciative of Moini’s clear, practitioner‑friendly guidance, and skeptical enough to insist that we overlay her three points with robust, documented safeguards rooted in the ABA Model Rules. Use AI, ground it in your documents, and choose tasks wisely—but do so as a lawyer first and a technologist second. Above all, review your work, stay relentlessly wary of the terms that govern your tools, and treat PII and client confidences as if a bar investigator were reading over your shoulder. In this era, one might be. ⚖️🤖🔐

MTC

TSL Labs 🧪 Initiative: Attorney-Client Privilege vs. Public AI: The Hoeppner Decision Lawyers Need to Understand in 2026 ⚖️🤖

Join us for an AI-powered deep dive into the ethical challenges facing legal professionals in the age of generative AI. 🤖 We unpack the February 23, 2026, editorial, “AI may not be your co‑counsel—and a recent SDNY decision just made that painfully clear. ⚖️🤖” Our Google NotebookLM hosts break down why a single click on a public AI tool’s Terms of Use can trigger a privilege waiver, and what “tech competence” really means in 2026—especially after United States v. Hoeppner and Judge Jed Rakoff’s wake-up-call analysis of confidentiality and third-party disclosure risk.

🔗 Read the full editorial on The Tech-Savvy Lawyer.Page and share this episode with a colleague who is experimenting with AI in client matters.

In our conversation, we cover the following:

  • 00:00 — The “superhuman assistant” promise, and the procedural nightmare risk. 🧠⚖️

  • 00:01 — The core warning: AI use can “blow a hole” in privilege.

  • 00:02 — Editorial overview: “The AI Privilege Trap” by Michael D.J. Eisenberg.

  • 00:02 — The case: United States v. Hoeppner (SDNY) and why it matters.

  • 00:03 — Why Judge Jed Rakoff’s opinion gets attention (tech-literate, influential).

  • 00:03 — The facts: defendant drafts with a public AI tool, then sends outputs to counsel.

  • 00:04 — The court’s conclusion: no attorney-client privilege, no work product protection.

  • 00:05 — Privilege basics applied to AI: “confidential + lawyer” and why AI fails that test.

  • 00:06 — The Terms-of-Use problem: inputs/outputs may be collected and shared. 🧾

  • 00:07 — The “stranger on the street” analogy: you can’t retroactively make it confidential.

  • 00:08 — PII and client facts: why pasting sensitive data into public AI is high-risk.

  • 00:08 — ABA Model Rule 1.1: competence includes understanding tech risks.

  • 00:09 — ABA Model Rule 1.6: confidentiality and waiver risk with public AI.

  • 00:10 — “Reasonable safeguards”: read policies, adjust settings, and know training/logging.

  • 00:11 — Public vs. enterprise AI: why contracts and “walled gardens” matter.

  • 00:11 — Legal research AI examples discussed: Lexis/Westlaw-style AI offerings.

  • 00:12 — ABA Model Rules 5.1 & 5.3: supervise AI like a nonlawyer assistant/vendor.

  • 00:13 — Redefining “tech-savvy lawyer” in 2026: judgment and restraint. 🧭

  • 00:14 — The “straight-face test”: could you defend confidentiality after a judge reads the policy?

  • 00:15 — Client-side risk: clients can sabotage privilege before contacting counsel.

  • 00:16 — Practical takeaway: check settings, read the fine print, keep true secrets offline (for now). 🔒

RESOURCES

Mentioned in the episode

Software & Cloud Services mentioned in the conversation

Word 📖 of the Week: Why Lawyers Need to Know the Term “Constitutional AI”

“Constitutional AI” is a design framework for artificial intelligence that aims to make AI systems helpful, harmless, and honest by training them to follow a defined set of higher‑level rules, much like a constitution. 🤖📜 For lawyers, this is not abstract theory; it connects directly to duties of technological competence, confidentiality, and supervision under the ABA Model Rules.

Most legal professionals now rely on AI‑enabled tools in research, drafting, e‑discovery, document automation, and client communication. These tools may use generative AI in the background even when the marketing materials do not emphasize “AI.” Constitutional AI gives you a practical way to evaluate those tools: are they structured to avoid hallucinations, protect confidential data, and resist being prompted into unethical behavior?

At a high level, a Constitutional AI system is trained to follow explicit principles, such as “do not fabricate legal citations,” “do not disclose confidential information,” and “do not assist in unlawful conduct.” The model learns to critique and revise its own outputs against those principles. For law firms, that aligns with the core expectations in ABA Model Rule 1.1 (competence) and its Comment 8, which require lawyers to understand the benefits and risks of relevant technology and stay current with changes in how these systems work. ⚖️

Constitutional AI also intersects with ABA Model Rule 1.6 on confidentiality. If an AI tool is not designed with strong guardrails, its prompts and outputs can expose sensitive client information to external systems or vendors. When you evaluate an AI platform, you should ask where data is stored, how prompts are logged, whether training data will include your matters, and whether the provider has implemented “constitutional” safeguards against data leakage and unsafe uses.

Supervision is another critical angle. ABA Formal Opinion 512 and Model Rules 5.1 and 5.3 stress that supervising lawyers must set policies and training for how attorneys and staff use generative AI. Constitutional AI can reduce risk, yet it does not replace supervisory duties. You still must review AI‑generated work product, confirm citations, validate factual assertions, and ensure the output is consistent with Rules 3.1, 3.3, and 8.4(c) on meritorious claims, candor to the tribunal, and avoiding dishonesty or misrepresentation.

For practitioners with limited to moderate tech skills, the key is to treat Constitutional AI as a practical checklist rather than a buzzword. ✅ Ask three questions about any AI tool you use:

  1. Is this AI actually helpful to the client’s matter, or is it just saving time while adding risk?

  2. Could this output harm the client through inaccuracy, bias, or disclosure of confidential data?

  3. Is the AI acting honestly, meaning it is not hallucinating cases or claiming certainty where none exists?

If the answer to the first or third question is “no,” or the answer to the second is “yes,” you must pause, verify, and revise before relying on the AI output.

In the AI era, your ethical risk often turns on how you select, supervise, and document the use of AI in your practice. Constitutional AI will not make you bulletproof, but it gives you a structured way to align your technology choices with ABA Model Rules while protecting your clients, your license, and your reputation. 

⭐ First Five-Star Amazon Review for “The Lawyer’s Guide to Podcasting” – Why Tech-Savvy Lawyers Should Care About ABA Ethics, Client Trust, and Smart Marketing 🎙️⚖️

“The Lawyer’s Guide to Podcasting” by your favorite blogger/podcaster just earned its first five-star Amazon review, and it’s a milestone worth your attention. 🎉📘 The reviewer highlights what many of us in legal tech have been saying: podcasting is no longer a fringe hobby; it is a strategic, ethics-aware marketing channel for modern law practice. 🎙️

For lawyers with limited to moderate tech skills, this book demystifies microphones, workflows, and publishing tools without assuming you want to become an engineer. Instead, it walks you through practical steps to share your expertise in a format today’s clients already trust—long-form, authentic audio. 🔊

From a professional responsibility perspective, the guidance aligns with ABA Model Rule 1.1 on technology competence and Model Rule 1.6 on confidentiality by emphasizing the use of secure platforms, thoughtful content planning, and careful handling of client-identifying details. The book reinforces that podcasting can showcase your substantive knowledge while staying within the guardrails of Model Rule 7.1, avoiding misleading claims about your services. ⚖️

QR Code for Amazon book link

The first five-star review underlines two themes: listeners want real conversations, and they quickly recognize when a lawyer respects both the audience’s time and the profession’s ethical duties. That is exactly the posture this book encourages—credible, compliant, and client-centered. 🌟

If you are ready to build authority, differentiate your practice, and satisfy your tech-competence obligations without drowning in jargon, now is the perfect time to get your copy of “The Lawyer’s Guide to Podcasting” on Amazon and start planning your first ethically sound episode. 🚀

📌 Too Busy to Read This Week’s Editorial: “Lawyers and AI Oversight: What the VA’s Patient Safety Warning Teaches About Ethical Law Firm Technology Use!” ⚖️🤖

Join us for an AI-powered deep dive into the ethical challenges facing legal professionals in the age of generative AI. 🤖 In this episode, we discuss our February 16, 2026, editorial, “Lawyers and AI Oversight: What the VA’s Patient Safety Warning Teaches About Ethical Law Firm Technology Use! ⚖️🤖” and explore why treating AI-generated drafts as hypotheses—not answers—is quickly becoming a survival skill for law firms of every size. We connect a real-world AI failure risk at the Department of Veterans Affairs to the everyday ways lawyers are using tools like chatbots, and we translate ABA Model Rules into practical oversight steps any practitioner can implement without becoming a programmer.

In our conversation, we cover the following:

  • 00:00:00 – Why conversations about the future of law default to Silicon Valley, and why that’s a problem ⚖️

  • 00:01:00 – How a crisis at the U.S. Department of Veterans Affairs became a “mirror” for the legal profession 🩺➡️⚖️

  • 00:03:00 – “Speed without governance”: what the VA Inspector General actually warned about, and why it matters to your practice

  • 00:04:00 – From patient safety risk to client safety and justice risk: the shared AI failure pattern in healthcare and law

  • 00:06:00 – Shadow AI in law firms: staff “just trying out” public chatbots on live matters and the unseen risk this creates

  • 00:07:00 – Why not tracking hallucinations, data leakage, or bias turns risk management into wishful thinking

  • 00:08:00 – Applying existing ABA Model Rules (1.1, 1.6, 5.1, 5.2, and 5.3) directly to AI use in legal practice

  • 00:09:00 – Competence in the age of AI: why “I’m not a tech person” is no longer a safe answer 🧠

  • 00:09:30 – Confidentiality and public chatbots: how you can silently lose privilege by pasting client data into a text box

  • 00:10:30 – Supervision duties: why partners cannot safely claim ignorance of how their teams use AI

  • 00:11:00 – Candor to tribunals: the real ethics problem behind AI-generated fake cases and citations

  • 00:12:00 – From slogan to system: why “meaningful human engagement” must be operationalized, not just admired 

  • 00:12:30 – The key mindset shift: treating AI-assisted drafts as hypotheses, not answers 🧪

  • 00:13:00 – What reasonable human oversight looks like in practice: citations, quotes, and legal conclusions under stress test

  • 00:14:00 – You don’t need to be a computer scientist: the essential due diligence questions every lawyer can ask about AI 

  • 00:15:00 – Risk mapping: distinguishing administrative AI use from “safety-critical” lawyering tasks

  • 00:16:00 – High-stakes matters (freedom, immigration, finances, benefits, licenses) and heightened AI safeguards

  • 00:16:45 – Practical guardrails: access controls, narrow scoping, and periodic quality audits for AI use

  • 00:17:00 – Why governance is not “just for BigLaw” and how solos can implement checklists and simple documentation 📋

  • 00:17:45 – Updating engagement letters and talking to clients about AI use in their matters

  • 00:18:00 – Redefining the “human touch” as the safety mechanism that makes AI ethically usable at all 🤝

  • 00:19:00 – AI as power tool: why lawyers must remain the “captain of the ship” even when AI drafts at lightning speed 🚢

  • 00:20:00 – Rethinking value: if AI creates the first draft, what exactly are clients paying lawyers for?

  • 00:20:30 – Are we ready to bill for judgment, oversight, and safety instead of pure production time?

  • 00:21:00 – Final takeaways: building a practice where human judgment still has the final word over AI

RESOURCES

Mentioned in the episode

Software & Cloud Services mentioned in the conversation

MTC: Lawyers and AI Oversight: What the VA’s Patient Safety Warning Teaches About Ethical Law Firm Technology Use! ⚖️🤖

Human-in-the-loop is the point: Effective oversight happens where AI meets care—aligning clinical judgment, privacy, and compliance with real-world workflows.

The Department of Veterans Affairs’ experience with generative AI is not a distant government problem; it is a mirror held up to every law firm experimenting with AI tools for drafting, research, and client communication. I recently listened to an interview by Terry Gerton of the Federal News Network with Cheryl Mason, Inspector General of the Department of Veterans Affairs, “VA rolled out new AI tools quickly, but without a system to catch mistakes, patient safety is on the line,” and gained some insight into how lawyers can learn from this perhaps hastily implemented AI program. VA clinicians are using AI chatbots to document visits and support clinical decisions, yet a federal watchdog has warned that there is no formal mechanism to identify, track, or resolve AI‑related risks—a “potential patient safety risk” created by speed without governance. In law, that same pattern translates into “potential client safety and justice risk,” because the core failure is identical: deploying powerful systems without a structured way to catch and correct their mistakes.

The oversight gap at the VA is striking. There is no standardized process for reporting AI‑related concerns, no feedback loop to detect patterns, and no clearly assigned responsibility for coordinating safety responses across the organization. Clinicians may have helpful tools, but the institution lacks the governance architecture that turns “helpful” into “reliably safe.” When law firms license AI research platforms, enable generative tools in email and document systems, or encourage staff to “try out” chatbots on live matters without written policies, risk registers, or escalation paths, they recreate that same governance vacuum. If no one measures hallucinations, data leakage, or embedded bias in outputs, risk management has given way to wishful thinking.

Existing ethics rules already tell us why that is unacceptable. Under ABA Model Rule 1.1, competence now includes understanding the capabilities and limitations of AI tools used in practice, or associating with someone who does. Model Rule 1.6 requires lawyers to critically evaluate what client information is fed into self‑learning systems and whether informed consent is required, particularly when providers reuse inputs for training. Model Rules 5.1, 5.2, and 5.3 extend these obligations across partners, supervising lawyers, and non‑lawyer staff: if a supervised lawyer or paraprofessional relies on AI in a way that undermines client protection, firm leadership cannot plausibly claim ignorance. And rules on candor to tribunals make clear that “the AI drafted it” is never a defense to filing inaccurate or fictitious authority.

Explaining the algorithm to decision-makers: Oversight means making AI risks understandable to judges, boards, and the public—clearly and credibly.

What the VA story adds is a vivid reminder that effective AI oversight is a system, not a slogan. The inspector general emphasized that AI can be “a helpful tool” only if it is paired with meaningful human engagement: defined review processes, clear routes for reporting concerns, and institutional learning from near misses. For law practice, that points directly toward structured workflows. AI‑assisted drafts should be treated as hypotheses, not answers. Reasonable human oversight includes verifying citations, checking quotations against original sources, stress‑testing legal conclusions, and documenting that review—especially in high‑stakes matters involving liberty, benefits, regulatory exposure, or professional discipline.

For lawyers with limited to moderate tech skills, this should not be discouraging; done correctly, AI governance actually makes technology more approachable. You do not need to understand model weights or training architectures to ask practical questions: What data does this tool see? When has it been wrong in the past? Who is responsible for catching those errors before they reach a client, a court, or an opposing party? Thoughtful prompts, standardized checklists for reviewing AI output, and clear sign‑off requirements are all well within reach of every practitioner.

The VA’s experience also highlights the importance of mapping AI uses and classifying their risk. In health care, certain AI use cases are obviously safety‑critical; in law, the parallel category includes anything that could affect a person’s freedom, immigration status, financial security, public benefits, or professional license. Those use cases merit heightened safeguards: tighter access control, narrower scoping of AI tasks, periodic sampling of outputs for quality, and specific training for the lawyers who use them. Importantly, this is not a “big‑law only” discipline. Solo and small‑firm lawyers can implement proportionate governance with simple written policies, matter‑level notes showing how AI was used, and explicit conversations with clients where appropriate.

Critically, AI does not dilute core professional responsibility. If a generative system inserts fictitious cases into a brief or subtly mischaracterizes a statute, the duty of candor and competence still rests squarely on the attorney who signs the work product. The VA continues to hold clinicians responsible for patient care decisions, even when AI is used as a support tool; the law should be no different. That reality should inform how lawyers describe AI use in engagement letters, how they supervise junior lawyers and staff, and how they respond when AI‑related concerns arise. In some situations, meeting ethical duties may require forthright client communication, corrective filings, and revisions to internal policies.

AI oversight starts at the desk: Lawyers must be able to interrogate model outputs, data quality, and risk signals—before technology impacts patient care.

The practical lesson from the VA’s AI warning is straightforward. The “human touch” in legal technology is not a nostalgic ideal; it is the safety mechanism that makes AI ethically usable at all. Lawyers who embrace AI while investing in governance—policies, training, and oversight calibrated to risk—will be best positioned to align with the ABA’s evolving guidance, satisfy courts and regulators, and preserve hard‑earned client trust. Those who treat AI as a magic upgrade and skip the hard work of oversight are, knowingly or not, accepting that their clients may become the test cases that reveal where the system fails. In a profession grounded in judgment, the real innovation is not adopting AI; it is designing a practice where human judgment still has the final word.

MTC