📌 Too Busy to Read This Week’s Editorial? “Lawyers and AI Oversight: What the VA’s Patient Safety Warning Teaches About Ethical Law Firm Technology Use!” ⚖️🤖

Join us for an AI-powered deep dive into the ethical challenges facing legal professionals in the age of generative AI. 🤖 In this episode, we discuss our February 16, 2026, editorial, “Lawyers and AI Oversight: What the VA’s Patient Safety Warning Teaches About Ethical Law Firm Technology Use! ⚖️🤖” and explore why treating AI-generated drafts as hypotheses—not answers—is quickly becoming a survival skill for law firms of every size. We connect a real-world AI failure risk at the Department of Veterans Affairs to the everyday ways lawyers are using tools like chatbots, and we translate ABA Model Rules into practical oversight steps any practitioner can implement without becoming a programmer.

In our conversation, we cover the following:

  • 00:00:00 – Why conversations about the future of law default to Silicon Valley, and why that’s a problem ⚖️

  • 00:01:00 – How a crisis at the U.S. Department of Veterans Affairs became a “mirror” for the legal profession 🩺➡️⚖️

  • 00:03:00 – “Speed without governance”: what the VA Inspector General actually warned about, and why it matters to your practice

  • 00:04:00 – From patient safety risk to client safety and justice risk: the shared AI failure pattern in healthcare and law

  • 00:06:00 – Shadow AI in law firms: staff “just trying out” public chatbots on live matters and the unseen risk this creates

  • 00:07:00 – Why not tracking hallucinations, data leakage, or bias turns risk management into wishful thinking

  • 00:08:00 – Applying existing ABA Model Rules (1.1, 1.6, 5.1, 5.2, and 5.3) directly to AI use in legal practice

  • 00:09:00 – Competence in the age of AI: why “I’m not a tech person” is no longer a safe answer 🧠

  • 00:09:30 – Confidentiality and public chatbots: how you can silently lose privilege by pasting client data into a text box

  • 00:10:30 – Supervision duties: why partners cannot safely claim ignorance of how their teams use AI

  • 00:11:00 – Candor to tribunals: the real ethics problem behind AI-generated fake cases and citations

  • 00:12:00 – From slogan to system: why “meaningful human engagement” must be operationalized, not just admired 

  • 00:12:30 – The key mindset shift: treating AI-assisted drafts as hypotheses, not answers 🧪

  • 00:13:00 – What reasonable human oversight looks like in practice: citations, quotes, and legal conclusions under stress test

  • 00:14:00 – You don’t need to be a computer scientist: the essential due diligence questions every lawyer can ask about AI 

  • 00:15:00 – Risk mapping: distinguishing administrative AI use from “safety-critical” lawyering tasks

  • 00:16:00 – High-stakes matters (freedom, immigration, finances, benefits, licenses) and heightened AI safeguards

  • 00:16:45 – Practical guardrails: access controls, narrow scoping, and periodic quality audits for AI use

  • 00:17:00 – Why governance is not “just for BigLaw” and how solos can implement checklists and simple documentation 📋

  • 00:17:45 – Updating engagement letters and talking to clients about AI use in their matters

  • 00:18:00 – Redefining the “human touch” as the safety mechanism that makes AI ethically usable at all 🤝

  • 00:19:00 – AI as power tool: why lawyers must remain the “captain of the ship” even when AI drafts at lightning speed 🚢

  • 00:20:00 – Rethinking value: if AI creates the first draft, what exactly are clients paying lawyers for?

  • 00:20:30 – Are we ready to bill for judgment, oversight, and safety instead of pure production time?

  • 00:21:00 – Final takeaways: building a practice where human judgment still has the final word over AI

RESOURCES

Mentioned in the episode

Software & Cloud Services mentioned in the conversation

Word of the Week: Vendor Risk Management for Law Firms in 2026: Lessons from the Clio–Alexi CRM Fight ⚖️💻

Clio vs. Alexi: CRM Litigation Could Threaten Law Firm Data

“Vendor risk management” is no longer an IT buzzword; it is now a core law‑practice skill for any attorney who relies on cloud‑based tools, CRMs, or AI‑driven research platforms.⚙️📊 The Tech‑Savvy Lawyer.Page’s February 2, 2026 editorial on the Clio–Alexi CRM litigation showed how a dispute between legal‑tech companies can reach straight into your client list, calendars, and workflows.⚖️🧾

In that piece, Clio and Alexi’s legal fight over data, AI training, and competition was framed not as “tech drama,” but as a live test of how well your firm understands its dependencies on vendors that control client‑related information.🧠📂 When the platform that hosts your CRM, matter data, or AI research tools becomes embroiled in high‑stakes litigation, your risk profile changes even if you never set foot in that courtroom.⚠️🏛️

Under ABA Model Rule 1.1, competence includes a practical understanding of the technology that underpins your practice, and that now clearly includes vendor risk.📚💡 You do not have to reverse‑engineer APIs, yet you should be able to answer basic questions: Which vendors are mission‑critical? What data do they hold? How would you respond if one faced an injunction, outage, or rushed acquisition?🧩🚨 That is vendor risk management at a level that is realistic for lawyers with limited to moderate tech skills.🙂🧑‍💼

Lawyers Need to Build a Vendor Risk Plan for Ethical Compliance

Model Rule 1.6 on confidentiality sits at the center of this analysis, because litigation involving a vendor can expose or pressure the systems that hold client information.🔐📁 Our February 2 article emphasized the need to know where your data is hosted, what the contracts say about subpoenas and law‑enforcement requests, and how quickly you can export data if your ethics analysis changes.⏱️📄 Vendor risk management, therefore, includes reviewing terms of service, capturing “current” versions of online agreements, and documenting export rights and notice obligations.📝🧷

Model Rule 5.3 requires reasonable efforts to ensure that non‑lawyer assistance is compatible with your professional duties, and 2026 legal‑tech commentary increasingly treats vendors as supervised extensions of the law office.🧑‍⚖️🤝 CRMs, AI research tools, document‑automation platforms, and e‑billing systems all act as non‑lawyer assistants for ethics purposes, which means you must screen them before adoption, monitor them for material changes, and reassess when events like the Clio–Alexi dispute surface.📡📊

Recent legal‑tech reporting has described 2026 as a reckoning year for vendors, with AI‑driven tools under heavier regulatory and client scrutiny, which makes disciplined vendor risk management a competitive advantage rather than a burden.📈🤖 Practical steps include maintaining a simple vendor inventory, ranking systems by criticality, reviewing cyber and data‑security representations, and identifying a plausible backup provider for each crucial function.📋🛡️
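A minimal sketch of what that vendor inventory could look like in practice, assuming a firm tracks it in a simple script rather than a spreadsheet. Everything here is illustrative: the vendor names, fields, and criticality tiers are invented for this example, not drawn from the editorial.

```python
# Minimal vendor-risk inventory sketch (illustrative only).
# Vendor names, data categories, and criticality tiers below are hypothetical.
from dataclasses import dataclass

@dataclass
class Vendor:
    name: str
    function: str              # what the vendor does for the firm
    data_held: tuple           # categories of client-related data it holds
    criticality: int           # 1 = mission-critical ... 3 = easily replaced
    backup_provider: str = "none identified"

def rank_by_criticality(vendors):
    """Order vendors from most to least critical, for review priority."""
    return sorted(vendors, key=lambda v: v.criticality)

inventory = [
    Vendor("ExampleBilling", "e-billing", ("invoices",), 2,
           backup_provider="spreadsheet export"),
    Vendor("ExampleCRM", "client relationship management",
           ("client list", "calendars"), 1),
]

for v in rank_by_criticality(inventory):
    print(f"{v.name}: tier {v.criticality}, backup: {v.backup_provider}")
```

Even this toy version surfaces the questions the editorial raises: a tier-1 system with "none identified" as its backup is exactly the dependency a Clio–Alexi-style dispute would expose.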

Lawyers Need to Shield Their Client Data from CRM Litigation as Much as They Need to Uphold Their Ethics Duties!

Vendor risk management, properly understood, turns your technology stack into part of your professional judgment instead of a black box that “IT” owns alone.🧱🧠 For solo and small‑firm lawyers, that shift can feel incremental rather than overwhelming: start by reading the Clio–Alexi editorial, pull your top three vendor contracts, and ask whether they let you protect competence, confidentiality, and continuity if your vendors suddenly become the ones needing legal help.🧑‍⚖️🧰

Words of the Week: “Anthropic” vs. “Agentic”: Understanding the Distinction in Legal Technology 🔍

Lawyers Need to Know the Difference: Anthropic vs. Agentic

The terms "Anthropic" and "agentic" circulate frequently in legal technology discussions. They sound similar. They appear in the same articles. Yet they represent fundamentally different concepts. Understanding the distinction matters deeply for legal practitioners seeking to leverage artificial intelligence effectively.

Anthropic is a company—specifically, an AI safety-focused organization that develops large language models, most notably Claude. Think of Anthropic as a technology provider. The company pioneered "Constitutional AI," a training methodology that embeds explicit principles into AI systems to guide their behavior toward helpfulness, harmlessness, and honesty. When you use Claude for legal research or document drafting, you are using a product built by Anthropic.

Agentic describes a category of AI system architecture and capability—not a company or product. Agentic systems operate autonomously, plan multi-step tasks, make decisions dynamically, and execute workflows with minimal human intervention. An agentic system can break down complex assignments, gather information, refine outputs, and adjust its approach based on changing circumstances. It exercises judgment about which tools to deploy and when to escalate matters to human oversight.

"Constitutional AI" is an ai training methodology promoting helpfulness, harmlessness, and honesty in ai programing

The relationship between these concepts becomes clearer through a practical scenario. Imagine you task an AI system with analyzing merger agreements from a target company. A non-agentic approach requires you to provide explicit instructions for each step: search the database, extract key clauses, compare terms against templates, and prepare a summary. You guide the process throughout. An agentic approach allows you to assign a goal—“Review these contracts, flag risks, and prepare a risk summary”—and the AI system formulates its own research plan, prioritizes which documents to examine first, identifies gaps requiring additional information, and works through the analysis independently, pausing only when human judgment becomes necessary.
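The difference in control flow can be sketched in toy code. This is purely conceptual: no real AI model or API is called, and every function name and the escalation rule below are invented for illustration.

```python
# Toy contrast of non-agentic vs. agentic control flow.
# All functions are hypothetical stand-ins; no real AI system is involved.

def extract_clauses(doc):
    return f"clauses({doc})"        # stand-in for an AI-assisted step

def flag_risks(clauses):
    return f"risks({clauses})"

# Non-agentic: the human dictates every step, in a fixed order.
def non_agentic_review(doc):
    clauses = extract_clauses(doc)
    risks = flag_risks(clauses)
    return f"summary({risks})"

# Agentic (sketch): the system keeps its own plan, works through it,
# and escalates to a human only where judgment is required.
def agentic_review(goal, docs):
    plan = [("extract", d) for d in docs] + [("summarize", None)]
    findings, escalations = [], []
    while plan:
        step, arg = plan.pop(0)
        if step == "extract":
            findings.append(flag_risks(extract_clauses(arg)))
            if "merger" in arg:     # toy rule: merger docs need human judgment
                escalations.append(arg)
        elif step == "summarize":
            return f"summary for goal '{goal}': {findings}", escalations

print(non_agentic_review("nda.pdf"))
summary, needs_human = agentic_review("flag risks", ["nda.pdf", "merger.pdf"])
print(needs_human)                  # → ['merger.pdf']
```

The point of the sketch is the shape of the loop: the agentic version owns its plan and decides when to pause for a human, while the non-agentic version executes exactly the steps the human wrote, and nothing more.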

Anthropic builds AI models capable of agentic behavior. Claude, Anthropic's flagship model, can function as an agentic system when configured appropriately. However, Anthropic's models can also operate in simpler, non-agentic modes. You might use Claude to answer a direct question or draft a memo without any agentic capability coming into play. The capability exists within Anthropic's models, but agentic functionality remains optional depending on your implementation.

They work together as follows: Anthropic provides the underlying AI model and the training methodology emphasizing constitutional principles. That foundation becomes the engine powering agentic systems. The Constitutional AI approach matters specifically for agentic applications because autonomous systems require robust safeguards. As AI systems operate more independently, explicit principles embedded during training help ensure they remain aligned with human values and institutional requirements. Legal professionals cannot simply deploy an autonomous AI agent without trust in its underlying decision-making framework.

Agentic vs. Anthropic: Know the Difference. Shape the Future of Law!

For legal practitioners, the distinction carries practical implications. You evaluate Anthropic as a vendor when selecting which AI provider's tools to adopt. You evaluate agentic architecture when deciding whether your specific use case requires autonomous task execution or whether simpler, more directed AI assistance suffices. Many legal workflows benefit from direct AI support without requiring full autonomy. Others—such as high-volume contract analysis during due diligence—leverage agentic capabilities to move work forward rapidly.

Both elements represent genuine advances in legal technology. Recognizing the difference positions you to make informed decisions about tool adoption and appropriate implementation for your practice. ✅

🎙️TSL Labs! MTC: The Hidden AI Crisis in Legal Practice: Why Lawyers Must Unmask Embedded Intelligence Before It's Too Late!

📌 Too Busy to Read This Week's Editorial?

Join us for a professional deep dive into essential tech strategies for AI compliance in your legal practice. 🎙️ This AI-powered discussion unpacks the November 17, 2025, editorial, MTC: The Hidden AI Crisis in Legal Practice: Why Lawyers Must Unmask Embedded Intelligence Before It's Too Late! with actionable intelligence on hidden AI detection, confidentiality protocols, ethics compliance frameworks, and risk mitigation strategies. Artificial intelligence has been silently operating inside your most trusted legal software for years, and under ABA Formal Opinion 512, you bear full responsibility for all AI use, whether you knowingly activated it or it came as a default software update. The conversation makes complex technical concepts accessible to lawyers with varying levels of tech expertise—from tech-hesitant solo practitioners to advanced users—so you'll walk away with immediate, actionable steps to protect your practice, your clients, and your professional reputation.

In Our Conversation, We Cover the Following:

00:00:00 - Introduction: Overview of TSL Labs initiative and the AI-generated discussion format

00:01:00 - The Silent Compliance Crisis: How AI has been operating invisibly in your software for years

00:02:00 - Core Conflict: Understanding why helpful tools simultaneously create ethical threats to attorney-client privilege

00:03:00 - Document Creation Vulnerabilities: Microsoft Word Copilot and Grammarly's hidden data processing

00:04:00 - Communication Tools Risks: Zoom AI Companion and the cautionary Otter.ai incident

00:05:00 - Research Platform Dangers: Westlaw and Lexis+ AI hallucination rates between 17% and 33%

00:06:00 - ABA Formal Opinion 512: Full lawyer responsibility for AI use regardless of awareness

00:07:00 - Model Rule 1.6 Analysis: Confidentiality breaches through third-party AI systems

00:08:00 - Model Rule 5.3 Requirements: Supervising AI tools with the same diligence as human assistants

00:09:00 - Five-Step Compliance Framework: Technology audits and vendor agreement evaluation

00:10:00 - Firm Policies and Client Consent: Establishing protocols and securing informed consent

00:11:00 - The Verification Imperative: Lessons from the Mata v. Avianca sanctions case

00:12:00 - Billing Considerations: Navigating hourly versus value-based fee models with AI

00:13:00 - Professional Development: Why tool learning time is non-billable competence maintenance

00:14:00 - Ongoing Compliance: The necessity of quarterly reviews as platforms rapidly evolve

00:15:00 - Closing Remarks: Resources and call to action for tech-savvy innovation

Resources

Mentioned in the Episode

Software & Cloud Services Mentioned in the Conversation

MTC: The Hidden AI Crisis in Legal Practice: Why Lawyers Must Unmask Embedded Intelligence Before It's Too Late!

Lawyers need digital due diligence in order to stay on top of their ethics requirements.

Artificial intelligence has infiltrated legal practice in ways most attorneys never anticipated. While lawyers debate whether to adopt AI tools, they've already been using them—often without knowing it. These "hidden AI" features, silently embedded in everyday software, present a compliance crisis that threatens attorney-client privilege, confidentiality obligations, and professional responsibility standards.

The Invisible Assistant Problem

Hidden AI operates in plain sight. Microsoft Word's Copilot suggests edits while you draft pleadings. Adobe Acrobat's AI Assistant automatically identifies contracts and extracts key terms from PDFs you're reviewing. Grammarly's algorithm analyzes your confidential client communications for grammar errors. Zoom's AI Companion transcribes strategy sessions with clients—and sometimes captures what happens after you disconnect.

DocuSign now deploys AI-Assisted Review to analyze agreements against predefined playbooks. Westlaw and Lexis+ embed generative AI directly into their research platforms, with hallucination rates between 17% and 33%. Even practice management systems like Clio and Smokeball have woven AI throughout their platforms, from automated time tracking descriptions to matter summaries.

The challenge isn't whether these tools provide value—they absolutely do. The crisis emerges because lawyers activate features without understanding the compliance implications.

ABA Model Rules Meet Modern Technology

The American Bar Association's Formal Opinion 512, issued in July 2024, makes clear that lawyers bear full responsibility for AI use regardless of whether they actively chose the technology or inherited it through software updates. Several Model Rules directly govern hidden AI features in legal practice.

Model Rule 1.1 requires competence, including maintaining knowledge about the benefits and risks associated with relevant technology. Comment 8 to this rule, adopted by most states, mandates that lawyers understand not just primary legal tools but embedded AI features within those tools. This means attorneys cannot plead ignorance when Microsoft Word's AI Assistant processes privileged documents.

Model Rule 1.6 imposes strict confidentiality obligations. Lawyers must make "reasonable efforts to prevent the inadvertent or unauthorized disclosure of, or unauthorized access to, information relating to the representation of a client". When Grammarly accesses your client emails to check spelling, or when Zoom's AI transcribes confidential settlement discussions, you're potentially disclosing protected information to third-party AI systems.

Model Rule 5.3 extends supervisory responsibilities to "nonlawyer assistance," which includes non-human assistance like AI. The 2012 amendment changing "assistants" to "assistance" specifically contemplated this scenario. Lawyers must supervise AI tools with the same diligence they'd apply to paralegals or junior associates.

Model Rule 1.4 requires communication with clients about the means used to accomplish their objectives. This includes informing clients when AI will process their confidential information, obtaining informed consent, and explaining the associated risks.

Where Hidden AI Lurks in Legal Software

🚨 Lawyers: don’t breach your ethical duties with AI shortcuts!

Microsoft 365 Copilot integrates AI across Word, Outlook, and Teams—applications lawyers use hundreds of times daily. The AI drafts documents, summarizes emails, and analyzes meeting transcripts. Most firms that subscribe to Microsoft 365 have Copilot enabled by default in recent licensing agreements, yet many attorneys remain unaware their correspondence flows through generative AI systems.

Adobe Acrobat now automatically recognizes contracts and generates summaries with AI Assistant. When you open a PDF contract, Adobe's AI immediately analyzes it, extracts key dates and terms, and offers to answer questions about the document. This processing occurs before you explicitly request AI assistance.

Legal research platforms embed AI throughout their interfaces. Westlaw Precision AI and Lexis+ AI process search queries through generative models that hallucinate incorrect case citations 17% to 33% of the time according to Stanford research. These aren't separate features—they're integrated into the standard search experience lawyers rely upon daily.

Practice management systems deploy hidden AI for intake forms, automated time entry descriptions, and matter summaries. Smokeball's AutoTime AI generates detailed billing descriptions automatically. Clio integrates AI into client relationship management. These features activate without explicit lawyer oversight for each instance of use.

Communication platforms present particularly acute risks. Zoom AI Companion and Microsoft Teams AI automatically transcribe meetings and generate summaries. Otter.ai's meeting assistant infamously continued recording after participants thought a meeting ended, capturing investors' candid discussion of their firm's failures. For lawyers, such scenarios could expose privileged attorney-client communications or work product.

The Compliance Framework

Establishing ethical AI use requires systematic assessment. First, conduct a comprehensive technology audit. Inventory every software application your firm uses and identify embedded AI features. This includes obvious tools like research platforms and less apparent sources like PDF readers, email clients, and document management systems.

Second, evaluate each AI feature against confidentiality requirements. Review vendor agreements to determine whether the AI provider uses your data for model training, stores information after processing, or could disclose data in response to third-party requests. Grammarly, for example, offers HIPAA compliance but only for enterprise customers with 100+ seats who execute Business Associate Agreements. Similar limitations exist across legal software.

Third, implement technical safeguards. Disable AI features that lack adequate security controls. Configure settings to prevent automatic data sharing. Adobe and Microsoft both offer options to prevent AI from training on customer data, but these protections require active configuration.

Fourth, establish firm policies governing AI use. Designate responsibility for monitoring AI features in licensed software. Create protocols for evaluating new tools before deployment. Develop training programs ensuring all attorneys understand their obligations when using AI-enabled applications.

Fifth, secure client consent. Update engagement letters to disclose AI use in service delivery. Explain the specific risks associated with processing confidential information through AI systems. Document informed consent for each representation.
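For firms that want to make the first two steps concrete, the audit can be reduced to a per-tool checklist. The sketch below is illustrative only: the question list and example answers are assumptions for demonstration, and real answers must come from each vendor's actual terms of service and agreements.

```python
# Sketch of a per-tool AI compliance checklist (framework steps one and two).
# The tool name and answers below are placeholders, not real vendor facts.

AUDIT_QUESTIONS = [
    "trains_on_client_data",          # does the vendor train models on inputs?
    "retains_data_after_processing",
    "ai_can_be_disabled",
    "client_consent_obtained",
]

def audit_tool(name, answers):
    """Flag unanswered or risky checklist items for follow-up."""
    flags = []
    for q in AUDIT_QUESTIONS:
        value = answers.get(q)
        if value is None:
            flags.append(f"{q}: UNKNOWN - ask the vendor")
        elif q in ("trains_on_client_data", "retains_data_after_processing") and value:
            flags.append(f"{q}: YES - review before use on client matters")
        elif q in ("ai_can_be_disabled", "client_consent_obtained") and not value:
            flags.append(f"{q}: NO - remediate")
    return flags

flags = audit_tool("ExamplePDFReader", {
    "trains_on_client_data": True,
    "ai_can_be_disabled": True,
    "client_consent_obtained": False,
})
for f in flags:
    print(f)
```

A checklist this simple still forces the right behavior: any "UNKNOWN" answer becomes a concrete question for the vendor before the tool touches client matters.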

The Verification Imperative

ABA Formal Opinion 512 emphasizes that lawyers cannot delegate professional judgment to AI. Every output requires independent verification. When Westlaw Precision AI suggests research authorities, lawyers must confirm those cases exist and accurately reflect the law. When CoCounsel Drafting generates contract language in Microsoft Word, attorneys must review for accuracy, completeness, and appropriateness to the specific client matter.

The infamous Mata v. Avianca case, where lawyers submitted AI-generated briefs citing fabricated cases, illustrates the catastrophic consequences of failing to verify AI output. Every jurisdiction that has addressed AI ethics emphasizes this verification duty.

Cost and Billing Considerations

Formal Opinion 512 addresses whether lawyers can charge the same fees when AI accelerates their work. The opinion suggests lawyers cannot bill for time saved through AI efficiency under traditional hourly billing models. However, value-based and flat-fee arrangements may allow lawyers to capture efficiency gains, provided clients understand AI's role during initial fee negotiations.

Lawyers cannot bill clients for time spent learning AI tools—maintaining technological competence represents a professional obligation, not billable work. As AI becomes standard in legal practice, using these tools may become necessary to meet competence requirements, similar to how electronic research and e-discovery tools became baseline expectations.

Practical Steps for Compliance

Start by examining your Microsoft Office subscription. Determine whether Copilot is enabled and what data sharing settings apply. Review Adobe Acrobat's AI Assistant settings and disable automatic contract analysis if your confidentiality review hasn't been completed.

Contact your Westlaw and Lexis representatives to understand exactly how AI features operate in your research platform. Ask specific questions: Does the AI train on your search queries? How are hallucinations detected and corrected? What happens to documents you upload for AI analysis?

Audit your practice management system. If you use Clio, Smokeball, or similar platforms, identify every AI feature and evaluate its compliance with confidentiality obligations. Automatic time tracking that generates descriptions based on document content may reveal privileged information if billing statements aren't properly redacted.

Review video conferencing policies. Establish protocols requiring explicit disclosure when AI transcription activates during client meetings. Obtain informed consent before recording privileged discussions. Consider disabling AI assistants entirely for confidential matters.

Implement regular training programs. Technology competence isn't achieved once—it requires ongoing education as AI features evolve. Schedule quarterly reviews of new AI capabilities deployed in your software stack.

Final Thoughts 👉 The Path Forward

Lawyers must be able to identify and contain AI within the tech tools they use for work!

Hidden AI represents both opportunity and obligation. These tools genuinely enhance legal practice by accelerating research, improving drafting, and streamlining administrative tasks. The efficiency gains translate into better client service and more competitive pricing.

However, lawyers cannot embrace these benefits while ignoring their ethical duties. The Model Rules apply with equal force to hidden AI as to any other aspect of legal practice. Ignorance provides no defense when confidentiality breaches occur or inaccurate AI-generated content damages client interests.

The legal profession stands at a critical juncture. AI integration will only accelerate as software vendors compete to embed intelligent features throughout their platforms. Lawyers who proactively identify hidden AI, assess compliance risks, and implement appropriate safeguards will serve clients effectively while maintaining professional responsibility.

Those who ignore hidden AI features operating in their daily practice face disciplinary exposure, malpractice liability, and potential privilege waivers. The choice is clear: unmask the hidden AI now, or face consequences later.


MTC: 🔒 Your AI Conversations Aren't as Private as You Think: What the OpenAI Court Ruling Means for Legal Professionals

A watershed moment in digital privacy has arrived, and it carries profound implications for lawyers and their clients.

The recent court ruling in In re: OpenAI, Inc., Copyright Infringement Litigation has exposed a critical vulnerability in the relationship between artificial intelligence tools and user privacy rights. On May 13, 2025, U.S. Magistrate Judge Ona T. Wang issued an order requiring OpenAI to "preserve and segregate all output log data that would otherwise be deleted on a going forward basis". This unprecedented directive affected more than 400 million ChatGPT users worldwide and fundamentally challenged assumptions about data privacy in the AI era.[1][2][3][4]

While the court modified its order on October 9, 2025, terminating the blanket preservation requirement as of September 26, 2025, the damage to user trust and the precedent for future litigation remain significant. More importantly, the ruling illuminates a stark reality for legal professionals: the "delete" button offers an illusion of control rather than genuine data protection.

The Court Order That Changed Everything ⚖️

The preservation order emerged from a copyright infringement lawsuit filed by The New York Times against OpenAI in December 2023. The Times alleged that OpenAI unlawfully used millions of its articles to train ChatGPT without permission or compensation. During discovery, concerns arose that OpenAI had been deleting user conversations that could potentially demonstrate copyright violations.

Judge Wang's response was sweeping. The court ordered OpenAI to retain all ChatGPT output logs, including conversations users believed they had permanently deleted, temporary chats designed to auto-delete after sessions, and API-generated outputs regardless of user privacy settings. The order applied retroactively, meaning conversations deleted months or even years earlier remained archived in OpenAI's systems.

OpenAI immediately appealed, arguing the order was overly broad and compromised user privacy. The company contended it faced conflicting obligations between the court's preservation mandate and "numerous privacy laws and regulations throughout the country and the world". Despite these objections, Judge Wang denied OpenAI's motion, prioritizing the preservation of potential evidence over privacy concerns.

The October 9, 2025 stipulation and order brought partial relief. OpenAI's ongoing obligation to preserve all new output log data terminated as of September 26, 2025. However, all data preserved before that cutoff remains accessible to plaintiffs (except for users in the European Economic Area, Switzerland, and the United Kingdom). Additionally, OpenAI must continue preserving output logs from specific domains identified by the New York Times and may be required to add additional domains as the litigation progresses.

Privacy Rights in the Age of AI: An Eroding Foundation 🛡️

This case demonstrates that privacy policies are not self-enforcing legal protections. Users who relied on OpenAI's representations about data deletion discovered those promises could be overridden by court order without their knowledge or consent. The "temporary chat" feature, marketed as providing ephemeral conversations, proved anything but temporary when litigation intervened.

The implications extend far beyond this single case. The ruling establishes that AI-generated content constitutes discoverable evidence subject to preservation orders. Courts now view user conversations with AI not as private exchanges but as potential legal records that can be compelled into evidence.

For legal professionals, this reality is particularly troubling. Lawyers regularly handle sensitive client information that must remain confidential under both ethical obligations and the attorney-client privilege. The court order revealed that even explicitly deleted conversations may be retained indefinitely when litigation demands it.

The Attorney-Client Privilege Crisis 👥

Attorney-client privilege protects confidential communications between lawyers and clients made for the purpose of obtaining or providing legal advice. This protection is fundamental to the legal system. However, the privilege can be waived through voluntary disclosure to third parties outside the attorney-client relationship.

When lawyers input confidential client information into public AI platforms like ChatGPT, they potentially create a third-party disclosure that destroys privilege. Many generative AI systems learn from user inputs, incorporating that information into their training data. This means privileged communications could theoretically appear in responses to other users' queries.

The OpenAI preservation order compounds these concerns. It demonstrates that AI providers cannot guarantee data will be deleted upon request, even when their policies promise such deletion. Lawyers who used ChatGPT's temporary chat feature or deleted sensitive conversations believing those actions provided privacy protection now discover their confidential client communications may be preserved indefinitely as litigation evidence.

The risk is not theoretical. In the now-famous Mata v. Avianca, Inc. case, a lawyer used a free version of ChatGPT to draft a legal brief containing fabricated citations. While the lawyer faced sanctions for submitting false information to the court, legal ethics experts noted the confidentiality implications of the increasingly specific prompts the attorney used, which may have revealed client confidential information.

ABA Model Rules and AI: What Lawyers Must Know 📋

The American Bar Association's Model Rules of Professional Conduct govern lawyer behavior, and while these rules predate generative AI, they apply with full force to its use. On July 29, 2024, the ABA Standing Committee on Ethics and Professional Responsibility issued Formal Opinion 512, providing the first comprehensive guidance on lawyers' use of generative AI.

Model Rule 1.1: Competence requires lawyers to provide competent representation, including maintaining "legal knowledge, skill, thoroughness and preparation reasonably necessary for representation". Comment [8] to the rule specifically states lawyers must understand "the benefits and risks associated with relevant technology". Opinion 512 clarifies that lawyers need not become AI experts, but must have a "reasonable understanding of the capabilities and limitations of the specific GenAI technology" they use. This is not a one-time obligation. Given AI's rapid evolution, lawyers must continuously update their understanding.

Model Rule 1.6: Confidentiality creates perhaps the most significant ethical challenge for AI use. The rule prohibits lawyers from revealing "information relating to the representation of a client" and requires them to "make reasonable efforts to prevent the inadvertent or unauthorized disclosure of, or unauthorized access to, information relating to the representation". Self-learning AI tools that train on user inputs create substantial risk of improper disclosure. Information entered into public AI systems may be stored, processed by third-party vendors, and potentially accessed by company employees or incorporated into model training. Opinion 512 recommends lawyers obtain informed client consent before inputting any information related to representation into AI systems. Lawyers must also thoroughly review the terms of use, privacy policies, and contractual agreements of any AI tool they employ.

Model Rule 1.4: Communication obligates lawyers to keep clients reasonably informed about their representation. When using AI tools, lawyers should disclose this fact to clients, particularly when the AI processes client information or could impact the representation. Clients have a right to understand how their matters are being handled and what technologies may access their confidential information.

Model Rule 3.3: Candor Toward the Tribunal requires lawyers to be truthful in their representations to courts. AI systems frequently produce "hallucinations"—plausible-sounding but entirely fabricated information, including fake case citations. Lawyers remain fully responsible for verifying all AI outputs before submitting them to courts or relying on them for legal advice. The Mata v. Avianca case serves as a cautionary tale of the consequences when lawyers fail to fulfill this obligation.

Model Rules 5.1 and 5.3: Supervisory Responsibilities make lawyers responsible for the conduct of other lawyers and nonlawyer assistants working under their supervision. When staff members use AI tools, supervising lawyers must ensure appropriate policies, training, and oversight exist to prevent ethical violations.

Model Rule 1.5: Fees requires lawyers to charge reasonable fees. Opinion 512 addresses whether lawyers can bill clients for time "saved" through AI efficiency gains. The guidance suggests that when using hourly billing, efficiencies gained through AI should benefit clients. However, lawyers may pass through reasonable direct costs of AI services (such as subscription fees) when properly disclosed and agreed upon in advance.

State-by-State Variations: A Patchwork of Protection 🗺️

While the ABA Model Rules provide a national framework, individual states adopt and interpret ethics rules differently. Legal professionals must understand their specific state's requirements, which can vary significantly.

Lawyers must protect clients’ PII from AI privacy failures!

Florida has taken a proactive stance. In January 2025, The Florida Bar Board of Governors unanimously approved Advisory Opinion 24-1, which specifically addresses generative AI use. The opinion recommends lawyers obtain "affected client's informed consent prior to utilizing a third-party generative AI program if the utilization would involve the disclosure of any confidential information". Florida's guidance emphasizes that lawyers remain fully responsible for AI outputs and cannot treat AI as a substitute for legal judgment.

Texas issued Opinion 705 from its State Bar Professional Ethics Committee in February 2025. The opinion outlines four key obligations: lawyers must reasonably understand AI technology before using it, exercise extreme caution when inputting confidential information into AI tools that might store or expose client data, verify the accuracy of all AI outputs, and avoid charging clients for time saved by AI efficiency gains. Texas also emphasizes that lawyers should consider informing clients when AI will be used in their matters.

New York has developed one of the most comprehensive frameworks through its State Bar Association Task Force on Artificial Intelligence. The April 2024 report provides a thorough analysis across the full spectrum of ethical considerations, including competence, confidentiality, client communication, billing practices, and access to justice implications. New York's guidance stands out for addressing both immediate practical considerations and longer-term questions about AI's transformation of the legal profession.

Alaska issued Ethics Opinion 2025-1 surveying AI issues with particular focus on competence, confidentiality, and billing. The opinion notes that when using non-closed AI systems (such as general consumer products), lawyers should anonymize prompts to avoid revealing client confidential information. Alaska's guidance explicitly cites to its cloud-computing predecessor opinion, treating AI data storage similarly to law firm files on third-party remote servers.

California, Massachusetts, New Jersey, and Oregon have issued guidance through their state attorneys general on how existing state privacy laws apply to AI. California's advisories emphasize that AI use must comply with the California Consumer Privacy Act (CCPA), requiring transparency, respecting individual data rights, and limiting data processing to what is "reasonably necessary and proportionate". Massachusetts focuses on consumer protection, anti-discrimination, and data security requirements. Oregon highlights that developers using personal data to train AI must clearly disclose this use and obtain explicit consent when dealing with sensitive data.

These state-specific approaches create a complex compliance landscape. A lawyer practicing in multiple jurisdictions must understand and comply with each state's requirements. Moreover, state privacy laws like the CCPA and similar statutes in other states impose additional obligations beyond ethics rules.

Enterprise vs. Consumer AI: Understanding the Distinction 💼

Not all AI tools pose equal privacy risks. The OpenAI preservation order highlighted critical differences between consumer-facing products and enterprise solutions.

Consumer Plans (Free, Plus, Pro, and Team) were fully subject to the preservation order. These accounts store user conversations on OpenAI's servers with limited privacy protections. While users can delete conversations, the court order demonstrated that those deletions are not permanent. OpenAI retains the technical capability to preserve and access this data when required by legal process.

Enterprise Accounts offer substantially stronger privacy protections. ChatGPT Enterprise and Edu plans were excluded from the preservation order's broadest requirements. These accounts typically include contractual protections such as Data Processing Agreements (DPAs), commitments against using customer data for model training, and stronger data segregation. However, even enterprise accounts must preserve data when covered by specific legal orders.

Zero Data Retention Agreements provide the highest level of protection. Users who have negotiated such agreements with OpenAI are excluded from data preservation requirements. These arrangements ensure that user data is not retained beyond the immediate processing necessary to generate responses.

For legal professionals, the lesson is clear: consumer-grade AI tools are inappropriate for handling confidential client information. Lawyers who use AI must ensure they employ enterprise-level solutions with proper contractual protections, or better yet, closed systems where client data never leaves the firm's control.

Practical Steps for Legal Professionals: Protecting Privilege and Privacy 🛠️

Given these risks, what should lawyers do? Abandoning AI entirely is neither realistic nor necessary. Instead, legal professionals must adopt a risk-management approach.

Conduct thorough due diligence before adopting any AI tool. Review terms of service, privacy policies, and data processing agreements in detail. Understand exactly what data the AI collects, how long it's retained, whether it's used for model training, who can access it, and what security measures protect it. If these answers aren't clear from public documentation, contact the vendor directly for written clarification.

Implement written AI policies for your firm or legal department. These policies should specify which AI tools are approved for use, what types of information can (and cannot) be input into AI systems, required safeguards such as data anonymization, client consent requirements, verification procedures for AI outputs, and training requirements for all staff. Document these policies and ensure all lawyers and staff understand and follow them.

Default to data minimization. Before inputting any information into an AI system, ask whether it's necessary. Can you accomplish the task without including client-identifying information? Many AI applications work effectively with anonymized or hypothetical scenarios that don't reveal actual client matters. When in doubt, err on the side of caution.
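The data-minimization step above can be partially automated. The sketch below is a minimal, hypothetical illustration of scrubbing obvious identifiers from a prompt before it leaves the firm; the patterns, labels, and sample text are all invented for this example, and a regex pass is nowhere near sufficient on its own (it will miss names, addresses, matter numbers, and much else), so treat it as a floor, not a safeguard.

```python
import re

# Illustrative patterns only; real redaction needs far broader coverage
# (client names, addresses, account and matter numbers, etc.).
PATTERNS = {
    "EMAIL": re.compile(r"[\w.+-]+@[\w-]+\.[\w.]+"),
    "PHONE": re.compile(r"\b\d{3}[-.]\d{3}[-.]\d{4}\b"),
    "SSN": re.compile(r"\b\d{3}-\d{2}-\d{4}\b"),
}

def redact(text: str) -> str:
    """Replace obvious identifiers with placeholder tokens before the
    text is sent to any system outside the firm's control."""
    for label, pattern in PATTERNS.items():
        text = pattern.sub(f"[{label}]", text)
    return text

prompt = ("Client Jane Doe (jane.doe@example.com, 555-867-5309, "
          "SSN 123-45-6789) disputes the invoice.")
print(redact(prompt))
# Note: "Jane Doe" still leaks through -- pattern matching alone
# cannot catch free-text names, which is why human review remains the rule.
```

Even with tooling like this in place, the safest default remains rewriting the prompt as an anonymized hypothetical rather than pasting matter text and hoping the filter catches everything.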

Obtain informed client consent when using AI for client matters, particularly when inputting any information related to the representation. This consent should be specific about what AI tools will be used, what information may be shared with those tools, what safeguards are in place, and what risks exist despite those safeguards. General consent buried in engagement agreements is likely insufficient.

Use secure, purpose-built legal AI tools rather than consumer applications. Legal-specific AI products are designed with confidentiality requirements in mind and typically offer stronger privacy protections. Even better, consider closed-system AI that operates entirely within your firm's infrastructure without sending data to external servers.

Never assume deletion means erasure. The OpenAI case proves that deleted data may not be truly gone. Treat any information entered into an AI system as potentially permanent, regardless of what the system's privacy settings claim.

Maintain privileged communication protocols. Remember that AI is not your attorney. Communications with AI systems are not protected by attorney-client privilege. Never use AI as a substitute for consulting with qualified colleagues or outside counsel on genuinely privileged matters.

Stay informed about evolving guidance. AI technology and the regulatory landscape are both changing rapidly. Regularly review updates from your state bar association, the ABA, and other professional organizations. Consider attending continuing legal education programs on AI ethics and technology competence.

Final thoughts: The Future of Privacy Rights in an AI World 🔮

The OpenAI preservation order represents a pivotal moment in the collision between AI innovation and privacy rights. It exposes uncomfortable truths about the nature of digital privacy in 2025: privacy policies are subject to override by legal process, deletion features provide psychological comfort rather than technical and legal certainty, and third-party service providers cannot fully protect user data from discovery obligations.

For legal professionals, these realities demand a fundamental reassessment of how AI tools fit into practice. The convenience and efficiency AI provides must be balanced against the sacred duty to protect client confidences and maintain the attorney-client privilege. This is not an abstract concern or distant possibility. It is happening now, in real courtrooms, with real consequences for lawyers and clients.

State bars and regulators are responding, but the guidance remains fragmented and evolving. Federal privacy legislation addressing AI has yet to materialize, leaving a patchwork of state laws with varying requirements. In this environment, legal professionals cannot wait for perfect clarity before taking action.

The responsibility falls on each lawyer to understand the tools they use, the risks those tools create, and the steps necessary to fulfill ethical obligations in this new technological landscape. Ignorance is not a defense. "I didn't know the AI was storing that information" will not excuse a confidentiality breach or privilege waiver.

As AI becomes increasingly embedded in legal practice, the profession must evolve its approach to privacy and confidentiality. The traditional frameworks remain sound—the attorney-client privilege, the duty of confidentiality, the requirement of competence—but their application requires new vigilance. Lawyers must become technology stewards as well as legal advisors, understanding not just what the law says, but how the tools they use might undermine their ability to protect it.

The OpenAI case will not be the last time courts grapple with AI data privacy. As generative AI proliferates and litigation continues, more preservation orders, discovery disputes, and privilege challenges are inevitable. Legal professionals who fail to address these issues proactively may find themselves explaining to clients, judges, or disciplinary authorities why they treated confidential information so carelessly.

Privacy in the AI age demands more than passive reliance on vendor promises. It requires active, informed engagement with the technology we use and honest assessment of the risks we create. For lawyers, whose professional identity rests on the foundation of client trust and confidentiality, nothing less will suffice. The court ruling has made one thing abundantly clear: when it comes to AI and privacy, what you don't know can definitely hurt you—and your clients. ⚠️

🎙️ Ep. 120: AI Game Changers for Law Firms - Stephen Embry on Legal Tech Adoption and Privacy Concerns 🤖⚖️

My next guest is Stephen Embry. Steve is a legal technology expert, blogger at TechLaw Crossroads, and contributor to Above the Law. A former mass tort defense litigator with 20 years of remote practice experience, Steve specializes in AI implementation for law firms and legal technology adoption challenges. With a master's degree in civil engineering and programming experience dating back to 1980, he brings unique technical insight to legal practice. Steve provides data-driven analysis on how AI is revolutionizing law firms while addressing critical privacy and security concerns for legal professionals. 💻

Join Stephen Embry and me as we discuss the following three questions and more! 🎯

  1. What do you think are the top three AI game-changer announcements from the 2025 ILTA Conference that will make the most impact for solo, small, and mid-size law firms?

  2. What are the top three security and privacy concerns lawyers should address when using AI?

  3. What are your top three hacks when it comes to using AI in legal?

In our conversation, we covered the following and more! 📝

  • [00:00:00] Episode Introduction & Guest Bio

  • [00:01:00] Steve's Current Tech Setup

  • [00:02:00] Apple Devices Discussion - MacBook Air M4, AirPods Pro

  • [00:06:00] Android Phone & Remote Practice Experience

  • [00:09:00] iPad Collection & MacBook Air Purchase Story

  • [00:12:00] Travel Tech & Backup Strategies

  • [00:15:00] Q1: AI Game Changers from ILTA 2025 Conference

  • [00:24:00] Billable Hour vs AI Adoption Challenges

  • [00:26:00] Competition & Client Demands for Technology

  • [00:35:00] Q2: AI Security & Privacy Concerns for Lawyers

  • [00:37:00] Discoverability & Privilege Waiver Issues

  • [00:44:00] Q3: Top AI Hacks for Legal Professionals

  • [00:46:00] Using AI for Document Construction & Rules Compliance

  • [00:50:00] Contact Information & Resources

Resources 📚

Connect with Stephen Embry

• Email: sembry@techlawcrossroads.com
• Blog: TechLaw Crossroads - https://techlawcrossroads.com
• Above the Law Contributions: https://abovethelaw.com
• LinkedIn: [Stephen Embry LinkedIn Profile]

Mentioned in the Episode

• ILTA (International Legal Technology Association) Conference 2025 - https://www.iltanet.org
• MacStock Conference - Chicago-area Apple technology conference
• Consumer Electronics Show (CES) - https://www.ces.tech
• Federal Rules of Civil Procedure - https://www.uscourts.gov/rules-policies/current-rules/federal-rules-civil-procedure
• Apple Event (October 9th) - Apple's product announcement events
• Gaylord Conference Center - Washington, DC area conference venue

Hardware Mentioned in the Conversation 🖥️

• MacBook Air M4 (13-inch) - https://www.apple.com/macbook-air/
• iPad Pro - https://www.apple.com/ipad-pro/
• iPad Air - https://www.apple.com/ipad-air/
• iPad Mini - https://www.apple.com/ipad-mini/
• iPhone 16 - https://www.apple.com/iphone-16/
• Apple Watch Ultra 2 - https://www.apple.com/apple-watch-ultra-2/
• AirPods Pro - https://www.apple.com/airpods-pro/
• Samsung Galaxy (Android phone) - https://www.samsung.com/us/mobile/phones/galaxy/
• Samsung Galaxy Fold 7 - https://www.samsung.com/global/galaxy/galaxy-z-fold7/

Software & Cloud Services Mentioned in the Conversation ☁️

• Apple Intelligence - https://www.apple.com/apple-intelligence/
• ChatGPT - https://chat.openai.com
• Claude (Anthropic) - https://claude.ai
• Brock AI - AI debate and argumentation tool
• NotebookLM (Google) - https://notebooklm.google.com
• Microsoft Word - https://www.microsoft.com/en-us/microsoft-365/word
• Dropbox - https://www.dropbox.com
• Backblaze - https://www.backblaze.com
• Synology - https://www.synology.com
• Whisper AI - https://openai.com/research/whisper

Don't forget to give The Tech-Savvy Lawyer.Page Podcast a Five-Star ⭐️ review on Apple Podcasts or wherever you get your podcast feeds! Your support helps us continue bringing you expert insights on legal technology.

Our next episode will be posted in about two weeks. If you have any ideas about a future episode, please contact Michael at michaeldj@techsavvylawyer.page 📧

🚀 Shout Out to Steve Embry: A Legal Tech Visionary Tackling AI's Billing Revolution!

Legal technology expert Steve Embry has once again hit the mark with his provocative and insightful article examining the collision between AI adoption and billable hour pressures in law firms. Writing for TechLaw Crossroads, Steve masterfully dissects the DeepL survey findings that reveal 96% of legal professionals are using AI tools, with 71% doing so without organizational approval. His analysis illuminates a critical truth that many in the profession are reluctant to acknowledge: the billable hour model is facing its most serious existential threat yet.

The AI Efficiency Paradox in Legal Practice ⚖️

Steve’s article brilliantly connects the dots between mounting billable hour pressures and the rise of shadow AI use in legal organizations. The DeepL study reveals that 35% of legal professionals frequently use unauthorized AI tools, primarily driven by pressure to deliver work faster. This finding aligns perfectly with research showing that AI-driven efficiencies are forcing law firms to reconsider traditional billing models. When associates can draft contracts 70% faster with AI assistance, the fundamental economics of legal work shift dramatically.

The legal profession finds itself caught in what experts call the "AI efficiency paradox". As generative AI tools become more sophisticated at automating legal research, document drafting, and analysis, the justification for billing clients based purely on time spent becomes increasingly problematic. This creates a perfect storm when combined with the intense pressure many firms place on associates to meet billable hour quotas - some firms now demanding 2,400 hours annually, with 2,000 being billable and collectible.

Shadow AI Use: A Symptom of Systemic Pressure 🔍

Steve's analysis goes beyond surface-level criticism to examine the root causes of unauthorized AI adoption. The DeepL survey data shows that unclear policies account for only 24% of shadow AI use, while pressure to deliver faster work represents 35% of the motivation. This finding supports Steve's central thesis that "the responsibility for hallucinations and inaccuracies is not just that of the lawyer. It's that of senior partners and clients who expect and demand AI use. They must recognize their accountability in creating demands and pressures to not do the time-consuming work to check cites".

This systemic pressure has created a dangerous environment where junior lawyers face impossible choices. They must choose between taking unbillable time to thoroughly verify AI outputs or risk submitting work with potential hallucinations to meet billing targets. Recent data shows that AI hallucinations have appeared in over 120 legal cases since mid-2023, with 58 occurring in 2025 alone. The financial consequences are real - one firm faced $31,100 in sanctions for relying on bogus AI research.

The Billable Hour's Reckoning 💰

How will lawyers handle AI's challenge to the billable hour in their practice of law?

Multiple industry observers now predict that AI adoption will accelerate the demise of traditional hourly billing. Research indicates that 67% of corporate legal departments and 55% of law firms expect AI-driven efficiencies to impact the prevalence of the billable hour significantly. The legal profession is witnessing a fundamental shift where "[t]he less time something takes, the more money a firm can earn" once alternative billing methods are adopted.

Forward-thinking firms are already adapting by implementing hybrid billing models that combine hourly rates for complex judgment calls with flat fees for AI-enhanced routine tasks. This transition requires firms to develop what experts call "AI-informed Alternative Fee Arrangements" that embed clear automation metrics into legal pricing.

The Path Forward: Embracing Responsible AI Integration 🎯

Steve’s article serves as a crucial wake-up call for legal organizations to move beyond sanctions-focused approaches toward comprehensive AI integration strategies. The solution requires acknowledgment from senior partners and clients that AI adoption must include adequate time for verification and quality control processes. This should also serve as a reminder for any attorney, from big firm to solo practitioner, to check their work before submitting it to a court, regulatory agency, or other authority. Several state bars and courts have begun requiring certification that AI-generated content has been reviewed for accuracy, recognizing that oversight cannot be an afterthought.

The most successful firms will be those that embrace AI while building robust verification protocols into their workflows. This means training lawyers to use AI competently, establishing clear policies for AI use, and most importantly, ensuring billing practices reflect the true value delivered rather than simply time spent. As one expert noted, "AI isn't the problem, poor process is".

Final Thoughts: Technology Strategy for Modern Legal Practice 📱

Are you ready to take your law practice to the next step with AI?

For legal professionals with limited to moderate technology skills, the key is starting with purpose-built legal AI tools rather than general-purpose solutions. Specialized legal research platforms that include retrieval-augmented generation (RAG) technology can significantly reduce hallucination risks while providing the efficiency gains clients expect. These tools ground AI responses in verified legal databases, offering the speed benefits of AI with enhanced accuracy.
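To make the RAG idea concrete, here is a deliberately toy sketch of the technique: retrieve verified source documents first, then force the model's prompt to be grounded in them rather than in the model's memory. The two-entry corpus and keyword-overlap scoring are illustrative stand-ins for a real legal research database and its search index, not a description of any vendor's actual product.

```python
# Toy corpus standing in for a verified legal database.
CORPUS = {
    "Mata v. Avianca (S.D.N.Y. 2023)":
        "Sanctions for submitting fabricated AI-generated citations.",
    "ABA Formal Opinion 512 (2024)":
        "Lawyers must reasonably understand the GenAI tools they use.",
}

def retrieve(query: str, k: int = 1) -> list[str]:
    """Rank documents by crude keyword overlap with the query.
    Real systems use vector embeddings, not word sets."""
    q = set(query.lower().split())
    scored = sorted(
        CORPUS.items(),
        key=lambda kv: len(q & set((kv[0] + " " + kv[1]).lower().split())),
        reverse=True,
    )
    return [f"{title}: {text}" for title, text in scored[:k]]

def build_prompt(question: str) -> str:
    """Assemble a prompt that instructs the model to answer only from
    the retrieved, citable sources -- the core of RAG's grounding."""
    sources = "\n".join(retrieve(question, k=2))
    return (
        "Answer using ONLY the sources below; cite them by name.\n"
        f"Sources:\n{sources}\n\nQuestion: {question}"
    )

print(build_prompt("What are the sanctions risks of fabricated citations?"))
```

The design point is the one the paragraph makes: because the model is told to answer from retrieved, verifiable documents, a hallucinated citation has nowhere to hide, which is why purpose-built legal tools lean on this architecture.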

The profession must also recognize that competent AI use requires ongoing education. Lawyers need not become AI experts, but they must develop "a reasonable understanding of the capabilities and limitations of the specific GAI technology" they employ. This includes understanding when human judgment must predominate and how to effectively verify AI-generated content.

Steve's insightful analysis reminds us that the legal profession's AI revolution cannot be solved through individual blame or simplistic rules. Instead, it requires systemic changes that address the underlying pressures driving risky AI use while embracing the transformative potential of these technologies. The firms that succeed will be those that view AI not as a threat to traditional billing but as an opportunity to deliver greater value to clients while building more sustainable and satisfying practices for their legal professionals. 🌟

🎙️ TSL Labs: Listen as Two AI-Generated Podcast Hosts Turn the June 30, 2025, TSL Editorial Into an Engaging Discussion for Busy Legal Professionals!

🎧 Can't find time to read lengthy legal tech editorials? We've got you covered.

As part of our Tech Savvy Lawyer Labs initiative, I've been experimenting with cutting-edge AI to make legal content more accessible. This bonus episode showcases how Google's NotebookLM can transform written editorials into engaging podcast discussions.

Our latest experiment takes the editorial "AI and Legal Research: The Existential Threat to Lexis, Westlaw, and Fastcase" and converts it into a compelling conversation between two AI hosts who discuss the content as if they've thoroughly analyzed the piece.

This Labs experiment demonstrates how AI can serve as a time-saving alternative for legal professionals who prefer audio learning or lack time for extensive reading. The AI hosts engage with the material authentically, providing insights and analysis that make complex legal tech topics accessible to practitioners at all technology skill levels.

🚀 Perfect for commutes, workouts, or multitasking—get the full editorial insights without the reading time.

Enjoy!

🎙️ Bonus Episode: TSL Labs’ NotebookLM Commentary on June 23, 2025, TSL Editorial!

Hey everyone, welcome to this bonus episode!

As you know, in this podcast we explore the future of law through engaging interviews with lawyers, judges, and legal tech professionals on the cutting edge of legal innovation. As part of our Labs initiative, I am experimenting with AI-generated discussions—this episode features two Google NotebookLM hosts who dive deep into our latest Editorial: "Lawyers, Generative AI, and the Right to Privacy: Navigating Ethics, Client Confidentiality, and Public Data in the Digital Age." If you’re a busy legal professional, join us for an insightful, AI-powered conversation that unpacks the editorial’s key themes, ethical challenges, and practical strategies for safeguarding privacy in the digital era.

Enjoy!

In our conversation, the "Bots" covered the following:

00:00 Introduction to the Bonus Episode

01:01 Exploring Generative AI in Law

01:24 Ethical Challenges and Client Confidentiality

01:42 Deep Dive into the Editorial

09:31 Practical Strategies for Lawyers

13:03 Conclusion and Final Thoughts

Resources:

Google NotebookLM - https://notebooklm.google/