Word of the Week: "Constitutional AI" for Lawyers - What It Is, Why It Matters for ABA Rules, and How Solo & Small Firms Should Use It!

Constitutional AI’s ‘helpful, harmless, honest’ standard is a solid starting point for lawyers evaluating AI platforms.

The term “Constitutional AI” appeared this week in a Tech Savvy Lawyer post about the MTC/PornHub breach as a cybersecurity wake‑up call for lawyers 🚨. That article used it to highlight how AI systems (like those law firms now rely on) must be built and governed by clear, ethical rules — much like a constitution — to protect client data and uphold professional duties. This week’s Word of the Week unpacks what Constitutional AI really means and explains why it matters deeply for solo, small, and mid‑size law firms.

🔍 What is Constitutional AI?

Constitutional AI is a method for training large language models so they follow a written set of high‑level principles, called a “constitution” 📜. Those principles are designed to make the AI helpful, honest, and harmless in its responses.

As Claude AI from Anthropic explains:
“Constitutional AI refers to a set of techniques developed by researchers at Anthropic to align AI systems like myself with human values and make us helpful, harmless, and honest. The key ideas behind Constitutional AI are aligning an AI’s behavior with a ‘constitution’ defined by human principles, using techniques like self‑supervision and adversarial training, developing constrained optimization techniques, and designing training data and model architecture to encode beneficial behaviors.” — Claude AI, Anthropic (July 7, 2023)

In practice, Constitutional AI uses the model itself to critique and revise its own outputs against that constitution. For example, the model might be told: “Do not generate illegal, dangerous, or unethical content,” “Be honest about what you don’t know,” and “Protect user privacy.” It then evaluates its own answers against those rules before giving a final response.

Think of it like a junior associate who’s been given a firm’s internal ethics manual and told: “Before you send that memo, check it against these rules.” Constitutional AI does that same kind of self‑checking, but at machine speed.
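
For the technically curious, here’s a minimal sketch of that critique‑and‑revise loop in Python. The model_call function and the constitution entries are illustrative stand‑ins, not Anthropic’s actual API or principles:

```python
# Illustrative sketch of a Constitutional AI critique-and-revise loop.
# model_call() is a hypothetical stand-in for a real LLM API.

CONSTITUTION = [
    "Do not generate illegal, dangerous, or unethical content.",
    "Be honest about what you don't know.",
    "Protect user privacy.",
]

def model_call(prompt: str) -> str:
    """Hypothetical LLM call; a real system would invoke an actual model here."""
    raise NotImplementedError("Plug in a real model API.")

def constitutional_response(user_prompt: str, max_rounds: int = 2) -> str:
    # 1. Draft an initial answer.
    answer = model_call(user_prompt)
    for _ in range(max_rounds):
        # 2. Ask the model to critique its own draft against each principle.
        critique = model_call(
            "Critique this answer against these principles:\n"
            + "\n".join(CONSTITUTION)
            + "\n\nAnswer:\n" + answer
        )
        # 3. If the critique finds no violations, stop; otherwise revise.
        if "no violations" in critique.lower():
            break
        answer = model_call(
            "Revise the answer to address this critique:\n" + critique
            + "\n\nAnswer:\n" + answer
        )
    return answer
```

In Anthropic’s actual training process, this critique‑and‑revision happens during fine‑tuning rather than at answer time; the loop above simply makes the idea tangible.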

🤝 How Constitutional AI Relates to Lawyers

For lawyers, Constitutional AI is important because it directly shapes how AI tools behave when handling legal work 📚. Many legal AI tools are built on models that use Constitutional AI techniques, so understanding this concept helps lawyers:

  • Judge whether an AI assistant is likely to hallucinate, leak sensitive info, or give ethically problematic advice.

  • Choose tools whose underlying AI is designed to be more transparent, less biased, and more aligned with professional norms.

  • Better supervise AI use in the firm, which is a core ethical duty under the ABA Model Rules.

Solo and small firms, in particular, often rely on off‑the‑shelf AI tools (like chatbots or document assistants). Knowing that a tool is built on Constitutional AI principles can give more confidence that it’s designed to avoid harmful outputs and respect confidentiality.

⚖️ Why It Matters for ABA Model Rules

For solo and small firms, asking whether an AI platform aligns with Constitutional AI’s standards is a practical first step in choosing a trustworthy tool.

The ABA’s Formal Opinion 512 on generative AI makes clear that lawyers remain responsible for all work done with AI, even if an AI tool helped draft it 📝. Constitutional AI is relevant here because it’s one way that AI developers try to build in ethical guardrails that align with lawyers' obligations.

Key connections to the Model Rules:

  • Rule 1.1 (Competence): Lawyers must understand the benefits and risks of the technology they use. Knowing that a tool uses Constitutional AI helps assess whether it’s reasonably reliable for tasks like research, drafting, or summarizing.

  • Rule 1.6 (Confidentiality): Constitutional AI models are designed to refuse to disclose sensitive information and to avoid memorizing or leaking private data. This supports the lawyer’s duty to make “reasonable efforts” to protect client confidences.

  • Rule 5.1 / 5.3 (Supervision): Managing partners and supervising attorneys must ensure that AI tools used by staff are consistent with ethical rules. A tool built on Constitutional AI principles is more likely to support, rather than undermine, those supervisory duties.

  • Rule 3.3 (Candor to the Tribunal): Constitutional AI models are trained to admit uncertainty and avoid fabricating facts or cases, which helps reduce the risk of submitting false or misleading information to a court.

In short, Constitutional AI doesn’t relieve lawyers of their ethical duties, but it can make AI tools safer and more trustworthy when used under proper supervision.

🛡️ The “Helpful, Harmless, and Honest” Principle

The three pillars of Constitutional AI — helpful, harmless, and honest — are especially relevant for lawyers:

  • Helpful: The AI should provide useful, relevant information that advances the client’s matter, without unnecessary or irrelevant content.

  • Harmless: The AI should avoid generating illegal, dangerous, or unethical content, and should respect privacy and confidentiality.

  • Honest: The AI should admit when it doesn’t know something, avoid fabricating facts or cases, and not misrepresent its capabilities.

For law firms, this “helpful, harmless, and honest” standard is a useful mental checklist when using AI:

  • Is this AI output actually helpful to the client’s case?

  • Could this output harm the client (e.g., by leaking confidential info or suggesting an unethical strategy)?

  • Is the AI being honest (e.g., not hallucinating case law or pretending to know facts it can’t know)?

If the answer to any of those questions is “no,” the AI output should not be used without significant human review and correction.

🛠️ Practical Takeaways for Law Firms

For solo, small, and mid‑size firms, here’s how to put this into practice:

Lawyers need to screen AI tools and ensure they are aligned with ABA Model Rules.

  1. Know your tools. When evaluating a legal AI product, ask whether it’s built on a Constitutional AI–style model (e.g., Anthropic’s Claude). That signals the model was designed with explicit ethical constraints.

  2. Treat AI as a supervised assistant. Never let AI make final decisions or file work without a lawyer’s review. Constitutional AI reduces risk, but it doesn’t eliminate the need for human judgment.

  3. Train your team. Make sure everyone in the firm understands that AI outputs must be checked for accuracy, confidentiality, and ethical compliance — especially when using third‑party tools.

  4. Update your engagement letters and policies. Disclose to clients when AI is used in their matters, and explain how the firm supervises it. This supports transparency under Rule 1.4 and Rule 1.6.

  5. Focus on “helpful, honest, harmless.” Use Constitutional AI as a mental checklist: Is this AI being helpful to the client? Is it honest about its limits? Is it harmless (no bias, no privacy leaks)? If not, don’t rely on it.

🎙️📘 Quick reminder: The Lawyer’s Guide to Podcasting releases NEXT WEEK!

Inside title page of The Lawyer’s Guide to Podcasting, releasing January 12, 2026.

If you want a podcast that sounds professional without turning your week into a production project, this book is built for you. It’s practical. It’s workflow-first. It keeps ethics and confidentiality in view. 🔐⚖️

✅ Inside you’ll learn:

  • How to choose a podcast format that fits your goals 🎯

  • A simple, reliable setup that sounds credible 🎤

  • Recording habits that reduce editing time ⏱️

  • Repurposing steps so one episode powers your content plan ♻️

📩 Want the release link the moment it’s live? Email Admin@TheTechSavvyLawyer.Page with subject “Book Link.” I’ll send it on launch day. 🚀

ANNOUNCEMENT (BOOK RELEASE): The Lawyer’s Guide to Podcasting: The Simple, Ethics-Aware Playbook to Launch a Professional Podcast (Release: mid-January 2026)

Anticipated release is mid-January 2026.

🎙️📘 Podcasting is still one of the fastest ways to build trust. It works for lawyers, legal professionals, and any expert who needs to explain complex topics in plain language.

On January 12, 2026, I’m releasing The Lawyer’s Guide to Podcasting. This book is designed for busy professionals who want a podcast that sounds credible, protects confidentiality, and fits into a real workflow. No studio required. No tech overwhelm.

✅ Inside the book, you’ll learn:

  • How to pick a podcast format that matches your goals 🎯

  • The “minimum viable setup” that sounds professional 🎤

  • Recording workflows that reduce editing time ⏱️

  • Practical ethics and risk habits for public content 🔐

  • Repurposing steps so one episode becomes a week of marketing ♻️

📩 Get the release link: Email Admin@TheTechSavvyLawyer.Page with the subject line “Podcasting Book Link” and I’ll send the link as soon as the book is released. 📩🎙️

MTC (Bonus): National Court Technology Rules: Finding Balance Between Guidance and Flexibility ⚖️

Standardizing Tech Guidelines in the Legal System

Lawyers and their staff need to know the standard and local rules of AI use in the courtroom; their licenses could depend on it.

The legal profession stands at a critical juncture where technological capability has far outpaced judicial guidance. Nicole Black's recent commentary on the fragmented approach to technology regulation in our courts identifies a genuine problem—one that demands serious consideration from both proponents of modernization and cautious skeptics alike.

The core tension is understandable. Courts face legitimate concerns about technology misuse. The LinkedIn juror research incident in Judge Orrick's courtroom illustrates real risks: a consultant unknowingly violated a standing order, resulting in a $10,000 sanction despite the attorney's good-faith disclosure and remedial efforts. These aren't theoretical concerns—they reflect actual ethical boundaries that protect litigants and preserve judicial integrity. Yet the response to these concerns has created its own problems.

The current patchwork system places practicing attorneys in an impossible position. A lawyer handling cases across multiple federal districts cannot reasonably track the varying restrictions on artificial intelligence disclosure, social media evidence protocols, and digital research methodologies. When the safe harbor is simply avoiding technology altogether, the profession loses genuine opportunities to enhance accuracy and efficiency. Generative AI's citation hallucinations justify judicial scrutiny, but the ad hoc response by individual judges—ranging from simple guidance to outright bans—creates unpredictability that chills responsible innovation.

Should there be an international standard for AI use in the courtroom?

There are legitimate reasons to resist uniform national rules. Local courts understand their communities and case management needs better than distant regulatory bodies. A one-size-fits-all approach might impose burdensome requirements on rural jurisdictions with fewer tech-savvy practitioners. Furthermore, rapid technological evolution could render national rules obsolete within months, whereas individual judges retain flexibility to respond quickly to emerging problems.

Conversely, the current decentralized approach creates serious friction. The 2006 amendments to the Federal Rules of Civil Procedure for electronically stored information succeeded partly because they established predictability across jurisdictions. Lawyers knew what preservation obligations applied regardless of venue. That uniformity enabled the profession to invest in training, software, and processes. Today's lawyers lack that certainty. Practitioners must maintain lists tracking individual judges' standing orders, and smaller firms simply cannot sustain this administrative burden.

The answer likely lies between extremes. Rather than comprehensive national legislation, the profession would benefit from model standards developed collaboratively by the Federal Judicial Conference, state supreme courts, and bar associations. These guidelines could allow reasonable judicial discretion while establishing baseline expectations—defining when AI disclosure is mandatory, clarifying which social media research constitutes impermissible contact, and specifying preservation protocols that protect evidence without paralyzing litigation.

Such an approach acknowledges both legitimate judicial concerns and legitimate professional needs. It recognizes that judges require authority to protect courtroom procedures while recognizing that lawyers require predictability to serve clients effectively.

I basically agree with Nicole: The question is not whether courts should govern technology use. They must. The question is whether they govern wisely—with sufficient uniformity to enable compliance, sufficient flexibility to address local concerns, and sufficient clarity to encourage rather than discourage responsible innovation.

TSL Labs 🧪Bonus: 🎙️ From Cyber Compliance to Cyber Dominance: What VA's AI Revolution Means for Government Cybersecurity, Legal Ethics, and ABA Model Rule Compliance!

In this TSL Labs bonus episode, we examine this week’s editorial on how the Department of Veterans Affairs is leading a historic transformation from traditional compliance frameworks to a dynamic, AI-driven approach called "cyber dominance." This conversation unpacks what this seismic shift means for legal professionals across all practice areas—from procurement and contract law to privacy, FOIA, and litigation. Whether you're advising government agencies, representing contractors, or handling cases where data security matters, this discussion provides essential insights into how continuous monitoring, zero trust architecture, and AI-driven threat detection are redefining professional competence under ABA Model Rule 1.1. 💻⚖️🤖

Join our AI hosts and me as we discuss the following three questions and more!

  1. How has federal cybersecurity evolved from the compliance era to the cyber dominance paradigm? 🔒

  2. What are the three technical pillars—continuous monitoring, zero trust architecture, and AI-driven detection—and how do they interconnect? 🛡️

  3. What professional liability and ethical obligations do lawyers now face under ABA Model Rule 1.1 regarding technology competence? ⚖️

In our conversation, we cover the following:

  • [00:00:00] - Introduction: TSL Labs Bonus Podcast on VA's AI Revolution 🎯

  • [00:01:00] - Introduction to Federal Cybersecurity: The End of the Compliance Era 📋

  • [00:02:00] - Legal Implications and Professional Liability Under ABA Model Rules ⚖️

  • [00:03:00] - From Compliance to Continuous Monitoring: Understanding the Static Security Model 🔄

  • [00:04:00] - The False Comfort of Compliance-Only Approaches 🚨

  • [00:05:00] - The Shift to Cyber Dominance: Three Integrated Technical Pillars 💪

  • [00:06:00] - Zero Trust Architecture (ZTA) Explained: Verify Everything, Trust Nothing 🔐

  • [00:07:00] - AI-Driven Detection and Legal Challenges: Professional Competence Under Model Rule 1.1 🤖

  • [00:08:00] - The New Legal Questions: Real-Time Risk vs. Static Compliance 📊

  • [00:09:00] - Evolving Compliance: From Paper Checks to Dynamic Evidence 📈

  • [00:10:00] - Cybersecurity as Operational Discipline: DevSecOps and Security by Design 🔧

  • [00:11:00] - Litigation Risks: Discovery, Red Teaming, and Continuous Monitoring Data ⚠️

  • [00:12:00] - Cyber Governance with AI: Algorithmic Bias and Explainability 🧠

  • [00:13:00] - Synthesis and Future Outlook: Law Must Lead, Not Chase Technology 🚀

  • [00:14:00] - The Ultimate Question: Is Your Advice Ready for Real-Time Risk Management? 💡

  • [00:15:00] - Conclusion and Resources 📚

Resources

Mentioned in the Episode

Software & Cloud Services Mentioned in the Conversation

  • AI-Driven Detection Systems - Automated threat detection and response platforms

  • Automated Compliance Platforms - Dynamic evidence generation systems

  • Continuous Monitoring Systems - Real-time security assessment platforms

  • DevSecOps Tools - Automated security testing in software development pipelines

  • Firewalls - Network security hardware devices

  • Google Notebook AI - https://notebooklm.google.com/

  • Penetration Testing Software - Security vulnerability assessment tools

  • Zero Trust Architecture (ZTA) Solutions - Identity and access verification systems

MTC: From Cyber Compliance to Cyber Dominance: What VA’s AI Revolution Means for Government Cybersecurity, Legal Ethics, and ABA Model Rule Compliance 💻⚖️🤖

In the age of cyber dominance, “I did not understand the technology” is increasingly unlikely to serve as a safe harbor. 🚨 🤖 👩🏻‍💼👨‍💼

Government technology is in the middle of a historic shift. The Department of Veterans Affairs (VA) stands at the center of this transformation, moving from a check‑the‑box cybersecurity culture to a model of “cyber dominance” that fuses artificial intelligence (AI), zero trust architecture (a security model that assumes no user or device is trusted by default, even inside the network), and continuous risk management. 🔐

For lawyers who touch government work in any way—inside agencies, representing contractors, handling whistleblowers, litigating Freedom of Information Act (FOIA) or privacy issues, or advising regulated entities—this is not just an IT story. It is a law license story. Under the American Bar Association (ABA) Model Rules, failing to grasp core cyber and AI governance concepts can now translate into ethical risk and potential disciplinary exposure. ⚠️

Resources such as The Tech-Savvy Lawyer.Page blog and podcast are no longer “nice to have.” They are becoming essential continuing education for lawyers who want to stay competent in practice, protect their clients, and safeguard their own professional standing. 🧠🎧

Where Government Agency Technology Has Been: The Compliance Era 🗂️

For decades, many federal agencies lived in a world dominated by static compliance frameworks. Security often meant passing audits and meeting minimum requirements, including:

  • Annual or periodic Authority to Operate (ATO, the formal approval for a system to run in a production environment based on security review) exercises

  • A focus on the Federal Information Security Modernization Act (FISMA) and National Institute of Standards and Technology (NIST) security control checklists

  • Point‑in‑time penetration tests

  • Voluminous documentation, thin on real‑time risk

The VA was no exception. Like many agencies, it grappled with large legacy systems, fragmented data, and a culture in which “security” was a paperwork event, not an operational discipline. 🧾

In that world, lawyers often saw cybersecurity as a box to tick in contracts, privacy impact assessments, and procurement documentation. The legal lens focused on:

  • Whether the required clauses were in place

  • Whether a particular system had its ATO

  • Whether mandatory training was completed

The result: the law frequently chased the technology instead of shaping it.

Where Government Technology Is Going: Cyber Dominance at the VA 🚀

The VA is now in the midst of what its leadership calls a “cybersecurity awakening” and a shift toward “cyber dominance”. The message is clear: compliance is not enough, and in many ways, it can be dangerously misleading if it creates a false sense of security.

Key elements of this new direction include:

  • Continuous monitoring instead of purely static certification

  • Zero trust architecture (a security model that assumes no user, device, or system is trusted by default, and that every access request must be verified) as a design requirement, not an afterthought (see the sketch just after this list)

  • AI‑driven threat detection and anomaly spotting at scale

  • Cybersecurity integrated into mission operations, not kept in a separate silo

  • Real‑time incident response and resilience, rather than after‑the‑fact blame
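
To make “verify everything, trust nothing” concrete, here’s a minimal Python sketch of a zero‑trust access decision. Every name, check, and policy value is a hypothetical illustration, not any agency’s actual implementation:

```python
# Illustrative zero-trust access check: every request is verified,
# regardless of where it originates. All names and rules are hypothetical.

from dataclasses import dataclass

@dataclass
class AccessRequest:
    user_id: str
    mfa_verified: bool        # did the user pass multi-factor authentication?
    device_compliant: bool    # is the device patched and managed?
    resource: str             # what is being requested?
    role: str                 # the user's assigned role

# Least-privilege policy: each role sees only what it needs.
ROLE_PERMISSIONS = {
    "claims_examiner": {"claims_db"},
    "records_clerk": {"records_archive"},
}

def authorize(req: AccessRequest) -> bool:
    # Zero trust: no implicit trust based on network location.
    if not req.mfa_verified:
        return False                   # identity must be proven every time
    if not req.device_compliant:
        return False                   # untrusted devices are denied
    allowed = ROLE_PERMISSIONS.get(req.role, set())
    return req.resource in allowed     # least privilege: explicit grants only

# Example: a verified examiner on a managed device can reach the claims DB.
req = AccessRequest(user_id="u123", mfa_verified=True, device_compliant=True,
                    resource="claims_db", role="claims_examiner")
assert authorize(req)
```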

“Cyber dominance” reframes cybersecurity as a dynamic contest with adversaries. Agencies must assume compromise, hunt threats proactively, and adapt in near real time. That shift depends heavily on data engineering, automation, and AI models that can process signals far beyond human capacity. 🤖

For both government and nongovernment lawyers, this means that the facts on the ground—what systems actually do, how they are monitored, and how decisions are made—are changing fast. Advocacy and counseling that rely on outdated assumptions about “IT systems” will be incomplete at best and unethical at worst.

The Future: Cybersecurity Compliance, Cybersecurity, and Cybergovernance with AI 🔐🌐

The future of government technology involves an intricate blend of compliance, operational security, and AI governance. Each element increasingly intersects with legal obligations and the ABA Model Rules.

1. Cybersecurity Compliance: From Static to Dynamic ⚙️

Traditional compliance is not disappearing. The FISMA, NIST standards, the Federal Risk and Authorization Management Program (FedRAMP), the Health Insurance Portability and Accountability Act (HIPAA), and other frameworks still govern federal systems and contractor environments.

But the definition of compliance is evolving:

  • Continuous compliance: Automated tools generate near real‑time evidence of security posture instead of relying only on annual snapshots (see the sketch just after this list).

  • Risk‑based prioritization: Not every control is equal; agencies must show how they prioritize high‑impact cyber risks.

  • Outcome‑focused oversight: Auditors and inspectors general care less about checklists and more about measurable risk reduction and resilience.
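
Here’s a rough sketch of what “continuous compliance” can look like in code: a loop that polls control checks and emits timestamped evidence records instead of an annual snapshot. The checks below are placeholders for real security telemetry:

```python
# Minimal sketch of continuous-compliance evidence generation.
# The control checks are placeholders for real security telemetry.

import json
import time
from datetime import datetime, timezone

def check_encryption_enabled() -> bool:
    return True   # placeholder: query the actual system here

def check_mfa_enforced() -> bool:
    return True   # placeholder

CONTROLS = {
    "encryption_at_rest": check_encryption_enabled,
    "mfa_enforced": check_mfa_enforced,
}

def snapshot() -> dict:
    """Produce one timestamped evidence record instead of an annual audit."""
    return {
        "timestamp": datetime.now(timezone.utc).isoformat(),
        "results": {name: check() for name, check in CONTROLS.items()},
    }

if __name__ == "__main__":
    # Emit evidence continuously (here: hourly, until stopped),
    # rather than once a year.
    while True:
        print(json.dumps(snapshot()))
        time.sleep(3600)
```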

Lawyers must understand that “we’re compliant” will no longer end the conversation. Decision‑makers will ask:

  • What does real‑time monitoring show about actual risk?

  • How quickly can the VA or a contractor detect and contain an intrusion?

  • How are AI tools verifying, logging, and explaining security‑related decisions?

2. Cybersecurity as an Operational Discipline 🛡️

The VA’s push toward cyber dominance relies on building security into daily operations, not layering it on top. That includes:

  • Secure‑by‑design procurement and contract terms, which require modern controls and realistic reporting duties

  • DevSecOps (development, security, and operations) pipelines that embed automated security testing and code scanning into everyday software development

  • Data segmentation and least‑privilege access across systems, so users and services only see what they truly need

  • Routine red‑teaming (simulated attacks by ethical hackers to test defenses) and table‑top exercises (structured discussion‑based simulations of incidents to test response plans)

For government and nongovernment lawyers, this raises important questions:

  • Are contracts, regulations, and interagency agreements aligned with zero trust principles (treating every access request as untrusted until verified)?

  • Do incident response plans meet regulatory and contractual notification timelines, including state and federal breach laws?

  • Are representations to courts, oversight bodies, and counterparties accurate in light of actual cyber capabilities and known limitations?

3. Cybergovernance with AI: The New Frontier 🌐🤖

Lawyers can no longer sit idly by as their cyber-ethics responsibilities change!

AI will increasingly shape how agencies, including the VA, manage cyber risk:

  • Machine learning models will flag suspicious behavior or anomalous network traffic faster than humans alone.

  • Generative AI tools will help triage incidents, search legal and policy documents, and assist with internal investigations.

  • Decision‑support systems may influence resource allocation, benefit determinations, or enforcement priorities.

These systems raise clear legal and ethical issues:

  • Transparency and explainability: Can lawyers understand and, if necessary, challenge the logic behind AI‑assisted or AI‑driven decisions?

  • Bias and fairness: Do algorithms create discriminatory impacts on veterans, contractors, or employees, even if unintentional?

  • Data governance: Is sensitive, confidential, or privileged information being exposed to third‑party AI providers or trained into their models?

Resources like The Tech-Savvy Lawyer.Page blog and podcast often highlight practical workflows for lawyers using AI tools safely, along with concrete questions to ask vendors and IT teams. Those insights are particularly valuable as agencies and law practices alike experiment with AI for document review, legal research, and compliance tracking. 💡📲

What Lawyers in Government and Nongovernment Need to Know 🏛️⚖️

Lawyers inside agencies such as the VA now sit at the intersection of mission, technology, and ethics. Under ABA Model Rule 1.1 (Competence) and its comment on technological competence, agency counsel must acquire and maintain a basic understanding of relevant technology that affects client representation.

For government lawyers and nongovernment lawyers who advise, contract with, or litigate against agencies such as the VA, technological competence now has a common core. It requires enough understanding of system architecture, cybersecurity practices, and AI‑driven tools to ask the right questions, spot red flags, and give legally sound, ethics‑compliant advice on how those systems affect veterans, agencies, contractors, and the public. ⚖️💻

In practice, for lawyers on either side of the table, this common core includes:

  • Understanding the basic architecture and risk profile of key systems (for example, benefits, health data, identity, and claims platforms), so you can evaluate how failures affect legal rights and obligations. 🧠

  • Being able to ask informed questions about zero trust architecture, encryption, system logging, and AI tools used by the agency or contractor.

  • Knowing the relevant incident response plans, data breach notification obligations, and coordination pathways with regulators and law enforcement, whether you are inside the agency or across the table. 🚨

  • Ensuring that policies, regulations, contracts, and public statements about cybersecurity and AI reflect current technical realities, rather than outdated assumptions that could mislead courts, oversight bodies, or the public.

Model Rules 1.6 (Confidentiality of Information) and 1.13 (Organization as Client) are especially important. Government lawyers must:

  • Guard sensitive data, including classified, personal, and privileged information, against unauthorized disclosure or misuse.

  • Advise the “client” (the agency) when cyber or AI practices present significant legal risk, even if those practices are popular or politically convenient.

If a lawyer signs off on policies or representations about cybersecurity that they know—or should know—are materially misleading, that can implicate Rule 3.3 (Candor Toward the Tribunal) and Rule 8.4 (Misconduct). The shift to cyber dominance means that “we passed the audit” will no longer excuse ignoring operational defects that put veterans or the public at risk. 🚨

What Lawyers Outside Government Need to Know 🏢⚖️

Lawyers representing contractors, vendors, whistleblowers, advocacy groups, or regulated entities cannot ignore these changes at the VA and other agencies. Their clients operate in the same new environment of continuous oversight and AI‑informed risk management.

Key responsibilities for nongovernmental lawyers include:

  • Contract counseling: Understanding cybersecurity clauses, incident response requirements, AI‑related representations, and flow‑down obligations in government contracts.

  • Regulatory compliance: Navigating overlapping regimes (for example, federal supply chain rules, state data breach statutes, HIPAA in health contexts, and sector‑specific regulations).

  • Litigation strategy: Incorporating real‑time cyber telemetry and AI logs into discovery, privilege analyses, and evidentiary strategies.

  • Advising on AI tools: Ensuring that client use of generative AI in government‑related work does not compromise confidential information or violate procurement, export control, or data localization rules.

Under Model Rule 1.1 (Competence), outside counsel must be sufficiently tech‑savvy to spot issues and know when to bring in specialized expertise. Ignoring cyber and AI governance concerns can:

  • Lead to inadequate or misleading advice.

  • Misstate risk in negotiations, disclosures, or regulatory filings.

  • Expose clients to enforcement actions, civil liability, or debarment.

  • Expose lawyers to malpractice claims and disciplinary complaints.

ABA Model Rules: How Cyber and AI Now Touch Your License 🧾⚖️

Several American Bar Association (ABA) Model Rules are directly implicated by the VA’s evolution from compliance to cyber dominance and by the broader adoption of artificial intelligence (AI) in government operations:

  • Rule 1.1 – Competence

    • Comment 8 recognizes a duty of technological competence.

    • Lawyers must understand enough about cyber risk and AI systems to represent clients prudently.

  • Rule 1.6 – Confidentiality of Information

    • Lawyers must take reasonable measures to safeguard client information, including in cloud environments and AI‑enabled workflows.

    • Uploading sensitive or privileged content into consumer‑grade AI tools without safeguards can violate this duty.

  • Rule 1.4 – Communication

    • Clients should be informed—in clear, non‑technical terms—about significant cyber and AI risks that may affect their matters.

  • Rules 5.1 and 5.3 – Responsibilities of Partners, Managers, and Supervisory Lawyers; Responsibilities Regarding Nonlawyer Assistance

    • Law firm leaders must ensure that policies, training, vendor selection, and supervision support secure, ethical use of technology and AI by lawyers and staff.

  • Rule 1.13 – Organization as Client

    • Government and corporate counsel must advise leadership when cyber or AI governance failures pose substantial legal or regulatory risk.

  • Rules 3.3, 3.4, and 8.4 – Candor, Fairness, and Misconduct

    • Misrepresenting cyber posture, ignoring known vulnerabilities, or manipulating AI‑generated evidence can rise to ethical violations and professional misconduct.

In the age of cyber dominance, “I did not understand the technology” is increasingly unlikely to serve as a safe harbor. Judges, regulators, and disciplinary authorities expect lawyers to engage these issues competently.

Practical Next Steps for Lawyers: Moving from Passive to Proactive 🧭💼

To meet this moment, lawyers—both in government and outside—should:

  • Learn the language of modern cybersecurity:

    • Zero trust (a model that treats every access request as untrusted until verified)

    • Endpoint detection and response (EDR, tools that continuously monitor and respond to threats on endpoints such as laptops, servers, and mobile devices)

    • Security Information and Event Management (SIEM, systems that collect and analyze security logs from across the network)

    • Security Orchestration, Automation, and Response (SOAR, tools that automate and coordinate security workflows and responses)

    • Encryption at rest and in transit (protecting data when it is stored and when it moves across networks)

    • Multi‑factor authentication (MFA, requiring more than one factor—such as password plus a code—to log in)

  • Understand AI’s role in the client’s environment: what tools are used, where data goes, how outputs are checked, and how decisions are logged.

  • Review incident response plans and breach notification workflows with an eye on legal timelines, cross‑jurisdictional obligations, and contractual requirements.

  • Update engagement letters, privacy notices, and internal policies to reflect real‑world use of cloud services and AI tools.

  • Invest in continuous learning through technology‑forward legal resources, including The Tech-Savvy Lawyer.Page blog and podcast, which translate evolving tech into practical law practice strategies. 💡

Final Thoughts: The VA’s journey from compliance to cyber dominance is more than an agency story. It is a case study in how technology, law, and ethics converge. Lawyers who embrace this reality will better protect their clients, their institutions, and their licenses. Those who do not will risk being left behind—by adversaries, by regulators, and by their own professional standards. 🚀🔐⚖️

Editor’s Note: I used the VA as my “example” because Veterans mean a lot to me. I have been a Veterans Disability Benefits Advocate for nearly two decades. Their health and welfare should not be harmed by faulty tech compliance. 🇺🇸⚖️

MTC

🎙️TSL Labs! MTC: The Hidden AI Crisis in Legal Practice: Why Lawyers Must Unmask Embedded Intelligence Before It's Too Late!

📌 Too Busy to Read This Week's Editorial?

Join us for a professional deep dive into essential tech strategies for AI compliance in your legal practice. 🎙️ This AI-powered discussion unpacks the November 17, 2025, editorial, MTC: The Hidden AI Crisis in Legal Practice: Why Lawyers Must Unmask Embedded Intelligence Before It's Too Late! with actionable intelligence on hidden AI detection, confidentiality protocols, ethics compliance frameworks, and risk mitigation strategies. Artificial intelligence has been silently operating inside your most trusted legal software for years, and under ABA Formal Opinion 512, you bear full responsibility for all AI use, whether you knowingly activated it or it came as a default software update. The conversation makes complex technical concepts accessible to lawyers with varying levels of tech expertise—from tech-hesitant solo practitioners to advanced users—so you'll walk away with immediate, actionable steps to protect your practice, your clients, and your professional reputation.

In Our Conversation, We Cover the Following

00:00:00 - Introduction: Overview of TSL Labs initiative and the AI-generated discussion format

00:01:00 - The Silent Compliance Crisis: How AI has been operating invisibly in your software for years

00:02:00 - Core Conflict: Understanding why helpful tools simultaneously create ethical threats to attorney-client privilege

00:03:00 - Document Creation Vulnerabilities: Microsoft Word Co-pilot and Grammarly's hidden data processing

00:04:00 - Communication Tools Risks: Zoom AI Companion and the cautionary Otter.ai incident

00:05:00 - Research Platform Dangers: Westlaw and Lexis+ AI hallucination rates between 17% and 33%

00:06:00 - ABA Formal Opinion 512: Full lawyer responsibility for AI use regardless of awareness

00:07:00 - Model Rule 1.6 Analysis: Confidentiality breaches through third-party AI systems

00:08:00 - Model Rule 5.3 Requirements: Supervising AI tools with the same diligence as human assistants

00:09:00 - Five-Step Compliance Framework: Technology audits and vendor agreement evaluation

00:10:00 - Firm Policies and Client Consent: Establishing protocols and securing informed consent

00:11:00 - The Verification Imperative: Lessons from the Mata v. Avianca sanctions case

00:12:00 - Billing Considerations: Navigating hourly versus value-based fee models with AI

00:13:00 - Professional Development: Why tool learning time is non-billable competence maintenance

00:14:00 - Ongoing Compliance: The necessity of quarterly reviews as platforms rapidly evolve

00:15:00 - Closing Remarks: Resources and call to action for tech-savvy innovation


📖 Word of the Week: The Meaning of “Data Governance” and the Modern Law Practice - Your Essential Guide for 2025

Understanding Data Governance: A Lawyer's Blueprint for Protecting Client Information and Meeting Ethical Obligations

Lawyers need to know about “data governance” and how it affects their practice of law.

Data governance has emerged as one of the most critical responsibilities facing legal professionals today. The digital transformation of legal practice brings tremendous efficiency gains but also creates significant risks to client confidentiality and attorney ethical obligations. Every email sent, document stored, and case file managed represents a potential vulnerability that requires careful oversight.

What Data Governance Means for Lawyers

Data governance encompasses the policies, procedures, and practices that ensure information is managed consistently and reliably throughout its lifecycle. For legal professionals, this means establishing clear frameworks for how client information is collected, stored, accessed, shared, retained, and ultimately deleted. The goal is straightforward: protect sensitive client data while maintaining the accessibility needed for effective representation.

The framework defines who can take which actions with specific data assets. It establishes ownership and stewardship responsibilities. It classifies information by sensitivity and criticality. Most importantly for attorneys, it ensures compliance with ethical rules while supporting operational efficiency.

The Ethical Imperative Under ABA Model Rules

The American Bar Association Model Rules of Professional Conduct create clear mandates for lawyers regarding technology and data management. These obligations serve as an excellent source of guidance regardless of whether your state has formally adopted specific technology competence requirements. BUT REMEMBER: ALWAYS FOLLOW YOUR STATE’S ETHICS RULES FIRST!

Model Rule 1.1 addresses competence and was amended in 2012 to explicitly include technological competence. Comment 8 now requires lawyers to "keep abreast of changes in the law and its practice, including the benefits and risks associated with relevant technology". This means attorneys must understand the data systems they use for client representation. Ignorance of technology is no longer acceptable.

Model Rule 1.6 governs confidentiality of information. The rule requires lawyers to "make reasonable efforts to prevent the inadvertent or unauthorized disclosure of, or unauthorized access to, information relating to the representation of a client". Comment 18 specifically addresses the need to safeguard information against unauthorized access by third parties. This creates a direct ethical obligation to implement appropriate data security measures.

Model Rule 5.3 addresses responsibilities regarding nonlawyer assistants. This rule extends to technology vendors and service providers who handle client data. Lawyers must ensure that third-party vendors comply with the same ethical obligations that bind attorneys. This requires due diligence when selecting cloud storage providers, practice management software, and artificial intelligence tools.

The High Cost of Data Governance Failures

Lawyers need to know the multiple facets of data governance.

Law firms face average data breach costs of $5.08 million. These financial losses pale in comparison to the reputational damage and loss of client trust that follows a security incident. A single breach can expose trade secrets, privileged communications, and personally identifiable information.

The consequences extend beyond monetary damages. Ethical violations can result in disciplinary action. Inadequate data security arguably constitutes a failure to fulfill the duty of confidentiality under Rule 1.6. Some jurisdictions have issued ethics opinions requiring attorneys to notify clients of breaches resulting from lawyer negligence.

Recent guidance from state bars emphasizes that lawyers must self-report breaches involving client data exposure. The ABA's Formal Opinion 483 addresses data breach obligations directly. The opinion confirms that lawyers have duties under Rules 1.1, 1.4, 1.6, 5.1, and 5.3 related to cybersecurity.

Building Your Data Governance Framework

Implementing effective data governance requires systematic planning and execution. The process begins with understanding your current data landscape.

Step One: Conduct a Data Inventory

Identify all data assets within your practice. Catalog their sources, types, formats, and locations. Map how data flows through your firm from creation to disposal. This inventory reveals where client information resides and who has access to it.

Step Two: Classify Your Data

Not all information requires the same level of protection. Establish a classification system based on sensitivity and confidentiality. Many firms use four levels: public, internal, confidential, and restricted.

Privileged attorney-client communications require the highest protection level. Publicly filed documents may still be confidential under Rule 1.6, contrary to common misconception. Client identity itself often qualifies as protected information.

Step Three: Define Access Controls

Implement role-based access controls that limit data exposure. Apply the principle of least privilege—users should access only information necessary for their specific responsibilities. Multi-factor authentication adds essential security for sensitive systems.
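
To show how Steps Two and Three can work together in practice, here’s a minimal sketch in which each classification level carries handling requirements and access follows least privilege. The four tiers mirror the scheme above; everything else is an illustrative assumption:

```python
# Illustrative data-classification and least-privilege check, following the
# four-tier scheme described above. Names and rules are hypothetical.

from enum import IntEnum

class Sensitivity(IntEnum):
    PUBLIC = 1
    INTERNAL = 2
    CONFIDENTIAL = 3   # e.g., most client matter files under Rule 1.6
    RESTRICTED = 4     # e.g., privileged attorney-client communications

# Handling requirements tighten as sensitivity rises; these drive how
# storage and access systems are configured.
HANDLING = {
    Sensitivity.CONFIDENTIAL: {"encrypt_at_rest": True, "mfa_required": True},
    Sensitivity.RESTRICTED: {"encrypt_at_rest": True, "mfa_required": True,
                             "access_logging": True},
}

def may_access(user_clearance: Sensitivity, doc_level: Sensitivity) -> bool:
    """Least privilege: a user sees a document only if cleared for its level."""
    return user_clearance >= doc_level

# Example: a paralegal cleared to CONFIDENTIAL cannot open RESTRICTED files.
assert may_access(Sensitivity.CONFIDENTIAL, Sensitivity.INTERNAL)
assert not may_access(Sensitivity.CONFIDENTIAL, Sensitivity.RESTRICTED)
print(HANDLING[Sensitivity.RESTRICTED])   # requirements for the top tier
```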

Step Four: Establish Policies and Procedures

Document clear policies governing data handling. Address encryption requirements for data at rest and in transit. Set retention schedules that balance legal obligations with security concerns. Create incident response plans for potential breaches.
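
To illustrate the encryption‑at‑rest piece of Step Four, here’s a short sketch using the widely available Python “cryptography” package (one option among many; real deployments also need key management, which this omits):

```python
# Minimal encryption-at-rest sketch using the "cryptography" package
# (pip install cryptography). Key management is deliberately omitted.

from cryptography.fernet import Fernet

# In practice the key lives in a key-management system, never in source code.
key = Fernet.generate_key()
fernet = Fernet(key)

plaintext = b"Privileged attorney-client memo"
ciphertext = fernet.encrypt(plaintext)        # store only this at rest

# Later, an authorized process holding the key can recover the document.
assert fernet.decrypt(ciphertext) == plaintext
```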

Step Five: Train Your Team

The human element represents the greatest security vulnerability. Sixty-eight percent of data breaches involve human error. Regular training ensures staff understand their responsibilities and can recognize threats. Training should cover phishing awareness, password security, and proper data handling procedures.

Step Six: Monitor and Audit

Continuous oversight maintains governance effectiveness. Regular audits identify vulnerabilities before they become breaches. Review access logs for unusual activity. Update policies as technology and regulations evolve.

Special Considerations for Artificial Intelligence

The rise of generative AI tools creates new data governance challenges. ABA Formal Opinion 512 specifically addresses AI use in legal practice. Lawyers must understand whether AI systems are "self-learning" and use client data for training.

Many consumer AI platforms retain and learn from user inputs. Uploading confidential client information to ChatGPT or similar tools may constitute an ethical violation. Even AI tools marketed to law firms require careful vetting.

Before using any AI system with client data, obtain informed consent. Boilerplate language in engagement letters is insufficient. Clients need clear explanations of how their information will be used and what risks exist.

Vendor Management and Third-Party Risk

Lawyers cannot delegate their ethical obligations to technology vendors. Rule 5.3 requires reasonable efforts to ensure nonlawyer assistants comply with professional obligations. This extends to cloud storage providers, case management platforms, and cybersecurity consultants.

Before engaging any vendor handling client data, conduct thorough due diligence. Verify the vendor maintains appropriate security certifications like SOC 2, ISO 27001, or HIPAA compliance. Review vendor contracts to ensure adequate data protection provisions. Understand where data will be stored and who will have access.

The Path Forward

Lawyers need to advocate for data governance for their clients!

Data governance is not optional for modern legal practice. It represents a fundamental ethical obligation under multiple Model Rules. Client trust depends on proper data stewardship.

Begin with a realistic assessment of your current practices. Identify gaps between your current state and ethical requirements. Develop policies that address your specific risks and practice areas. Implement controls systematically rather than attempting wholesale transformation overnight.

Remember that data governance is an ongoing process requiring continuous attention. Technology evolves. Threats change. Regulations expand. Your governance framework must adapt accordingly.

The investment in proper data governance protects your clients, your practice, and your professional reputation. More importantly, it fulfills your fundamental ethical duty to safeguard client confidences in an increasingly digital world.

MTC: 🔒 Your AI Conversations Aren't as Private as You Think: What the OpenAI Court Ruling Means for Legal Professionals

A watershed moment in digital privacy has arrived, and it carries profound implications for lawyers and their clients.

The recent court ruling in In re: OpenAI, Inc., Copyright Infringement Litigation has exposed a critical vulnerability in the relationship between artificial intelligence tools and user privacy rights. On May 13, 2025, U.S. Magistrate Judge Ona T. Wang issued an order requiring OpenAI to "preserve and segregate all output log data that would otherwise be deleted on a going forward basis". This unprecedented directive affected more than 400 million ChatGPT users worldwide and fundamentally challenged assumptions about data privacy in the AI era.[1][2][3][4]

While the court modified its order on October 9, 2025, terminating the blanket preservation requirement as of September 26, 2025, the damage to user trust and the precedent for future litigation remain significant. More importantly, the ruling illuminates a stark reality for legal professionals: the "delete" button offers an illusion of control rather than genuine data protection.

The Court Order That Changed Everything ⚖️

The preservation order emerged from a copyright infringement lawsuit filed by The New York Times against OpenAI in December 2023. The Times alleged that OpenAI unlawfully used millions of its articles to train ChatGPT without permission or compensation. During discovery, concerns arose that OpenAI had been deleting user conversations that could potentially demonstrate copyright violations.

Judge Wang's response was sweeping. The court ordered OpenAI to retain all ChatGPT output logs, including conversations users believed they had permanently deleted, temporary chats designed to auto-delete after sessions, and API-generated outputs regardless of user privacy settings. The order applied retroactively, meaning conversations deleted months or even years earlier remained archived in OpenAI's systems.

OpenAI immediately appealed, arguing the order was overly broad and compromised user privacy. The company contended it faced conflicting obligations between the court's preservation mandate and "numerous privacy laws and regulations throughout the country and the world". Despite these objections, Judge Wang denied OpenAI's motion, prioritizing the preservation of potential evidence over privacy concerns.

The October 9, 2025 stipulation and order brought partial relief. OpenAI's ongoing obligation to preserve all new output log data terminated as of September 26, 2025. However, all data preserved before that cutoff remains accessible to plaintiffs (except for users in the European Economic Area, Switzerland, and the United Kingdom). Additionally, OpenAI must continue preserving output logs from specific domains identified by the New York Times and may be required to add additional domains as the litigation progresses.

Privacy Rights in the Age of AI: An Eroding Foundation 🛡️

This case demonstrates that privacy policies are not self-enforcing legal protections. Users who relied on OpenAI's representations about data deletion discovered those promises could be overridden by court order without their knowledge or consent. The "temporary chat" feature, marketed as providing ephemeral conversations, proved anything but temporary when litigation intervened.

The implications extend far beyond this single case. The ruling establishes that AI-generated content constitutes discoverable evidence subject to preservation orders. Courts now view user conversations with AI not as private exchanges but as potential legal records that can be compelled into evidence.

For legal professionals, this reality is particularly troubling. Lawyers regularly handle sensitive client information that must remain confidential under both ethical obligations and the attorney-client privilege. The court order revealed that even explicitly deleted conversations may be retained indefinitely when litigation demands it.

The Attorney-Client Privilege Crisis 👥

Attorney-client privilege protects confidential communications between lawyers and clients made for the purpose of obtaining or providing legal advice. This protection is fundamental to the legal system. However, the privilege can be waived through voluntary disclosure to third parties outside the attorney-client relationship.

When lawyers input confidential client information into public AI platforms like ChatGPT, they potentially create a third-party disclosure that destroys privilege. Many generative AI systems learn from user inputs, incorporating that information into their training data. This means privileged communications could theoretically appear in responses to other users' queries.

The OpenAI preservation order compounds these concerns. It demonstrates that AI providers cannot guarantee data will be deleted upon request, even when their policies promise such deletion. Lawyers who used ChatGPT's temporary chat feature or deleted sensitive conversations believing those actions provided privacy protection now discover their confidential client communications may be preserved indefinitely as litigation evidence.

The risk is not theoretical. In the now-famous Mata v. Avianca, Inc. case, a lawyer used a free version of ChatGPT to draft a legal brief containing fabricated citations. While the lawyer faced sanctions for submitting false information to the court, legal ethics experts noted the confidentiality implications of the increasingly specific prompts the attorney used, which may have revealed client confidential information.

ABA Model Rules and AI: What Lawyers Must Know 📋

The American Bar Association's Model Rules of Professional Conduct govern lawyer behavior, and while these rules predate generative AI, they apply with full force to its use. On July 29, 2024, the ABA Standing Committee on Ethics and Professional Responsibility issued Formal Opinion 512, providing the first comprehensive guidance on lawyers' use of generative AI.

Model Rule 1.1: Competence requires lawyers to provide competent representation, including maintaining "legal knowledge, skill, thoroughness and preparation reasonably necessary for representation". The rule's Comment 8 specifically states that lawyers must understand "the benefits and risks associated with relevant technology". Opinion 512 clarifies that lawyers need not become AI experts, but must have a "reasonable understanding of the capabilities and limitations of the specific GenAI technology" they use. This is not a one-time obligation. Given AI's rapid evolution, lawyers must continuously update their understanding.

Model Rule 1.6: Confidentiality creates perhaps the most significant ethical challenge for AI use. The rule prohibits lawyers from revealing "information relating to the representation of a client" and requires them to "make reasonable efforts to prevent the inadvertent or unauthorized disclosure of, or unauthorized access to, information relating to the representation". Self-learning AI tools that train on user inputs create substantial risk of improper disclosure. Information entered into public AI systems may be stored, processed by third-party vendors, and potentially accessed by company employees or incorporated into model training. Opinion 512 recommends lawyers obtain informed client consent before inputting any information related to representation into AI systems. Lawyers must also thoroughly review the terms of use, privacy policies, and contractual agreements of any AI tool they employ.

Model Rule 1.4: Communication obligates lawyers to keep clients reasonably informed about their representation. When using AI tools, lawyers should disclose this fact to clients, particularly when the AI processes client information or could impact the representation. Clients have a right to understand how their matters are being handled and what technologies may access their confidential information.[25][22][20][21]

Model Rule 3.3: Candor Toward the Tribunal requires lawyers to be truthful in their representations to courts. AI systems frequently produce "hallucinations"—plausible-sounding but entirely fabricated information, including fake case citations. Lawyers remain fully responsible for verifying all AI outputs before submitting them to courts or relying on them for legal advice. The Mata v. Avianca case serves as a cautionary tale of the consequences when lawyers fail to fulfill this obligation.

Model Rules 5.1 and 5.3: Supervisory Responsibilities make lawyers responsible for the conduct of other lawyers and nonlawyer assistants working under their supervision. When staff members use AI tools, supervising lawyers must ensure appropriate policies, training, and oversight exist to prevent ethical violations.

Model Rule 1.5: Fees requires lawyers to charge reasonable fees. Opinion 512 addresses whether lawyers can bill clients for time "saved" through AI efficiency gains. The guidance suggests that when using hourly billing, efficiencies gained through AI should benefit clients. However, lawyers may pass through reasonable direct costs of AI services (such as subscription fees) when properly disclosed and agreed upon in advance.

State-by-State Variations: A Patchwork of Protection 🗺️

While the ABA Model Rules provide a national framework, individual states adopt and interpret ethics rules differently. Legal professionals must understand their specific state’s requirements, which can vary significantly.

Lawyers must protect clients’ PII from AI privacy failures!

Florida has taken a proactive stance. In January 2025, The Florida Bar Board of Governors unanimously approved Advisory Opinion 24-1, which specifically addresses generative AI use. The opinion recommends lawyers obtain "affected client's informed consent prior to utilizing a third-party generative AI program if the utilization would involve the disclosure of any confidential information". Florida's guidance emphasizes that lawyers remain fully responsible for AI outputs and cannot treat AI as a substitute for legal judgment.

Texas issued Opinion 705 from its State Bar Professional Ethics Committee in February 2025. The opinion outlines four key obligations: lawyers must reasonably understand AI technology before using it, exercise extreme caution when inputting confidential information into AI tools that might store or expose client data, verify the accuracy of all AI outputs, and avoid charging clients for time saved by AI efficiency gains. Texas also emphasizes that lawyers should consider informing clients when AI will be used in their matters.

New York has developed one of the most comprehensive frameworks through its State Bar Association Task Force on Artificial Intelligence. The April 2024 report provides a thorough analysis across the full spectrum of ethical considerations, including competence, confidentiality, client communication, billing practices, and access to justice implications. New York's guidance stands out for addressing both immediate practical considerations and longer-term questions about AI's transformation of the legal profession.

Alaska issued Ethics Opinion 2025-1 surveying AI issues with particular focus on competence, confidentiality, and billing. The opinion notes that when using non-closed AI systems (such as general consumer products), lawyers should anonymize prompts to avoid revealing client confidential information. Alaska's guidance explicitly cites to its cloud-computing predecessor opinion, treating AI data storage similarly to law firm files on third-party remote servers.

California, Massachusetts, New Jersey, and Oregon have issued guidance through their state attorneys general on how existing state privacy laws apply to AI. California's advisories emphasize that AI use must comply with the California Consumer Privacy Act (CCPA), requiring transparency, respecting individual data rights, and limiting data processing to what is "reasonably necessary and proportionate". Massachusetts focuses on consumer protection, anti-discrimination, and data security requirements. Oregon highlights that developers using personal data to train AI must clearly disclose this use and obtain explicit consent when dealing with sensitive data.[31]

These state-specific approaches create a complex compliance landscape. A lawyer practicing in multiple jurisdictions must understand and comply with each state's requirements. Moreover, state privacy laws like the CCPA and similar statutes in other states impose additional obligations beyond ethics rules.

Enterprise vs. Consumer AI: Understanding the Distinction 💼

Not all AI tools pose equal privacy risks. The OpenAI preservation order highlighted critical differences between consumer-facing products and enterprise solutions.

Consumer Plans (Free, Plus, Pro, and Team) were fully subject to the preservation order. These accounts store user conversations on OpenAI's servers with limited privacy protections. While users can delete conversations, the court order demonstrated that those deletions are not permanent. OpenAI retains the technical capability to preserve and access this data when required by legal process.

Enterprise Accounts offer substantially stronger privacy protections. ChatGPT Enterprise and Edu plans were excluded from the preservation order's broadest requirements. These accounts typically include contractual protections such as Data Processing Agreements (DPAs), commitments against using customer data for model training, and stronger data segregation. However, even enterprise accounts must preserve data when covered by specific legal orders.

Zero Data Retention Agreements provide the highest level of protection. Users who have negotiated such agreements with OpenAI are excluded from data preservation requirements. These arrangements ensure that user data is not retained beyond the immediate processing necessary to generate responses.

For legal professionals, the lesson is clear: consumer-grade AI tools are inappropriate for handling confidential client information. Lawyers who use AI must ensure they employ enterprise-level solutions with proper contractual protections, or better yet, closed systems where client data never leaves the firm's control.

Practical Steps for Legal Professionals: Protecting Privilege and Privacy 🛠️

Given these risks, what should lawyers do? Abandoning AI entirely is neither realistic nor necessary. Instead, legal professionals must adopt a risk-management approach.

Conduct thorough due diligence before adopting any AI tool. Review terms of service, privacy policies, and data processing agreements in detail. Understand exactly what data the AI collects, how long it's retained, whether it's used for model training, who can access it, and what security measures protect it. If these answers aren't clear from public documentation, contact the vendor directly for written clarification.

Implement written AI policies for your firm or legal department. These policies should specify which AI tools are approved for use, what types of information can (and cannot) be input into AI systems, required safeguards such as data anonymization, client consent requirements, verification procedures for AI outputs, and training requirements for all staff. Document these policies and ensure all lawyers and staff understand and follow them.

Default to data minimization. Before inputting any information into an AI system, ask whether it's necessary. Can you accomplish the task without including client-identifying information? Many AI applications work effectively with anonymized or hypothetical scenarios that don't reveal actual client matters. When in doubt, err on the side of caution.

Obtain informed client consent when using AI for client matters, particularly when inputting any information related to the representation. This consent should be specific about what AI tools will be used, what information may be shared with those tools, what safeguards are in place, and what risks exist despite those safeguards. General consent buried in engagement agreements is likely insufficient.

Use secure, purpose-built legal AI tools rather than consumer applications. Legal-specific AI products are designed with confidentiality requirements in mind and typically offer stronger privacy protections. Even better, consider closed-system AI that operates entirely within your firm's infrastructure without sending data to external servers.

Never assume deletion means erasure. The OpenAI case proves that deleted data may not be truly gone. Treat any information entered into an AI system as potentially permanent, regardless of what the system's privacy settings claim.

Maintain privileged communication protocols. Remember that AI is not your attorney. Communications with AI systems are not protected by attorney-client privilege. Never use AI as a substitute for consulting with qualified colleagues or outside counsel on genuinely privileged matters.

Stay informed about evolving guidance. AI technology and the regulatory landscape are both changing rapidly. Regularly review updates from your state bar association, the ABA, and other professional organizations. Consider attending continuing legal education programs on AI ethics and technology competence.

Final Thoughts: The Future of Privacy Rights in an AI World 🔮

The OpenAI preservation order represents a pivotal moment in the collision between AI innovation and privacy rights. It exposes uncomfortable truths about the nature of digital privacy in 2025: privacy policies are subject to override by legal process, deletion features provide psychological comfort rather than technical and legal certainty, and third-party service providers cannot fully protect user data from discovery obligations.

For legal professionals, these realities demand a fundamental reassessment of how AI tools fit into practice. The convenience and efficiency AI provides must be balanced against the sacred duty to protect client confidences and maintain the attorney-client privilege. This is not an abstract concern or distant possibility. It is happening now, in real courtrooms, with real consequences for lawyers and clients.

State bars and regulators are responding, but the guidance remains fragmented and evolving. Federal privacy legislation addressing AI has yet to materialize, leaving a patchwork of state laws with varying requirements. In this environment, legal professionals cannot wait for perfect clarity before taking action.

The responsibility falls on each lawyer to understand the tools they use, the risks those tools create, and the steps necessary to fulfill ethical obligations in this new technological landscape. Ignorance is not a defense. "I didn't know the AI was storing that information" will not excuse a confidentiality breach or privilege waiver.

As AI becomes increasingly embedded in legal practice, the profession must evolve its approach to privacy and confidentiality. The traditional frameworks remain sound—the attorney-client privilege, the duty of confidentiality, the requirement of competence—but their application requires new vigilance. Lawyers must become technology stewards as well as legal advisors, understanding not just what the law says, but how the tools they use might undermine their ability to protect it.

The OpenAI case will not be the last time courts grapple with AI data privacy. As generative AI proliferates and litigation continues, more preservation orders, discovery disputes, and privilege challenges are inevitable. Legal professionals who fail to address these issues proactively may find themselves explaining to clients, judges, or disciplinary authorities why they treated confidential information so carelessly.

Privacy in the AI age demands more than passive reliance on vendor promises. It requires active, informed engagement with the technology we use and honest assessment of the risks we create. For lawyers, whose professional identity rests on the foundation of client trust and confidentiality, nothing less will suffice. The court ruling has made one thing abundantly clear: when it comes to AI and privacy, what you don't know can definitely hurt you—and your clients. ⚠️

MTC: The End of Dial-Up Internet: A Digital Divide Crisis for Legal Practice 📡⚖️

Dial-up shutdown deepens rural legal digital divide.

The legal profession faces an unprecedented access to justice challenge: AOL officially terminated its dial-up internet service on September 30, 2025, after 34 years of operation. The closure affects an estimated 163,401 American households that, as of 2023, depended solely on dial-up connections, creating new barriers to legal services in an increasingly digital world. While other dial-up providers like NetZero, Juno, and DSLExtreme continue operating, they may not cover all geographic areas previously served by AOL, and their long-term viability is limited.

While many view dial-up as obsolete, its elimination exposes critical technology gaps that disproportionately impact vulnerable populations requiring legal assistance. Rural residents, low-income individuals, and elderly clients who relied on this affordable connectivity option now face digital exclusion from essential legal services and court systems. The remaining dial-up options provide minimal relief as these smaller providers lack AOL's extensive infrastructure coverage.

Split Courtroom!

Legal professionals must recognize that technology barriers create access to justice issues. When clients cannot afford high-speed internet or live in areas without broadband infrastructure, they lose the ability to participate in virtual court proceedings, access online legal resources, or communicate effectively with their attorneys. This digital divide effectively creates a two-tiered justice system where technological capacity determines legal access.

The legal community faces an implicit ethical duty to address these technology barriers. While no specific ABA Model Rule mandates accommodating clients' internet limitations, the professional responsibility to ensure access to justice flows from fundamental ethical obligations.

This implicit duty derives from several ABA Model Rules that create relevant obligations. Rule 1.1 (Competence) requires attorneys to understand "the benefits and risks associated with relevant technology," including how technology barriers affect client representation. Rule 1.4 (Communication) mandates effective client communication, which encompasses understanding technology limitations that prevent meaningful attorney-client interaction. Rule 1.6 (Confidentiality) requires reasonable efforts to protect client information, necessitating awareness of technology security implications. Additionally, 41 jurisdictions have adopted technology competence requirements that obligate lawyers to stay current with technological developments affecting legal practice.

Lawyers are leaders in the calls to action to help narrow the access-to-justice divide!

The legal community must advocate for affordable internet solutions and develop technology-inclusive practices to fulfill these professional responsibilities and ensure equal access to justice for all clients.

MTC