MTC: Deepfakes, Deception, and Professional Duty - What the North Bethesda AI Incident Teaches Lawyers About Ethics in the Digital Age 🧠⚖️

Lawyers need to be aware of the potential professional and ethical consequences if they allow deepfakes to enter the courtroom.

In October 2025, a seemingly lighthearted prank spiraled into a serious legal matter that carries profound implications for every practicing attorney. A 27-year-old North Bethesda woman sent her husband an AI-generated photograph depicting a man lounging on their living room couch. Alarmed by the apparent intrusion, he called 911. The police response was swift and overwhelming: eight marked cruisers raced through daytime traffic with lights and sirens activated. When officers arrived, they found no burglar: just the woman alone at home, a cellphone mounted on a tripod and aimed at the front door, and an admission that it had all been a prank.

The story might have ended as a cautionary tale about viral social media trends gone awry. But for the legal profession, it offers urgent and multifaceted lessons about technological competence, professional responsibility, and the ethical obligations that now define modern legal practice.

The woman was charged with making a false statement concerning an emergency or crime and providing a false statement to a state official. Though the charges are criminal in nature, they illuminate a landscape that the legal profession must navigate with far greater care than many currently do. The intersection of generative AI, digital deception, and legal ethics represents uncharted territory—one where professional liability and disciplinary action await those who fail to understand the technology reshaping evidence, testimony, and truth-seeking in the courtroom.

The Technology Competence Imperative

In 2012, the American Bar Association amended Comment 8 to Model Rule 1.1 (Competence) to make explicit that lawyers must keep abreast of "the benefits and risks associated with relevant technology." This was not a suggestion; it was a mandate. Today, 31 states have adopted or adapted this language into their own professional conduct rules. The ABA's accompanying committee report emphasized that the amendment serves as "a reminder to lawyers that they should remain aware of technology." Yet the word "reminder" should not be mistaken for optional guidance. As the digital landscape grows more sophisticated—and more legally consequential—ignorance of technology becomes an increasingly indefensible excuse for professional incompetence.

This case exemplifies why: An attorney representing clients in disputes involving digital media—whether custody cases, employment disputes, criminal defense, or civil litigation—cannot afford to lack foundational knowledge of how AI-generated images are created, detected, and authenticated. A lawyer who fails to distinguish authentic video evidence from a deepfake, or who presents such evidence without proper verification, may be engaging in conduct that violates not only Rule 1.1 but also Rules 3.3 and 8.4 of the ABA Model Rules of Professional Conduct.

Rule 1.1 creates a floor, not a ceiling. While most attorneys are not expected to become machine learning engineers, they must possess working knowledge of AI detection tools, image metadata analysis, forensic software, and the limitations of each. Many free and low-cost resources now exist for such training. Bar associations, CLE providers, and technology vendors offer courses specifically designed for attorneys with moderate tech proficiency. The obligation is not to achieve expertise but to make a deliberate, documented effort to stay reasonably informed.

Candor, Evidence, and the Truth-Seeking Function

The Maryland incident also implicates ABA Model Rule 3.3 (Candor Toward the Tribunal). Rule 3.3(a)(3) prohibits lawyers from offering evidence that they know to be false. But what does a lawyer know when AI makes authenticity ambiguous?

Consider a hypothetical: A client provides a lawyer with a photograph purporting to show the opposing party engaged in misconduct. The lawyer accepts it at face value and presents it to the court. Later, it is discovered that the image was AI-generated. The lawyer may argue that they "reasonably believed" the photograph was authentic and thus did not knowingly violate Rule 3.3. But this defense grows weaker as technology becomes more accessible and detection methods more readily available. A lawyer's failure to employ basic verification protocols—such as checking metadata, using AI detection software, or consulting a forensic expert—may render their "belief" in authenticity unreasonable, transforming what appears to be good-faith conduct into a breach of the duty of candor.
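
To make that verification step concrete, here is a minimal first-pass metadata check, written as a Python sketch using the Pillow imaging library. The heuristics are illustrative assumptions, not a forensic standard: missing EXIF data does not prove an image is AI-generated, and intact metadata does not authenticate one, so anything flagged here still warrants review by a forensic expert.

```python
# A minimal sketch of a first-pass metadata check on a photo offered as
# evidence, using the Pillow library (pip install Pillow). The red-flag
# heuristics below are illustrative assumptions, not forensic standards.
from PIL import Image
from PIL.ExifTags import TAGS

def quick_metadata_check(path: str) -> list[str]:
    """Return a list of red flags that warrant deeper forensic review."""
    flags = []
    img = Image.open(path)
    exif = img.getexif()
    # Map numeric EXIF tag IDs to human-readable names.
    tags = {TAGS.get(tag_id, tag_id): value for tag_id, value in exif.items()}

    if not tags:
        flags.append("No EXIF metadata at all (common in AI-generated or scrubbed images)")
    if "Make" not in tags and "Model" not in tags:
        flags.append("No camera make/model recorded")
    software = str(tags.get("Software", ""))
    if software:
        flags.append(f"Software tag present: {software!r} (check for editing tools)")
    if "DateTime" not in tags:
        flags.append("No capture timestamp")
    return flags

if __name__ == "__main__":
    # "exhibit_photo.jpg" is a placeholder filename for this sketch.
    for flag in quick_metadata_check("exhibit_photo.jpg"):
        print("FLAG:", flag)
```

A clean result from a script like this is only the beginning of diligence, not the end of it; it simply documents that the lawyer made a deliberate verification effort.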

The deeper concern is what scholars call the "Liar's Dividend": the phenomenon by which the mere existence of convincing deepfakes causes observers to distrust even genuine evidence. Lawyers can inadvertently exploit this dynamic by introducing AI-generated content without disclosure, or by sowing doubt in jurors' minds about the authenticity of real evidence. When a lawyer does so knowingly—or worse, with willful indifference—they corrupt the judicial process itself.

Rule 3.3 does not merely prevent lawyers from lying; it affirms their role as officers of the court whose duty to truth transcends client advocacy. This duty becomes more, not less, demanding in an age of manipulated media.

Dishonesty, Fraud, and the Outer Boundaries of Professional Conduct

North Bethesda deepfake prank highlights ethical gaps for attorneys.

ABA Model Rule 8.4(c) prohibits conduct involving dishonesty, fraud, deceit, or misrepresentation. On its face, Rule 8.4 seems straightforward. But its application to AI-generated evidence raises subtle questions. If a lawyer negligently fails to detect a deepfake and introduces it as genuine, are they guilty of "deceit"? Does their ignorance of the technology constitute a defense, or does it constitute a separate violation of Rule 1.1?

The answer likely depends on context. A lawyer who presents AI-generated evidence without having undertaken any effort to verify it—in a jurisdiction where technological competence is mandated, and where basic detection tools are publicly available—may struggle to argue that they acted with mere negligence rather than reckless indifference to truth. The line between incompetence and dishonesty can be perilously thin.

Consider, too, the scenario in which a lawyer becomes aware that a client has manufactured evidence using AI. Rule 8.4(c) does not explicitly prevent a lawyer from advising a client about the legal risks of doing so, nor does it require immediate disclosure to opposing counsel or the court in all circumstances. However, if the lawyer then remains silent while the falsified evidence is introduced into litigation, they may be viewed as having effectively participated in fraud. The duty to maintain client confidentiality (Rule 1.6) can conflict with the duty of candor, but Rule 3.3 clarifies that candor prevails: "The duties stated in paragraph (a) … continue to the conclusion of the proceeding, and apply even if compliance requires disclosure of information otherwise protected by Rule 1.6."

Practical Safeguards and Professional Resilience

So what can lawyers do—immediately and pragmatically—to protect themselves and their clients?

First, invest in education. Most state bar associations now offer CLE courses on AI, deepfakes, and digital evidence. Many require only two to three hours. Florida has mandated three hours of technology CLE every three years; other states will likely follow. Attending such courses is not an extravagance; it is the floor of professional duty.

Second, establish verification protocols. When digital evidence is introduced in a case—particularly photographs, videos, or audio recordings—require documentation of provenance. Demand metadata. Consider retained expert assistance to authenticate digital files. Many law firms now partner with forensic technology consultants for exactly this purpose. The cost is modest compared to the risk of professional discipline or malpractice liability. (A short evidence-intake sketch follows this list.)

Third, disclose limitations transparently. If you lack expertise in evaluating a particular form of digital evidence, say so. Rule 1.1 permits lawyers to partner with others possessing requisite skills. Transparency about technological limitations is not weakness; it is professionalism.

Fourth, update client engagement letters and retention agreements. Explicitly discuss how your firm will handle digital evidence, what verification steps will be taken, and what the client can reasonably expect. Document these conversations. In disputes with clients later, such records can be invaluable.

Fifth, stay alert to emerging guidance. Bar associations continue to issue formal opinions on technology and ethics. Journals, conference presentations, and industry publications track the intersection of AI and law. Subscribing to alerts from your state bar's ethics committee or joining legal technology practice groups ensures you remain informed as standards evolve. You may find The Tech-Savvy Lawyer.Page a great source for alerts and guidance! 🤗
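
As promised above, here is a minimal sketch of the verification-protocol idea from the second step, using only Python's standard library: hash every piece of digital evidence at intake so that any later alteration is detectable. The log fields shown are illustrative assumptions, not a court-mandated format.

```python
# A minimal sketch of a provenance log entry for incoming digital evidence.
# A SHA-256 hash fixes the file's contents at intake, so any subsequent
# alteration changes the hash and is detectable. Standard library only.
import hashlib
import json
from datetime import datetime, timezone
from pathlib import Path

def log_evidence_intake(file_path: str, source: str,
                        log_path: str = "evidence_log.jsonl") -> dict:
    """Hash a file and append an intake record to an append-only log."""
    digest = hashlib.sha256(Path(file_path).read_bytes()).hexdigest()
    entry = {
        "file": file_path,
        "sha256": digest,
        "received_from": source,
        "received_at": datetime.now(timezone.utc).isoformat(),
    }
    with open(log_path, "a") as log:
        log.write(json.dumps(entry) + "\n")
    return entry

# Example: record a photo received from a client before anyone opens it.
# The filename and source are placeholders.
log_evidence_intake("exhibit_photo.jpg", source="Client J. Doe, via email")
```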

Final Thoughts: The Deeper Question

Lawyers have the professional and ethical responsibility of knowing how deepfakes work!

The Maryland case is ultimately not about one woman's ill-advised prank. It is about the profession's obligation to remain trustworthy stewards of justice in an age when truth itself can be fabricated with a few keystrokes. The legal system depends on evidence, testimony, and the adversarial process to uncover truth. Lawyers are its guardians.

Technology competence is not an optional specialization or a nice-to-have skill. Under the ABA Model Rules and the rules adopted by 31 states, it is a foundational professional duty. Failure to acquire it exposes practitioners to disciplinary action, malpractice claims, and—most importantly—the real possibility of leading their clients, courts, and the public toward injustice.

The invitation to lawyers is clear: engage with the technology that is reshaping litigation, evidence, and professional practice. Understand its capabilities and risks. Invest in verification, transparency, and ongoing education. In doing so, you honor not just your professional obligations but the deeper mission of the law itself: the pursuit of truth.

Word of the Week: Technology Stack - Your Law Firm's Digital Foundation 📖

A technology stack (commonly called a tech stack) represents the complete collection of software tools, applications, and technologies that work together to support your law firm's operations. This digital infrastructure powers everything from client communication to case management.

Your tech stack functions like building blocks. Each component serves a specific purpose. The foundation includes your operating system and hardware. The middle layer contains your practice management software and document systems. The top layer delivers the interfaces you interact with daily.

Modern law firms require robust tech stacks to remain competitive. These systems streamline workflows and improve efficiency. They also enhance client service delivery.

A well-designed legal tech stack typically includes practice management software as its core. This central system tracks deadlines, manages contacts, and coordinates team workflows. Document management and automation tools handle file storage, retrieval, and template creation. Client intake systems capture potential client information automatically. Communication tools such as Voice Over Internet Protocol (VOIP) systems ensure your firm never misses important calls.
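
One practical way to keep track of these layers is a simple inventory recording what each component does and whether it touches client data. The sketch below is a minimal Python illustration of that structure; every tool name is a placeholder, not a recommendation.

```python
# A minimal sketch of a tech-stack inventory for a confidentiality audit.
# All tool names are placeholders; the point is the structure: each
# component maps to its role, data sensitivity, and integration points.
LAW_FIRM_TECH_STACK = {
    "practice_management": {
        "tool": "ExamplePMS",          # placeholder, not a recommendation
        "handles_client_data": True,
        "integrates_with": ["documents", "billing", "intake"],
    },
    "documents": {
        "tool": "ExampleDMS",
        "handles_client_data": True,
        "integrates_with": ["practice_management"],
    },
    "intake": {
        "tool": "ExampleIntakeForm",
        "handles_client_data": True,
        "integrates_with": ["practice_management"],
    },
    "voip": {
        "tool": "ExampleVoIP",
        "handles_client_data": True,   # call recordings, voicemail
        "integrates_with": [],
    },
}

# Quick audit: every component that touches client data should be reviewed
# for encryption, access controls, and vendor data-retention terms.
for name, component in LAW_FIRM_TECH_STACK.items():
    if component["handles_client_data"]:
        print(f"Review vendor terms for: {name} ({component['tool']})")
```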

Additional components strengthen your stack's capabilities. Financial management tools automate billing and expense tracking. Legal research platforms provide access to current case law and regulations. Security systems protect confidential client data through encryption and multi-factor authentication. Cloud-based solutions enable remote access and collaboration.

Building an effective tech stack requires careful planning. Start by identifying your firm's core needs. Prioritize tools that integrate smoothly with each other. Evaluate your budget for both licenses and training. Test new tools with a small team before firm-wide deployment. Choose vendors who offer reliable support and clear product roadmaps.

The benefits of a unified tech stack are substantial. Automated processes save hours each week. Smart templates reduce human errors and improve accuracy. Client portals provide real-time case updates that build trust. Enhanced security measures protect sensitive information while maintaining compliance. Scalable systems grow alongside your practice without requiring complete rebuilds.

A well-designed tech stack is important for any modern-day law practice.

Your tech stack directly impacts your firm's ability to serve clients effectively. Technology-savvy clients expect modern tools and service levels comparable to other industries. Firms that invest in strong tech stacks gain competitive advantages in case management, client interactions, and overall productivity.

Remote work capabilities have become essential components. Cloud-based case management systems enable real-time collaboration regardless of location. Video conferencing and virtual collaboration tools maintain productivity in hybrid environments. Secure access through robust platforms ensures business continuity.

The legal technology landscape continues evolving rapidly. Artificial intelligence now powers research and document review. Automation handles contract generation and compliance checks. Advanced financial management solutions streamline billing and payment processing. Integration between these systems creates seamless workflows that maximize efficiency.

Choosing the right tech stack positions your firm for long-term success. Focus on solutions that address real problems rather than simply adding tools. Seek platforms that work together rather than operating in isolation. Regularly review your stack as your firm grows and technology advances. This strategic approach ensures your digital infrastructure supports your practice goals effectively.

MTC: 🔒 Your AI Conversations Aren't as Private as You Think: What the OpenAI Court Ruling Means for Legal Professionals

A watershed moment in digital privacy has arrived, and it carries profound implications for lawyers and their clients.

The recent court ruling in In re: OpenAI, Inc., Copyright Infringement Litigation has exposed a critical vulnerability in the relationship between artificial intelligence tools and user privacy rights. On May 13, 2025, U.S. Magistrate Judge Ona T. Wang issued an order requiring OpenAI to "preserve and segregate all output log data that would otherwise be deleted on a going forward basis". This unprecedented directive affected more than 400 million ChatGPT users worldwide and fundamentally challenged assumptions about data privacy in the AI era.

While the court modified its order on October 9, 2025, terminating the blanket preservation requirement as of September 26, 2025, the damage to user trust and the precedent for future litigation remain significant. More importantly, the ruling illuminates a stark reality for legal professionals: the "delete" button offers an illusion of control rather than genuine data protection.

The Court Order That Changed Everything ⚖️

The preservation order emerged from a copyright infringement lawsuit filed by The New York Times against OpenAI in December 2023. The Times alleged that OpenAI unlawfully used millions of its articles to train ChatGPT without permission or compensation. During discovery, concerns arose that OpenAI had been deleting user conversations that could potentially demonstrate copyright violations.

Judge Wang's response was sweeping. The court ordered OpenAI to retain all ChatGPT output logs, including conversations users believed they had permanently deleted, temporary chats designed to auto-delete after sessions, and API-generated outputs regardless of user privacy settings. The order applied retroactively, meaning conversations deleted months or even years earlier remained archived in OpenAI's systems.

OpenAI immediately appealed, arguing the order was overly broad and compromised user privacy. The company contended it faced conflicting obligations between the court's preservation mandate and "numerous privacy laws and regulations throughout the country and the world". Despite these objections, Judge Wang denied OpenAI's motion, prioritizing the preservation of potential evidence over privacy concerns.

The October 9, 2025 stipulation and order brought partial relief. OpenAI's ongoing obligation to preserve all new output log data terminated as of September 26, 2025. However, all data preserved before that cutoff remains accessible to plaintiffs (except for users in the European Economic Area, Switzerland, and the United Kingdom). Additionally, OpenAI must continue preserving output logs from specific domains identified by the New York Times and may be required to add additional domains as the litigation progresses.

Privacy Rights in the Age of AI: An Eroding Foundation 🛡️

This case demonstrates that privacy policies are not self-enforcing legal protections. Users who relied on OpenAI's representations about data deletion discovered those promises could be overridden by court order without their knowledge or consent. The "temporary chat" feature, marketed as providing ephemeral conversations, proved anything but temporary when litigation intervened.

The implications extend far beyond this single case. The ruling establishes that AI-generated content constitutes discoverable evidence subject to preservation orders. Courts now view user conversations with AI not as private exchanges but as potential legal records that can be compelled into evidence.

For legal professionals, this reality is particularly troubling. Lawyers regularly handle sensitive client information that must remain confidential under both ethical obligations and the attorney-client privilege. The court order revealed that even explicitly deleted conversations may be retained indefinitely when litigation demands it.

The Attorney-Client Privilege Crisis 👥

Attorney-client privilege protects confidential communications between lawyers and clients made for the purpose of obtaining or providing legal advice. This protection is fundamental to the legal system. However, the privilege can be waived through voluntary disclosure to third parties outside the attorney-client relationship.

When lawyers input confidential client information into public AI platforms like ChatGPT, they potentially create a third-party disclosure that destroys privilege. Many generative AI systems learn from user inputs, incorporating that information into their training data. This means privileged communications could theoretically appear in responses to other users' queries.

The OpenAI preservation order compounds these concerns. It demonstrates that AI providers cannot guarantee data will be deleted upon request, even when their policies promise such deletion. Lawyers who used ChatGPT's temporary chat feature or deleted sensitive conversations believing those actions provided privacy protection now discover their confidential client communications may be preserved indefinitely as litigation evidence.

The risk is not theoretical. In the now-famous Mata v. Avianca, Inc. case, a lawyer used a free version of ChatGPT to draft a legal brief containing fabricated citations. While the lawyer faced sanctions for submitting false information to the court, legal ethics experts noted the confidentiality implications of the increasingly specific prompts the attorney used, which may have revealed client confidential information.

ABA Model Rules and AI: What Lawyers Must Know 📋

The American Bar Association's Model Rules of Professional Conduct govern lawyer behavior, and while these rules predate generative AI, they apply with full force to its use. On July 29, 2024, the ABA Standing Committee on Ethics and Professional Responsibility issued Formal Opinion 512, providing the first comprehensive guidance on lawyers' use of generative AI.

Model Rule 1.1: Competence requires lawyers to provide competent representation, including maintaining "legal knowledge, skill, thoroughness and preparation reasonably necessary for representation". The rule's commentary [8] specifically states lawyers must understand "the benefits and risks associated with relevant technology". Opinion 512 clarifies that lawyers need not become AI experts, but must have a "reasonable understanding of the capabilities and limitations of the specific GenAI technology" they use. This is not a one-time obligation. Given AI's rapid evolution, lawyers must continuously update their understanding.

Model Rule 1.6: Confidentiality creates perhaps the most significant ethical challenge for AI use. The rule prohibits lawyers from revealing "information relating to the representation of a client" and requires them to "make reasonable efforts to prevent the inadvertent or unauthorized disclosure of, or unauthorized access to, information relating to the representation". Self-learning AI tools that train on user inputs create substantial risk of improper disclosure. Information entered into public AI systems may be stored, processed by third-party vendors, and potentially accessed by company employees or incorporated into model training. Opinion 512 recommends lawyers obtain informed client consent before inputting any information related to representation into AI systems. Lawyers must also thoroughly review the terms of use, privacy policies, and contractual agreements of any AI tool they employ.

Model Rule 1.4: Communication obligates lawyers to keep clients reasonably informed about their representation. When using AI tools, lawyers should disclose this fact to clients, particularly when the AI processes client information or could impact the representation. Clients have a right to understand how their matters are being handled and what technologies may access their confidential information.

Model Rule 3.3: Candor Toward the Tribunal requires lawyers to be truthful in their representations to courts. AI systems frequently produce "hallucinations"—plausible-sounding but entirely fabricated information, including fake case citations. Lawyers remain fully responsible for verifying all AI outputs before submitting them to courts or relying on them for legal advice. The Mata v. Avianca case serves as a cautionary tale of the consequences when lawyers fail to fulfill this obligation.

Model Rules 5.1 and 5.3: Supervisory Responsibilities make lawyers responsible for the conduct of other lawyers and nonlawyer assistants working under their supervision. When staff members use AI tools, supervising lawyers must ensure appropriate policies, training, and oversight exist to prevent ethical violations.

Model Rule 1.5: Fees requires lawyers to charge reasonable fees. Opinion 512 addresses whether lawyers can bill clients for time "saved" through AI efficiency gains. The guidance suggests that when using hourly billing, efficiencies gained through AI should benefit clients. However, lawyers may pass through reasonable direct costs of AI services (such as subscription fees) when properly disclosed and agreed upon in advance.

State-by-State Variations: A Patchwork of Protection 🗺️

While the ABA Model Rules provide a national framework, individual states adopt and interpret ethics rules differently. Legal professionals must understand their specific state's requirements, which can vary significantly.

Lawyers must protect clients' PII from AI privacy failures!

Florida has taken a proactive stance. In January 2025, The Florida Bar Board of Governors unanimously approved Advisory Opinion 24-1, which specifically addresses generative AI use. The opinion recommends lawyers obtain "affected client's informed consent prior to utilizing a third-party generative AI program if the utilization would involve the disclosure of any confidential information". Florida's guidance emphasizes that lawyers remain fully responsible for AI outputs and cannot treat AI as a substitute for legal judgment.

Texas issued Opinion 705 from its State Bar Professional Ethics Committee in February 2025. The opinion outlines four key obligations: lawyers must reasonably understand AI technology before using it, exercise extreme caution when inputting confidential information into AI tools that might store or expose client data, verify the accuracy of all AI outputs, and avoid charging clients for time saved by AI efficiency gains. Texas also emphasizes that lawyers should consider informing clients when AI will be used in their matters.

New York has developed one of the most comprehensive frameworks through its State Bar Association Task Force on Artificial Intelligence. The April 2024 report provides a thorough analysis across the full spectrum of ethical considerations, including competence, confidentiality, client communication, billing practices, and access to justice implications. New York's guidance stands out for addressing both immediate practical considerations and longer-term questions about AI's transformation of the legal profession.

Alaska issued Ethics Opinion 2025-1 surveying AI issues with particular focus on competence, confidentiality, and billing. The opinion notes that when using non-closed AI systems (such as general consumer products), lawyers should anonymize prompts to avoid revealing client confidential information. Alaska's guidance explicitly cites to its cloud-computing predecessor opinion, treating AI data storage similarly to law firm files on third-party remote servers.

California, Massachusetts, New Jersey, and Oregon have issued guidance through their state attorneys general on how existing state privacy laws apply to AI. California's advisories emphasize that AI use must comply with the California Consumer Privacy Act (CCPA), requiring transparency, respecting individual data rights, and limiting data processing to what is "reasonably necessary and proportionate". Massachusetts focuses on consumer protection, anti-discrimination, and data security requirements. Oregon highlights that developers using personal data to train AI must clearly disclose this use and obtain explicit consent when dealing with sensitive data.

These state-specific approaches create a complex compliance landscape. A lawyer practicing in multiple jurisdictions must understand and comply with each state's requirements. Moreover, state privacy laws like the CCPA and similar statutes in other states impose additional obligations beyond ethics rules.

Enterprise vs. Consumer AI: Understanding the Distinction 💼

Not all AI tools pose equal privacy risks. The OpenAI preservation order highlighted critical differences between consumer-facing products and enterprise solutions.

Consumer Plans (Free, Plus, Pro, and Team) were fully subject to the preservation order. These accounts store user conversations on OpenAI's servers with limited privacy protections. While users can delete conversations, the court order demonstrated that those deletions are not permanent. OpenAI retains the technical capability to preserve and access this data when required by legal process.

Enterprise Accounts offer substantially stronger privacy protections. ChatGPT Enterprise and Edu plans were excluded from the preservation order's broadest requirements. These accounts typically include contractual protections such as Data Processing Agreements (DPAs), commitments against using customer data for model training, and stronger data segregation. However, even enterprise accounts must preserve data when covered by specific legal orders.

Zero Data Retention Agreements provide the highest level of protection. Users who have negotiated such agreements with OpenAI are excluded from data preservation requirements. These arrangements ensure that user data is not retained beyond the immediate processing necessary to generate responses.

For legal professionals, the lesson is clear: consumer-grade AI tools are inappropriate for handling confidential client information. Lawyers who use AI must ensure they employ enterprise-level solutions with proper contractual protections, or better yet, closed systems where client data never leaves the firm's control.

Practical Steps for Legal Professionals: Protecting Privilege and Privacy 🛠️

Given these risks, what should lawyers do? Abandoning AI entirely is neither realistic nor necessary. Instead, legal professionals must adopt a risk-management approach.

Conduct thorough due diligence before adopting any AI tool. Review terms of service, privacy policies, and data processing agreements in detail. Understand exactly what data the AI collects, how long it's retained, whether it's used for model training, who can access it, and what security measures protect it. If these answers aren't clear from public documentation, contact the vendor directly for written clarification.

Implement written AI policies for your firm or legal department. These policies should specify which AI tools are approved for use, what types of information can (and cannot) be input into AI systems, required safeguards such as data anonymization, client consent requirements, verification procedures for AI outputs, and training requirements for all staff. Document these policies and ensure all lawyers and staff understand and follow them.

Default to data minimization. Before inputting any information into an AI system, ask whether it's necessary. Can you accomplish the task without including client-identifying information? Many AI applications work effectively with anonymized or hypothetical scenarios that don't reveal actual client matters. When in doubt, err on the side of caution. (A short anonymization sketch follows this list.)

Obtain informed client consent when using AI for client matters, particularly when inputting any information related to the representation. This consent should be specific about what AI tools will be used, what information may be shared with those tools, what safeguards are in place, and what risks exist despite those safeguards. General consent buried in engagement agreements is likely insufficient.

Use secure, purpose-built legal AI tools rather than consumer applications. Legal-specific AI products are designed with confidentiality requirements in mind and typically offer stronger privacy protections. Even better, consider closed-system AI that operates entirely within your firm's infrastructure without sending data to external servers.

Never assume deletion means erasure. The OpenAI case proves that deleted data may not be truly gone. Treat any information entered into an AI system as potentially permanent, regardless of what the system's privacy settings claim.

Maintain privileged communication protocols. Remember that AI is not your attorney. Communications with AI systems are not protected by attorney-client privilege. Never use AI as a substitute for consulting with qualified colleagues or outside counsel on genuinely privileged matters.

Stay informed about evolving guidance. AI technology and the regulatory landscape are both changing rapidly. Regularly review updates from your state bar association, the ABA, and other professional organizations. Consider attending continuing legal education programs on AI ethics and technology competence.
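
As a concrete illustration of the data-minimization step above, here is a minimal Python sketch that scrubs obvious identifiers from a prompt before it leaves the firm. The regex patterns and names are illustrative assumptions; no pattern list catches every identifier, so human review remains essential.

```python
# A minimal sketch of a pre-submission scrubber that strips obvious client
# identifiers before a prompt is sent to any external AI service. Patterns
# and names are illustrative; this is not a complete PII filter.
import re

REDACTIONS = [
    (re.compile(r"\b\d{3}-\d{2}-\d{4}\b"), "[SSN]"),                  # US SSNs
    (re.compile(r"\b[\w.+-]+@[\w-]+\.[\w.]+\b"), "[EMAIL]"),
    (re.compile(r"\b\(?\d{3}\)?[ .-]?\d{3}[ .-]?\d{4}\b"), "[PHONE]"),
]

def scrub_prompt(text: str, client_names: list[str]) -> str:
    """Replace known client names and common PII patterns with placeholders."""
    for name in client_names:
        text = re.sub(re.escape(name), "[CLIENT]", text, flags=re.IGNORECASE)
    for pattern, placeholder in REDACTIONS:
        text = pattern.sub(placeholder, text)
    return text

prompt = "Draft a demand letter for Jane Doe (jane.doe@example.com, 301-555-0123)."
print(scrub_prompt(prompt, client_names=["Jane Doe"]))
# -> "Draft a demand letter for [CLIENT] ([EMAIL], [PHONE])."
```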

Final Thoughts: The Future of Privacy Rights in an AI World 🔮

The OpenAI preservation order represents a pivotal moment in the collision between AI innovation and privacy rights. It exposes uncomfortable truths about the nature of digital privacy in 2025: privacy policies are subject to override by legal process, deletion features provide psychological comfort rather than technical and legal certainty, and third-party service providers cannot fully protect user data from discovery obligations.

For legal professionals, these realities demand a fundamental reassessment of how AI tools fit into practice. The convenience and efficiency AI provides must be balanced against the sacred duty to protect client confidences and maintain the attorney-client privilege. This is not an abstract concern or distant possibility. It is happening now, in real courtrooms, with real consequences for lawyers and clients.

State bars and regulators are responding, but the guidance remains fragmented and evolving. Federal privacy legislation addressing AI has yet to materialize, leaving a patchwork of state laws with varying requirements. In this environment, legal professionals cannot wait for perfect clarity before taking action.

The responsibility falls on each lawyer to understand the tools they use, the risks those tools create, and the steps necessary to fulfill ethical obligations in this new technological landscape. Ignorance is not a defense. "I didn't know the AI was storing that information" will not excuse a confidentiality breach or privilege waiver.

As AI becomes increasingly embedded in legal practice, the profession must evolve its approach to privacy and confidentiality. The traditional frameworks remain sound—the attorney-client privilege, the duty of confidentiality, the requirement of competence—but their application requires new vigilance. Lawyers must become technology stewards as well as legal advisors, understanding not just what the law says, but how the tools they use might undermine their ability to protect it.

The OpenAI case will not be the last time courts grapple with AI data privacy. As generative AI proliferates and litigation continues, more preservation orders, discovery disputes, and privilege challenges are inevitable. Legal professionals who fail to address these issues proactively may find themselves explaining to clients, judges, or disciplinary authorities why they treated confidential information so carelessly.

Privacy in the AI age demands more than passive reliance on vendor promises. It requires active, informed engagement with the technology we use and honest assessment of the risks we create. For lawyers, whose professional identity rests on the foundation of client trust and confidentiality, nothing less will suffice. The court ruling has made one thing abundantly clear: when it comes to AI and privacy, what you don't know can definitely hurt you—and your clients. ⚠️

🚨 AWS Outage Resolved: Critical Ethics Guidance for Lawyers Using Cloud-Based Legal Services

Legal professionals don’t react but act when your online legal systems are down!

Amazon Web Services experienced a major outage on October 20, 2025, disrupting legal practice management platforms like Clio, MyCase, PracticePanther, LEAP, and Lawcus. The Domain Name System (DNS) resolution failure in AWS's US-EAST-1 region was fully mitigated by 6:35 AM EDT after approximately three hours. But as of this posting, that does not mean every backlogged issue that originated with the outage has been resolved. Note: DNS is the internet's phone book, translating human-readable web addresses into the numerical IP addresses that computers actually use. When DNS fails, it's like having all the street signs disappear at once. Your destination still exists, but there's no way to find it.
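
For the curious, here is a minimal Python sketch of what DNS resolution looks like in code, using only the standard library; the hostname is a placeholder. During an outage like this one, the lookup fails even though the server behind the name may still be running.

```python
# A minimal sketch of DNS resolution using Python's standard library.
# During a region-wide DNS failure, the lookup raises socket.gaierror even
# though the server behind the name is still up: the "street signs" are
# gone, not the destination. The hostname below is a placeholder.
import socket

def check_dns(hostname: str) -> None:
    try:
        ip = socket.gethostbyname(hostname)      # name -> numeric IP address
        print(f"{hostname} resolves to {ip}")
    except socket.gaierror as err:
        # Resolution failed: the service may be fine, but it cannot be found.
        print(f"DNS lookup failed for {hostname}: {err}")

check_dns("app.example-legal-saas.com")  # placeholder domain
```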

‼️ TIP! ‼️

Try clearing your browser’s cache - that may help resolve some of the issues.

Legal professionals, what are your protocols when your online legal services are down?!

Lawyers using cloud-dependent legal services must review their ethical obligations under ABA Model Rule 1.1 and its Comment [8] (technological competence), Rule 1.6 (confidentiality), and Rule 5.3 (supervision of third-party vendors). Key steps include: documenting the incident's impact on client matters (if any), assessing whether material client information was compromised, notifying affected current clients if a data breach occurred, reviewing business continuity plans, and conducting due diligence on cloud providers' disaster recovery protocols. Law firms should verify their vendors maintain redundant backup systems, SSAE 16-audited data centers, and clear data ownership policies. The outage highlights the critical need for lawyers to understand their cloud infrastructure dependencies and maintain contingency plans for service disruptions.

🔒 Word (Phrase) of the Week: “Zero Data Retention” Agreements: Why Every Lawyer Must Pay Attention Now!

Understanding Zero Data Retention in Legal Practice

🚨 Lawyers Must Know Zero Data Retention Now!

Zero Data Retention (ZDR) agreements represent a fundamental shift in how law firms protect client confidentiality when using third-party technology services. These agreements ensure that sensitive client information is processed but never stored by vendors after immediate use. For attorneys navigating an increasingly digital practice environment, understanding ZDR agreements has become essential to maintaining ethical compliance.

ZDR works through a simple but powerful principle: access, process, and discard. When lawyers use services with ZDR agreements, the vendor connects to data only when needed, performs the requested task, and immediately discards all information without creating persistent copies. This architectural approach dramatically reduces the risk of data breaches and unauthorized access.
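
Here is a minimal Python sketch of that access-process-discard pattern. To be clear, real ZDR is a contractual and architectural guarantee on the vendor's side, not something client code can enforce; this illustration only shows the principle of processing data in memory without persisting it.

```python
# A minimal sketch of the access-process-discard pattern that ZDR describes.
# This only illustrates the principle: the document is processed in memory,
# and nothing is logged, cached, or written to disk along the way.
def process_and_discard(document_text: str) -> int:
    """Perform the requested task (here, a word count) without persisting input."""
    result = len(document_text.split())
    # No copy of document_text survives this function; once it returns,
    # the only remaining artifact is the result itself.
    return result

confidential = "Settlement memo: client agrees to terms discussed on March 3."
print("Word count:", process_and_discard(confidential))
```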

The Legal Ethics Crisis Hidden in Your Vendor Contracts

Recent court orders have exposed a critical vulnerability in how lawyers use technology. A federal court ordered OpenAI to preserve all ChatGPT conversation logs indefinitely, including deleted content—even for paying subscribers. This ruling affects millions of users and demonstrates how quickly data retention policies can change through litigation.

The implications for legal practice are severe. Attorneys using consumer-grade AI tools, standard cloud storage, or free collaboration platforms may unknowingly expose client confidences to indefinite retention. This creates potential violations of fundamental ethical obligations, regardless of the lawyer's intent or the vendor's original promises.

ABA Model Rules Create Mandatory Obligations

Three interconnected ABA Model Rules establish clear ethical requirements for lawyers using technology vendors.

Rule 1.1, through its Comment [8], requires technological competence. Attorneys must understand "the benefits and risks associated with relevant technology". This means lawyers cannot simply trust vendor marketing claims about data security. They must conduct meaningful due diligence before entrusting client information to any third party.

Rule 1.6 mandates confidentiality protection. Lawyers must "make reasonable efforts to prevent the inadvertent or unauthorized disclosure of, or unauthorized access to, information relating to the representation of a client". This obligation extends to all digital communications and cloud-based storage. When vendors retain data beyond the immediate need, attorneys face heightened risks of unauthorized disclosure.

Rule 5.3 governs supervision of nonlawyer assistants. This rule applies equally to technology vendors who handle client information. Lawyers with managerial authority must ensure their firms implement measures that provide reasonable assurance that vendors comply with the attorney's professional obligations.

Practical Steps for Ethical Compliance

Attorneys must implement specific practices to satisfy their ethical obligations when selecting technology vendors.

Demand written confirmation of zero data retention policies from all vendors handling client information. Ask whether the vendor uses client data for training AI models. Determine how long any data remains accessible after processing. These questions must be answered clearly before using any service.

Lawyers Need Zero Data Retention Agreements!

Review vendor agreements carefully. Standard terms of service often fail to provide adequate confidentiality protections. Attorneys should negotiate explicit contractual provisions that prohibit data retention beyond immediate processing needs. These agreements must specify encryption standards, access controls, and breach notification procedures.

Obtain client consent when using third-party services that may access confidential information. While not always legally required, informed consent demonstrates respect for client autonomy and provides an additional layer of protection.

Conduct ongoing monitoring of vendor practices. Initial due diligence is insufficient. Technology changes rapidly, and vendors may alter their data handling practices. Regular reviews ensure continued compliance with ethical obligations.

Restrict employee use of unauthorized tools. Many data breaches stem from "shadow IT"—employees using personal accounts or unapproved services for work purposes. Clear policies and training can prevent inadvertent ethical violations.

The Distinction Between Consumer and Enterprise Services

Not all AI and cloud services create equal ethical risks. Consumer versions of popular tools often lack the security features required for legal practice. Enterprise subscriptions typically provide enhanced protections, including zero data retention options.

For example, OpenAI offers different service tiers with dramatically different data handling practices. ChatGPT Free, Plus, Pro, and Team subscriptions now face indefinite data retention due to court orders. However, ChatGPT Enterprise and API customers with ZDR agreements remain unaffected. This distinction matters enormously for attorney compliance.

Industry-Specific Legal AI Offers Additional Safeguards

Legal-specific AI platforms build confidentiality protections into their core architecture. These tools understand attorney-client privilege requirements and design their systems accordingly. They typically offer encryption, access controls, SOC 2 compliance, and explicit commitments not to use client data for training.

When evaluating legal technology vendors, attorneys should prioritize those offering private AI environments, end-to-end encryption, and contractual guarantees about data retention. These features align with the ethical obligations imposed by the Model Rules.

Zero Data Retention as Competitive Advantage

Beyond ethical compliance, ZDR agreements offer practical benefits. They reduce storage costs, simplify regulatory compliance, and minimize the attack surface for cybersecurity threats. In an era of increasing data breaches, the ability to tell clients that their information is never stored by third parties provides meaningful competitive differentiation.

Final Thoughts: Action Required Now

Lawyers must Protect Client Data with ZDR!

The landscape of legal technology changes constantly. Court orders can suddenly transform data retention policies. Vendors can modify their terms of service. New ethical opinions can shift compliance expectations.

Attorneys cannot afford passive approaches to vendor management. They must actively investigate, negotiate, and monitor the data handling practices of every technology provider accessing client information. Zero data retention agreements represent one powerful tool for maintaining ethical compliance in an increasingly complex technological environment.

The duty of confidentiality remains absolute, regardless of the tools lawyers choose. By demanding ZDR agreements and implementing comprehensive vendor management practices, attorneys can embrace technological innovation while protecting the fundamental trust that defines the attorney-client relationship.

🎙️ Ep. 122: Cybersecurity Essentials for Law Firms: Proven Strategies from Navy Veteran & Attorney Cordell Robinson

My next guest is Cordell Brion Robinson, CEO of Brownstone Consulting Firm and a decorated US Navy veteran who brings an extraordinary combination of expertise to cybersecurity. With a background in Computer Science, Electrical Engineering, and law, plus experience as a Senior Intelligence Analyst, Cordell has created cybersecurity programs that comply with National Institute of Standards and Technology, Federal Information Security Management Act, and Office of Management and Budget standards for both government and commercial organizations. His firm specializes in compliance services, performing security framework assessments globally for commercial and government entities. Currently, he's innovating in the cybersecurity space through automation for security assessments. Beyond his professional accomplishments, Cordell runs the Shaping Futures Foundation, a nonprofit dedicated to empowering youth through education, demonstrating his commitment to giving back to the community.

Join Cordell Robinson and me as we discuss the following three questions and more! 🎙️

1. What are the top three cybersecurity practices that lawyers should immediately adopt to secure both client data and sensitive case material in their practice?

2. From your perspective as both a legal and cybersecurity expert, what are the top three technology tools or platforms that can help lawyers streamline compliance and governance requirements in a rapidly evolving regulatory environment?

3. What are the top three steps lawyers can take to overcome resistance to technology adoption in law firms, ensuring these tools actually improve outcomes and efficiency rather than just adding complexity?

In our conversation, we cover the following: ⏱️

- 00:00:00 - Introduction and welcome to the podcast

- 00:00:30 - Cordell's current tech setup - Windows laptop, MacBook, and iPhone

- 00:01:00 - iPhone 17 Pro Max features including 48MP camera, 2TB storage, and advanced video capture

- 00:01:30 - iPhone 17 Air comparison and laptop webcam discussion

- 00:02:00 - VPN usage strategies - Government VPN for secure client communications

- 00:02:30 - Commercial client communications and secure file sharing practices

- 00:03:00 - Why email encryption matters and Mac Mail setup tutorial

- 00:04:00 - Bonus question: Key differences between commercial and government security work

- 00:05:00 - Security protocols comparison and navigating government red tape

- 00:06:00 - Question 1: Top three cybersecurity practices lawyers must implement immediately

- 00:06:30 - Understanding where client data comes from and having proper IT security professionals

- 00:07:00 - Implementing cybersecurity awareness training for all staff members

- 00:07:30 - Practical advice for solo and small practitioners without dedicated IT staff

- 00:08:00 - Proper email practices and essential security awareness training skills

- 00:08:30 - Handling data from average clients in sensitive cases like family law

- 00:09:00 - Social engineering considerations in contentious legal matters such as divorces

- 00:10:00 - Screening threats from seemingly reliable platforms - Google Play slop ads as recent example

- 00:10:30 - Tenable vulnerability scanning tool recommendation (approximately $1,500/year)

- 00:11:00 - Question 2: Technology tools for streamlining compliance and governance

- 00:11:30 - GRC tools for organizing compliance documentation across various price points

- 00:12:00 - SharePoint security lockdown and importance of proper system configuration

- 00:12:30 - Monitoring tools discussion - why no perfect solution exists and what to consider

- 00:13:00 - Being amenable to change and avoiding long-term contracts with security tools

- 00:14:00 - Question 3: Strategies for overcoming resistance to technology adoption

- 00:14:30 - Demonstrating efficiency and explaining the full implementation process

- 00:15:00 - Converting time savings to dollars and cents for senior attorney buy-in

- 00:15:30 - Mindset shift for billable hour attorneys and staying competitive in the market

- 00:16:00 - Being a technology guinea pig and testing tools yourself first

- 00:16:30 - Showing real results to encourage buy-in from colleagues

- 00:17:00 - Real-world Microsoft Word example - styles, cross-references, and table of contents time savings

- 00:17:30 - Showing value add and how technology can bring in more revenue

- 00:18:00 - Where to find Cordell Robinson - LinkedIn, www.bcf-us.com, Brownstone Consulting Firm

- 00:18:30 - Company description and closing remarks


MTC: Balancing Digital Transparency and Government Employee Safety: The Legal Profession's Ethical Crossroads in the Age of ICE Tracking Apps

The balance between government employee safety and the public’s right to know is always in flux.

The intersection of technology, government transparency, and employee safety has created an unprecedented ethical challenge for the legal profession. Recent developments surrounding ICE tracking applications like ICEBlock, People Over Papers, and similar platforms have thrust lawyers into a complex moral and professional landscape where the traditional principle of "sunlight as the best disinfectant" collides with legitimate security concerns for government employees.

The Technology Landscape: A New Era of Crowdsourced Monitoring

The proliferation of ICE tracking applications represents a significant shift in how citizens monitor government activities. ICEBlock, developed by Joshua Aaron, allows users to anonymously report ICE agent sightings within a five-mile radius, functioning essentially as "Waze for immigration enforcement". People Over Papers, created by TikTok user Celeste, operates as a web-based platform using Padlet technology to crowdsource and verify ICE activity reports with photographs and timestamps. Additional platforms include Islip Forward, which provides real-time push notifications for Suffolk County residents, and Coquí, offering mapping and alert systems for ICE activities.

These applications exist within a broader ecosystem of similar technologies. Traditional platforms like Waze, Google Maps, and Apple Maps have long enabled police speed trap reporting. More controversial surveillance tools include Fog Reveal, which allows law enforcement to track civilian movements using advertising IDs from popular apps. The distinction between citizen-initiated transparency tools and government surveillance technologies highlights the complex ethical terrain lawyers must navigate.

The Ethical Framework: ABA Guidelines and Professional Responsibilities

Legal professionals face multiple competing ethical obligations when addressing these technological developments. ABA Model Rule 1.1 requires lawyers to maintain technological competence, understanding both the benefits and risks associated with relevant technology. This competence requirement extends beyond mere familiarity to encompass the ethical implications of technology use in legal practice.

Rule 1.6's confidentiality obligations create additional complexity when lawyers handle cases involving government employees, ICE agents, or immigration-related matters. The duty to protect client information becomes particularly challenging when technology platforms may compromise attorney-client privilege or expose sensitive personally identifiable information to third parties.

The tension between advocacy responsibilities and ethical obligations becomes acute when lawyers represent clients on different sides of immigration enforcement. Attorneys representing undocumented immigrants may view transparency tools as legitimate safety measures, while those representing government employees may consider the same applications as security threats that endanger their clients.

Balancing Transparency and Safety: The Core Dilemma

Who watches whom? Exploring transparency limits in democracy.

The principle of transparency in government operations serves as a cornerstone of democratic accountability. However, the safety of government employees, including ICE agents, presents legitimate counterbalancing concerns. Federal officials have reported significant increases in assaults against ICE agents, citing these tracking applications as contributing factors.

The challenge for legal professionals lies in advocating for their clients while maintaining ethical standards that protect all parties' legitimate interests. This requires nuanced understanding of both technology capabilities and legal boundaries. Lawyers must recognize that the same transparency tools that may protect their immigrant clients could potentially endanger government employees who are simply performing their lawful duties.

Technology Ethics in Legal Practice: Professional Standards

The legal profession's approach to technology ethics must evolve to address these emerging challenges. Lawyers working with sensitive immigration cases must implement robust cybersecurity measures, understand the privacy implications of various communication platforms, and maintain clear boundaries between personal advocacy and professional obligations.

The ABA's guidance on generative AI and technology use provides relevant frameworks for addressing these issues. Legal professionals must ensure that their technology choices do not inadvertently compromise client confidentiality or create security vulnerabilities that could harm any party to legal proceedings.

Jurisdictional and Regulatory Considerations

The removal of ICEBlock from Apple's App Store and People Over Papers from Padlet demonstrates how private platforms exercise content moderation that can significantly impact government transparency tools. These actions raise important questions about the role of technology companies in mediating between transparency advocates and security concerns.

Legal professionals must understand the complex regulatory environment governing these technologies. Federal agencies like CISA recommend encrypted communications for high-value government targets while acknowledging the importance of government transparency. This creates a nuanced landscape where legitimate security measures must coexist with accountability mechanisms.

Professional Recommendations and Best Practices

Legal practitioners working in this environment should adopt several key practices. First, maintain clear separation between personal political views and professional obligations. Second, implement comprehensive cybersecurity measures that protect all client information regardless of their position in legal proceedings. Third, stay informed about technological developments and their legal implications through continuing education focused on technology law and ethics.

Lawyers should also engage in transparent communication with clients about the risks and benefits of various technology platforms. This includes obtaining informed consent when using technologies that may impact privacy or security, and maintaining awareness of how different platforms handle data security and user privacy.

The legal profession must also advocate for balanced regulatory approaches that protect both government transparency and employee safety. This may involve supporting legislation that creates appropriate oversight mechanisms while maintaining necessary security protections for government workers.

The Path Forward: Ethical Technology Advocacy

The future of legal practice will require increasingly sophisticated approaches to balancing competing interests in our digital age. Legal professionals must serve as informed advocates who understand both the technological landscape and the ethical obligations that govern their profession. This includes recognizing that technology platforms designed for legitimate transparency purposes can be misused, while also acknowledging that government accountability remains essential to democratic governance.

Transparency is a balancing act that all lawyers need to be aware of in their practice!

The legal profession's response to ICE tracking applications and similar technologies will establish important precedents for how lawyers navigate future ethical challenges in our increasingly connected world. By maintaining focus on professional ethical standards while advocating effectively for their clients, legal professionals can help ensure that technological advances serve justice rather than undermining it.

Success in this environment requires lawyers to become technologically literate advocates who understand both the promise and perils of digital transparency tools. Only through this balanced approach can the legal profession effectively serve its clients while maintaining the ethical standards that define professional practice in the digital age.

MTC

MTC (Bonus): The Critical Importance of Source Verification When Using AI in Legal Practice 📚⚖️

The Fact-Checking Lawyer vs. AI Errors!

Legal professionals face an escalating verification crisis as AI tools proliferate throughout the profession. A recent conversation I had with an AI research assistant about AOL's dial-up internet shutdown perfectly illustrates why lawyers must rigorously fact-check AI outputs. In preparing my editorial earlier today (see here), I came across a glaring error: the AI repeatedly cited 2024 instead of 2025 as the year of AOL's September 30 shutdown. Having to correct the same mistake again and again highlighted the dangerous gap between AI confidence and AI accuracy, a gap that has produced over 410 documented AI hallucination cases worldwide. (You can also see my previous discussions on the topic here.)

This verification imperative extends beyond simple date corrections. Stanford University research reveals troubling accuracy rates across legal AI tools, with some systems producing incorrect information over 34% of the time, while even the best-performing specialized legal AI platforms still generate false information approximately 17% of the time. These statistics underscore a fundamental truth: AI tools are powerful research assistants, not infallible oracles.

AI Hallucinations in the Courtroom are not a good thing!

Editor's Note: The irony was not lost on me that while writing this editorial about AI accuracy problems, I had to correct the AI assistant multiple times for contradictory statements about error rates in this very paragraph. The AI initially claimed Westlaw had 34% errors while specialized legal platforms had only 17% errors—ignoring that Westlaw IS a specialized legal platform. This real-time experience of catching AI logical inconsistencies while drafting an article about AI verification perfectly demonstrates the critical need for human oversight that this editorial advocates.

The consequences of inadequate verification are severe and mounting. Courts have imposed sanctions ranging from $2,500 to $30,000 on attorneys who submitted AI-generated fake cases. Recent examples include Morgan & Morgan lawyers sanctioned $5,000 for citing eight nonexistent cases and a California attorney fined $10,000 for submitting briefs in which "nearly all legal quotations ... [were] fabricated." These sanctions reflect judicial frustration with attorneys who fail to fulfill their gatekeeping responsibilities.

Legal professionals face implicit ethical obligations that demand rigorous source verification when using AI tools. ABA Model Rule 1.1 (Competence) requires attorneys to understand "the benefits and risks associated with relevant technology," including AI's propensity for hallucinations. Rule 3.3 (Candor Toward the Tribunal) prohibits knowingly making false statements of fact or law to courts. Rule 5.3 (Responsibilities Regarding Nonlawyer Assistance) extends supervisory duties to AI tools, requiring lawyers to ensure AI work product meets professional standards. Courts consistently emphasize that "existing rules impose a gatekeeping role on attorneys to ensure the accuracy of their filings."

The Tech-Savvy Lawyer should have AI Verification Protocols.

The legal profession must establish verification protocols that treat AI as sophisticated but fallible technology requiring human oversight (perhaps warranting a new comment to Rule 1.1, building on Comment 8). This includes cross-referencing AI citations against authoritative databases, validating factual claims through independent sources, and maintaining detailed records of the verification process. Resources like The Tech-Savvy Lawyer blog and podcast provide valuable guidance for implementing these best practices. As one federal judge warned, "the duty to check their sources and make a reasonable inquiry into existing law remains unchanged" in the age of AI.
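For lawyers comfortable with a little scripting, the record-keeping piece of such a protocol can even be automated. The Python sketch below is a minimal illustration, not a production tool: the lookup_citation function and the CONFIRMED_CITATIONS set are hypothetical stand-ins for a real query to an authoritative database or citator, and every case name in it is invented. What it demonstrates is the discipline the protocol calls for: each AI-supplied citation gets checked, and each check leaves a dated entry in an audit log.

```python
import csv
import datetime

# All case names and citations below are hypothetical placeholders used
# purely to illustrate the record-keeping idea; they are not real authorities.
CONFIRMED_CITATIONS = {
    "Example v. Sample, 123 F.3d 456 (9th Cir. 1997)",
}

def lookup_citation(citation: str) -> bool:
    """Stand-in for a real check against an authoritative database or citator."""
    return citation in CONFIRMED_CITATIONS

def verify_and_log(citations, log_path="verification_log.csv"):
    """Check each AI-supplied citation and keep a dated record of the result."""
    with open(log_path, "a", newline="") as f:
        writer = csv.writer(f)
        for cite in citations:
            status = "confirmed" if lookup_citation(cite) else "NOT FOUND - verify by hand"
            writer.writerow([datetime.date.today().isoformat(), cite, status])
            if status != "confirmed":
                print(f"FLAG: could not confirm {cite!r}; pull the reporter before filing.")

# Screen a draft's citation list before it goes out the door.
verify_and_log([
    "Example v. Sample, 123 F.3d 456 (9th Cir. 1997)",
    "Phantom v. Hallucination, 999 F.4th 1 (1st Cir. 2031)",  # the kind of cite that should be flagged
])
```

Even a firm that never automates the lookup can adopt the same structure on paper: citation, date checked, source consulted, result.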

Attorneys who embrace AI without implementing robust verification systems risk professional sanctions, client harm, and reputational damage that could have been prevented through diligent fact-checking. Simply put: check your work when using AI.

MTC

MTC: The End of Dial-Up Internet: A Digital Divide Crisis for Legal Practice 📡⚖️

Dial-up shutdown deepens rural legal digital divide.

The legal profession faces an unprecedented access-to-justice challenge now that AOL has officially terminated its dial-up internet service on September 30, 2025, after 34 years of operation. The closure affects an estimated 163,401 American households that, as of 2023, depended solely on dial-up connections, creating barriers to legal services in an increasingly digital world. While other dial-up providers such as NetZero, Juno, and DSLExtreme continue operating, they may not cover all of the geographic areas previously served by AOL, and their long-term viability is limited.

While many view dial-up as obsolete, its elimination exposes critical technology gaps that disproportionately impact vulnerable populations requiring legal assistance. Rural residents, low-income individuals, and elderly clients who relied on this affordable connectivity option now face digital exclusion from essential legal services and court systems. The remaining dial-up options provide minimal relief as these smaller providers lack AOL's extensive infrastructure coverage.

Split Courtroom!

Legal professionals must recognize that technology barriers create access to justice issues. When clients cannot afford high-speed internet or live in areas without broadband infrastructure, they lose the ability to participate in virtual court proceedings, access online legal resources, or communicate effectively with their attorneys. This digital divide effectively creates a two-tiered justice system where technological capacity determines legal access.

The legal community faces an implicit ethical duty to address these technology barriers. While no specific ABA Model Rule mandates accommodating clients' internet limitations, the professional responsibility to ensure access to justice flows from fundamental ethical obligations.

This implicit duty derives from several ABA Model Rules that create relevant obligations. Rule 1.1 (Competence) requires attorneys to understand "the benefits and risks associated with relevant technology," including how technology barriers affect client representation. Rule 1.4 (Communication) mandates effective client communication, which encompasses understanding technology limitations that prevent meaningful attorney-client interaction. Rule 1.6 (Confidentiality) requires reasonable efforts to protect client information, necessitating awareness of technology security implications. Additionally, 41 jurisdictions have adopted technology competence requirements that obligate lawyers to stay current with technological developments affecting legal practice.

Lawyers are leaders when it comes to calls to action to help narrow the access-to-justice divide!

The legal community must advocate for affordable internet solutions and develop technology-inclusive practices to fulfill these professional responsibilities and ensure equal access to justice for all clients.

MTC