MTC: The Hidden AI Crisis in Legal Practice: Why Lawyers Must Unmask Embedded Intelligence Before It's Too Late!

Lawyers need digital due diligence in order to stay on top of their ethics requirements.

Artificial intelligence has infiltrated legal practice in ways most attorneys never anticipated. While lawyers debate whether to adopt AI tools, they've already been using them—often without knowing it. These "hidden AI" features, silently embedded in everyday software, present a compliance crisis that threatens attorney-client privilege, confidentiality obligations, and professional responsibility standards.

The Invisible Assistant Problem

Hidden AI operates in plain sight. Microsoft Word's Copilot suggests edits while you draft pleadings. Adobe Acrobat's AI Assistant automatically identifies contracts and extracts key terms from PDFs you're reviewing. Grammarly's algorithm analyzes your confidential client communications for grammar errors. Zoom's AI Companion transcribes strategy sessions with clients—and sometimes captures what happens after you disconnect.

DocuSign now deploys AI-Assisted Review to analyze agreements against predefined playbooks. Westlaw and Lexis+ embed generative AI directly into their research platforms, with hallucination rates between 17% and 33%. Even practice management systems like Clio and Smokeball have woven AI throughout their platforms, from automated time tracking descriptions to matter summaries.

The challenge isn't whether these tools provide value—they absolutely do. The crisis emerges because lawyers activate features without understanding the compliance implications.

ABA Model Rules Meet Modern Technology

The American Bar Association's Formal Opinion 512, issued in July 2024, makes clear that lawyers bear full responsibility for AI use regardless of whether they actively chose the technology or inherited it through software updates. Several Model Rules directly govern hidden AI features in legal practice.

Model Rule 1.1 requires competence, including maintaining knowledge about the benefits and risks associated with relevant technology. Comment 8 to this rule, adopted by most states, mandates that lawyers understand not just primary legal tools but embedded AI features within those tools. This means attorneys cannot plead ignorance when Microsoft Word's AI Assistant processes privileged documents.

Model Rule 1.6 imposes strict confidentiality obligations. Lawyers must make "reasonable efforts to prevent the inadvertent or unauthorized disclosure of, or unauthorized access to, information relating to the representation of a client". When Grammarly accesses your client emails to check spelling, or when Zoom's AI transcribes confidential settlement discussions, you're potentially disclosing protected information to third-party AI systems.

Model Rule 5.3 extends supervisory responsibilities to "nonlawyer assistance," which includes non-human assistance like AI. The 2012 amendment changing "assistants" to "assistance" specifically contemplated this scenario. Lawyers must supervise AI tools with the same diligence they'd apply to paralegals or junior associates.

Model Rule 1.4 requires communication with clients about the means used to accomplish their objectives. This includes informing clients when AI will process their confidential information, obtaining informed consent, and explaining the associated risks.

Where Hidden AI Lurks in Legal Software

🚨 Lawyers, don't breach your ethical duties with AI shortcuts!

Microsoft 365 Copilot integrates AI across Word, Outlook, and Teams—applications lawyers use hundreds of times daily. The AI drafts documents, summarizes emails, and analyzes meeting transcripts. Most firms that subscribe to Microsoft 365 have Copilot enabled by default in recent licensing agreements, yet many attorneys remain unaware their correspondence flows through generative AI systems.

Adobe Acrobat now automatically recognizes contracts and generates summaries with AI Assistant. When you open a PDF contract, Adobe's AI immediately analyzes it, extracts key dates and terms, and offers to answer questions about the document. This processing occurs before you explicitly request AI assistance.

Legal research platforms embed AI throughout their interfaces. Westlaw Precision AI and Lexis+ AI process search queries through generative models that hallucinate incorrect case citations 17% to 33% of the time according to Stanford research. These aren't separate features—they're integrated into the standard search experience lawyers rely upon daily.

Practice management systems deploy hidden AI for intake forms, automated time entry descriptions, and matter summaries. Smokeball's AutoTime AI generates detailed billing descriptions automatically. Clio integrates AI into client relationship management. These features activate without explicit lawyer oversight for each instance of use.

Communication platforms present particularly acute risks. Zoom AI Companion and Microsoft Teams AI automatically transcribe meetings and generate summaries. Otter.ai's meeting assistant infamously continued recording after participants thought a meeting ended, capturing investors' candid discussion of their firm's failures. For lawyers, such scenarios could expose privileged attorney-client communications or work product.

The Compliance Framework

Establishing ethical AI use requires systematic assessment. First, conduct a comprehensive technology audit. Inventory every software application your firm uses and identify embedded AI features. This includes obvious tools like research platforms and less apparent sources like PDF readers, email clients, and document management systems. (A minimal audit sketch appears after the fifth step below.)

Second, evaluate each AI feature against confidentiality requirements. Review vendor agreements to determine whether the AI provider uses your data for model training, stores information after processing, or could disclose data in response to third-party requests. Grammarly, for example, offers HIPAA compliance but only for enterprise customers with 100+ seats who execute Business Associate Agreements. Similar limitations exist across legal software.

Third, implement technical safeguards. Disable AI features that lack adequate security controls. Configure settings to prevent automatic data sharing. Adobe and Microsoft both offer options to prevent AI from training on customer data, but these protections require active configuration.

Fourth, establish firm policies governing AI use. Designate responsibility for monitoring AI features in licensed software. Create protocols for evaluating new tools before deployment. Develop training programs ensuring all attorneys understand their obligations when using AI-enabled applications.

Fifth, secure client consent. Update engagement letters to disclose AI use in service delivery. Explain the specific risks associated with processing confidential information through AI systems. Document informed consent for each representation.
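To make the first step concrete, here is a minimal sketch of how a technology audit could begin, assuming a hand-maintained CSV inventory with an "application" column. The feature map is illustrative only, drawn from the tools discussed above; verify every entry against each vendor's current documentation.

```python
# Minimal technology-audit sketch. Assumptions: a CSV inventory file named
# "firm_software_inventory.csv" (hypothetical) with an "application" column,
# and an illustrative map of products to embedded AI features.
import csv

KNOWN_EMBEDDED_AI = {
    "Microsoft Word": ["Copilot drafting suggestions"],
    "Adobe Acrobat": ["AI Assistant contract analysis"],
    "Grammarly": ["cloud-based text analysis"],
    "Zoom": ["AI Companion transcription"],
    "Westlaw": ["Precision AI generative search"],
    "Lexis+": ["Lexis+ AI generative search"],
    "Clio": ["AI client relationship features"],
    "Smokeball": ["AutoTime AI billing descriptions"],
}

def audit(inventory_path: str) -> None:
    """Flag every inventoried application with known embedded AI features."""
    with open(inventory_path, newline="") as f:
        for row in csv.DictReader(f):
            app = row["application"].strip()
            features = KNOWN_EMBEDDED_AI.get(app)
            if features:
                print(f"REVIEW  {app}: {', '.join(features)}")
            else:
                print(f"VERIFY  {app}: no known embedded AI; confirm manually")

if __name__ == "__main__":
    audit("firm_software_inventory.csv")
```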

The Verification Imperative

ABA Formal Opinion 512 emphasizes that lawyers cannot delegate professional judgment to AI. Every output requires independent verification. When Westlaw Precision AI suggests research authorities, lawyers must confirm those cases exist and accurately reflect the law. When CoCounsel Drafting generates contract language in Microsoft Word, attorneys must review for accuracy, completeness, and appropriateness to the specific client matter.

The infamous Mata v. Avianca case, where lawyers submitted AI-generated briefs citing fabricated cases, illustrates the catastrophic consequences of failing to verify AI output. Every jurisdiction that has addressed AI ethics emphasizes this verification duty.

Cost and Billing Considerations

Formal Opinion 512 addresses whether lawyers can charge the same fees when AI accelerates their work. The opinion suggests lawyers cannot bill for time saved through AI efficiency under traditional hourly billing models. However, value-based and flat-fee arrangements may allow lawyers to capture efficiency gains, provided clients understand AI's role during initial fee negotiations.

Lawyers cannot bill clients for time spent learning AI tools—maintaining technological competence represents a professional obligation, not billable work. As AI becomes standard in legal practice, using these tools may become necessary to meet competence requirements, similar to how electronic research and e-discovery tools became baseline expectations.

Practical Steps for Compliance

Start by examining your Microsoft Office subscription. Determine whether Copilot is enabled and what data sharing settings apply. Review Adobe Acrobat's AI Assistant settings and disable automatic contract analysis if your confidentiality review hasn't been completed.
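For Acrobat on Windows, Adobe is reported to expose an enterprise "FeatureLockDown" registry preference that turns off the generative AI Assistant. The sketch below assumes that preference name (bEnableGentech); confirm it against Adobe's current Enterprise Toolkit documentation before rolling it out firm-wide, and note that writing to HKEY_LOCAL_MACHINE requires administrator rights.

```python
# Windows-only sketch: write the reported Acrobat lockdown preference that
# disables generative AI features. The value name "bEnableGentech" is an
# assumption to verify against Adobe's Enterprise Toolkit reference.
import winreg

KEY_PATH = r"SOFTWARE\Policies\Adobe\Adobe Acrobat\DC\FeatureLockDown"

def disable_acrobat_ai_assistant() -> None:
    """Set the lockdown DWORD to 0, reported to turn Acrobat's AI features off."""
    key = winreg.CreateKeyEx(winreg.HKEY_LOCAL_MACHINE, KEY_PATH, 0,
                             winreg.KEY_SET_VALUE)
    try:
        winreg.SetValueEx(key, "bEnableGentech", 0, winreg.REG_DWORD, 0)
    finally:
        winreg.CloseKey(key)

if __name__ == "__main__":
    disable_acrobat_ai_assistant()
    print("Lockdown value written; restart Acrobat for it to take effect.")
```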

Contact your Westlaw and Lexis representatives to understand exactly how AI features operate in your research platform. Ask specific questions: Does the AI train on your search queries? How are hallucinations detected and corrected? What happens to documents you upload for AI analysis?

Audit your practice management system. If you use Clio, Smokeball, or similar platforms, identify every AI feature and evaluate its compliance with confidentiality obligations. Automatic time tracking that generates descriptions based on document content may reveal privileged information if billing statements aren't properly redacted.

Review video conferencing policies. Establish protocols requiring explicit disclosure when AI transcription activates during client meetings. Obtain informed consent before recording privileged discussions. Consider disabling AI assistants entirely for confidential matters.

Implement regular training programs. Technology competence isn't achieved once—it requires ongoing education as AI features evolve. Schedule quarterly reviews of new AI capabilities deployed in your software stack.

Final Thoughts 👉 The Path Forward

Lawyers must be able to identify and contain AI within the tech tools they use for work!

Hidden AI represents both opportunity and obligation. These tools genuinely enhance legal practice by accelerating research, improving drafting, and streamlining administrative tasks. The efficiency gains translate into better client service and more competitive pricing.

However, lawyers cannot embrace these benefits while ignoring their ethical duties. The Model Rules apply with equal force to hidden AI as to any other aspect of legal practice. Ignorance provides no defense when confidentiality breaches occur or inaccurate AI-generated content damages client interests.

The legal profession stands at a critical juncture. AI integration will only accelerate as software vendors compete to embed intelligent features throughout their platforms. Lawyers who proactively identify hidden AI, assess compliance risks, and implement appropriate safeguards will serve clients effectively while maintaining professional responsibility.

Those who ignore hidden AI features operating in their daily practice face disciplinary exposure, malpractice liability, and potential privilege waivers. The choice is clear: unmask the hidden AI now, or face consequences later.

MTC

🎙️ TSL Labs! Google AI Discussion of MTC: 🚨‼️ Emergency BOLO! 🚨‼️ Lawyers on the Go: Essential Tech Strategies for Air Travel During the Government Shutdown ✈️

📌 Too Busy to Read This Week's Editorial?

Join us for an emergency professional deep dive into essential tech strategies for air travel during government shutdowns and travel disruptions. 🛫 This AI-powered roundtable unpacks Michael D.J. Eisenberg's critical editorial with actionable intelligence on real-time flight tracking, data security protocols, connectivity redundancy, and power management. Whether you're a legal professional navigating travel chaos or anyone managing disruptions during system-wide stress, discover how to transform from reactive scrambling to proactive control—turning travel crises into manageable projects you command. Learn the five professional-grade rules that separate those who navigate disruptions from those who get derailed.

In our conversation, we cover the following:

  • 00:00:00 – Introduction: Welcome to Tech Savvy Lawyer Labs Emergency BOLO

  • 00:01:00 – Travel Chaos as the New Normal: System Volatility & Professional Vulnerability

  • 00:02:00 – Flight Schedule Control: The Illusion & Reality of Travel Disruptions

  • 00:02:00 – Extreme Volatility in Air Travel: Cascading Flight Cancellations & Customer Service Chaos

  • 00:02:00 – Real-Time Flight Tracking Strategy: Flightradar24 & FlightAware Intelligence Systems

  • 00:02:00 – Backup Flight Monitoring: Multi-Carrier Surveillance Strategy (Delta, United, American)

  • 00:03:00 – Proactive Intelligence vs. Reactive Response: One-Hour Lead Time Advantage

  • 00:03:00 – Early Rebooking Strategy: First and Second Choice Flight Selection

  • 00:03:00 – Trusted Traveler Programs: TSA PreCheck & Time Investment ROI

  • 00:03:00 – TSA PreCheck Value: $78 for Five Years & Security Line Efficiency

  • 00:03:00 – Global Entry: $100 for Five Years with International Customs Acceleration

  • 00:04:00 – Trusted Traveler Planning: Background Checks, Interviews & Months-Ahead Application

  • 00:04:00 – Public WiFi Malpractice Alert: Data Security & Vulnerability Assessment

  • 00:04:00 – Personal Mobile Hotspot: Cellular Encryption Over Public Networks

  • 00:05:00 – Dual Carrier Coverage: eSIM Technology & Connectivity Insurance

  • 00:05:00 – Dual SIM Implementation: T-Mobile & Verizon Redundancy Strategy Without Two Phones

  • 00:05:00 – eSIM Digital Technology: Two Active Lines on One Device

  • 00:05:00 – Prepaid Data Plan Strategy: Coffee-Price Monthly Cost for Connectivity Backup

  • 00:06:00 – VPN Non-Negotiables: Encrypted Tunnel & Automatic Connection Protocol

  • 00:06:00 – VPN Automatic Startup: Device Initialization & All-Device Coverage (Phone, Tablet, Laptop)

  • 00:06:00 – International Travel Security: VPN Encryption & Surveillance Protection

  • 00:07:00 – TSA-Approved Power Banks: 100 Watt-Hour Specifications & 27,000 mAh Ceiling

  • 00:07:00 – Laptop Charging: 100-Watt USB-C Power Bank Requirements (MacBook Pro)

  • 00:07:00 – Multi-Device Charging: Simultaneous Laptop, Phone & Tablet Power Delivery

  • 00:07:00 – Smart Power Display: Charging Speed Monitoring & Juice Rationing

  • 00:07:00 – Surge Protector Safety: Airport Outlet Protection & Device Insurance

  • 00:08:00 – Airport Lounges: Priority Pass Access & Productivity Sanctuaries (1,300+ Worldwide)

  • 00:08:00 – Travel Credit Card Benefits: Complimentary Lounge Visits Strategy

  • 00:08:00 – Conference Call Chaos: Professional Communication Environment Solutions

  • 00:08:00 – Noise-Canceling Headphones: Sony XM5 & Bose QuietComfort Professional Focus

  • 00:08:00 – Battery Life Requirements: 30-40 Hour Endurance for Extended Delays

  • 00:09:00 – Offline Access Mandate: Pre-Departure Critical File Downloads

  • 00:09:00 – Six-Hour Offline Capability: Zero-Connectivity Work Strategy

  • 00:09:00 – Adobe Scan App: OCR Technology & Mobile Document Management

  • 00:10:00 – Adobe Ecosystem Syncing: Cross-Device Workflow & E-Signature Integration

  • 00:10:00 – Apple Ecosystem Continuity: iPhone, iPad & MacBook Seamless Integration

  • 00:10:00 – FileVault Encryption & Face ID: Built-In Security Non-Negotiables

  • 00:11:00 – Five Professional-Grade Rules: Pre-Travel Checklist & Crisis Preparation

  • 00:11:00 – Rule One: Full Device Charge Before Departure

  • 00:11:00 – Rule Two: Offline Maps & Critical Files Downloaded Locally

  • 00:11:00 – Rule Three: Screenshot Everything (Boarding Passes, Hotel, Car Rental)

  • 00:11:00 – Rule Four: Distributed Charger Storage Across Multiple Bags for Backup Power

  • 00:11:00 – Rule Five: Share Itinerary with Emergency Contact

  • 00:11:00 – Post-Crisis Integration: Permanent Daily Workflow Implementation

  • 00:11:00 – The Bigger Question: Crisis Tools as Permanent Professional Standards

  • 00:12:00 – Transition to AI Ethics Discussion: Hidden AI Crisis in Legal Practice Teaser

  • 00:14:00 – Conclusion: Tech Savvy Lawyer Labs Roundtable Summary & Resources


MTC: London's iPhone Theft Crisis: Critical Mobile Device Security Lessons for Traveling Lawyers 📱⚖️

Lawyers can learn about mobile cybersecurity from the recent iPhone thefts in London.

Recent events in London should serve as a wake-up call for every legal professional who carries client data beyond the office walls. London police recently dismantled a sophisticated international theft ring responsible for smuggling approximately 40,000 stolen iPhones to China in just twelve months. This operation revealed thieves earning up to £300 per stolen device, with phones reselling overseas for as much as $5,000. With over 80,000 phones stolen in London last year alone, this crisis underscores critical vulnerabilities that lawyers must address when working remotely.

The sophistication of these operations is alarming. Criminals on electric bikes snatch phones from unsuspecting victims and immediately wrap devices in aluminum foil to block tracking signals. This industrial-scale crime demonstrates that our mobile devices—which contain privileged communications, case strategies, and confidential client data—are valuable targets for organized criminal networks operating globally.

Your Ethical Obligations Are Clear

ABA Model Rule 1.1 requires lawyers to maintain competence, including understanding "the benefits and risks associated with relevant technology". This duty of technological competence has been adopted by over 40 states and isn't optional—it's fundamental to ethical practice. Model Rule 1.6(c) mandates that lawyers "make reasonable efforts to prevent the inadvertent or unauthorized disclosure of, or unauthorized access to, information relating to the representation of a client".

When your phone disappears—whether through theft, loss, or border seizure—you face potential violations of these ethical duties. Recent data shows U.S. Customs and Border Protection searched 14,899 devices between April and June 2025, a 16.7% increase from previous surges. Lawyers traveling internationally face heightened risks, and a stolen or searched device can compromise attorney-client privilege instantly.

Essential Security Measures for Mobile Lawyers

Before leaving your office, implement these non-negotiable protections. Enable full-device encryption on all smartphones, tablets, and laptops. For iPhones, setting a passcode automatically enables encryption; Android users must manually activate this feature in security settings. Strong passwords matter—use alphanumeric combinations of at least 12 characters, avoiding easily guessed patterns.
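One way to build this into a pre-departure routine is a quick scripted check. The sketch below assumes macOS and its built-in fdesetup tool; a Windows equivalent would query BitLocker status (for example, via "manage-bde -status") instead.

```python
# Pre-departure disk-encryption check (macOS only), using the built-in
# fdesetup utility. A sketch under the assumption that the laptop runs macOS.
import subprocess

def filevault_enabled() -> bool:
    """Return True if macOS reports that FileVault is turned on."""
    result = subprocess.run(["fdesetup", "status"],
                            capture_output=True, text=True)
    return "FileVault is On" in result.stdout

if __name__ == "__main__":
    if filevault_enabled():
        print("OK: full-disk encryption is on.")
    else:
        print("WARNING: enable FileVault before leaving the office.")
```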

Lawyers need to know how to protect their clients' PII when crossing the border!

Two-factor authentication (2FA) adds critical protection layers. Even if someone obtains your password, 2FA requires secondary verification through your phone or authentication app. This simple step dramatically reduces unauthorized access risks. Configure remote wipe capabilities before traveling. If your device is stolen, you can erase all data remotely, protecting client information even when physical recovery is impossible.

Disable biometric authentication when traveling internationally. Face ID and fingerprint scanners can be used against you at borders where Fourth Amendment protections are diminished. Restart your device before crossing borders to force password-only access. Consider carrying a "clean" device for international travel, accessing files only through encrypted cloud storage rather than storing sensitive data locally.

Coffee Shops, Airports, and Public Spaces

Public Wi-Fi networks pose serious interception risks. Hackers create fake hotspots with legitimate-sounding names, capturing everything you transmit. As lawyers increasingly embrace cloud-based computing for their work, encryption when using public Wi-Fi becomes non-negotiable.

Always use a trusted VPN (Virtual Private Network) when connecting to public networks. VPNs encrypt your internet traffic, preventing interception even on compromised networks. Alternatively, use your smartphone's personal hotspot rather than connecting to public Wi-Fi. Turn off file sharing on all mobile devices. Avoid accessing highly sensitive client files in public spaces altogether—save detailed case work for secure, private connections.
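A simple habit that enforces the VPN rule: confirm your visible public IP actually changed after the VPN connects, before opening any client file. The sketch below queries api.ipify.org, one public IP-echo service used here as an assumed example; substitute whatever service your firm approves.

```python
# Quick VPN sanity check: fetch the public IP the outside world sees.
# If it matches your known home/office IP, the VPN tunnel is not active.
import urllib.request

def public_ip() -> str:
    """Return the caller's public IP as reported by an external echo service."""
    with urllib.request.urlopen("https://api.ipify.org", timeout=10) as resp:
        return resp.read().decode().strip()

if __name__ == "__main__":
    print(f"Current public IP: {public_ip()}")
    print("Compare against your known IP before opening client files.")
```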

Physical security deserves equal attention. Visual privacy screens prevent shoulder surfing. Position yourself with your back to walls in coffee shops so others cannot observe your screen. Be alert to your surroundings and maintain physical control of devices at all times. Never leave laptops, tablets, or phones unattended, even briefly.

Border Crossings and International Travel

Lawyers crossing international borders face unique challenges. CBP policies permit extensive device searches within 100 miles of borders under the border search exception, significantly reducing Fourth Amendment protections. New York State Bar Association Ethics Opinion 2017-5 addresses lawyers' duties when traveling with client data across borders.

The reasonableness standard governs your obligations. Evaluate whether you truly need to bring confidential information across borders. If travel requires client data, bring only materials professionally necessary for your specific purpose. Consider these strategies: store files in encrypted cloud services rather than locally; use strong passwords and disable biometric authentication; carry your bar card to identify yourself as an attorney if questioned; identify which files contain privileged information before reaching the border.

If border agents demand device access, clearly state that you are an attorney and the device contains privileged client communications. Ask whether the request is optional or mandatory. If agents conduct a search, document what occurred and consider whether client notification is required under Rule 1.4. New York Rule 1.6 requires taking reasonable steps to prevent unauthorized disclosure, with heightened precautions necessary when government agencies are opposing parties.

Practical Implementation Today

Create firm policies addressing mobile device security. Require immediate reporting of lost or stolen devices. Implement Mobile Device Management (MDM) software to monitor, secure, and remotely wipe all connected devices. Conduct regular security awareness training covering email practices, phishing recognition, and social engineering tactics.

Develop an Incident Response Plan before breaches occur. Know which experts to contact, document cybersecurity policies, and establish notification protocols. Under various state laws and regulations like California Civil Code § 1798.82 and HIPAA's Breach Notification Rule, lawyers may be legally required to notify clients of data breaches.

Lawyers are on the front line of cybersecurity when on the go!

Communicate with clients about security measures. Obtain informed consent regarding electronic communications and any security limitations. Some firms include these discussions in engagement letters, setting clear expectations about communication methods and encryption use.

Stay current with evolving threats. Subscribe to legal technology security bulletins. The Tech-Savvy Lawyer blog regularly covers mobile security issues, including recent coverage of the SlopAds malware campaign that compromised 224 Android applications on Google Play Store. Technology competence requires ongoing learning as threats and safeguards evolve.

The Bottom Line

The London iPhone theft crisis demonstrates that our devices are valuable targets for sophisticated criminal networks operating internationally. Every lawyer who works outside the office—whether at coffee shops, client meetings, or international destinations—must take mobile security seriously. Your ethical obligations under Model Rules 1.1 and 1.6 demand it. Your clients' confidential information depends on it. Your professional reputation requires it.

Implementing these security measures isn't complicated or expensive. Enable encryption. Use strong passwords and 2FA. Avoid public Wi-Fi or use VPNs. Disable biometrics when traveling. Maintain physical control of devices. These straightforward steps significantly reduce risks while allowing you to work effectively from anywhere.

The legal profession has embraced mobile technology's benefits—now we must address its risks with equal commitment. Don't wait for a theft, loss, or border seizure to prompt action. Protect your clients' confidential information today.

MTC

MTC: Deepfakes, Deception, and Professional Duty - What the North Bethesda AI Incident Teaches Lawyers About Ethics in the Digital Age 🧠⚖️

Lawyers need to be aware of the potential professional and ethical consequences if they allow deepfakes to enter the courtroom.

In October 2025, a seemingly lighthearted prank spiraled into a serious legal matter that carries profound implications for every practicing attorney. A 27-year-old North Bethesda woman sent her husband an AI-generated photograph depicting a man lounging on their living room couch. Alarmed by the apparent intrusion, he called 911. The subsequent police response was swift and overwhelming: eight marked cruisers raced through daytime traffic with lights and sirens activated. When officers arrived, they found no burglar—only the woman, alone at home, a cellphone mounted on a tripod aimed at the front door, and an admission that it had all been a prank.

The story might have ended as a cautionary tale about viral social media trends gone awry. But for the legal profession, it offers urgent and multifaceted lessons about technological competence, professional responsibility, and the ethical obligations that now define modern legal practice.

The woman was charged with making a false statement concerning an emergency or crime and providing a false statement to a state official. Though the charges are criminal in nature, they illuminate a landscape that the legal profession must navigate with far greater care than many currently do. The intersection of generative AI, digital deception, and legal ethics represents uncharted territory—one where professional liability and disciplinary action await those who fail to understand the technology reshaping evidence, testimony, and truth-seeking in the courtroom.

The Technology Competence Imperative

In 2012, the American Bar Association amended Comment 8 to Model Rule 1.1 (Competence) to include an explicit requirement that lawyers remain competent in "the benefits and risks associated with relevant technology." This was not a suggestion; it was a mandate. Today, 31 states have adopted or adapted this language into their own professional conduct rules. The ABA's accompanying committee report emphasized that the amendment serves as "a reminder to lawyers that they should remain aware of technology." Yet the word "reminder" should not be mistaken for optional guidance. As the digital landscape grows more sophisticated—and more legally consequential—ignorance of technology is increasingly indefensible and may itself amount to professional incompetence.

This case exemplifies why: An attorney representing clients in disputes involving digital media—whether custody cases, employment disputes, criminal defense, or civil litigation—cannot afford to lack foundational knowledge of how AI-generated images are created, detected, and authenticated. A lawyer who fails to distinguish authentic video evidence from a deepfake, or who presents such evidence without proper verification, may be engaging in conduct that violates not only Rule 1.1 but also Rules 3.3 and 8.4 of the ABA Model Rules of Professional Conduct.

Rule 1.1 creates a floor, not a ceiling. While most attorneys are not expected to become machine learning engineers, they must possess working knowledge of AI detection tools, image metadata analysis, forensic software, and the limitations of each. Many free and low-cost resources now exist for such training. Bar associations, CLE providers, and technology vendors offer courses specifically designed for attorneys with moderate tech proficiency. The obligation is not to achieve expertise but to make a deliberate, documented effort to stay reasonably informed.


Candor, Evidence, and the Truth-Seeking Function

The Maryland incident also implicates ABA Model Rule 3.3 (Candor Toward the Tribunal). Rule 3.3(a)(3) prohibits lawyers from offering evidence that they know to be false. But what does a lawyer know when AI makes authenticity ambiguous?

Consider a hypothetical: A client provides a lawyer with a photograph purporting to show the opposing party engaged in misconduct. The lawyer accepts it at face value and presents it to the court. Later, it is discovered that the image was AI-generated. The lawyer may argue that they "reasonably believed" the photograph was authentic and thus did not knowingly violate Rule 3.3. But this defense grows weaker as technology becomes more accessible and detection methods more readily available. A lawyer's failure to employ basic verification protocols—such as checking metadata, using AI detection software, or consulting a forensic expert—may render their "belief" in authenticity unreasonable, transforming what appears to be good-faith conduct into a breach of the duty of candor.

The deeper concern is what scholars call the "Liar's Dividend": the phenomenon by which the mere existence of convincing deepfakes causes observers to distrust even genuine evidence. Lawyers can inadvertently exploit this dynamic by introducing AI-generated content without disclosure, or by sowing doubt in jurors' minds about the authenticity of real evidence. When a lawyer does so knowingly—or worse, with willful indifference—they corrupt the judicial process itself.

Rule 3.3 does not merely prevent lawyers from lying; it affirms their role as officers of the court whose duty to truth transcends client advocacy. This duty becomes more, not less, demanding in an age of manipulated media.

Dishonesty, Fraud, and the Outer Boundaries of Professional Conduct

North Bethesda deepfake prank highlights ethical gaps for attorneys.

ABA Model Rule 8.4(c) prohibits conduct involving dishonesty, fraud, deceit, or misrepresentation. On its face, Rule 8.4 seems straightforward. But its application to AI-generated evidence raises subtle questions. If a lawyer negligently fails to detect a deepfake and introduces it as genuine, are they guilty of "deceit"? Does their ignorance of the technology constitute a defense, or does it constitute a separate violation of Rule 1.1?

The answer likely depends on context. A lawyer who presents AI-generated evidence without having undertaken any effort to verify it—in a jurisdiction where technological competence is mandated, and where basic detection tools are publicly available—may struggle to argue that they acted with mere negligence rather than reckless indifference to truth. The line between incompetence and dishonesty can be perilously thin.

Consider, too, the scenario in which a lawyer becomes aware that a client has manufactured evidence using AI. Rule 8.4(c) does not explicitly prevent a lawyer from advising a client about the legal risks of doing so, nor does it require immediate disclosure to opposing counsel or the court in all circumstances. However, if the lawyer then remains silent while the falsified evidence is introduced into litigation, they may be viewed as having effectively participated in fraud. The duty to maintain client confidentiality (Rule 1.6) can conflict with the duty of candor, but Rule 3.3 clarifies that candor prevails: "The duties stated in paragraph (a) … continue to the conclusion of the proceeding, and apply even if compliance requires disclosure of information otherwise protected by Rule 1.6."

Practical Safeguards and Professional Resilience

So what can lawyers do—immediately and pragmatically—to protect themselves and their clients?

First, invest in education. Most state bar associations now offer CLE courses on AI, deepfakes, and digital evidence. Many require only two to three hours. Florida has mandated three hours of technology CLE every three years; others will likely follow. Attending such courses is not an extravagance; it is the baseline of professional duty.

Second, establish verification protocols. When digital evidence is introduced in a case—particularly photographs, videos, or audio recordings—require documentation of provenance. Demand metadata. Consider retaining expert assistance to authenticate digital files. Many law firms now partner with forensic technology consultants for exactly this purpose. The cost is modest compared to the risk of professional discipline or malpractice liability.
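As one illustration of what "demand metadata" can mean in practice, the sketch below performs a first-pass EXIF screen using the Pillow imaging library (pip install pillow) and a hypothetical exhibit file name. Missing camera metadata does not prove an image is synthetic, and its presence does not prove authenticity; treat this strictly as triage before engaging a forensic expert.

```python
# First-pass EXIF triage for a photographic exhibit. Assumes the Pillow
# library; "exhibit_photo.jpg" is a hypothetical file name.
from PIL import Image, ExifTags

def exif_tags(path: str) -> dict:
    """Return the human-readable EXIF tags embedded in the image, if any."""
    exif = Image.open(path).getexif()
    return {ExifTags.TAGS.get(tag_id, tag_id): value
            for tag_id, value in exif.items()}

if __name__ == "__main__":
    tags = exif_tags("exhibit_photo.jpg")
    if not {"Make", "Model"} & tags.keys():
        print("FLAG: no camera make/model metadata; escalate to forensic review.")
    else:
        print(f"Camera metadata present: {tags.get('Make')} {tags.get('Model')}")
```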

Third, disclose limitations transparently. If you lack expertise in evaluating a particular form of digital evidence, say so. Rule 1.1 permits lawyers to partner with others possessing requisite skills. Transparency about technological limitations is not weakness; it is professionalism.

Fourth, update client engagement letters and retention agreements. Explicitly discuss how your firm will handle digital evidence, what verification steps will be taken, and what the client can reasonably expect. Document these conversations. In disputes with clients later, such records can be invaluable.

Fifth, stay alert to emerging guidance. Bar associations continue to issue formal opinions on technology and ethics. Journals, conference presentations, and industry publications track the intersection of AI and law. Subscribing to alerts from your state bar's ethics committee or joining legal technology practice groups ensures you remain informed as standards evolve. You may find The Tech-Savvy Lawyer.Page a great source for alerts and guidance! 🤗

Final Thoughts: The Deeper Question

Lawyers have the professional and ethical responsibility of knowing how deepfakes work!

The Maryland case is ultimately not about one woman's ill-advised prank. It is about the profession's obligation to remain trustworthy stewards of justice in an age when truth itself can be fabricated with a few keystrokes. The legal system depends on evidence, testimony, and the adversarial process to uncover truth. Lawyers are its guardians.

Technology competence is not an optional specialization or a nice-to-have skill. Under the ABA Model Rules and the rules adopted by 31 states, it is a foundational professional duty. Failure to acquire it exposes practitioners to disciplinary action, malpractice claims, and—most importantly—the real possibility of leading their clients, courts, and the public toward injustice.

The invitation to lawyers is clear: engage with the technology that is reshaping litigation, evidence, and professional practice. Understand its capabilities and risks. Invest in verification, transparency, and ongoing education. In doing so, you honor not just your professional obligations but the deeper mission of the law itself: the pursuit of truth.

MTC: 🔒 Your AI Conversations Aren't as Private as You Think: What the OpenAI Court Ruling Means for Legal Professionals

A watershed moment in digital privacy has arrived, and it carries profound implications for lawyers and their clients.

The recent court ruling in In re: OpenAI, Inc., Copyright Infringement Litigation has exposed a critical vulnerability in the relationship between artificial intelligence tools and user privacy rights. On May 13, 2025, U.S. Magistrate Judge Ona T. Wang issued an order requiring OpenAI to "preserve and segregate all output log data that would otherwise be deleted on a going forward basis". This unprecedented directive affected more than 400 million ChatGPT users worldwide and fundamentally challenged assumptions about data privacy in the AI era.[1][2][3][4]

While the court modified its order on October 9, 2025, terminating the blanket preservation requirement as of September 26, 2025, the damage to user trust and the precedent for future litigation remain significant. More importantly, the ruling illuminates a stark reality for legal professionals: the "delete" button offers an illusion of control rather than genuine data protection.

The Court Order That Changed Everything ⚖️

The preservation order emerged from a copyright infringement lawsuit filed by The New York Times against OpenAI in December 2023. The Times alleged that OpenAI unlawfully used millions of its articles to train ChatGPT without permission or compensation. During discovery, concerns arose that OpenAI had been deleting user conversations that could potentially demonstrate copyright violations.

Judge Wang's response was sweeping. The court ordered OpenAI to retain all ChatGPT output logs, including conversations users believed they had permanently deleted, temporary chats designed to auto-delete after sessions, and API-generated outputs regardless of user privacy settings. The order applied retroactively, meaning conversations deleted months or even years earlier remained archived in OpenAI's systems.

OpenAI immediately appealed, arguing the order was overly broad and compromised user privacy. The company contended it faced conflicting obligations between the court's preservation mandate and "numerous privacy laws and regulations throughout the country and the world". Despite these objections, Judge Wang denied OpenAI's motion, prioritizing the preservation of potential evidence over privacy concerns.

The October 9, 2025 stipulation and order brought partial relief. OpenAI's ongoing obligation to preserve all new output log data terminated as of September 26, 2025. However, all data preserved before that cutoff remains accessible to plaintiffs (except for users in the European Economic Area, Switzerland, and the United Kingdom). Additionally, OpenAI must continue preserving output logs from specific domains identified by the New York Times and may be required to add additional domains as the litigation progresses.

Privacy Rights in the Age of AI: An Eroding Foundation 🛡️

This case demonstrates that privacy policies are not self-enforcing legal protections. Users who relied on OpenAI's representations about data deletion discovered those promises could be overridden by court order without their knowledge or consent. The "temporary chat" feature, marketed as providing ephemeral conversations, proved anything but temporary when litigation intervened.

The implications extend far beyond this single case. The ruling establishes that AI-generated content constitutes discoverable evidence subject to preservation orders. Courts now view user conversations with AI not as private exchanges but as potential legal records that can be compelled into evidence.

For legal professionals, this reality is particularly troubling. Lawyers regularly handle sensitive client information that must remain confidential under both ethical obligations and the attorney-client privilege. The court order revealed that even explicitly deleted conversations may be retained indefinitely when litigation demands it.

The Attorney-Client Privilege Crisis 👥

Attorney-client privilege protects confidential communications between lawyers and clients made for the purpose of obtaining or providing legal advice. This protection is fundamental to the legal system. However, the privilege can be waived through voluntary disclosure to third parties outside the attorney-client relationship.

When lawyers input confidential client information into public AI platforms like ChatGPT, they potentially create a third-party disclosure that destroys privilege. Many generative AI systems learn from user inputs, incorporating that information into their training data. This means privileged communications could theoretically appear in responses to other users' queries.

The OpenAI preservation order compounds these concerns. It demonstrates that AI providers cannot guarantee data will be deleted upon request, even when their policies promise such deletion. Lawyers who used ChatGPT's temporary chat feature or deleted sensitive conversations believing those actions provided privacy protection now discover their confidential client communications may be preserved indefinitely as litigation evidence.

The risk is not theoretical. In the now-famous Mata v. Avianca, Inc. case, a lawyer used a free version of ChatGPT to draft a legal brief containing fabricated citations. While the lawyer faced sanctions for submitting false information to the court, legal ethics experts noted the confidentiality implications of the increasingly specific prompts the attorney used, which may have revealed client confidential information.

ABA Model Rules and AI: What Lawyers Must Know 📋

The American Bar Association's Model Rules of Professional Conduct govern lawyer behavior, and while these rules predate generative AI, they apply with full force to its use. On July 29, 2024, the ABA Standing Committee on Ethics and Professional Responsibility issued Formal Opinion 512, providing the first comprehensive guidance on lawyers' use of generative AI.

Model Rule 1.1: Competence requires lawyers to provide competent representation, including maintaining "legal knowledge, skill, thoroughness and preparation reasonably necessary for representation". Comment [8] to the rule specifically states that lawyers must understand "the benefits and risks associated with relevant technology". Opinion 512 clarifies that lawyers need not become AI experts, but must have a "reasonable understanding of the capabilities and limitations of the specific GenAI technology" they use. This is not a one-time obligation. Given AI's rapid evolution, lawyers must continuously update their understanding.

Model Rule 1.6: Confidentiality creates perhaps the most significant ethical challenge for AI use. The rule prohibits lawyers from revealing "information relating to the representation of a client" and requires them to "make reasonable efforts to prevent the inadvertent or unauthorized disclosure of, or unauthorized access to, information relating to the representation". Self-learning AI tools that train on user inputs create substantial risk of improper disclosure. Information entered into public AI systems may be stored, processed by third-party vendors, and potentially accessed by company employees or incorporated into model training. Opinion 512 recommends lawyers obtain informed client consent before inputting any information related to representation into AI systems. Lawyers must also thoroughly review the terms of use, privacy policies, and contractual agreements of any AI tool they employ.

Model Rule 1.4: Communication obligates lawyers to keep clients reasonably informed about their representation. When using AI tools, lawyers should disclose this fact to clients, particularly when the AI processes client information or could impact the representation. Clients have a right to understand how their matters are being handled and what technologies may access their confidential information.[25][22][20][21]

Model Rule 3.3: Candor Toward the Tribunal requires lawyers to be truthful in their representations to courts. AI systems frequently produce "hallucinations"—plausible-sounding but entirely fabricated information, including fake case citations. Lawyers remain fully responsible for verifying all AI outputs before submitting them to courts or relying on them for legal advice. The Mata v. Avianca case serves as a cautionary tale of the consequences when lawyers fail to fulfill this obligation.

Model Rules 5.1 and 5.3: Supervisory Responsibilities make lawyers responsible for the conduct of other lawyers and nonlawyer assistants working under their supervision. When staff members use AI tools, supervising lawyers must ensure appropriate policies, training, and oversight exist to prevent ethical violations.

Model Rule 1.5: Fees requires lawyers to charge reasonable fees. Opinion 512 addresses whether lawyers can bill clients for time "saved" through AI efficiency gains. The guidance suggests that when using hourly billing, efficiencies gained through AI should benefit clients. However, lawyers may pass through reasonable direct costs of AI services (such as subscription fees) when properly disclosed and agreed upon in advance.

State-by-State Variations: A Patchwork of Protection 🗺️

While the ABA Model Rules provide a national framework, individual states adopt and interpret ethics rules differently. Legal professionals must understand their specific state's requirements, which can vary significantly.

Lawyers must protect clients' PII from AI privacy failures!

Florida has taken a proactive stance. In January 2025, The Florida Bar Board of Governors unanimously approved Advisory Opinion 24-1, which specifically addresses generative AI use. The opinion recommends lawyers obtain "affected client's informed consent prior to utilizing a third-party generative AI program if the utilization would involve the disclosure of any confidential information". Florida's guidance emphasizes that lawyers remain fully responsible for AI outputs and cannot treat AI as a substitute for legal judgment.

Texas issued Opinion 705 from its State Bar Professional Ethics Committee in February 2025. The opinion outlines four key obligations: lawyers must reasonably understand AI technology before using it, exercise extreme caution when inputting confidential information into AI tools that might store or expose client data, verify the accuracy of all AI outputs, and avoid charging clients for time saved by AI efficiency gains. Texas also emphasizes that lawyers should consider informing clients when AI will be used in their matters.

New York has developed one of the most comprehensive frameworks through its State Bar Association Task Force on Artificial Intelligence. The April 2024 report provides a thorough analysis across the full spectrum of ethical considerations, including competence, confidentiality, client communication, billing practices, and access to justice implications. New York's guidance stands out for addressing both immediate practical considerations and longer-term questions about AI's transformation of the legal profession.

Alaska issued Ethics Opinion 2025-1 surveying AI issues with particular focus on competence, confidentiality, and billing. The opinion notes that when using non-closed AI systems (such as general consumer products), lawyers should anonymize prompts to avoid revealing client confidential information. Alaska's guidance explicitly cites to its cloud-computing predecessor opinion, treating AI data storage similarly to law firm files on third-party remote servers.

California, Massachusetts, New Jersey, and Oregon have issued guidance through their state attorneys general on how existing state privacy laws apply to AI. California's advisories emphasize that AI use must comply with the California Consumer Privacy Act (CCPA), requiring transparency, respecting individual data rights, and limiting data processing to what is "reasonably necessary and proportionate". Massachusetts focuses on consumer protection, anti-discrimination, and data security requirements. Oregon highlights that developers using personal data to train AI must clearly disclose this use and obtain explicit consent when dealing with sensitive data.[31]

These state-specific approaches create a complex compliance landscape. A lawyer practicing in multiple jurisdictions must understand and comply with each state's requirements. Moreover, state privacy laws like the CCPA and similar statutes in other states impose additional obligations beyond ethics rules.

Enterprise vs. Consumer AI: Understanding the Distinction 💼

Not all AI tools pose equal privacy risks. The OpenAI preservation order highlighted critical differences between consumer-facing products and enterprise solutions.

Consumer Plans (Free, Plus, Pro, and Team) were fully subject to the preservation order. These accounts store user conversations on OpenAI's servers with limited privacy protections. While users can delete conversations, the court order demonstrated that those deletions are not permanent. OpenAI retains the technical capability to preserve and access this data when required by legal process.

Enterprise Accounts offer substantially stronger privacy protections. ChatGPT Enterprise and Edu plans were excluded from the preservation order's broadest requirements. These accounts typically include contractual protections such as Data Processing Agreements (DPAs), commitments against using customer data for model training, and stronger data segregation. However, even enterprise accounts must preserve data when covered by specific legal orders.

Zero Data Retention Agreements provide the highest level of protection. Users who have negotiated such agreements with OpenAI are excluded from data preservation requirements. These arrangements ensure that user data is not retained beyond the immediate processing necessary to generate responses.

For legal professionals, the lesson is clear: consumer-grade AI tools are inappropriate for handling confidential client information. Lawyers who use AI must ensure they employ enterprise-level solutions with proper contractual protections, or better yet, closed systems where client data never leaves the firm's control.

Practical Steps for Legal Professionals: Protecting Privilege and Privacy 🛠️

Given these risks, what should lawyers do? Abandoning AI entirely is neither realistic nor necessary. Instead, legal professionals must adopt a risk-management approach.

Conduct thorough due diligence before adopting any AI tool. Review terms of service, privacy policies, and data processing agreements in detail. Understand exactly what data the AI collects, how long it's retained, whether it's used for model training, who can access it, and what security measures protect it. If these answers aren't clear from public documentation, contact the vendor directly for written clarification.

Implement written AI policies for your firm or legal department. These policies should specify which AI tools are approved for use, what types of information can (and cannot) be input into AI systems, required safeguards such as data anonymization, client consent requirements, verification procedures for AI outputs, and training requirements for all staff. Document these policies and ensure all lawyers and staff understand and follow them.

Default to data minimization. Before inputting any information into an AI system, ask whether it's necessary. Can you accomplish the task without including client-identifying information? Many AI applications work effectively with anonymized or hypothetical scenarios that don't reveal actual client matters. When in doubt, err on the side of caution.
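Some of this minimization can be automated as a floor beneath human review, scrubbing obvious identifiers before a prompt ever leaves the firm. The sketch below uses a few illustrative regular-expression patterns; it will miss names, matter details, and context that only a lawyer reviewing the prompt can catch.

```python
# Minimal anonymization pass over a draft AI prompt. The patterns are
# illustrative; regex redaction is a floor, not a substitute for review.
import re

PATTERNS = {
    "EMAIL": re.compile(r"[\w.+-]+@[\w-]+\.[\w.-]+"),
    "PHONE": re.compile(r"\b\d{3}[-.\s]?\d{3}[-.\s]?\d{4}\b"),
    "SSN":   re.compile(r"\b\d{3}-\d{2}-\d{4}\b"),
}

def redact(text: str) -> str:
    """Replace matched identifiers with labeled placeholders."""
    for label, pattern in PATTERNS.items():
        text = pattern.sub(f"[{label} REDACTED]", text)
    return text

if __name__ == "__main__":
    draft = "Client Jane Roe (jroe@example.com, 301-555-0142) disputes the lien."
    print(redact(draft))
```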

Obtain informed client consent when using AI for client matters, particularly when inputting any information related to the representation. This consent should be specific about what AI tools will be used, what information may be shared with those tools, what safeguards are in place, and what risks exist despite those safeguards. General consent buried in engagement agreements is likely insufficient.

Use secure, purpose-built legal AI tools rather than consumer applications. Legal-specific AI products are designed with confidentiality requirements in mind and typically offer stronger privacy protections. Even better, consider closed-system AI that operates entirely within your firm's infrastructure without sending data to external servers.

Never assume deletion means erasure. The OpenAI case proves that deleted data may not be truly gone. Treat any information entered into an AI system as potentially permanent, regardless of what the system's privacy settings claim.

Maintain privileged communication protocols. Remember that AI is not your attorney. Communications with AI systems are not protected by attorney-client privilege. Never use AI as a substitute for consulting with qualified colleagues or outside counsel on genuinely privileged matters.

Stay informed about evolving guidance. AI technology and the regulatory landscape are both changing rapidly. Regularly review updates from your state bar association, the ABA, and other professional organizations. Consider attending continuing legal education programs on AI ethics and technology competence.

Final Thoughts: The Future of Privacy Rights in an AI World 🔮

The OpenAI preservation order represents a pivotal moment in the collision between AI innovation and privacy rights. It exposes uncomfortable truths about the nature of digital privacy in 2025: privacy policies are subject to override by legal process, deletion features provide psychological comfort rather than technical and legal certainty, and third-party service providers cannot fully protect user data from discovery obligations.

For legal professionals, these realities demand a fundamental reassessment of how AI tools fit into practice. The convenience and efficiency AI provides must be balanced against the sacred duty to protect client confidences and maintain the attorney-client privilege. This is not an abstract concern or distant possibility. It is happening now, in real courtrooms, with real consequences for lawyers and clients.

State bars and regulators are responding, but the guidance remains fragmented and evolving. Federal privacy legislation addressing AI has yet to materialize, leaving a patchwork of state laws with varying requirements. In this environment, legal professionals cannot wait for perfect clarity before taking action.

The responsibility falls on each lawyer to understand the tools they use, the risks those tools create, and the steps necessary to fulfill ethical obligations in this new technological landscape. Ignorance is not a defense. "I didn't know the AI was storing that information" will not excuse a confidentiality breach or privilege waiver.

As AI becomes increasingly embedded in legal practice, the profession must evolve its approach to privacy and confidentiality. The traditional frameworks remain sound—the attorney-client privilege, the duty of confidentiality, the requirement of competence—but their application requires new vigilance. Lawyers must become technology stewards as well as legal advisors, understanding not just what the law says, but how the tools they use might undermine their ability to protect it.

The OpenAI case will not be the last time courts grapple with AI data privacy. As generative AI proliferates and litigation continues, more preservation orders, discovery disputes, and privilege challenges are inevitable. Legal professionals who fail to address these issues proactively may find themselves explaining to clients, judges, or disciplinary authorities why they treated confidential information so carelessly.

Privacy in the AI age demands more than passive reliance on vendor promises. It requires active, informed engagement with the technology we use and honest assessment of the risks we create. For lawyers, whose professional identity rests on the foundation of client trust and confidentiality, nothing less will suffice. The court ruling has made one thing abundantly clear: when it comes to AI and privacy, what you don't know can definitely hurt you—and your clients. ⚠️

MTC: Balancing Digital Transparency and Government Employee Safety: The Legal Profession's Ethical Crossroads in the Age of ICE Tracking Apps

The balance between government employee safety and the public's right to know is always in flux.

The intersection of technology, government transparency, and employee safety has created an unprecedented ethical challenge for the legal profession. Recent developments surrounding ICE tracking applications like ICEBlock, People Over Papers, and similar platforms have thrust lawyers into a complex moral and professional landscape where the traditional principle of "sunlight as the best disinfectant" collides with legitimate security concerns for government employees.

The Technology Landscape: A New Era of Crowdsourced Monitoring

The proliferation of ICE tracking applications represents a significant shift in how citizens monitor government activities. ICEBlock, developed by Joshua Aaron, allows users to anonymously report ICE agent sightings within a five-mile radius, functioning essentially as "Waze for immigration enforcement". People Over Papers, created by TikTok user Celeste, operates as a web-based platform using Padlet technology to crowdsource and verify ICE activity reports with photographs and timestamps. Additional platforms include Islip Forward, which provides real-time push notifications for Suffolk County residents, and Coquí, offering mapping and alert systems for ICE activities.

These applications exist within a broader ecosystem of similar technologies. Traditional platforms like Waze, Google Maps, and Apple Maps have long enabled police speed trap reporting. More controversial surveillance tools include Fog Reveal, which allows law enforcement to track civilian movements using advertising IDs from popular apps. The distinction between citizen-initiated transparency tools and government surveillance technologies highlights the complex ethical terrain lawyers must navigate.

The Ethical Framework: ABA Guidelines and Professional Responsibilities

Legal professionals face multiple competing ethical obligations when addressing these technological developments. ABA Model Rule 1.1 requires lawyers to maintain technological competence, understanding both the benefits and risks associated with relevant technology. This competence requirement extends beyond mere familiarity to encompass the ethical implications of technology use in legal practice.

Rule 1.6's confidentiality obligations create additional complexity when lawyers handle cases involving government employees, ICE agents, or immigration-related matters. The duty to protect client information becomes particularly challenging when technology platforms may compromise attorney-client privilege or expose sensitive personally identifiable information to third parties.

The tension between advocacy responsibilities and ethical obligations becomes acute when lawyers represent clients on different sides of immigration enforcement. Attorneys representing undocumented immigrants may view transparency tools as legitimate safety measures, while those representing government employees may consider the same applications as security threats that endanger their clients.

Balancing Transparency and Safety: The Core Dilemma

Who watches whom? Exploring transparency limits in democracy.

The principle of transparency in government operations serves as a cornerstone of democratic accountability. However, the safety of government employees, including ICE agents, presents legitimate counterbalancing concerns. Federal officials have reported significant increases in assaults against ICE agents, citing these tracking applications as contributing factors.

The challenge for legal professionals lies in advocating for their clients while maintaining ethical standards that protect all parties' legitimate interests. This requires nuanced understanding of both technology capabilities and legal boundaries. Lawyers must recognize that the same transparency tools that may protect their immigrant clients could potentially endanger government employees who are simply performing their lawful duties.

Technology Ethics in Legal Practice: Professional Standards

The legal profession's approach to technology ethics must evolve to address these emerging challenges. Lawyers working with sensitive immigration cases must implement robust cybersecurity measures, understand the privacy implications of various communication platforms, and maintain clear boundaries between personal advocacy and professional obligations.

The ABA's guidance on generative AI and technology use provides relevant frameworks for addressing these issues. Legal professionals must ensure that their technology choices do not inadvertently compromise client confidentiality or create security vulnerabilities that could harm any party to legal proceedings.

Jurisdictional and Regulatory Considerations

The removal of ICEBlock from Apple's App Store and People Over Papers from Padlet demonstrates how private platforms exercise content moderation that can significantly impact government transparency tools. These actions raise important questions about the role of technology companies in mediating between transparency advocates and security concerns.

Legal professionals must understand the complex regulatory environment governing these technologies. Federal agencies like CISA recommend encrypted communications for high-value government targets while acknowledging the importance of government transparency. This creates a nuanced landscape where legitimate security measures must coexist with accountability mechanisms.

Professional Recommendations and Best Practices

Legal practitioners working in this environment should adopt several key practices. First, maintain clear separation between personal political views and professional obligations. Second, implement comprehensive cybersecurity measures that protect all client information, regardless of a client's position in legal proceedings. Third, stay informed about technological developments and their legal implications through continuing education focused on technology law and ethics.

Lawyers should also engage in transparent communication with clients about the risks and benefits of various technology platforms. This includes obtaining informed consent when using technologies that may impact privacy or security, and maintaining awareness of how different platforms handle data security and user privacy.

The legal profession must also advocate for balanced regulatory approaches that protect both government transparency and employee safety. This may involve supporting legislation that creates appropriate oversight mechanisms while maintaining necessary security protections for government workers.

The Path Forward: Ethical Technology Advocacy

The future of legal practice will require increasingly sophisticated approaches to balancing competing interests in our digital age. Legal professionals must serve as informed advocates who understand both the technological landscape and the ethical obligations that govern their profession. This includes recognizing that technology platforms designed for legitimate transparency purposes can be misused, while also acknowledging that government accountability remains essential to democratic governance.

Transparency is a balancing act that every lawyer needs to be aware of in their practice!

The legal profession's response to ICE tracking applications and similar technologies will establish important precedents for how lawyers navigate future ethical challenges in our increasingly connected world. By maintaining focus on professional ethical standards while advocating effectively for their clients, legal professionals can help ensure that technological advances serve justice rather than undermine it.

Success in this environment requires lawyers to become technologically literate advocates who understand both the promise and perils of digital transparency tools. Only through this balanced approach can the legal profession effectively serve its clients while maintaining the ethical standards that define professional practice in the digital age.

MTC

MTC (Bonus): The Critical Importance of Source Verification When Using AI in Legal Practice 📚⚖️

The Fact-Checking Lawyer vs. AI Errors!

Legal professionals face an escalating verification crisis as AI tools proliferate throughout the profession. A recent conversation I had with an AI research assistant about AOL's dial-up internet shutdown perfectly illustrates why lawyers must rigorously fact-check AI outputs. In preparing my editorial for earlier today (see here), I came across a glaring error. When I corrected the AI's repeated date errors—it insisted on 2024 rather than 2025 for AOL's September 30 shutdown—the exchange highlighted the dangerous gap between AI confidence and AI accuracy, a gap reflected in more than 410 documented AI hallucination cases worldwide. (You can also see my previous discussions on the topic here.)

This verification imperative extends beyond simple date corrections. Stanford University research reveals troubling accuracy rates across legal AI tools, with some systems producing incorrect information over 34% of the time, while even the best-performing specialized legal AI platforms still generate false information approximately 17% of the time. These statistics underscore a fundamental truth: AI tools are powerful research assistants, not infallible oracles.

AI Hallucinations in the Courtroom are not a good thing!

Editor's Note: The irony was not lost on me that while writing this editorial about AI accuracy problems, I had to correct the AI assistant multiple times for contradictory statements about error rates in this very paragraph. The AI initially claimed Westlaw had 34% errors while specialized legal platforms had only 17% errors—ignoring that Westlaw IS a specialized legal platform. This real-time experience of catching AI logical inconsistencies while drafting an article about AI verification perfectly demonstrates the critical need for human oversight that this editorial advocates.

The consequences of inadequate verification are severe and mounting. Courts have imposed sanctions ranging from $2,500 to $30,000 on attorneys who submitted AI-generated fake cases. Recent cases include Morgan & Morgan lawyers sanctioned $5,000 for citing eight nonexistent cases, and a California attorney fined $10,000 for submitting briefs where "nearly all legal quotations ... [were] fabricated". These sanctions reflect judicial frustration with attorneys who fail to fulfill their gatekeeping responsibilities.

Legal professionals face implicit ethical obligations that demand rigorous source verification when using AI tools. ABA Model Rule 1.1 (Competence) requires attorneys to understand "the benefits and risks associated with relevant technology," including AI's propensity for hallucinations. Rule 3.3 (Candor Toward the Tribunal) prohibits knowingly making false statements of fact or law to courts. Rule 5.3 (Responsibilities Regarding Nonlawyer Assistance) extends supervisory duties to AI tools, requiring lawyers to ensure AI work product meets professional standards. Courts consistently emphasize that "existing rules impose a gatekeeping role on attorneys to ensure the accuracy of their filings".

The Tech-Savvy Lawyer should have AI Verification Protocols.

The legal profession must establish verification protocols that treat AI as sophisticated but fallible technology requiring human oversight (perhaps along the lines of Comment 8 to Rule 1.1). This includes cross-referencing AI citations against authoritative databases, validating factual claims through independent sources, and maintaining detailed records of verification processes. Resources like The Tech-Savvy Lawyer blog and podcast provide valuable guidance for implementing these best practices. As one federal judge warned, "the duty to check their sources and make a reasonable inquiry into existing law remains unchanged" in the age of AI.
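To make the record-keeping piece concrete, below is a minimal illustrative sketch of a citation-verification log in Python. It is a sketch under stated assumptions, not a prescribed workflow: the `lookup_citation` stub, the field layout, and the log file name are hypothetical placeholders that a firm would connect to its own authoritative research platform (Westlaw, Lexis+, CourtListener, or similar).

```python
import csv
from datetime import datetime, timezone

def lookup_citation(citation: str) -> bool:
    # Hypothetical placeholder: always flags the citation for manual
    # review until connected to an authoritative research database.
    return False

def verify_citations(citations: list[str], reviewer: str,
                     log_path: str = "ai_verification_log.csv") -> list[str]:
    """Check each AI-supplied citation and keep a dated record of the review."""
    unverified = []
    with open(log_path, "a", newline="") as f:
        writer = csv.writer(f)
        for cite in citations:
            found = lookup_citation(cite)
            # Record who checked what, when, and with what outcome.
            writer.writerow([
                datetime.now(timezone.utc).isoformat(),
                reviewer,
                cite,
                "verified" if found else "NOT FOUND - manual review required",
            ])
            if not found:
                unverified.append(cite)
    return unverified

# Example: flag every authority an AI draft relies on before filing.
flagged = verify_citations(["123 F.4th 456 (Fed. Cir. 2025)"], reviewer="JMS")
```

The specific code matters far less than the habit it encodes: every AI-supplied authority gets checked against an independent source, and the check itself is documented with a timestamp and a responsible reviewer.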

Attorneys who embrace AI without implementing robust verification systems risk professional sanctions, client harm, and reputational damage that could have been prevented through diligent fact-checking. Simply put: check your work when using AI.

MTC

MTC: The End of Dial-Up Internet: A Digital Divide Crisis for Legal Practice 📡⚖️

Dial-up shutdown deepens rural legal digital divide.

The legal profession faces an unprecedented access-to-justice challenge as AOL officially terminated its dial-up internet service on September 30, 2025, after 34 years of operation. The closure affects the estimated 163,401 American households that, as of 2023, depended solely on dial-up connections, creating barriers to legal services in an increasingly digital world. While other dial-up providers like NetZero, Juno, and DSLExtreme continue operating, they may not cover all geographic areas previously served by AOL, and their long-term viability is limited.

While many view dial-up as obsolete, its elimination exposes critical technology gaps that disproportionately impact vulnerable populations requiring legal assistance. Rural residents, low-income individuals, and elderly clients who relied on this affordable connectivity option now face digital exclusion from essential legal services and court systems. The remaining dial-up options provide minimal relief as these smaller providers lack AOL's extensive infrastructure coverage.

Split Courtroom!

Legal professionals must recognize that technology barriers create access to justice issues. When clients cannot afford high-speed internet or live in areas without broadband infrastructure, they lose the ability to participate in virtual court proceedings, access online legal resources, or communicate effectively with their attorneys. This digital divide effectively creates a two-tiered justice system where technological capacity determines legal access.

The legal community faces an implicit ethical duty to address these technology barriers. While no specific ABA Model Rule mandates accommodating clients' internet limitations, the professional responsibility to ensure access to justice flows from fundamental ethical obligations.

This implicit duty derives from several ABA Model Rules that create relevant obligations. Rule 1.1 (Competence) requires attorneys to understand "the benefits and risks associated with relevant technology," including how technology barriers affect client representation. Rule 1.4 (Communication) mandates effective client communication, which encompasses understanding technology limitations that prevent meaningful attorney-client interaction. Rule 1.6 (Confidentiality) requires reasonable efforts to protect client information, necessitating awareness of technology security implications. Additionally, 41 jurisdictions have adopted technology competence requirements that obligate lawyers to stay current with technological developments affecting legal practice.

Lawyers are leaders in the call to action to help narrow the access-to-justice divide!

The legal community must advocate for affordable internet solutions and develop technology-inclusive practices to fulfill these professional responsibilities and ensure equal access to justice for all clients.

MTC

MTC: Federal Circuit's Drop Box Relocation Sends a Signal Threatening Access to Justice: Why Paper Filing Options Must Remain Accessible 📝⚖️

Midnight Filing Rights Under Threat by Federal Court Drop Box Move.

The Federal Circuit's recent decision to relocate its paper filing drop box from outside the courthouse to inside the building, with restricted hours of 8:30 AM to 7:00 PM, represents a concerning step backward for legal accessibility. This policy change, effective October 20, 2025, fundamentally undermines decades of established legal practice and creates unnecessary barriers to justice that disproportionately impact solo practitioners, small firms, and self-represented litigants.

The Critical Role of 24/7 Drop Box Access 🕐

For generations, the legal profession has relied on midnight filing capabilities as an essential safety net. The traditional 24-hour drop box access has served as a crucial backup system when electronic filing systems fail, internet connectivity issues arise, or attorneys face last-minute technical emergencies. Federal courts have long recognized that electronic filing deadlines extend until midnight in the court's time zone, acknowledging that legal work often continues around the clock and in different time zones across the globe.

The ability to file papers at any hour has been particularly vital for attorneys handling time-sensitive matters such as emergency motions, appeals with strict deadlines, and patent applications where timing can be critical to a client's rights. Research shows that approximately 10% of federal court filings occur after 5:00 PM, with many of these representing urgent legal matters that cannot wait until the next business day.

Technology's Promise and Perils ⚙️

While electronic filing systems have revolutionized legal practice, they are far from infallible. Court system outages occur with concerning regularity, as recently demonstrated by Washington State's two-week court system shutdown due to unauthorized network activity. When CM/ECF systems go offline, attorneys must have reliable alternative filing methods to meet critical deadlines.

The Federal Circuit's own procedures acknowledge this reality, noting that their CM/ECF system undergoes scheduled maintenance and may experience unexpected outages. During these periods, having accessible backup filing options becomes essential for maintaining the integrity of the legal process. The relocation of the drop box inside the building with limited hours eliminates this crucial failsafe, potentially leaving attorneys with no viable filing option during system emergencies outside business hours.

Digital Divide and Access to Justice Concerns 📱

Tech-Savvy Lawyer Battles Drop Box Access and Justice Barrier.

The restricted drop box access exacerbates existing digital equity issues within the legal system. While large law firms have robust IT infrastructure and technical support, solo practitioners and small firms often lack these resources. Self-represented litigants, who account for approximately 75-95% of parties in many civil cases, face even greater challenges navigating electronic filing requirements.

Studies have shown that technology adoption in courts has disproportionately benefited well-resourced parties while creating additional barriers for vulnerable populations. The Federal Circuit's policy change continues this troubling trend by prioritizing operational convenience over equal access to justice.

Legal Practice Realities 💼

The Federal Circuit's restricted hours—8:30 AM to 7:00 PM, Monday through Friday—fail to recognize the realities of modern legal practice. Patent attorneys, who frequently practice before this court, often work across multiple time zones and may need to file documents outside traditional business hours due to client demands or international coordination requirements.

Moreover, the new policy requires documents to be date-stamped and security-screened before deposit, adding procedural steps that could create delays and complications. These requirements, while perhaps well-intentioned from a security perspective, create practical obstacles that could prevent the timely filing of critical documents.

Recommendations for Balanced Approach

The Federal Circuit should reconsider this policy change and adopt an approach that balances security with access to justice. Recommended alternatives include:

Hybrid access model: Maintain extended drop box hours (perhaps 6:00 AM to 10:00 PM) to accommodate working attorneys while addressing security concerns.

Emergency filing provisions: Establish clear procedures for after-hours emergency filings when deadlines cannot be met due to the restricted schedule.

Enhanced electronic backup systems: Invest in more robust CM/ECF infrastructure and backup systems to reduce the likelihood of system outages that would necessitate paper filing.

Stakeholder consultation: Engage with the patent bar and other frequent court users to develop solutions that balance operational needs with practitioner requirements.

Preserving the Foundation of Legal Practice ⚖️

Drop Box Limits Highlight Digital Divide in Federal Courthouse Access.

The Federal Circuit's drop box policy change represents more than an administrative adjustment: it undermines the fundamental principle that the courthouse doors should remain open to all who seek justice. The legal profession has long operated on the understanding that filing deadlines are absolute, and courts have historically provided mechanisms to ensure compliance even under challenging circumstances.

By restricting drop box access, the Federal Circuit sends a troubling message that convenience trumps accessibility. This policy particularly harms the very practitioners who help maintain the patent system's vitality - innovative small businesses, independent inventors, and emerging technology companies that rely on accessible filing procedures.

The court should reverse this decision and either restore 24-hour drop box access or, at a minimum, extend the hours to serve the legal community and the public better. In an era where access to justice faces mounting challenges, courts must resist policies that create additional barriers to legal participation. The integrity of our judicial system depends on maintaining pathways for all parties to present their cases, regardless of their technological capabilities or the timing of their legal needs.

MTC

MTC: The AI-Self-Taught Client Dilemma: Navigating Legal Ethics When Clients Think They Know Better 🤖⚖️

The billing battlefield: Clients question fees for AI-assisted work while attorneys defend the irreplaceable value of professional judgment.

The rise of generative artificial intelligence has created an unprecedented challenge for legal practitioners: clients who believe they understand legal complexities through AI interactions, yet lack the contextual knowledge and professional judgment that distinguishes competent legal counsel from algorithmic output. This phenomenon, which we might call the "AI-self-taught-lawyer" syndrome, has evolved beyond mere client education into a minefield of ethical obligations, fee disputes, and even bar complaints when attorneys fail to properly manage these relationships.

The Pushback Reality: When Clients Think They Know Better

Reuters has documented "AI hallucinations" in court filings that create additional work for attorneys, such as checking citations that should have been verified before filing. Some clients then challenge that work on their bills, claiming they shouldn't pay for hours spent correcting AI errors. This underscores the importance of clear communication about the distinct professional value attorneys add when verifying or refining AI-generated content.

Without clear communication, attorneys risk being accused of "padding hours" when they spend time verifying or correcting client-generated AI work. The "uninformed" client may view attorney review as unnecessary overhead rather than essential professional service. One particularly challenging scenario involves clients who present AI-generated contracts or legal briefs and expect attorneys to simply file them without substantial review, then dispute the bill when attorneys perform due diligence.

The Billing Battlefield: AI Efficiency vs. Professional Value

ABA Model Rule 1.5 requires reasonable fees, but AI creates complex billing dynamics. When clients arrive with AI-generated legal research, attorneys face a paradox: they cannot charge full rates for work essentially completed by the client, yet they must invest significant time in verifying, correcting, and providing professional oversight.

Florida Bar Ethics Opinion 24-1 explicitly addresses this challenge: “lawyer[s] may not ethically engage in any billing practices that duplicate charges or that falsely inflate the lawyer's billable hours". However, the opinion also recognizes that AI verification requires substantial professional time that must be fairly compensated.

The D.C. Bar's Ethics Opinion 388 draws parallels to reused work product: when AI reduces the time needed for a task, attorneys can only bill for the actual time spent, regardless of the value generated. For example, if AI cuts drafting time from five hours to one but verification adds two, the attorney bills the three hours actually worked, not the five hours of value delivered. This creates tension when clients expect discounted rates for "AI-assisted" work, while attorneys must invest more time in verification than traditional practice methods required.

The Bar Complaint Trap: Failure to Warn

The AI-self-taught dilemma: Confident clients push flawed AI legal theories, leaving attorneys to repair the damage before it reaches court.

Perhaps the most dangerous aspect of the AI-self-taught client phenomenon is the potential for bar complaints when attorneys fail to adequately warn clients about AI risks. The pattern is becoming disturbingly common: clients use AI for legal research or document preparation, suffer adverse consequences, then file complaints alleging their attorney should have warned them about AI limitations and ethical concerns.

Recent disciplinary cases illustrate this risk. In People v. Crabill, a Colorado attorney was suspended for "one year and one day, with ninety days to be served and the remainder to be stayed upon Crabill's successful completion of a two year period of probation, with conditions" after using AI-generated fake case citations. While this involved attorney AI use, similar principles apply to client AI use that goes unaddressed by counsel. The Colorado Court of Appeals warned in Al-Hamim v. Star Hearthstone that it "will not look kindly on similar infractions in the future," suggesting that attorney oversight duties extend to client AI activities.

The New York State Bar Association's 2024 report emphasizes that attorneys have obligations to ensure paralegals and employees handle AI properly. This supervisory duty logically extends to managing client AI use that affects the representation, particularly when clients share AI-generated work as the basis for legal strategy.

Competence Requirements Under Model Rule 1.1

Comment 8 to ABA Model Rule 1.1 requires attorneys to maintain knowledge of "the benefits and risks associated with relevant technology". This obligation intensifies when clients use AI tools independently. Attorneys cannot competently represent AI-literate clients without understanding the technology's limitations and potential pitfalls.

Recent sanctions demonstrate the stakes involved. In Wadsworth v. Walmart, attorneys were fined and lost their pro hac vice admissions after submitting AI-generated fake citations, despite being apologetic and forthcoming. The court emphasized that "technology may change, but the requirements of FRCP 11 do not". This principle applies equally when clients generate problematic AI content that attorneys fail to properly verify or address.

The Tech-Savvy Lawyer blog notes that competence now requires "sophisticated technology manage[ment] while maintaining fundamental duties to provide competent, ethical representation". When clients arrive with AI-generated legal theories, attorneys must possess sufficient AI literacy to identify potential hallucinations, bias, and accuracy issues.

Confidentiality Risks and Client Education

Model Rule 1.6 prohibits attorneys from revealing client information without informed consent. However, AI-self-taught clients create unique confidentiality challenges. Many clients have already shared sensitive information with public AI platforms before consulting counsel, potentially compromising attorney-client privilege from the outset.

ZwillGen's analysis reveals that using AI tools can "place a third party – the AI provider – in possession of client information" and risk privilege waiver. When clients continue using public AI tools for legal matters during representation, attorneys face ongoing confidentiality risks that require active management.

The New York State Bar Association warns that the use of AI "must not compromise attorney-client privilege" and requires attorneys to disclose when AI tools are employed in client cases. This obligation extends to educating clients about ongoing confidentiality risks from their independent AI use.

Supervision Challenges Under Model Rule 5.3

Model Rule 5.3, governing responsibilities regarding nonlawyer assistance, has evolved to encompass AI tools. When clients use AI for legal research, attorneys must treat this as unsupervised nonlawyer assistance requiring professional verification and oversight.

The supervision challenge intensifies when clients present AI-generated legal strategies with confidence in their accuracy. As one practitioner notes, "AI isn't a human subordinate, it's a tool. And just like any tool, if a lawyer blindly relies on it without oversight, they're the one on the hook when things go sideways". This principle applies whether the attorney or client operates the AI tool.

Recent malpractice analyses identify three main AI liability risks: "(1) a failure to understand GAI's limitations; (2) a failure to supervise the use of GAI; and (3) data security and confidentiality breaches". These risks amplify when clients use AI independently without attorney guidance or oversight.

Managing Client Overconfidence and Bias

When clients proudly present AI-generated briefs, lawyers face the hidden cost of correcting errors and managing unrealistic expectations.

Research reveals that AI systems can perpetuate historical biases present in legal databases and court decisions. When clients rely on AI-generated advice, they may unknowingly adopt biased perspectives or outdated legal theories that modern practice has evolved beyond.

A recent case example illustrates this danger: an attorney received "an AI generated inquiry from a client claiming there were additional securities filing requirements associated with a transaction," but discovered "the AI model was pulling its information from a proposed change to the law from over ten years ago" that was "never enacted into law". Clients presenting such AI-generated "research" create professional responsibility challenges for attorneys who must diplomatically correct misinformation while maintaining client relationships.

The confidence with which AI presents information compounds this problem. As noted in professional guidance, "AI-generated statements are no substitute for the independent verification and thorough research that an attorney can provide". Clients often struggle to understand this distinction, leading to pushback when attorneys question or contradict their AI-generated conclusions.

Practical Strategies for Ethical Client Management

Successfully navigating AI-self-taught clients requires comprehensive communication strategies that address both ethical obligations and practical relationship management. Attorneys should implement several key practices:

Proactive Client Education: Establish clear policies regarding client AI use and provide written guidance about confidentiality risks. Include specific language in engagement letters addressing client AI activities and their potential impact on representation.

Transparent Billing Practices: Develop clear fee structures that account for AI verification work. Explain to clients that professional review of AI-generated content requires substantial time investment and represents essential professional service, not unnecessary overhead.

Documentation Requirements: Require clients to disclose any AI use related to their legal matter. Create protocols for reviewing and addressing client-generated AI content while maintaining respect for client initiative; a minimal sketch of such a disclosure record appears after this list.

Regular Communication: Implement ongoing check-ins about client AI use to prevent confidentiality breaches and ensure attorney strategy remains properly informed. Address client expectations about AI capabilities and limitations throughout the representation.
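For readers who want to operationalize the documentation practices above, here is a minimal sketch of what a client AI-use disclosure record might look like, written in Python. Every field name is an assumption offered for illustration, not a bar-mandated format; adapt it to your engagement letters and matter-management system.

```python
from dataclasses import dataclass, field
from datetime import date

@dataclass
class ClientAIDisclosure:
    """One record of client-reported AI use affecting a matter (illustrative only)."""
    matter_id: str
    client_name: str
    disclosure_date: date
    ai_tool: str                  # e.g., a public chatbot or drafting assistant
    content_shared: str           # what the client fed into the tool
    public_platform: bool         # did confidential facts reach a public AI service?
    attorney_review_done: bool = False
    review_notes: str = ""
    follow_up_actions: list[str] = field(default_factory=list)

# Example: log a disclosure and queue the confidentiality follow-up.
record = ClientAIDisclosure(
    matter_id="2025-0042",
    client_name="Acme Corp.",
    disclosure_date=date(2025, 10, 1),
    ai_tool="public chatbot",
    content_shared="draft indemnification clause with deal terms",
    public_platform=True,
    follow_up_actions=["assess privilege exposure", "counsel client on AI use policy"],
)
```

Whatever the format, the goal is the same: a dated, reviewable record demonstrating that the firm identified the client's AI use and addressed the confidentiality and accuracy risks it created.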

The Fee Justification Challenge

When clients present AI-generated research or draft documents, attorneys face complex billing considerations that require careful navigation: the client's AI has already produced a first draft, yet substantial professional time is still needed for verification and correction.

The key lies in transparent communication about the additional value provided by professional judgment, ethical compliance, and strategic thinking that AI cannot replicate. As DISCO's client communication guide suggests: "Don't position AI as the latest trend. Present it as a way to deliver stronger outcomes" by spending "more time on strategy, insight, and execution and less on repetitive manual tasks".

Successful practitioners reframe the conversation from cost to value: AI efficiency frees attorney time for the strategy, insight, and execution that clients are actually paying for. This positioning helps clients understand that attorney review of AI-generated content enhances rather than duplicates their investment.

The Bar Complaint Prevention Protocol

Verifying AI ‘research’ isn’t padding hours—it’s an ethical obligation that protects clients and preserves professional integrity.

To prevent bar complaints alleging failure to warn about AI risks, attorneys should implement comprehensive documentation practices:

Written AI Policies: Provide clients with written guidance about AI use risks and limitations. Document these communications in client files to demonstrate proactive risk management.

Ongoing Monitoring: Create systems for identifying when clients are using AI tools during representation. Address confidentiality and accuracy concerns promptly when such use is discovered.

Professional Education: Maintain current knowledge of AI capabilities and limitations to provide competent guidance to clients. Document continuing education efforts related to AI and legal technology.

Clear Boundaries: Establish explicit policies about when and how client AI-generated content will be used in the representation. Require independent verification of all AI-generated legal research or documents before incorporation into legal strategy.

Final Thoughts: The Future of Professional Responsibility

The AI-self-taught client phenomenon represents a permanent shift in legal practice dynamics requiring fundamental changes in how attorneys approach client relationships. The legal profession's response will define the next evolution of attorney-client dynamics and professional responsibility standards.

As the D.C. Bar recognized, "clients and counsel must proceed with what we might call a 'collaborative vigilance'". This approach requires "maintaining a shared commitment to transparency, quality, and adaptability" while recognizing both AI's efficiencies and its limitations.

Success demands that attorneys embrace their expanding role as AI educators, technology managers, and ethical guardians. As ABA Formal Opinion 512 emphasizes, lawyers remain fully accountable for all work product, no matter how it is generated. This accountability extends to managing client expectations shaped by AI interactions and ensuring that professional judgment governs all strategic decisions, regardless of their technological origins.

The legal profession must evolve beyond simply tolerating AI-empowered clients to actively managing the ethical, practical, and professional challenges they present. By maintaining ethical vigilance while embracing technological benefits, attorneys can transform this challenge into an opportunity for more informed, efficient, and ultimately more effective legal representation. The key lies in recognizing that AI tools, whether used by attorneys or clients, remain subject to the timeless ethical obligations that protect both professional integrity and client interests.

Those who fail to adapt risk not only client dissatisfaction and fee disputes but also potential disciplinary action for inadequately addressing the AI-related risks that increasingly define modern legal practice.

MTC