MTC: The Hidden AI Crisis in Legal Practice: Why Lawyers Must Unmask Embedded Intelligence Before It's Too Late!

Lawyers need digital due diligence in order to stay on top of their ethics requirements.

Artificial intelligence has infiltrated legal practice in ways most attorneys never anticipated. While lawyers debate whether to adopt AI tools, they've already been using them—often without knowing it. These "hidden AI" features, silently embedded in everyday software, present a compliance crisis that threatens attorney-client privilege, confidentiality obligations, and professional responsibility standards.

The Invisible Assistant Problem

Hidden AI operates in plain sight. Microsoft Word's Copilot suggests edits while you draft pleadings. Adobe Acrobat's AI Assistant automatically identifies contracts and extracts key terms from PDFs you're reviewing. Grammarly's algorithm analyzes your confidential client communications for grammar errors. Zoom's AI Companion transcribes strategy sessions with clients—and sometimes captures what happens after you disconnect.

DocuSign now deploys AI-Assisted Review to analyze agreements against predefined playbooks. Westlaw and Lexis+ embed generative AI directly into their research platforms, with hallucination rates between 17% and 33%. Even practice management systems like Clio and Smokeball have woven AI throughout their platforms, from automated time tracking descriptions to matter summaries.

The challenge isn't whether these tools provide value—they absolutely do. The crisis emerges because lawyers activate features without understanding the compliance implications.

ABA Model Rules Meet Modern Technology

The American Bar Association's Formal Opinion 512, issued in July 2024, makes clear that lawyers bear full responsibility for AI use regardless of whether they actively chose the technology or inherited it through software updates. Several Model Rules directly govern hidden AI features in legal practice.

Model Rule 1.1 requires competence, including maintaining knowledge about the benefits and risks associated with relevant technology. Comment 8 to this rule, adopted by most states, mandates that lawyers understand not just primary legal tools but embedded AI features within those tools. This means attorneys cannot plead ignorance when Microsoft Word's AI Assistant processes privileged documents.

Model Rule 1.6 imposes strict confidentiality obligations. Lawyers must make "reasonable efforts to prevent the inadvertent or unauthorized disclosure of, or unauthorized access to, information relating to the representation of a client". When Grammarly accesses your client emails to check spelling, or when Zoom's AI transcribes confidential settlement discussions, you're potentially disclosing protected information to third-party AI systems.

Model Rule 5.3 extends supervisory responsibilities to "nonlawyer assistance," which includes non-human assistance like AI. The 2012 amendment changing "assistants" to "assistance" specifically contemplated this scenario. Lawyers must supervise AI tools with the same diligence they'd apply to paralegals or junior associates.

Model Rule 1.4 requires communication with clients about the means used to accomplish their objectives. This includes informing clients when AI will process their confidential information, obtaining informed consent, and explaining the associated risks.

Where Hidden AI Lurks in Legal Software

🚨 Lawyers: don't breach your ethical duties with AI shortcuts!

Microsoft 365 Copilot integrates AI across Word, Outlook, and Teams—applications lawyers use hundreds of times daily. The AI drafts documents, summarizes emails, and analyzes meeting transcripts. Most firms that subscribe to Microsoft 365 have Copilot enabled by default in recent licensing agreements, yet many attorneys remain unaware their correspondence flows through generative AI systems.

Adobe Acrobat now automatically recognizes contracts and generates summaries with AI Assistant. When you open a PDF contract, Adobe's AI immediately analyzes it, extracts key dates and terms, and offers to answer questions about the document. This processing occurs before you explicitly request AI assistance.

Legal research platforms embed AI throughout their interfaces. Westlaw Precision AI and Lexis+ AI process search queries through generative models that hallucinate incorrect case citations 17% to 33% of the time according to Stanford research. These aren't separate features—they're integrated into the standard search experience lawyers rely upon daily.

Practice management systems deploy hidden AI for intake forms, automated time entry descriptions, and matter summaries. Smokeball's AutoTime AI generates detailed billing descriptions automatically. Clio integrates AI into client relationship management. These features activate without explicit lawyer oversight for each instance of use.

Communication platforms present particularly acute risks. Zoom AI Companion and Microsoft Teams AI automatically transcribe meetings and generate summaries. Otter.ai's meeting assistant infamously continued recording after participants thought a meeting ended, capturing investors' candid discussion of their firm's failures. For lawyers, such scenarios could expose privileged attorney-client communications or work product.

The Compliance Framework

Establishing ethical AI use requires systematic assessment. First, conduct a comprehensive technology audit. Inventory every software application your firm uses and identify embedded AI features. This includes obvious tools like research platforms and less apparent sources like PDF readers, email clients, and document management systems.
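To make the audit concrete, here is a minimal Python sketch of what such an inventory might look like. The application names, feature labels, and review flags are illustrative placeholders, not a statement of what any vendor actually ships.

```python
from dataclasses import dataclass

@dataclass
class AppEntry:
    name: str
    ai_features: list[str]
    reviewed: bool = False  # has a confidentiality review been completed?

# Hypothetical inventory; names and features are illustrative only.
inventory = [
    AppEntry("Microsoft Word", ["Copilot drafting", "Copilot summaries"]),
    AppEntry("Adobe Acrobat", ["AI Assistant contract analysis"]),
    AppEntry("Zoom", ["AI Companion transcription"], reviewed=True),
]

# Flag every AI feature that has not yet been through a compliance review.
for app in inventory:
    if not app.reviewed:
        for feature in app.ai_features:
            print(f"REVIEW NEEDED: {app.name} -> {feature}")
```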

Second, evaluate each AI feature against confidentiality requirements. Review vendor agreements to determine whether the AI provider uses your data for model training, stores information after processing, or could disclose data in response to third-party requests. Grammarly, for example, offers HIPAA compliance but only for enterprise customers with 100+ seats who execute Business Associate Agreements. Similar limitations exist across legal software.

Third, implement technical safeguards. Disable AI features that lack adequate security controls. Configure settings to prevent automatic data sharing. Adobe and Microsoft both offer options to prevent AI from training on customer data, but these protections require active configuration.
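What does "active configuration" look like in practice? The sketch below compares settings exported from an admin console against a firm baseline. Every setting key is a hypothetical placeholder; real configuration names vary by vendor and product tier, so treat this as a pattern, not a recipe.

```python
# Hypothetical firm baseline; the setting keys are placeholders, since
# real configuration names differ by vendor and product tier.
FIRM_BASELINE = {
    "allow_ai_training_on_customer_data": False,
    "auto_document_analysis": False,
    "meeting_transcription_on_by_default": False,
}

def audit_settings(app_name: str, exported: dict) -> list[str]:
    """Return deviations from the baseline; missing keys are flagged too."""
    return [
        f"{app_name}: {key!r} should be {required}, found {exported.get(key)}"
        for key, required in FIRM_BASELINE.items()
        if exported.get(key) != required
    ]

# Example run against settings exported from a fictional admin console.
for issue in audit_settings("pdf_reader", {"auto_document_analysis": True}):
    print(issue)
```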

Fourth, establish firm policies governing AI use. Designate responsibility for monitoring AI features in licensed software. Create protocols for evaluating new tools before deployment. Develop training programs ensuring all attorneys understand their obligations when using AI-enabled applications.

Fifth, secure client consent. Update engagement letters to disclose AI use in service delivery. Explain the specific risks associated with processing confidential information through AI systems. Document informed consent for each representation.

The Verification Imperative

ABA Formal Opinion 512 emphasizes that lawyers cannot delegate professional judgment to AI. Every output requires independent verification. When Westlaw Precision AI suggests research authorities, lawyers must confirm those cases exist and accurately reflect the law. When CoCounsel Drafting generates contract language in Microsoft Word, attorneys must review for accuracy, completeness, and appropriateness to the specific client matter.

The infamous Mata v. Avianca case, where lawyers submitted AI-generated briefs citing fabricated cases, illustrates the catastrophic consequences of failing to verify AI output. Every jurisdiction that has addressed AI ethics emphasizes this verification duty.

Cost and Billing Considerations

Formal Opinion 512 addresses whether lawyers can charge the same fees when AI accelerates their work. The opinion suggests lawyers cannot bill for time saved through AI efficiency under traditional hourly billing models. However, value-based and flat-fee arrangements may allow lawyers to capture efficiency gains, provided clients understand AI's role during initial fee negotiations.

Lawyers cannot bill clients for time spent learning AI tools—maintaining technological competence represents a professional obligation, not billable work. As AI becomes standard in legal practice, using these tools may become necessary to meet competence requirements, similar to how electronic research and e-discovery tools became baseline expectations.

Practical Steps for Compliance

Start by examining your Microsoft Office subscription. Determine whether Copilot is enabled and what data sharing settings apply. Review Adobe Acrobat's AI Assistant settings and disable automatic contract analysis if your confidentiality review hasn't been completed.

Contact your Westlaw and Lexis representatives to understand exactly how AI features operate in your research platform. Ask specific questions: Does the AI train on your search queries? How are hallucinations detected and corrected? What happens to documents you upload for AI analysis?

Audit your practice management system. If you use Clio, Smokeball, or similar platforms, identify every AI feature and evaluate its compliance with confidentiality obligations. Automatic time tracking that generates descriptions based on document content may reveal privileged information if billing statements aren't properly redacted.

Review video conferencing policies. Establish protocols requiring explicit disclosure when AI transcription activates during client meetings. Obtain informed consent before recording privileged discussions. Consider disabling AI assistants entirely for confidential matters.

Implement regular training programs. Technology competence isn't achieved once—it requires ongoing education as AI features evolve. Schedule quarterly reviews of new AI capabilities deployed in your software stack.

Final Thoughts 👉 The Path Forward

Lawyers must be able to identify and contain AI within the tech tools they use for work!

Hidden AI represents both opportunity and obligation. These tools genuinely enhance legal practice by accelerating research, improving drafting, and streamlining administrative tasks. The efficiency gains translate into better client service and more competitive pricing.

However, lawyers cannot embrace these benefits while ignoring their ethical duties. The Model Rules apply with equal force to hidden AI as to any other aspect of legal practice. Ignorance provides no defense when confidentiality breaches occur or inaccurate AI-generated content damages client interests.

The legal profession stands at a critical juncture. AI integration will only accelerate as software vendors compete to embed intelligent features throughout their platforms. Lawyers who proactively identify hidden AI, assess compliance risks, and implement appropriate safeguards will serve clients effectively while maintaining professional responsibility.

Those who ignore hidden AI features operating in their daily practice face disciplinary exposure, malpractice liability, and potential privilege waivers. The choice is clear: unmask the hidden AI now, or face consequences later.

MTC

MTC: London's iPhone Theft Crisis: Critical Mobile Device Security Lessons for Traveling Lawyers 📱⚖️

Lawyers can learn about mobile cybersecurity from the recent iPhone thefts in London.

Recent events in London should serve as a wake-up call for every legal professional who carries client data beyond the office walls. London police recently dismantled a sophisticated international theft ring responsible for smuggling approximately 40,000 stolen iPhones to China in just twelve months. This operation revealed thieves earning up to £300 per stolen device, with phones reselling overseas for as much as $5,000. With over 80,000 phones stolen in London last year alone, this crisis underscores critical vulnerabilities that lawyers must address when working remotely.

The sophistication of these operations is alarming. Criminals on electric bikes snatch phones from unsuspecting victims and immediately wrap devices in aluminum foil to block tracking signals. This industrial-scale crime demonstrates that our mobile devices—which contain privileged communications, case strategies, and confidential client data—are valuable targets for organized criminal networks operating globally.

Your Ethical Obligations Are Clear

ABA Model Rule 1.1 requires lawyers to maintain competence, including understanding "the benefits and risks associated with relevant technology". This duty of technological competence has been adopted by over 40 states and isn't optional—it's fundamental to ethical practice. Model Rule 1.6(c) mandates that lawyers "make reasonable efforts to prevent the inadvertent or unauthorized disclosure of, or unauthorized access to, information relating to the representation of a client".

When your phone disappears—whether through theft, loss, or border seizure—you face potential violations of these ethical duties. Recent data shows U.S. Customs and Border Protection searched 14,899 devices between April and June 2025, a 16.7% increase from previous surges. Lawyers traveling internationally face heightened risks, and a stolen or searched device can compromise attorney-client privilege instantly.

Essential Security Measures for Mobile Lawyers

Before leaving your office, implement these non-negotiable protections. Enable full-device encryption on all smartphones, tablets, and laptops. For iPhones, setting a passcode automatically enables encryption; Android users must manually activate this feature in security settings. Strong passwords matter—use alphanumeric combinations of at least 12 characters, avoiding easily guessed patterns.
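For illustration, here is a minimal Python sketch of the password rule just described: at least 12 characters, a mix of letters and digits, and no common patterns. In practice your identity provider or password manager would enforce this, not ad hoc code.

```python
import re

COMMON_PATTERNS = ["password", "123456", "qwerty", "letmein"]

def meets_policy(pw: str) -> bool:
    """At least 12 characters, with letters and digits, no common patterns."""
    if len(pw) < 12:
        return False
    if not (re.search(r"[A-Za-z]", pw) and re.search(r"\d", pw)):
        return False  # alphanumeric mix: needs both letters and digits
    return not any(p in pw.lower() for p in COMMON_PATTERNS)

print(meets_policy("correcthorse42battery"))  # True
print(meets_policy("password12345"))          # False: common pattern
```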

Lawyers need to know how to protect their clients' PII when crossing the border!

Two-factor authentication (2FA) adds critical protection layers. Even if someone obtains your password, 2FA requires secondary verification through your phone or authentication app. This simple step dramatically reduces unauthorized access risks. Configure remote wipe capabilities before traveling. If your device is stolen, you can erase all data remotely, protecting client information even when physical recovery is impossible.
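To demystify the "authentication app" half of 2FA, here is a short sketch using the open-source pyotp library. A real deployment splits this between server and device; collapsing it into one script is purely for illustration.

```python
import pyotp  # open-source TOTP library: pip install pyotp

secret = pyotp.random_base32()  # shared once at enrollment, then stored
totp = pyotp.TOTP(secret)       # both sides derive codes from time + secret

code = totp.now()               # the six-digit code the app displays
print("Current code:", code)
print("Valid now?", totp.verify(code))      # True inside the time window
print("Wrong code?", totp.verify("000000")) # almost certainly False
```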

Disable biometric authentication when traveling internationally. Face ID and fingerprint scanners can be used against you at borders where Fourth Amendment protections are diminished. Restart your device before crossing borders to force password-only access. Consider carrying a "clean" device for international travel, accessing files only through encrypted cloud storage rather than storing sensitive data locally.

Coffee Shops, Airports, and Public Spaces

Public Wi-Fi networks pose serious interception risks. Hackers create fake hotspots with legitimate-sounding names, capturing everything you transmit. As lawyers increasingly embrace cloud-based computing for their work, encryption when using public Wi-Fi becomes non-negotiable.

Always use a trusted VPN (Virtual Private Network) when connecting to public networks. VPNs encrypt your internet traffic, preventing interception even on compromised networks. Alternatively, use your smartphone's personal hotspot rather than connecting to public Wi-Fi. Turn off file sharing on all mobile devices. Avoid accessing highly sensitive client files in public spaces altogether—save detailed case work for secure, private connections.

Physical security deserves equal attention. Visual privacy screens prevent shoulder surfing. Position yourself with your back to walls in coffee shops so others cannot observe your screen. Be alert to your surroundings and maintain physical control of devices at all times. Never leave laptops, tablets, or phones unattended, even briefly.

Border Crossings and International Travel

Lawyers crossing international borders face unique challenges. CBP policies permit extensive device searches within 100 miles of borders under the border search exception, significantly reducing Fourth Amendment protections. New York State Bar Association Ethics Opinion 2017-5 addresses lawyers' duties when traveling with client data across borders.

The reasonableness standard governs your obligations. Evaluate whether you truly need to bring confidential information across borders. If travel requires client data, bring only materials professionally necessary for your specific purpose. Consider these strategies: store files in encrypted cloud services rather than locally; use strong passwords and disable biometric authentication; carry your bar card to identify yourself as an attorney if questioned; identify which files contain privileged information before reaching the border.

If border agents demand device access, clearly state that you are an attorney and the device contains privileged client communications. Ask whether the request is optional or mandatory. If agents conduct a search, document what occurred and consider whether client notification is required under Rule 1.4. New York Rule 1.6 requires taking reasonable steps to prevent unauthorized disclosure, with heightened precautions necessary when government agencies are opposing parties.

Practical Implementation Today

Create firm policies addressing mobile device security. Require immediate reporting of lost or stolen devices. Implement Mobile Device Management (MDM) software to monitor, secure, and remotely wipe all connected devices. Conduct regular security awareness training covering email practices, phishing recognition, and social engineering tactics.

Develop an Incident Response Plan before breaches occur. Know which experts to contact, document cybersecurity policies, and establish notification protocols. Under various state laws and regulations like California Civil Code § 1798.82 and HIPAA's Breach Notification Rule, lawyers may be legally required to notify clients of data breaches.

Lawyers are on the front line of cybersecurity when on the go!

Communicate with clients about security measures. Obtain informed consent regarding electronic communications and any security limitations. Some firms include these discussions in engagement letters, setting clear expectations about communication methods and encryption use.

Stay current with evolving threats. Subscribe to legal technology security bulletins. The Tech-Savvy Lawyer blog regularly covers mobile security issues, including recent coverage of the SlopAds malware campaign that compromised 224 Android applications on Google Play Store. Technology competence requires ongoing learning as threats and safeguards evolve.

The Bottom Line

The London iPhone theft crisis demonstrates that our devices are valuable targets for sophisticated criminal networks operating internationally. Every lawyer who works outside the office—whether at coffee shops, client meetings, or international destinations—must take mobile security seriously. Your ethical obligations under Model Rules 1.1 and 1.6 demand it. Your clients' confidential information depends on it. Your professional reputation requires it.

Implementing these security measures isn't complicated or expensive. Enable encryption. Use strong passwords and 2FA. Avoid public Wi-Fi or use VPNs. Disable biometrics when traveling. Maintain physical control of devices. These straightforward steps significantly reduce risks while allowing you to work effectively from anywhere.

The legal profession has embraced mobile technology's benefits—now we must address its risks with equal commitment. Don't wait for a theft, loss, or border seizure to prompt action. Protect your clients' confidential information today.

MTC

MTC: 🔒 Your AI Conversations Aren't as Private as You Think: What the OpenAI Court Ruling Means for Legal Professionals

A watershed moment in digital privacy has arrived, and it carries profound implications for lawyers and their clients.

The recent court ruling in In re: OpenAI, Inc., Copyright Infringement Litigation has exposed a critical vulnerability in the relationship between artificial intelligence tools and user privacy rights. On May 13, 2025, U.S. Magistrate Judge Ona T. Wang issued an order requiring OpenAI to "preserve and segregate all output log data that would otherwise be deleted on a going forward basis". This unprecedented directive affected more than 400 million ChatGPT users worldwide and fundamentally challenged assumptions about data privacy in the AI era.

While the court modified its order on October 9, 2025, terminating the blanket preservation requirement as of September 26, 2025, the damage to user trust and the precedent for future litigation remain significant. More importantly, the ruling illuminates a stark reality for legal professionals: the "delete" button offers an illusion of control rather than genuine data protection.

The Court Order That Changed Everything ⚖️

The preservation order emerged from a copyright infringement lawsuit filed by The New York Times against OpenAI in December 2023. The Times alleged that OpenAI unlawfully used millions of its articles to train ChatGPT without permission or compensation. During discovery, concerns arose that OpenAI had been deleting user conversations that could potentially demonstrate copyright violations.

Judge Wang's response was sweeping. The court ordered OpenAI to retain all ChatGPT output logs, including conversations users believed they had permanently deleted, temporary chats designed to auto-delete after sessions, and API-generated outputs regardless of user privacy settings. The order applied retroactively, meaning conversations deleted months or even years earlier remained archived in OpenAI's systems.

OpenAI immediately appealed, arguing the order was overly broad and compromised user privacy. The company contended it faced conflicting obligations between the court's preservation mandate and "numerous privacy laws and regulations throughout the country and the world". Despite these objections, Judge Wang denied OpenAI's motion, prioritizing the preservation of potential evidence over privacy concerns.

The October 9, 2025 stipulation and order brought partial relief. OpenAI's ongoing obligation to preserve all new output log data terminated as of September 26, 2025. However, all data preserved before that cutoff remains accessible to plaintiffs (except for users in the European Economic Area, Switzerland, and the United Kingdom). Additionally, OpenAI must continue preserving output logs from specific domains identified by the New York Times and may be required to add additional domains as the litigation progresses.

Privacy Rights in the Age of AI: An Eroding Foundation 🛡️

This case demonstrates that privacy policies are not self-enforcing legal protections. Users who relied on OpenAI's representations about data deletion discovered those promises could be overridden by court order without their knowledge or consent. The "temporary chat" feature, marketed as providing ephemeral conversations, proved anything but temporary when litigation intervened.

The implications extend far beyond this single case. The ruling establishes that AI-generated content constitutes discoverable evidence subject to preservation orders. Courts now view user conversations with AI not as private exchanges but as potential legal records that can be compelled into evidence.

For legal professionals, this reality is particularly troubling. Lawyers regularly handle sensitive client information that must remain confidential under both ethical obligations and the attorney-client privilege. The court order revealed that even explicitly deleted conversations may be retained indefinitely when litigation demands it.

The Attorney-Client Privilege Crisis 👥

Attorney-client privilege protects confidential communications between lawyers and clients made for the purpose of obtaining or providing legal advice. This protection is fundamental to the legal system. However, the privilege can be waived through voluntary disclosure to third parties outside the attorney-client relationship.

When lawyers input confidential client information into public AI platforms like ChatGPT, they potentially create a third-party disclosure that destroys privilege. Many generative AI systems learn from user inputs, incorporating that information into their training data. This means privileged communications could theoretically appear in responses to other users' queries.

The OpenAI preservation order compounds these concerns. It demonstrates that AI providers cannot guarantee data will be deleted upon request, even when their policies promise such deletion. Lawyers who used ChatGPT's temporary chat feature or deleted sensitive conversations believing those actions provided privacy protection now discover their confidential client communications may be preserved indefinitely as litigation evidence.

The risk is not theoretical. In the now-famous Mata v. Avianca, Inc. case, a lawyer used a free version of ChatGPT to draft a legal brief containing fabricated citations. While the lawyer faced sanctions for submitting false information to the court, legal ethics experts noted the confidentiality implications of the increasingly specific prompts the attorney used, which may have revealed client confidential information.

ABA Model Rules and AI: What Lawyers Must Know 📋

The American Bar Association's Model Rules of Professional Conduct govern lawyer behavior, and while these rules predate generative AI, they apply with full force to its use. On July 29, 2024, the ABA Standing Committee on Ethics and Professional Responsibility issued Formal Opinion 512, providing the first comprehensive guidance on lawyers' use of generative AI.

Model Rule 1.1: Competence requires lawyers to provide competent representation, including maintaining "legal knowledge, skill, thoroughness and preparation reasonably necessary for representation". Comment [8] to the rule specifically states that lawyers must understand "the benefits and risks associated with relevant technology". Opinion 512 clarifies that lawyers need not become AI experts, but must have a "reasonable understanding of the capabilities and limitations of the specific GenAI technology" they use. This is not a one-time obligation. Given AI's rapid evolution, lawyers must continuously update their understanding.

Model Rule 1.6: Confidentiality creates perhaps the most significant ethical challenge for AI use. The rule prohibits lawyers from revealing "information relating to the representation of a client" and requires them to "make reasonable efforts to prevent the inadvertent or unauthorized disclosure of, or unauthorized access to, information relating to the representation". Self-learning AI tools that train on user inputs create substantial risk of improper disclosure. Information entered into public AI systems may be stored, processed by third-party vendors, and potentially accessed by company employees or incorporated into model training. Opinion 512 recommends lawyers obtain informed client consent before inputting any information related to representation into AI systems. Lawyers must also thoroughly review the terms of use, privacy policies, and contractual agreements of any AI tool they employ.

Model Rule 1.4: Communication obligates lawyers to keep clients reasonably informed about their representation. When using AI tools, lawyers should disclose this fact to clients, particularly when the AI processes client information or could impact the representation. Clients have a right to understand how their matters are being handled and what technologies may access their confidential information.

Model Rule 3.3: Candor Toward the Tribunal requires lawyers to be truthful in their representations to courts. AI systems frequently produce "hallucinations"—plausible-sounding but entirely fabricated information, including fake case citations. Lawyers remain fully responsible for verifying all AI outputs before submitting them to courts or relying on them for legal advice. The Mata v. Avianca case serves as a cautionary tale of the consequences when lawyers fail to fulfill this obligation.

Model Rules 5.1 and 5.3: Supervisory Responsibilities make lawyers responsible for the conduct of other lawyers and nonlawyer assistants working under their supervision. When staff members use AI tools, supervising lawyers must ensure appropriate policies, training, and oversight exist to prevent ethical violations.

Model Rule 1.5: Fees requires lawyers to charge reasonable fees. Opinion 512 addresses whether lawyers can bill clients for time "saved" through AI efficiency gains. The guidance suggests that when using hourly billing, efficiencies gained through AI should benefit clients. However, lawyers may pass through reasonable direct costs of AI services (such as subscription fees) when properly disclosed and agreed upon in advance.

State-by-State Variations: A Patchwork of Protection 🗺️

While the ABA Model Rules provide a national framework, individual states adopt and interpret ethics rules differently. Legal professionals must understand their specific state's requirements, which can vary significantly.

Lawyers must protect clients' PII from AI privacy failures!

Florida has taken a proactive stance. In January 2025, The Florida Bar Board of Governors unanimously approved Advisory Opinion 24-1, which specifically addresses generative AI use. The opinion recommends lawyers obtain "affected client's informed consent prior to utilizing a third-party generative AI program if the utilization would involve the disclosure of any confidential information". Florida's guidance emphasizes that lawyers remain fully responsible for AI outputs and cannot treat AI as a substitute for legal judgment.

Texas issued Opinion 705 from its State Bar Professional Ethics Committee in February 2025. The opinion outlines four key obligations: lawyers must reasonably understand AI technology before using it, exercise extreme caution when inputting confidential information into AI tools that might store or expose client data, verify the accuracy of all AI outputs, and avoid charging clients for time saved by AI efficiency gains. Texas also emphasizes that lawyers should consider informing clients when AI will be used in their matters.

New York has developed one of the most comprehensive frameworks through its State Bar Association Task Force on Artificial Intelligence. The April 2024 report provides a thorough analysis across the full spectrum of ethical considerations, including competence, confidentiality, client communication, billing practices, and access to justice implications. New York's guidance stands out for addressing both immediate practical considerations and longer-term questions about AI's transformation of the legal profession.

Alaska issued Ethics Opinion 2025-1 surveying AI issues with particular focus on competence, confidentiality, and billing. The opinion notes that when using non-closed AI systems (such as general consumer products), lawyers should anonymize prompts to avoid revealing client confidential information. Alaska's guidance explicitly cites to its cloud-computing predecessor opinion, treating AI data storage similarly to law firm files on third-party remote servers.

California, Massachusetts, New Jersey, and Oregon have issued guidance through their state attorneys general on how existing state privacy laws apply to AI. California's advisories emphasize that AI use must comply with the California Consumer Privacy Act (CCPA), requiring transparency, respecting individual data rights, and limiting data processing to what is "reasonably necessary and proportionate". Massachusetts focuses on consumer protection, anti-discrimination, and data security requirements. Oregon highlights that developers using personal data to train AI must clearly disclose this use and obtain explicit consent when dealing with sensitive data.

These state-specific approaches create a complex compliance landscape. A lawyer practicing in multiple jurisdictions must understand and comply with each state's requirements. Moreover, state privacy laws like the CCPA and similar statutes in other states impose additional obligations beyond ethics rules.

Enterprise vs. Consumer AI: Understanding the Distinction 💼

Not all AI tools pose equal privacy risks. The OpenAI preservation order highlighted critical differences between consumer-facing products and enterprise solutions.

Consumer Plans (Free, Plus, Pro, and Team) were fully subject to the preservation order. These accounts store user conversations on OpenAI's servers with limited privacy protections. While users can delete conversations, the court order demonstrated that those deletions are not permanent. OpenAI retains the technical capability to preserve and access this data when required by legal process.

Enterprise Accounts offer substantially stronger privacy protections. ChatGPT Enterprise and Edu plans were excluded from the preservation order's broadest requirements. These accounts typically include contractual protections such as Data Processing Agreements (DPAs), commitments against using customer data for model training, and stronger data segregation. However, even enterprise accounts must preserve data when covered by specific legal orders.

Zero Data Retention Agreements provide the highest level of protection. Users who have negotiated such agreements with OpenAI are excluded from data preservation requirements. These arrangements ensure that user data is not retained beyond the immediate processing necessary to generate responses.

For legal professionals, the lesson is clear: consumer-grade AI tools are inappropriate for handling confidential client information. Lawyers who use AI must ensure they employ enterprise-level solutions with proper contractual protections, or better yet, closed systems where client data never leaves the firm's control.

Practical Steps for Legal Professionals: Protecting Privilege and Privacy 🛠️

Given these risks, what should lawyers do? Abandoning AI entirely is neither realistic nor necessary. Instead, legal professionals must adopt a risk-management approach.

Conduct thorough due diligence before adopting any AI tool. Review terms of service, privacy policies, and data processing agreements in detail. Understand exactly what data the AI collects, how long it's retained, whether it's used for model training, who can access it, and what security measures protect it. If these answers aren't clear from public documentation, contact the vendor directly for written clarification.

Implement written AI policies for your firm or legal department. These policies should specify which AI tools are approved for use, what types of information can (and cannot) be input into AI systems, required safeguards such as data anonymization, client consent requirements, verification procedures for AI outputs, and training requirements for all staff. Document these policies and ensure all lawyers and staff understand and follow them.

Default to data minimization. Before inputting any information into an AI system, ask whether it's necessary. Can you accomplish the task without including client-identifying information? Many AI applications work effectively with anonymized or hypothetical scenarios that don't reveal actual client matters. When in doubt, err on the side of caution.
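As one hedged example of what anonymization might look like before a prompt ever leaves the firm, consider the following Python redaction pass. The regexes and the client-name list are deliberately simplistic placeholders, not a complete PII filter; they show the pattern, not a production safeguard.

```python
import re

CLIENT_NAMES = ["Acme Holdings", "Jane Roe"]  # hypothetical matter names

def redact(text: str) -> str:
    """Mask simple email/phone formats and known client names."""
    text = re.sub(r"[\w.+-]+@[\w.-]+\.[A-Za-z]{2,}", "[EMAIL]", text)
    text = re.sub(r"\b\d{3}[-.]\d{3}[-.]\d{4}\b", "[PHONE]", text)
    for name in CLIENT_NAMES:
        text = text.replace(name, "[CLIENT]")
    return text

prompt = ("Summarize the dispute between Acme Holdings and its lender; "
          "reach me at 555-867-5309 or jdoe@firm.com.")
print(redact(prompt))
# Summarize the dispute between [CLIENT] and its lender;
# reach me at [PHONE] or [EMAIL].
```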

Obtain informed client consent when using AI for client matters, particularly when inputting any information related to the representation. This consent should be specific about what AI tools will be used, what information may be shared with those tools, what safeguards are in place, and what risks exist despite those safeguards. General consent buried in engagement agreements is likely insufficient.

Use secure, purpose-built legal AI tools rather than consumer applications. Legal-specific AI products are designed with confidentiality requirements in mind and typically offer stronger privacy protections. Even better, consider closed-system AI that operates entirely within your firm's infrastructure without sending data to external servers.

Never assume deletion means erasure. The OpenAI case proves that deleted data may not be truly gone. Treat any information entered into an AI system as potentially permanent, regardless of what the system's privacy settings claim.

Maintain privileged communication protocols. Remember that AI is not your attorney. Communications with AI systems are not protected by attorney-client privilege. Never use AI as a substitute for consulting with qualified colleagues or outside counsel on genuinely privileged matters.

Stay informed about evolving guidance. AI technology and the regulatory landscape are both changing rapidly. Regularly review updates from your state bar association, the ABA, and other professional organizations. Consider attending continuing legal education programs on AI ethics and technology competence.

Final thoughts: The Future of Privacy Rights in an AI World 🔮

The OpenAI preservation order represents a pivotal moment in the collision between AI innovation and privacy rights. It exposes uncomfortable truths about the nature of digital privacy in 2025: privacy policies are subject to override by legal process, deletion features provide psychological comfort rather than technical and legal certainty, and third-party service providers cannot fully protect user data from discovery obligations.

For legal professionals, these realities demand a fundamental reassessment of how AI tools fit into practice. The convenience and efficiency AI provides must be balanced against the sacred duty to protect client confidences and maintain the attorney-client privilege. This is not an abstract concern or distant possibility. It is happening now, in real courtrooms, with real consequences for lawyers and clients.

State bars and regulators are responding, but the guidance remains fragmented and evolving. Federal privacy legislation addressing AI has yet to materialize, leaving a patchwork of state laws with varying requirements. In this environment, legal professionals cannot wait for perfect clarity before taking action.

The responsibility falls on each lawyer to understand the tools they use, the risks those tools create, and the steps necessary to fulfill ethical obligations in this new technological landscape. Ignorance is not a defense. "I didn't know the AI was storing that information" will not excuse a confidentiality breach or privilege waiver.

As AI becomes increasingly embedded in legal practice, the profession must evolve its approach to privacy and confidentiality. The traditional frameworks remain sound—the attorney-client privilege, the duty of confidentiality, the requirement of competence—but their application requires new vigilance. Lawyers must become technology stewards as well as legal advisors, understanding not just what the law says, but how the tools they use might undermine their ability to protect it.

The OpenAI case will not be the last time courts grapple with AI data privacy. As generative AI proliferates and litigation continues, more preservation orders, discovery disputes, and privilege challenges are inevitable. Legal professionals who fail to address these issues proactively may find themselves explaining to clients, judges, or disciplinary authorities why they treated confidential information so carelessly.

Privacy in the AI age demands more than passive reliance on vendor promises. It requires active, informed engagement with the technology we use and honest assessment of the risks we create. For lawyers, whose professional identity rests on the foundation of client trust and confidentiality, nothing less will suffice. The court ruling has made one thing abundantly clear: when it comes to AI and privacy, what you don't know can definitely hurt you—and your clients. ⚠️

MTC: Balancing Digital Transparency and Government Employee Safety: The Legal Profession's Ethical Crossroads in the Age of ICE Tracking Apps

The balance between government employee safety and the public's right to know is always in flux.

The intersection of technology, government transparency, and employee safety has created an unprecedented ethical challenge for the legal profession. Recent developments surrounding ICE tracking applications like ICEBlock, People Over Papers, and similar platforms have thrust lawyers into a complex moral and professional landscape where the traditional principle of "sunlight as the best disinfectant" collides with legitimate security concerns for government employees.

The Technology Landscape: A New Era of Crowdsourced Monitoring

The proliferation of ICE tracking applications represents a significant shift in how citizens monitor government activities. ICEBlock, developed by Joshua Aaron, allows users to anonymously report ICE agent sightings within a five-mile radius, functioning essentially as "Waze for immigration enforcement". People Over Papers, created by TikTok user Celeste, operates as a web-based platform using Padlet technology to crowdsource and verify ICE activity reports with photographs and timestamps. Additional platforms include Islip Forward, which provides real-time push notifications for Suffolk County residents, and Coquí, offering mapping and alert systems for ICE activities.

These applications exist within a broader ecosystem of similar technologies. Traditional platforms like Waze, Google Maps, and Apple Maps have long enabled police speed trap reporting. More controversial surveillance tools include Fog Reveal, which allows law enforcement to track civilian movements using advertising IDs from popular apps. The distinction between citizen-initiated transparency tools and government surveillance technologies highlights the complex ethical terrain lawyers must navigate.

The Ethical Framework: ABA Guidelines and Professional Responsibilities

Legal professionals face multiple competing ethical obligations when addressing these technological developments. ABA Model Rule 1.1 requires lawyers to maintain technological competence, understanding both the benefits and risks associated with relevant technology. This competence requirement extends beyond mere familiarity to encompass the ethical implications of technology use in legal practice.

Rule 1.6's confidentiality obligations create additional complexity when lawyers handle cases involving government employees, ICE agents, or immigration-related matters. The duty to protect client information becomes particularly challenging when technology platforms may compromise attorney-client privilege or expose sensitive personally identifiable information to third parties.

The tension between advocacy responsibilities and ethical obligations becomes acute when lawyers represent clients on different sides of immigration enforcement. Attorneys representing undocumented immigrants may view transparency tools as legitimate safety measures, while those representing government employees may consider the same applications as security threats that endanger their clients.

Balancing Transparency and Safety: The Core Dilemma

Who watches whom? Exploring transparency limits in democracy.

The principle of transparency in government operations serves as a cornerstone of democratic accountability. However, the safety of government employees, including ICE agents, presents legitimate counterbalancing concerns. Federal officials have reported significant increases in assaults against ICE agents, citing these tracking applications as contributing factors.

The challenge for legal professionals lies in advocating for their clients while maintaining ethical standards that protect all parties' legitimate interests. This requires nuanced understanding of both technology capabilities and legal boundaries. Lawyers must recognize that the same transparency tools that may protect their immigrant clients could potentially endanger government employees who are simply performing their lawful duties.

Technology Ethics in Legal Practice: Professional Standards

The legal profession's approach to technology ethics must evolve to address these emerging challenges. Lawyers working with sensitive immigration cases must implement robust cybersecurity measures, understand the privacy implications of various communication platforms, and maintain clear boundaries between personal advocacy and professional obligations.

The ABA's guidance on generative AI and technology use provides relevant frameworks for addressing these issues. Legal professionals must ensure that their technology choices do not inadvertently compromise client confidentiality or create security vulnerabilities that could harm any party to legal proceedings.

Jurisdictional and Regulatory Considerations

The removal of ICEBlock from Apple's App Store and People Over Papers from Padlet demonstrates how private platforms exercise content moderation that can significantly impact government transparency tools. These actions raise important questions about the role of technology companies in mediating between transparency advocates and security concerns.

Legal professionals must understand the complex regulatory environment governing these technologies. Federal agencies like CISA recommend encrypted communications for high-value government targets while acknowledging the importance of government transparency. This creates a nuanced landscape where legitimate security measures must coexist with accountability mechanisms.

Professional Recommendations and Best Practices

Legal practitioners working in this environment should adopt several key practices. First, maintain clear separation between personal political views and professional obligations. Second, implement comprehensive cybersecurity measures that protect all client information regardless of their position in legal proceedings. Third, stay informed about technological developments and their legal implications through continuing education focused on technology law and ethics.

Lawyers should also engage in transparent communication with clients about the risks and benefits of various technology platforms. This includes obtaining informed consent when using technologies that may impact privacy or security, and maintaining awareness of how different platforms handle data security and user privacy.

The legal profession must also advocate for balanced regulatory approaches that protect both government transparency and employee safety. This may involve supporting legislation that creates appropriate oversight mechanisms while maintaining necessary security protections for government workers.

The Path Forward: Ethical Technology Advocacy

The future of legal practice will require increasingly sophisticated approaches to balancing competing interests in our digital age. Legal professionals must serve as informed advocates who understand both the technological landscape and the ethical obligations that govern their profession. This includes recognizing that technology platforms designed for legitimate transparency purposes can be misused, while also acknowledging that government accountability remains essential to democratic governance.

Transparency is a balancing act that all lawyers need to be aware of in their practice!

The legal profession's response to ICE tracking applications and similar technologies will establish important precedents for how lawyers navigate future ethical challenges in our increasingly connected world. By maintaining focus on professional ethical standards while advocating effectively for their clients, legal professionals can help ensure that technological advances serve justice rather than undermining it.

Success in this environment requires lawyers to become technologically literate advocates who understand both the promise and perils of digital transparency tools. Only through this balanced approach can the legal profession effectively serve its clients while maintaining the ethical standards that define professional practice in the digital age.

MTC

TSS: Repurpose Your Old Work Tech Into Family Learning Tools This Back-to-School Season 💻📚

Repurposing your tech for your children can be a platform for a talk with your school kids about the safe use of tech.

The new school year approaches, and your children need reliable technology. Before you head to the electronics store, consider the laptops and tablets gathering dust in your office closet or your current devices that you are about to upgrade. With proper preparation, these work devices can become powerful educational tools while teaching your family essential cybersecurity skills.

Why Lawyer Parents Need This Workshop 🎯

As attorneys, we face unique challenges when transitioning work devices to family use. Attorney-client privilege concerns, firm policy compliance, and data breach liability create legal risks most parents never consider. Our August Tech-Savvy Saturday seminar addresses these challenges head-on with practical solutions.

What You'll Master in This Essential Session 🛡️

Device Sanitization for Legal Professionals: Step-by-step Windows, macOS, iOS, and Android procedures that protect privileged information while preparing devices for family use. We cover complete data wiping, software licensing removal, and documentation requirements.

Family Technology Management Systems: Implementation strategies for password managers, shared calendars, and network security configurations that work for legal families. Special focus on co-parenting considerations and court-approved platforms.

Family Cyber Talks should be routine!

Age-Appropriate Cybersecurity Education: From elementary through college-age guidance on digital citizenship, password security, and online safety. Critical discussions about digital permanence and the serious legal consequences of non-consensual intimate image sharing.

Emergency Response Planning: Practical protocols for handling cyberbullying, predator contact, and other digital crises. Know when to involve law enforcement versus school administration.

Register Now for August Tech-Savvy Saturday 🚀

This workshop combines legal ethics with practical family technology management. You'll leave with actionable checklists, template agreements, and the confidence to transform old work devices into safe learning tools.