🎙️TSL Labs! MTC: The Hidden AI Crisis in Legal Practice: Why Lawyers Must Unmask Embedded Intelligence Before It's Too Late!

📌 Too Busy to Read This Week's Editorial?

Join us for a professional deep dive into essential tech strategies for AI compliance in your legal practice. 🎙️ This AI-powered discussion unpacks the November 17, 2025, editorial, MTC: The Hidden AI Crisis in Legal Practice: Why Lawyers Must Unmask Embedded Intelligence Before It's Too Late! with actionable intelligence on hidden AI detection, confidentiality protocols, ethics compliance frameworks, and risk mitigation strategies. Artificial intelligence has been silently operating inside your most trusted legal software for years, and under ABA Formal Opinion 512, you bear full responsibility for all AI use, whether you knowingly activated it or it came as a default software update. The conversation makes complex technical concepts accessible to lawyers with varying levels of tech expertise—from tech-hesitant solo practitioners to advanced users—so you'll walk away with immediate, actionable steps to protect your practice, your clients, and your professional reputation.

In Our Conversation, We Cover the Following

00:00:00 - Introduction: Overview of TSL Labs initiative and the AI-generated discussion format

00:01:00 - The Silent Compliance Crisis: How AI has been operating invisibly in your software for years

00:02:00 - Core Conflict: Understanding why helpful tools simultaneously create ethical threats to attorney-client privilege

00:03:00 - Document Creation Vulnerabilities: Microsoft Word Copilot and Grammarly's hidden data processing

00:04:00 - Communication Tools Risks: Zoom AI Companion and the cautionary Otter.ai incident

00:05:00 - Research Platform Dangers: Westlaw and Lexis+ AI hallucination rates between 17% and 33%

00:06:00 - ABA Formal Opinion 512: Full lawyer responsibility for AI use regardless of awareness

00:07:00 - Model Rule 1.6 Analysis: Confidentiality breaches through third-party AI systems

00:08:00 - Model Rule 5.3 Requirements: Supervising AI tools with the same diligence as human assistants

00:09:00 - Five-Step Compliance Framework: Technology audits and vendor agreement evaluation

00:10:00 - Firm Policies and Client Consent: Establishing protocols and securing informed consent

00:11:00 - The Verification Imperative: Lessons from the Mata v. Avianca sanctions case

00:12:00 - Billing Considerations: Navigating hourly versus value-based fee models with AI

00:13:00 - Professional Development: Why tool learning time is non-billable competence maintenance

00:14:00 - Ongoing Compliance: The necessity of quarterly reviews as platforms rapidly evolve

00:15:00 - Closing Remarks: Resources and call to action for tech-savvy innovation

Resources Mentioned in the Episode

Software & Cloud Services Mentioned in the Conversation

📖 Word of the Week: The Meaning of “Data Governance” and the Modern Law Practice - Your Essential Guide for 2025

Understanding Data Governance: A Lawyer's Blueprint for Protecting Client Information and Meeting Ethical Obligations

Lawyers need to know about “data governance” and how it affects their practice of law.

Data governance has emerged as one of the most critical responsibilities facing legal professionals today. The digital transformation of legal practice brings tremendous efficiency gains but also creates significant risks to client confidentiality and attorney ethical obligations. Every email sent, document stored, and case file managed represents a potential vulnerability that requires careful oversight.

What Data Governance Means for Lawyers

Data governance encompasses the policies, procedures, and practices that ensure information is managed consistently and reliably throughout its lifecycle. For legal professionals, this means establishing clear frameworks for how client information is collected, stored, accessed, shared, retained, and ultimately deleted. The goal is straightforward: protect sensitive client data while maintaining the accessibility needed for effective representation.

The framework defines who can take which actions with specific data assets. It establishes ownership and stewardship responsibilities. It classifies information by sensitivity and criticality. Most importantly for attorneys, it ensures compliance with ethical rules while supporting operational efficiency.

The Ethical Imperative Under ABA Model Rules

The American Bar Association Model Rules of Professional Conduct create clear mandates for lawyers regarding technology and data management. These obligations serve as an excellent source of guidance regardless of whether your state has formally adopted specific technology competence requirements. BUT REMEMBER: ALWAYS FOLLOW YOUR STATE’S ETHICS RULES FIRST!

Model Rule 1.1 addresses competence and was amended in 2012 to explicitly include technological competence. Comment 8 now requires lawyers to "keep abreast of changes in the law and its practice, including the benefits and risks associated with relevant technology". This means attorneys must understand the data systems they use for client representation. Ignorance of technology is no longer acceptable.

Model Rule 1.6 governs confidentiality of information. The rule requires lawyers to "make reasonable efforts to prevent the inadvertent or unauthorized disclosure of, or unauthorized access to, information relating to the representation of a client". Comment 18 specifically addresses the need to safeguard information against unauthorized access by third parties. This creates a direct ethical obligation to implement appropriate data security measures.

Model Rule 5.3 addresses responsibilities regarding nonlawyer assistants. This rule extends to technology vendors and service providers who handle client data. Lawyers must ensure that third-party vendors comply with the same ethical obligations that bind attorneys. This requires due diligence when selecting cloud storage providers, practice management software, and artificial intelligence tools.

The High Cost of Data Governance Failures

Lawyers need to know the multiple facets of data governance.

Law firms face average data breach costs of $5.08 million. These financial losses pale in comparison to the reputational damage and loss of client trust that follows a security incident. A single breach can expose trade secrets, privileged communications, and personally identifiable information.

The consequences extend beyond monetary damages. Ethical violations can result in disciplinary action. Inadequate data security arguably constitutes a failure to fulfill the duty of confidentiality under Rule 1.6. Some jurisdictions have issued ethics opinions requiring attorneys to notify clients of breaches resulting from lawyer negligence.

Recent guidance from state bars emphasizes that lawyers must self-report breaches involving client data exposure. The ABA's Formal Opinion 483 addresses data breach obligations directly. The opinion confirms that lawyers have duties under Rules 1.1, 1.4, 1.6, 5.1, and 5.3 related to cybersecurity.

Building Your Data Governance Framework

Implementing effective data governance requires systematic planning and execution. The process begins with understanding your current data landscape.

Step One: Conduct a Data Inventory

Identify all data assets within your practice. Catalog their sources, types, formats, and locations. Map how data flows through your firm from creation to disposal. This inventory reveals where client information resides and who has access to it.

Step Two: Classify Your Data

Not all information requires the same level of protection. Establish a classification system based on sensitivity and confidentiality. Many firms use four levels: public, internal, confidential, and restricted.

Privileged attorney-client communications require the highest protection level. Publicly filed documents may still be confidential under Rule 1.6, contrary to common misconception. Client identity itself often qualifies as protected information.
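To make the four-level scheme concrete, it can be sketched in a few lines of code. This is an illustrative sketch only: the level names follow the classification described above, but the example document types and their assigned levels are hypothetical, not guidance from any bar authority.

```python
from enum import IntEnum

class Sensitivity(IntEnum):
    """Four-level classification, ordered from least to most sensitive."""
    PUBLIC = 1
    INTERNAL = 2
    CONFIDENTIAL = 3
    RESTRICTED = 4

# Hypothetical mapping of document types to classification levels.
# Note: under Rule 1.6, even publicly filed documents may be confidential.
DEFAULT_LEVELS = {
    "marketing_brochure": Sensitivity.PUBLIC,
    "internal_memo": Sensitivity.INTERNAL,
    "court_filing": Sensitivity.CONFIDENTIAL,
    "client_identity_record": Sensitivity.CONFIDENTIAL,
    "privileged_communication": Sensitivity.RESTRICTED,
}

def classify(doc_type: str) -> Sensitivity:
    """Default unknown document types to the highest protection level."""
    return DEFAULT_LEVELS.get(doc_type, Sensitivity.RESTRICTED)
```

Defaulting unknown document types to the most restrictive level mirrors the err-on-the-side-of-caution posture the confidentiality rules demand.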

Step Three: Define Access Controls

Implement role-based access controls that limit data exposure. Apply the principle of least privilege—users should access only information necessary for their specific responsibilities. Multi-factor authentication adds essential security for sensitive systems.
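A least-privilege check can be sketched in a handful of lines. The role names and clearance levels below are hypothetical examples for illustration; a real firm would map its own roles against its own classification scheme.

```python
# Minimal least-privilege check: a user may read a document only if their
# role's clearance meets or exceeds the document's classification level
# (1 = public, 2 = internal, 3 = confidential, 4 = restricted).
# Role names and clearance assignments here are hypothetical.
ROLE_CLEARANCE = {
    "receptionist": 1,   # public materials only
    "billing_clerk": 2,  # internal and below
    "paralegal": 3,      # confidential and below
    "attorney": 4,       # restricted and below
}

def may_read(role: str, doc_level: int) -> bool:
    """Deny by default: unknown roles receive no access at all."""
    return ROLE_CLEARANCE.get(role, 0) >= doc_level
```

The deny-by-default branch is the key design choice: access failures should be visible and correctable, while silent over-grants are exactly the exposure Rule 1.6 warns against.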

Step Four: Establish Policies and Procedures

Document clear policies governing data handling. Address encryption requirements for data at rest and in transit. Set retention schedules that balance legal obligations with security concerns. Create incident response plans for potential breaches.
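Retention schedules work best when disposal dates are computed rather than remembered. The sketch below illustrates the idea; the record categories and retention periods are placeholders, not legal advice, since actual retention obligations vary by jurisdiction and matter type.

```python
from datetime import date, timedelta

# Hypothetical retention periods, in years, by record category.
RETENTION_YEARS = {
    "closed_matter_file": 7,
    "billing_record": 7,
    "conflict_check_record": 10,
}

def disposal_date(category: str, closed_on: date) -> date:
    """Compute the earliest permissible disposal date for a record.

    Approximates a year as 365 days for simplicity; a production
    schedule would use calendar-aware date arithmetic.
    """
    years = RETENTION_YEARS[category]
    return closed_on + timedelta(days=365 * years)
```

Encoding the schedule this way also makes it auditable: the policy lives in one reviewable place instead of scattered calendar reminders.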

Step Five: Train Your Team

The human element represents the greatest security vulnerability. Sixty-eight percent of data breaches involve human error. Regular training ensures staff understand their responsibilities and can recognize threats. Training should cover phishing awareness, password security, and proper data handling procedures.

Step Six: Monitor and Audit

Continuous oversight maintains governance effectiveness. Regular audits identify vulnerabilities before they become breaches. Review access logs for unusual activity. Update policies as technology and regulations evolve.

Special Considerations for Artificial Intelligence

The rise of generative AI tools creates new data governance challenges. ABA Formal Opinion 512 specifically addresses AI use in legal practice. Lawyers must understand whether AI systems are "self-learning" and use client data for training.

Many consumer AI platforms retain and learn from user inputs. Uploading confidential client information to ChatGPT or similar tools may constitute an ethical violation. Even AI tools marketed to law firms require careful vetting.

Before using any AI system with client data, obtain informed consent. Boilerplate language in engagement letters is insufficient. Clients need clear explanations of how their information will be used and what risks exist.

Vendor Management and Third-Party Risk

Lawyers cannot delegate their ethical obligations to technology vendors. Rule 5.3 requires reasonable efforts to ensure nonlawyer assistants comply with professional obligations. This extends to cloud storage providers, case management platforms, and cybersecurity consultants.

Before engaging any vendor handling client data, conduct thorough due diligence. Verify the vendor maintains appropriate security certifications like SOC 2, ISO 27001, or HIPAA compliance. Review vendor contracts to ensure adequate data protection provisions. Understand where data will be stored and who will have access.

The Path Forward

Lawyers need to advocate for data governance on behalf of their clients!

Data governance is not optional for modern legal practice. It represents a fundamental ethical obligation under multiple Model Rules. Client trust depends on proper data stewardship.

Begin with a realistic assessment of your current practices. Identify gaps between your current state and ethical requirements. Develop policies that address your specific risks and practice areas. Implement controls systematically rather than attempting wholesale transformation overnight.

Remember that data governance is an ongoing process requiring continuous attention. Technology evolves. Threats change. Regulations expand. Your governance framework must adapt accordingly.

The investment in proper data governance protects your clients, your practice, and your professional reputation. More importantly, it fulfills your fundamental ethical duty to safeguard client confidences in an increasingly digital world.

MTC: 🔒 Your AI Conversations Aren't as Private as You Think: What the OpenAI Court Ruling Means for Legal Professionals

A watershed moment in digital privacy has arrived, and it carries profound implications for lawyers and their clients.

The recent court ruling in In re: OpenAI, Inc., Copyright Infringement Litigation has exposed a critical vulnerability in the relationship between artificial intelligence tools and user privacy rights. On May 13, 2025, U.S. Magistrate Judge Ona T. Wang issued an order requiring OpenAI to "preserve and segregate all output log data that would otherwise be deleted on a going forward basis". This unprecedented directive affected more than 400 million ChatGPT users worldwide and fundamentally challenged assumptions about data privacy in the AI era.

While the court modified its order on October 9, 2025, terminating the blanket preservation requirement as of September 26, 2025, the damage to user trust and the precedent for future litigation remain significant. More importantly, the ruling illuminates a stark reality for legal professionals: the "delete" button offers an illusion of control rather than genuine data protection.

The Court Order That Changed Everything ⚖️

The preservation order emerged from a copyright infringement lawsuit filed by The New York Times against OpenAI in December 2023. The Times alleged that OpenAI unlawfully used millions of its articles to train ChatGPT without permission or compensation. During discovery, concerns arose that OpenAI had been deleting user conversations that could potentially demonstrate copyright violations.

Judge Wang's response was sweeping. The court ordered OpenAI to retain all ChatGPT output logs, including conversations users believed they had permanently deleted, temporary chats designed to auto-delete after sessions, and API-generated outputs regardless of user privacy settings. The order applied retroactively, meaning conversations deleted months or even years earlier remained archived in OpenAI's systems.

OpenAI immediately appealed, arguing the order was overly broad and compromised user privacy. The company contended it faced conflicting obligations between the court's preservation mandate and "numerous privacy laws and regulations throughout the country and the world". Despite these objections, Judge Wang denied OpenAI's motion, prioritizing the preservation of potential evidence over privacy concerns.

The October 9, 2025 stipulation and order brought partial relief. OpenAI's ongoing obligation to preserve all new output log data terminated as of September 26, 2025. However, all data preserved before that cutoff remains accessible to plaintiffs (except for users in the European Economic Area, Switzerland, and the United Kingdom). Additionally, OpenAI must continue preserving output logs from specific domains identified by the New York Times and may be required to add additional domains as the litigation progresses.

Privacy Rights in the Age of AI: An Eroding Foundation 🛡️

This case demonstrates that privacy policies are not self-enforcing legal protections. Users who relied on OpenAI's representations about data deletion discovered those promises could be overridden by court order without their knowledge or consent. The "temporary chat" feature, marketed as providing ephemeral conversations, proved anything but temporary when litigation intervened.

The implications extend far beyond this single case. The ruling establishes that AI-generated content constitutes discoverable evidence subject to preservation orders. Courts now view user conversations with AI not as private exchanges but as potential legal records that can be compelled into evidence.

For legal professionals, this reality is particularly troubling. Lawyers regularly handle sensitive client information that must remain confidential under both ethical obligations and the attorney-client privilege. The court order revealed that even explicitly deleted conversations may be retained indefinitely when litigation demands it.

The Attorney-Client Privilege Crisis 👥

Attorney-client privilege protects confidential communications between lawyers and clients made for the purpose of obtaining or providing legal advice. This protection is fundamental to the legal system. However, the privilege can be waived through voluntary disclosure to third parties outside the attorney-client relationship.

When lawyers input confidential client information into public AI platforms like ChatGPT, they potentially create a third-party disclosure that destroys privilege. Many generative AI systems learn from user inputs, incorporating that information into their training data. This means privileged communications could theoretically appear in responses to other users' queries.

The OpenAI preservation order compounds these concerns. It demonstrates that AI providers cannot guarantee data will be deleted upon request, even when their policies promise such deletion. Lawyers who used ChatGPT's temporary chat feature or deleted sensitive conversations believing those actions provided privacy protection now discover their confidential client communications may be preserved indefinitely as litigation evidence.

The risk is not theoretical. In the now-famous Mata v. Avianca, Inc. case, a lawyer used a free version of ChatGPT to draft a legal brief containing fabricated citations. While the lawyer faced sanctions for submitting false information to the court, legal ethics experts noted the confidentiality implications of the increasingly specific prompts the attorney used, which may have revealed client confidential information.

ABA Model Rules and AI: What Lawyers Must Know 📋

The American Bar Association's Model Rules of Professional Conduct govern lawyer behavior, and while these rules predate generative AI, they apply with full force to its use. On July 29, 2024, the ABA Standing Committee on Ethics and Professional Responsibility issued Formal Opinion 512, providing the first comprehensive guidance on lawyers' use of generative AI.

Model Rule 1.1: Competence requires lawyers to provide competent representation, including maintaining "legal knowledge, skill, thoroughness and preparation reasonably necessary for representation". Comment 8 to the rule specifically states lawyers must understand "the benefits and risks associated with relevant technology". Opinion 512 clarifies that lawyers need not become AI experts, but must have a "reasonable understanding of the capabilities and limitations of the specific GenAI technology" they use. This is not a one-time obligation. Given AI's rapid evolution, lawyers must continuously update their understanding.

Model Rule 1.6: Confidentiality creates perhaps the most significant ethical challenge for AI use. The rule prohibits lawyers from revealing "information relating to the representation of a client" and requires them to "make reasonable efforts to prevent the inadvertent or unauthorized disclosure of, or unauthorized access to, information relating to the representation". Self-learning AI tools that train on user inputs create substantial risk of improper disclosure. Information entered into public AI systems may be stored, processed by third-party vendors, and potentially accessed by company employees or incorporated into model training. Opinion 512 recommends lawyers obtain informed client consent before inputting any information related to representation into AI systems. Lawyers must also thoroughly review the terms of use, privacy policies, and contractual agreements of any AI tool they employ.

Model Rule 1.4: Communication obligates lawyers to keep clients reasonably informed about their representation. When using AI tools, lawyers should disclose this fact to clients, particularly when the AI processes client information or could impact the representation. Clients have a right to understand how their matters are being handled and what technologies may access their confidential information.

Model Rule 3.3: Candor Toward the Tribunal requires lawyers to be truthful in their representations to courts. AI systems frequently produce "hallucinations"—plausible-sounding but entirely fabricated information, including fake case citations. Lawyers remain fully responsible for verifying all AI outputs before submitting them to courts or relying on them for legal advice. The Mata v. Avianca case serves as a cautionary tale of the consequences when lawyers fail to fulfill this obligation.

Model Rules 5.1 and 5.3: Supervisory Responsibilities make lawyers responsible for the conduct of other lawyers and nonlawyer assistants working under their supervision. When staff members use AI tools, supervising lawyers must ensure appropriate policies, training, and oversight exist to prevent ethical violations.

Model Rule 1.5: Fees requires lawyers to charge reasonable fees. Opinion 512 addresses whether lawyers can bill clients for time "saved" through AI efficiency gains. The guidance suggests that when using hourly billing, efficiencies gained through AI should benefit clients. However, lawyers may pass through reasonable direct costs of AI services (such as subscription fees) when properly disclosed and agreed upon in advance.

State-by-State Variations: A Patchwork of Protection 🗺️

While the ABA Model Rules provide a national framework, individual states adopt and interpret ethics rules differently. Legal professionals must understand their specific state's requirements, which can vary significantly.

Lawyers must protect clients’ PII from AI privacy failures!

Florida has taken a proactive stance. In January 2025, The Florida Bar Board of Governors unanimously approved Advisory Opinion 24-1, which specifically addresses generative AI use. The opinion recommends lawyers obtain "affected client's informed consent prior to utilizing a third-party generative AI program if the utilization would involve the disclosure of any confidential information". Florida's guidance emphasizes that lawyers remain fully responsible for AI outputs and cannot treat AI as a substitute for legal judgment.

Texas issued Opinion 705 from its State Bar Professional Ethics Committee in February 2025. The opinion outlines four key obligations: lawyers must reasonably understand AI technology before using it, exercise extreme caution when inputting confidential information into AI tools that might store or expose client data, verify the accuracy of all AI outputs, and avoid charging clients for time saved by AI efficiency gains. Texas also emphasizes that lawyers should consider informing clients when AI will be used in their matters.

New York has developed one of the most comprehensive frameworks through its State Bar Association Task Force on Artificial Intelligence. The April 2024 report provides a thorough analysis across the full spectrum of ethical considerations, including competence, confidentiality, client communication, billing practices, and access to justice implications. New York's guidance stands out for addressing both immediate practical considerations and longer-term questions about AI's transformation of the legal profession.

Alaska issued Ethics Opinion 2025-1 surveying AI issues with particular focus on competence, confidentiality, and billing. The opinion notes that when using non-closed AI systems (such as general consumer products), lawyers should anonymize prompts to avoid revealing client confidential information. Alaska's guidance explicitly cites to its cloud-computing predecessor opinion, treating AI data storage similarly to law firm files on third-party remote servers.

California, Massachusetts, New Jersey, and Oregon have issued guidance through their state attorneys general on how existing state privacy laws apply to AI. California's advisories emphasize that AI use must comply with the California Consumer Privacy Act (CCPA), requiring transparency, respecting individual data rights, and limiting data processing to what is "reasonably necessary and proportionate". Massachusetts focuses on consumer protection, anti-discrimination, and data security requirements. Oregon highlights that developers using personal data to train AI must clearly disclose this use and obtain explicit consent when dealing with sensitive data.

These state-specific approaches create a complex compliance landscape. A lawyer practicing in multiple jurisdictions must understand and comply with each state's requirements. Moreover, state privacy laws like the CCPA and similar statutes in other states impose additional obligations beyond ethics rules.

Enterprise vs. Consumer AI: Understanding the Distinction 💼

Not all AI tools pose equal privacy risks. The OpenAI preservation order highlighted critical differences between consumer-facing products and enterprise solutions.

Consumer Plans (Free, Plus, Pro, and Team) were fully subject to the preservation order. These accounts store user conversations on OpenAI's servers with limited privacy protections. While users can delete conversations, the court order demonstrated that those deletions are not permanent. OpenAI retains the technical capability to preserve and access this data when required by legal process.

Enterprise Accounts offer substantially stronger privacy protections. ChatGPT Enterprise and Edu plans were excluded from the preservation order's broadest requirements. These accounts typically include contractual protections such as Data Processing Agreements (DPAs), commitments against using customer data for model training, and stronger data segregation. However, even enterprise accounts must preserve data when covered by specific legal orders.

Zero Data Retention Agreements provide the highest level of protection. Users who have negotiated such agreements with OpenAI are excluded from data preservation requirements. These arrangements ensure that user data is not retained beyond the immediate processing necessary to generate responses.

For legal professionals, the lesson is clear: consumer-grade AI tools are inappropriate for handling confidential client information. Lawyers who use AI must ensure they employ enterprise-level solutions with proper contractual protections, or better yet, closed systems where client data never leaves the firm's control.

Practical Steps for Legal Professionals: Protecting Privilege and Privacy 🛠️

Given these risks, what should lawyers do? Abandoning AI entirely is neither realistic nor necessary. Instead, legal professionals must adopt a risk-management approach.

Conduct thorough due diligence before adopting any AI tool. Review terms of service, privacy policies, and data processing agreements in detail. Understand exactly what data the AI collects, how long it's retained, whether it's used for model training, who can access it, and what security measures protect it. If these answers aren't clear from public documentation, contact the vendor directly for written clarification.

Implement written AI policies for your firm or legal department. These policies should specify which AI tools are approved for use, what types of information can (and cannot) be input into AI systems, required safeguards such as data anonymization, client consent requirements, verification procedures for AI outputs, and training requirements for all staff. Document these policies and ensure all lawyers and staff understand and follow them.

Default to data minimization. Before inputting any information into an AI system, ask whether it's necessary. Can you accomplish the task without including client-identifying information? Many AI applications work effectively with anonymized or hypothetical scenarios that don't reveal actual client matters. When in doubt, err on the side of caution.
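A crude first pass at anonymization can even be automated, though no script substitutes for human review. The patterns below are illustrative only: they assume U.S.-style phone and Social Security number formats and will miss many identifiers (names, addresses, matter numbers), so they should be treated as a floor, not a safeguard.

```python
import re

# Illustrative redaction patterns; a real workflow needs human review
# and far broader coverage than these three identifier types.
PATTERNS = {
    "[EMAIL]": re.compile(r"[\w.+-]+@[\w-]+\.[\w.]+"),
    "[SSN]": re.compile(r"\b\d{3}-\d{2}-\d{4}\b"),
    "[PHONE]": re.compile(r"\b\d{3}[-.]\d{3}[-.]\d{4}\b"),
}

def redact(prompt: str) -> str:
    """Replace obvious identifiers before a prompt leaves the firm."""
    for placeholder, pattern in PATTERNS.items():
        prompt = pattern.sub(placeholder, prompt)
    return prompt

print(redact("Client Jane (jane@example.com, 555-123-4567) needs advice"))
# → Client Jane ([EMAIL], [PHONE]) needs advice
```

Even with such tooling in place, the safer habit remains the one described above: restate the matter as an anonymized hypothetical rather than trusting pattern matching to catch every identifier.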

Obtain informed client consent when using AI for client matters, particularly when inputting any information related to the representation. This consent should be specific about what AI tools will be used, what information may be shared with those tools, what safeguards are in place, and what risks exist despite those safeguards. General consent buried in engagement agreements is likely insufficient.

Use secure, purpose-built legal AI tools rather than consumer applications. Legal-specific AI products are designed with confidentiality requirements in mind and typically offer stronger privacy protections. Even better, consider closed-system AI that operates entirely within your firm's infrastructure without sending data to external servers.

Never assume deletion means erasure. The OpenAI case proves that deleted data may not be truly gone. Treat any information entered into an AI system as potentially permanent, regardless of what the system's privacy settings claim.

Maintain privileged communication protocols. Remember that AI is not your attorney. Communications with AI systems are not protected by attorney-client privilege. Never use AI as a substitute for consulting with qualified colleagues or outside counsel on genuinely privileged matters.

Stay informed about evolving guidance. AI technology and the regulatory landscape are both changing rapidly. Regularly review updates from your state bar association, the ABA, and other professional organizations. Consider attending continuing legal education programs on AI ethics and technology competence.

Final thoughts: The Future of Privacy Rights in an AI World 🔮

The OpenAI preservation order represents a pivotal moment in the collision between AI innovation and privacy rights. It exposes uncomfortable truths about the nature of digital privacy in 2025: privacy policies are subject to override by legal process, deletion features provide psychological comfort rather than technical and legal certainty, and third-party service providers cannot fully protect user data from discovery obligations.

For legal professionals, these realities demand a fundamental reassessment of how AI tools fit into practice. The convenience and efficiency AI provides must be balanced against the sacred duty to protect client confidences and maintain the attorney-client privilege. This is not an abstract concern or distant possibility. It is happening now, in real courtrooms, with real consequences for lawyers and clients.

State bars and regulators are responding, but the guidance remains fragmented and evolving. Federal privacy legislation addressing AI has yet to materialize, leaving a patchwork of state laws with varying requirements. In this environment, legal professionals cannot wait for perfect clarity before taking action.

The responsibility falls on each lawyer to understand the tools they use, the risks those tools create, and the steps necessary to fulfill ethical obligations in this new technological landscape. Ignorance is not a defense. "I didn't know the AI was storing that information" will not excuse a confidentiality breach or privilege waiver.

As AI becomes increasingly embedded in legal practice, the profession must evolve its approach to privacy and confidentiality. The traditional frameworks remain sound—the attorney-client privilege, the duty of confidentiality, the requirement of competence—but their application requires new vigilance. Lawyers must become technology stewards as well as legal advisors, understanding not just what the law says, but how the tools they use might undermine their ability to protect it.

The OpenAI case will not be the last time courts grapple with AI data privacy. As generative AI proliferates and litigation continues, more preservation orders, discovery disputes, and privilege challenges are inevitable. Legal professionals who fail to address these issues proactively may find themselves explaining to clients, judges, or disciplinary authorities why they treated confidential information so carelessly.

Privacy in the AI age demands more than passive reliance on vendor promises. It requires active, informed engagement with the technology we use and honest assessment of the risks we create. For lawyers, whose professional identity rests on the foundation of client trust and confidentiality, nothing less will suffice. The court ruling has made one thing abundantly clear: when it comes to AI and privacy, what you don't know can definitely hurt you—and your clients. ⚠️

MTC: The End of Dial-Up Internet: A Digital Divide Crisis for Legal Practice 📡⚖️

Dial-up shutdown deepens rural legal digital divide.

The legal profession faces an unprecedented access-to-justice challenge: AOL officially terminated its dial-up internet service on September 30, 2025, after 34 years of operation. The closure affects the estimated 163,401 American households that, as of 2023, depended solely on dial-up connections, creating barriers to legal services in an increasingly digital world. While other dial-up providers like NetZero, Juno, and DSLExtreme continue operating, they may not cover all geographic areas previously served by AOL, and their long-term viability is limited.

While many view dial-up as obsolete, its elimination exposes critical technology gaps that disproportionately impact vulnerable populations requiring legal assistance. Rural residents, low-income individuals, and elderly clients who relied on this affordable connectivity option now face digital exclusion from essential legal services and court systems. The remaining dial-up options provide minimal relief as these smaller providers lack AOL's extensive infrastructure coverage.

Split Courtroom!

Legal professionals must recognize that technology barriers create access to justice issues. When clients cannot afford high-speed internet or live in areas without broadband infrastructure, they lose the ability to participate in virtual court proceedings, access online legal resources, or communicate effectively with their attorneys. This digital divide effectively creates a two-tiered justice system where technological capacity determines legal access.

The legal community faces an implicit ethical duty to address these technology barriers. While no specific ABA Model Rule mandates accommodating clients' internet limitations, the professional responsibility to ensure access to justice flows from fundamental ethical obligations.

This implicit duty derives from several ABA Model Rules that create relevant obligations. Rule 1.1 (Competence) requires attorneys to understand "the benefits and risks associated with relevant technology," including how technology barriers affect client representation. Rule 1.4 (Communication) mandates effective client communication, which encompasses understanding technology limitations that prevent meaningful attorney-client interaction. Rule 1.6 (Confidentiality) requires reasonable efforts to protect client information, necessitating awareness of technology security implications. Additionally, 41 jurisdictions have adopted technology competence requirements that obligate lawyers to stay current with technological developments affecting legal practice.

Lawyers are leaders when it comes to calls for action to help narrow the access-to-justice divide!

The legal community must advocate for affordable internet solutions and develop technology-inclusive practices to fulfill these professional responsibilities and ensure equal access to justice for all clients.

MTC

🚨 Breaking News! Federal Courts Implement Enhanced Security Measures for Sealed Documents Following Sophisticated Nation-State Cyberattacks! What Lawyers Must Know Now!!!

Federal courts have launched sweeping new protocols restricting electronic access to sealed documents after a widespread cyberattack linked to Russian actors exposed critical vulnerabilities in the federal judiciary’s decades-old digital infrastructure. As previously reported here, the breach compromised highly confidential information—such as sealed indictments and informant data—across numerous districts, prompting courts to eliminate electronic viewing of sealed filings and require paper-only procedures for sensitive court documents.

What do lawyers need to do as federal courts respond to cyberattacks?

Why is this happening?
Nation-state cyber threats and outdated systems left federal courts open to attack, as repeatedly warned by The Tech-Savvy Lawyer.Page. The blog has consistently flagged the risks associated with aging technology, weak authentication, and the need for law firms to adopt advanced cybersecurity practices. The recent breach brings these warnings to life, forcing immediate changes for all legal professionals.

What lawyers must do:
Attorneys must now file sealed documents according to new court protocols—usually paper filings—and cannot access them electronically. This transformation demands lawyers take proactive steps to secure confidential information at all times, in line with ABA Model Rule 1.6. Practitioners should review The Tech-Savvy Lawyer.Page for practical tips on ethical compliance and digital preparedness, such as those featured in its “go bag” guide for legal professionals.

Most importantly, consult your local federal court’s website or clerk for the latest procedures, as requirements may vary by district. Safeguarding client confidentiality remains central to legal ethics—stay vigilant, stay informed, and stay tech-savvy.

🎙️Ep. 118: Essential Legal Tech Competency - Colin S. Levy on Building Foundational Technology Skills for Modern Lawyers!

My next guest is Colin Levy, General Counsel at Malbek. Colin is a leading voice in legal innovation. During our interview, he shared practical insights on building foundational legal tech skills for modern lawyers.

During the conversation, Colin outlines the top three steps every lawyer should take to develop legal tech competency, regardless of their technical background. He emphasizes the ethical responsibilities lawyers face when using AI, particularly the risks of unchecked reliance on generative tools and the need to verify outputs for potential inaccuracies. Colin also shares some great tips on how legal professionals can better use Microsoft Word to improve efficiency and save time (and money💰). On adopting new technology, he underscores the importance of defining problems, clarifying desired outcomes, and fully leveraging existing tools before strategically selecting new solutions.

Join Colin and me as we discuss the following three questions and more!

  1. Based on his extensive experience interviewing legal tech leaders and his role as general counsel at Malbek, Colin provides the top three foundational steps every lawyer should take today to build legal tech competency, regardless of their current technical skill level.

  2. Colin shares three specific ways lawyers can immediately improve their document drafting efficiency using existing technology tools, and how this foundational competence connects to more advanced legal tech adoption.

  3. Colin has conducted hundreds of interviews with legal tech leaders and now serves as general counsel for a CLM company, giving him both the vendor and practitioner perspectives. He shares his top three strategic considerations lawyers should evaluate when selecting and implementing new technology solutions to ensure they actually improve client service delivery and practice efficiency rather than just adding complexity.

In our conversation, we covered the following:

[01:28] Colin's Tech Setup

[11:14] The Three Core Steps to Legal Tech Competency

[13:17] AI Tools and Ethical Considerations

[17:29] Improving Document Drafting Efficiency

[23:15] Strategic Considerations for Technology Selection

Resources:

Connect with Colin:

Mentioned in the episode:

Hardware mentioned in the conversation:

Software & Cloud Services mentioned in the conversation:

Word of the Week: Synthetic Data 🧑‍💻⚖️

What Is Synthetic Data?

Synthetic data is information that is generated by algorithms to mimic the statistical properties of real-world data, but it contains no actual client or case details. For lawyers, this means you can test software, train AI models, or simulate legal scenarios without risking confidential information or breaching privacy regulations. Synthetic data is not “fake” in the sense of being random or useless—it is engineered to be realistic and valuable for analysis.

How Synthetic Data Applies to Lawyers

  • Privacy Protection: Synthetic data allows law firms to comply with strict privacy laws like GDPR and CCPA by removing any real personal identifiers from the datasets used in legal tech projects.

  • AI Training: Legal AI tools need large, high-quality datasets to learn and improve. Synthetic data fills gaps when real data is scarce, sensitive, or restricted by regulation.

  • Software Testing: When developing or testing new legal software, synthetic data lets you simulate real-world scenarios without exposing client secrets or sensitive case details.

  • Cost and Efficiency: It is often faster and less expensive to generate synthetic data than to collect, clean, and anonymize real legal data.
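To make the concept concrete, here is a minimal illustrative sketch (not from any vendor toolkit; the case types and summary statistics are hypothetical) of how synthetic records can be generated from aggregate statistics alone, so the output mimics the shape of a real dataset while containing no actual client details:

```python
import random
import statistics

random.seed(42)  # reproducible for demonstration

# Hypothetical aggregate statistics derived from a real (private) dataset.
# Only these aggregates are used -- never the underlying client records.
CASE_TYPE_WEIGHTS = {"contract": 0.45, "tort": 0.30, "employment": 0.25}
DURATION_MEAN_DAYS, DURATION_STDEV_DAYS = 180, 60

def make_synthetic_record(case_id: int) -> dict:
    """Generate one synthetic case record matching the aggregate statistics."""
    case_type = random.choices(
        list(CASE_TYPE_WEIGHTS), weights=list(CASE_TYPE_WEIGHTS.values())
    )[0]
    duration = max(1, round(random.gauss(DURATION_MEAN_DAYS, DURATION_STDEV_DAYS)))
    return {"case_id": f"SYN-{case_id:05d}", "type": case_type, "duration_days": duration}

records = [make_synthetic_record(i) for i in range(1000)]

# The synthetic set reproduces the statistical shape, not any real case.
mean_duration = statistics.mean(r["duration_days"] for r in records)
print(f"{len(records)} synthetic records, mean duration ~ {mean_duration:.0f} days")
```

The key design point for lawyers reviewing a vendor's claims: a legitimate synthetic pipeline consumes only aggregates or anonymized distributions, never raw client files.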

Lawyers, know your data source; your license could depend on it! 📢

Synthetic Data vs. Hallucinations

  • Synthetic Data: Created on purpose, following strict rules to reflect real-world patterns. Used for training, testing, and developing legal tech tools. It is transparent and traceable; you know how and why it was generated.

  • AI Hallucinations: Occur when an AI system generates information that appears plausible but is factually incorrect or entirely fabricated. In law, this can mean made-up case citations, statutes, or legal arguments. Hallucinations are unpredictable and can lead to serious professional risks if not caught.

Key Difference: Synthetic data is intentionally crafted for safe, ethical, and lawful use. Hallucinations are unintentional errors that can mislead and cause harm.

Why Lawyers Should Care

  • Compliance: Using synthetic data helps you stay on the right side of privacy and data protection laws.

  • Risk Management: It reduces the risk of data breaches and regulatory penalties.

  • Innovation: Enables law firms to innovate and improve processes without risking client trust or confidentiality.

  • Professional Responsibility: Helps lawyers avoid the dangers of relying on unverified AI outputs, which can lead to sanctions or reputational damage.

Lawyers, know your data source; your license could depend on it!

🚨 BOLO: Critical Chrome Zero-Day Security Alert for Legal Professionals 🚨

🚨 URGENT: Chrome Zero-Day CVE-2025-6558 Impacts Law Firms 🚨

Critical browser flaw affects Windows & Apple devices. Attackers escape Chrome's sandbox via malicious web pages. ACTIVELY EXPLOITED.

Lawyers, it's generally a good idea to keep your software up to date in order to reduce security risks!

🔍 WHAT THIS MEANS IN PLAIN TERMS:
Your browser normally acts like a protective barrier between dangerous websites and your computer's files. This vulnerability is like a secret door that bypasses that protection. When you visit a compromised website, even legitimate sites that have been hacked, criminals can potentially access your client files, emails, and sensitive data without you knowing. The attack happens silently in the background while you browse normally.

⚠️ ACTION REQUIRED:

  • Update Chrome to v138+ immediately

  • Update Safari on Apple devices

  • Review cybersecurity protocols
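As a quick sanity check on the first item, a short script along these lines (the version strings below are illustrative; check your own via Chrome's "About" page) can compare an installed Chrome-style version against the patched v138 threshold:

```python
def is_patched(version: str, minimum_major: int = 138) -> bool:
    """Return True if a Chrome-style version string meets the patched major release."""
    major = int(version.split(".")[0])  # Chrome versions lead with the major number
    return major >= minimum_major

# Illustrative version strings -- substitute the one shown in chrome://settings/help
for installed in ("137.0.7151.55", "138.0.7204.49"):
    status = "patched" if is_patched(installed) else "UPDATE NOW"
    print(f"Chrome {installed}: {status}")
```

This is a sketch of the comparison logic only; in practice, simply letting Chrome auto-update and relaunching the browser is the reliable fix.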

🚨Legal Risks:
✓ Client confidentiality breaches
✓ ABA ethical violations
✓ Malpractice liability
✓ Trust account exposure

Don't wait - update NOW!

🎙️ Ep. #116: Free Versus Paid Legal AI: Conversation with DC Court of Appeals Head Law Librarian Laura Moorer

Laura Moorer, the Law Librarian for the DC Court of Appeals, brings over two decades of experience in legal research, including nearly 14 years with the Public Defender Service for DC. In this conversation, Laura shares her top three tips for crafting precise prompts when using generative AI, emphasizing clarity, specificity, and structure. She also offers insights on how traditional legal research methods—like those used with LexisNexis and Westlaw—can enhance AI-driven inquiries. Finally, Laura offers practical strategies for utilizing generative AI to help lawyers identify and locate physical legal resources, thereby bridging the gap between digital tools and tangible materials.

Join Laura and me as we discuss the following three questions and more!

  • What are your top three tips when it comes to precise prompt engineering for your generative AI inquiries?

  • What are your top three tips or tricks when it comes to using old-school prompts like those from the days of LexisNexis and Westlaw in your generative AI inquiries?

  • What are your top three tips or tricks using generative AI to help lawyers pinpoint actual physical resources?

In our conversation, we cover the following:

[00:40] Laura's Current Tech Setup

[03:27] Top 3 Tips for Crafting Precise Prompts in Generative AI

[13:44] Bringing Old-School Legal Research Tactics to Generative AI Prompting

[20:42] Using Generative AI to Help Lawyers Locate Physical Legal Resources

[24:38] Contact Information

Resources:

Connect with Laura:

Mentioned in the episode:

Software & Cloud Services mentioned in the conversation: