🧪🎧 TSL Labs Bonus Podcast: Open vs. Closed AI — The Hidden Liability Trap in Your Firm ⚖️🤖

Welcome to TSL Labs Podcast Experiment. 🧪🎧 In this special "Deep Dive" bonus episode, we strip away the hype surrounding Generative AI to expose a critical operational risk hiding in plain sight: the dangerous confusion between "Open" and "Closed" AI systems.

Featuring an engaging discussion between our Google Notebook AI hosts, this episode unpacks the "Swiss Army Knife vs. Scalpel" analogy that every managing partner needs to understand. We explore why the "Green Light" tools you pay for are fundamentally different from the "Red Light" public models your staff might be using—and why treating them the same could trigger an immediate breach of ABA Model Rule 5.3. From the "hidden crisis" of AI embedded in Microsoft 365 to the non-negotiable duty to supervise, this is the essential briefing for protecting client confidentiality in the age of algorithms.

In our conversation, we cover the following:

  • [00:00] – Introduction: The hidden danger of AI in law firms.

  • [01:00] – The "AI Gap": Why staff confuse efficiency with confidentiality.

  • [02:00] – The Green Light Zone: Defining secure, "Closed" AI systems (The Scalpel).

  • [03:45] – The Red Light Zone: Understanding "Open" Public LLMs (The Swiss Army Knife).

  • [04:45] – "Feeding the Beast": How public queries actively train the model for everyone else.

  • [05:45] – The Duty to Supervise: ABA Model Rules 5.3 and 1.1[8] implications.

  • [07:00] – The Hidden Crisis: AI embedded in ubiquitous tools (Microsoft 365, Adobe, Zoom).

  • [09:00] – The Training Gap: Why digital natives assume all prompt boxes are safe.

  • [10:00] – Actionable Solutions: Auditing tools and the "Elevator vs. Private Room" analogy.

  • [12:00] – Hallucinations: Vendor liability vs. Professional negligence.

  • [14:00] – Conclusion: The final provocative thought on accidental breaches.

MTC: The Hidden Danger in Your Firm: Why We Must Teach the Difference Between “Open” and “Closed” AI!

Does your staff understand the difference between “free” and “paid” AI? Your license could depend on it!

I sit on an advisory board for a school that trains paralegals. We meet to discuss curriculum. We talk about the future of legal support. In a recent meeting, a presentation by a private legal research company caught my attention. It stopped me cold. The topic was Artificial Intelligence. The focus was on use and efficiency. But something critical was missing.

The lesson did not distinguish between public-facing and private tools. It treated AI as a monolith. This is a dangerous oversimplification. It is a liability waiting to happen.

We are in a new era of legal technology. It is exciting. It is also perilous. The peril comes from confusion. Specifically, the confusion between paid, closed-system legal research tools and public-facing generative AI.

Your paralegals, law clerks, and staff use these tools. They use them to draft emails. They use them to summarize depositions. Do they know where that data goes? Do you?

The Two Worlds of AI

There are two distinct worlds of AI in our profession.

First, there is the world of "Closed" AI. These are the tools we pay for, such as Lexis+/Protege, Westlaw Precision, Co-Counsel, Harvey, and vLex Vincent. These platforms are built for lawyers. They are walled gardens. You pay a premium for them. (Always check the terms and conditions of your providers.) That premium buys you more than just access. It buys you privacy. It buys you security. When you upload a case file to Westlaw, it stays there. The AI analyzes it. It does not learn from it for the public. It does not share your client’s secrets with the world. The data remains yours. The confidentiality is baked in.

Then, there is the world of "Open" or "Public" AI. This is ChatGPT. This is Perplexity. This is Claude. These tools are miraculous. But they are also voracious learners.

When you type a query into the free version of ChatGPT, you are not just asking a question. You are training the model. You are feeding the beast. If a paralegal types, "Draft a motion to dismiss for John Doe, who is accused of embezzlement at [Specific Company]," that information leaves your firm. It enters a public dataset. It is no longer confidential.

This is the distinction that was missing from the lesson plan. It is the distinction that could cost you your license.

The Duty to Supervise

Do you and your staff know when you can and can’t use free AI in your legal work?

You might be thinking, "I don't use ChatGPT for client work, so I'm safe." You are wrong.

You are not the only one doing the work. Your staff is doing the work. Your paralegals are doing the work.

Under the ABA Model Rules of Professional Conduct, you are responsible for them. Look at Rule 5.3. It covers "Responsibilities Regarding Nonlawyer Assistance." It is unambiguous. You must make reasonable efforts to ensure your staff's conduct is compatible with your professional obligations.

If your paralegal breaches confidentiality using AI, it is your breach. If your associate hallucinates a case citation using a public LLM, it is your hallucination.

This connects directly to Rule 1.1, Comment 8. This represents the duty of technology competence. You cannot supervise what you do not understand. You must understand the risks associated with relevant technology. Today, that means understanding how Large Language Models (LLMs) handle data.

The "Hidden AI" Problem

I have discussed this on The Tech-Savvy Lawyer.Page Podcast. We call it the "Hidden AI" crisis. AI is creeping into tools we use every day. It is in Adobe. It is in Zoom. It is in Microsoft 365.

Public-facing AI is useful. I use it. I love it for marketing. I use it for brainstorming generic topics. I use it to clean up non-confidential text. But I never trust it with a client's name. I never trust it with a very specific fact pattern.

A paid legal research tool is different. It is a scalpel. It is precise. It is sterile. A public chatbot is a Swiss Army knife found on the sidewalk. It might work. But you don't know where it's been.

The Training Gap

The advisory board meeting revealed a gap. Schools are teaching students how to use AI. They are teaching prompts. They are teaching speed. They are not emphasizing the where.

The "where" matters. Where does the data go?

We must close this gap in our own firms. You cannot assume your staff knows the difference. To a digital native, a text box is a text box. They see a prompt window in Westlaw. They see a prompt window in ChatGPT. They look the same. They act the same.

They are not the same.

One protects you. The other exposes you.

A Practical Solution

I have written about this in my blog posts regarding AI ethics. The solution is not to ban AI. That is impossible. It is also foolish. AI is a competitive advantage.

* Always check the terms of use in your agreements with private platforms to determine whether your clients’ confidential data and PII are protected.

The solution is policies and training.

  1. Audit Your Tools. Know what you have. Do you have an enterprise license for ChatGPT? If so, your data might be private. If not, assume it is public.

  2. Train on the "Why." Don't just say "No." Explain the mechanism. Explain that public AI learns from inputs. Use the analogy of a confidential conversation in a crowded elevator versus a private conference room.

  3. Define "Open" vs. "Closed." Create a visual guide. List your "Green Light" tools (Westlaw, Lexis, etc.). List your "Red Light" tools for client data (Free ChatGPT, personal Gmail, etc.).

  4. Supervise Output. Review the work. AI hallucinates. Even paid tools can make mistakes. Public tools make up cases entirely. We have all seen the headlines. Don't be the next headline.

The Expert Advantage

The line between “free” and “paid” AI could be a matter of keeping your bar license!

On The Tech-Savvy Lawyer.Page, I often say that technology should make us better lawyers, not lazier ones.

Using Lexis+/Protege, Westlaw Precision, Co-Counsel, Harvey, vLex Vincent, etc. is about leveraging a curated, verified database. It is about relying on authority. Using a public LLM for legal research is about rolling the dice.

Your license is hard-earned. Your reputation is priceless. Do not risk them on a free chatbot.

The lesson from the advisory board was clear. The schools are trying to keep up. But the technology moves faster than the curriculum. It is up to us. We are the supervisors. We are the gatekeepers.

Take time this week. Gather your team. Ask them what tools they use. You might be surprised. Then, teach them the difference. Show them the risks.

Be the tech-savvy lawyer your clients deserve. Be the supervisor the Rules require.

The tools are here to stay. Let’s use them effectively. Let’s use them ethically. Let’s use them safely.

MTC

MTC: Google’s Claim Over LSA Client Intake Recordings: Why Lawyers Must Rethink Cloud Service Risks in 2025 ⚖️☁️

Client confidentiality under siege: The legal battle begins!

Google’s recent assertion of ownership rights over Local Services Ads (LSA) client intake recordings should send shockwaves through the legal community. In a quiet but consequential email, Google notified LSA advertisers that, going forward, it claims full creative license and access to all content, ranging from photos and bios to, most alarmingly, recorded phone calls and message conversations with prospective clients routed through Google’s systems. This change, effective June 5, 2025, requires advertisers to opt in or risk losing access to LSA advertising altogether.

The Heart of the Issue: Confidentiality and Control

For lawyers, the implications are profound. The attorney-client privilege is a cornerstone of legal ethics, and the duty to safeguard client confidences is absolute. When a third-party platform like Google claims ownership and unfettered use of intake recordings, the risk to confidentiality is not hypothetical; it is immediate and real. Intake calls often contain sensitive, privileged, or even incriminating information. If Google can analyze, synthesize, and potentially repurpose these recordings for algorithmic optimization or other commercial uses, lawyers may inadvertently breach ethical obligations simply by participating in LSA. See ABA MRPC 1.1[8] and 1.6.

Cloud Services: Convenience vs. Compliance

Ensure your service providers aren't eavesdropping on confidential client communications!

Cloud-based services have revolutionized law practice, offering flexibility, scalability, and cost savings. However, these benefits come with significant risks, especially when the service provider is not lawyer-centric or fails to prioritize legal ethics. The American Bar Association’s TechReport found that 62% of lawyers cite confidentiality and security as their top concerns with cloud computing. The risk is compounded when vendors can unilaterally change terms of service, as Google has done, or when agencies can accept such terms on behalf of law firms, potentially without direct client notification.

Ethical and Legal Pitfalls

Multiple state bar associations and the ABA have issued opinions permitting cloud adoption, so long as lawyers exercise “reasonable care” to protect client data and maintain ongoing oversight of their providers. ABA MRPC 1.6. This includes:

  • Conducting due diligence on security and privacy practices before signing up. ABA MRPC 1.1[8]

  • Regularly reviewing provider terms and monitoring for changes that may impact confidentiality. ABA MRPC 5.3; and

  • Ensuring that cloud vendors do not assert ownership or usage rights over client communications.

Google’s new LSA terms appear to violate the spirit, if not the letter, of these ethical requirements by granting itself broad rights to use, modify, and analyze sensitive client data.

Pricing, Profiling, and AI Risks

Google has no business listening in on our conversations!

Google’s access to intake recordings is not just a privacy risk; it’s a competitive one. The company can now aggregate pricing, service details, and other confidential data across the legal industry, potentially using this information to inform its own advertising algorithms or AI-driven pricing models. This could lead to unfair competitive advantages, price manipulation, or even the inadvertent exposure of client strategies.

Practical Steps for Lawyers

Given these developments, lawyers should:

  • Reevaluate participation in LSA and similar platforms where data rights are unclear or unfavorable.

  • Insist on transparency and control over all client communications, especially intake recordings.

  • Choose cloud providers with legal industry expertise and terms that explicitly preserve attorney-client privilege and data ownership.

  • Educate staff and clients about the risks of sharing sensitive information through third-party channels.

Final Thoughts: The Stakes Are Higher Than Ever!

The legal profession’s embrace of technology must not come at the expense of client trust and ethical integrity. Google’s move is a stark reminder that not all cloud services are created equal, and that lawyers must remain vigilant, scrutinizing every vendor relationship for hidden pitfalls. The black box of big tech is only getting darker; it is up to the legal community to demand light.

MTC: Navigating the Legal Landscape of DOGE: Lessons for Lawyers from Ongoing Litigation 🚀

Many are worried DOGE is mishandling citizens’ PII!

The recent involvement of Elon Musk's Department of Government Efficiency (DOGE) in accessing sensitive government databases has sparked a wave of lawsuits, raising significant concerns about data privacy and security 🚨. For lawyers, these legal challenges offer valuable insights into how to protect your clients’ personally identifiable information (PII) in light of DOGE's actions. I’d like to share some of the key takeaways from these lawsuits and explore how lawyers can apply these lessons to safeguard sensitive data, focusing on the ABA Model Rules and best practices for data protection.

Understanding the Legal Challenges:

At least a dozen lawsuits have been filed to stop DOGE from accessing tax records, student loan accounts, and other troves of personal data, often invoking the Privacy Act of 1974 📜. Created in response to the Watergate Scandal, this law restricts the sharing of sensitive information without consent, making it a crucial tool for plaintiffs seeking to limit DOGE's access to personal data 📝.

Legal and Ethical Responsibilities

Lawyers have a legal duty to protect client confidentiality, as outlined in ABA Model Rule 1.6 📜. This rule prohibits revealing information related to a client's representation unless exceptions apply, such as informed client consent or implied authorization to carry out the representation 📝. The duty of confidentiality extends beyond attorney-client privilege, covering all information related to the representation, regardless of its source 🌐.

Key Takeaways for Lawyers

Are you ready to help protect your clients’ data if the government breaches their PII?

  1. Privacy Act of 1974: Lawyers should be aware of the Privacy Act's provisions, which prohibit unauthorized disclosure of personal information from federal systems of records 📊. This law is being used to challenge DOGE's access to sensitive data, highlighting its importance in protecting client confidentiality 🚫.

  2. Standing and Harm: Courts have often ruled that plaintiffs must demonstrate irreparable harm to succeed in these lawsuits 📝. Lawyers should ensure that their clients can establish a clear risk of harm if seeking injunctive relief against similar data access efforts 🚨.

  3. Data Security Protocols: The lawsuits emphasize the need for robust data security measures to prevent unauthorized access. Lawyers should implement strong encryption and access controls to protect client data, as suggested by ABA Formal Opinion 483, which emphasizes the duty to notify clients of data breaches and take reasonable steps to safeguard confidential information 🔒.

  4. Compliance with Data Protection Regulations: Beyond the Privacy Act, lawyers must comply with other data protection laws such as the General Data Protection Regulation (GDPR), California Consumer Privacy Act (CCPA), and The Personal Information Protection and Electronic Documents Act (PIPEDA) 🌎. Ensuring compliance with these regulations can help prevent unauthorized disclosures and maintain client trust 📨.

  5. Transparency and Consent: The lawsuits highlight the importance of transparency and consent in handling personal information. Lawyers should ensure that clients are informed about how their data is used and processed, as required by ABA Model Rule 1.4, which mandates explaining matters to the extent necessary for clients to make informed decisions regarding the representation 📝.

Lessons from Specific Lawsuits:

Multiple lawsuits have been filed to ensure DOGE is not misusing PII. Is your clients’ PII at risk?

Implementing Best Practices

To safeguard client data effectively, lawyers should:

  1. Conduct Regular Audits: Regularly review data handling practices to ensure compliance with privacy regulations and ethical standards 📊.

  2. Enhance Data Security: Implement robust data encryption and access controls to protect client information, aligning with ABA Model Rule 1.6's requirement to prevent unauthorized disclosure 🔒.

  3. Stay Informed: Keep up-to-date with legal developments and court rulings related to DOGE's access to sensitive data, ensuring compliance with ABA Model Rule 1.1 and its Comment 8, which require lawyers to stay abreast of the benefits and risks associated with technology used in client services 📰.

Final Thoughts

The ongoing litigation surrounding DOGE provides valuable lessons for lawyers on protecting clients’ personally identifiable information. By understanding legal obligations, implementing robust data security measures, and complying with data protection regulations, lawyers can uphold the trust that is fundamental to the client-lawyer relationship 💼.

🚨 MTC: Government Backdoors - A Looming Threat to Attorney-Client Privilege and Data Security 🔐

Legal Cyber Balance: Safeguarding Client Data While Navigating Government Backdoors and Cyber Threats 🚪💻⚖️

The UK government's recent demand for Apple to create a backdoor to iCloud accounts worldwide has sent shockwaves through the legal community. This unprecedented move raises serious concerns for lawyers on both sides of the Atlantic, particularly regarding their ethical obligations to maintain client confidentiality and safeguard sensitive information.

As attorneys, we have a fundamental duty to protect our clients' confidences. The American Bar Association's Model Rule 1.6 explicitly states that lawyers must make "reasonable efforts to prevent the inadvertent or unauthorized disclosure of, or unauthorized access to, information relating to the representation of a client". Similarly, the UK's Solicitors Regulation Authority emphasizes the importance of maintaining client confidentiality.

However, government-mandated backdoors pose a significant threat to these ethical obligations. If implemented, such measures would essentially create a vulnerability that could be exploited not only by law enforcement but also by malicious actors. This puts attorneys in an impossible position: How can we fulfill our duty to safeguard client information when the very systems we rely on are compromised?

Moreover, the implications of such backdoors extend far beyond individual privacy concerns. The attorney-client privilege, a cornerstone of our legal system, could be severely undermined. This privilege exists to encourage open and honest communication between lawyers and their clients, which is essential for effective legal representation. If clients fear that their confidential discussions may be accessed by government agencies, it could have a chilling effect on their willingness to disclose crucial information.

Cybersecurity Crossroads: US & UK Government Interests vs. Hackers vs. Attorney-Client Privilege – The Legal Tightrope in the Digital Age 🌍🔒

To address these challenges, lawyers must take proactive steps to enhance their cybersecurity measures. As discussed in The Tech-Savvy Lawyer.Page Podcast Episode 93, Revolutionizing Law Practice: How Alexander Pakin Leverages Tech 🖥️ for Legal Success! (Part I & Part II), updating security protocols is an essential practice for modern law firms. Recall that ABA MRPC 1.1[8] requires attorneys to stay up to date in their use of technology. Additionally, attorneys should consider on-premises storage solutions with zero-trust data access to maintain control over sensitive client data.

It's crucial for legal professionals to stay informed about these developments and advocate for policies that protect client confidentiality. Bar associations and legal organizations should take a strong stance against government-mandated backdoors, emphasizing the potential risks to the justice system and individual rights.

As we navigate this complex landscape, it's clear that the intersection of technology, privacy, and legal ethics will continue to present challenges. However, by remaining vigilant and adapting our practices to meet these challenges, we can uphold our professional responsibilities and protect the fundamental rights of our clients in the digital age.

MTC

🚨 BOLO: Apple's Latest Update Activates AI - Lawyers, Protect Your Clients' Data! 🚨

Attention tech-savvy lawyers! 📱💼 Apple's recent iOS and macOS updates have automatically enabled Apple Intelligence, raising significant concerns about client confidentiality and data privacy. As legal professionals, we must remain vigilant in protecting our clients' sensitive information. Here's what you need to know:

The Stealth Activation 🕵️‍♂️

In the last 24 hours, Apple released iOS 18.3, iPadOS 18.3, and macOS Sequoia 15.3, which automatically activate Apple Intelligence on compatible devices. This AI-powered suite offers various features, including rewriting text, generating images, and summarizing emails. While these capabilities may seem enticing, they pose potential risks to client confidentiality. 🚨

Privacy Concerns 🔒

Apple claims that Apple Intelligence uses on-device processing to enhance privacy. However, the system still requires 7GB of local storage and may analyze user interactions to refine its functionality. This level of data access and analysis raises red flags for lawyers bound by ethical obligations to protect client information.

Ethical Obligations ⚖️

Check your Apple settings if you want to turn off “Apple Intelligence”!

The ABA Model Rules of Professional Conduct, particularly Rule 1.6, emphasize the duty of confidentiality. This rule extends to all forms of client data, including information stored on devices or accessed remotely. As tech-savvy lawyers, we must exercise reasonable care to prevent unauthorized disclosure of client information.

Potential Risks 🚫

Using AI-powered features without fully understanding their implications could lead to inadvertent breaches of client confidentiality. As we've discussed in our previous blog post, "My Two Cents: With AI Creeping Into Our Computers, Tablets, and Smartphones, Lawyers Need to Be Diligent About The Software They Use," lawyers must be cautious about adopting new technologies without proper vetting.

Lawyers MUST maintain reasonable competency in the use of technology! 🚨 ABA MRPC 1.1 [8] 🚨


Steps to Take 🛡️

  1. Disable Apple Intelligence: Navigate to Settings > Apple Intelligence & Siri to turn off specific features or disable the entire suite.

  2. Educate Your Team: Ensure all staff members are aware of the potential risks associated with AI-powered features.

  3. Review Privacy Policies: Carefully examine Apple's privacy policies and terms of service related to Apple Intelligence.

  4. Implement Additional Safeguards: Consider using encrypted communication tools and secure cloud storage solutions for client data.

Final Thoughts 🧐

As we navigate this rapidly evolving technological landscape, it's essential to balance innovation with ethical obligations. Lawyers can thrive as tech-savvy professionals by embracing technology to enhance their practice while safeguarding client trust. Remember, maintaining reasonable competency in the use of technology is not just advisable; it’s an ethical duty. See Comment 8 to ABA Model Rule 1.1.

Subscribe to The Tech-Savvy Lawyer.Page for updates on this developing situation and news on the evolving impact of AI on the practice of law. Together, we can navigate the complexities of legal technology while upholding our professional responsibilities.

Stay safe, stay informed, and stay tech-savvy! 🚀📚💻

Happy Lawyering!