📌 Too Busy to Read This Week’s Editorial: “Lawyers and AI Oversight: What the VA’s Patient Safety Warning Teaches About Ethical Law Firm Technology Use!” ⚖️🤖

Join us for an AI-powered deep dive into the ethical challenges facing legal professionals in the age of generative AI. 🤖 In this episode, we discuss our February 16, 2026, editorial, “Lawyers and AI Oversight: What the VA’s Patient Safety Warning Teaches About Ethical Law Firm Technology Use! ⚖️🤖” and explore why treating AI-generated drafts as hypotheses—not answers—is quickly becoming a survival skill for law firms of every size. We connect a real-world AI failure risk at the Department of Veterans Affairs to the everyday ways lawyers are using tools like chatbots, and we translate ABA Model Rules into practical oversight steps any practitioner can implement without becoming a programmer.

In our conversation, we cover the following:

  • 00:00:00 – Why conversations about the future of law default to Silicon Valley, and why that’s a problem ⚖️

  • 00:01:00 – How a crisis at the U.S. Department of Veterans Affairs became a “mirror” for the legal profession 🩺➡️⚖️

  • 00:03:00 – “Speed without governance”: what the VA Inspector General actually warned about, and why it matters to your practice

  • 00:04:00 – From patient safety risk to client safety and justice risk: the shared AI failure pattern in healthcare and law

  • 00:06:00 – Shadow AI in law firms: staff “just trying out” public chatbots on live matters and the unseen risk this creates

  • 00:07:00 – Why not tracking hallucinations, data leakage, or bias turns risk management into wishful thinking

  • 00:08:00 – Applying existing ABA Model Rules (1.1, 1.6, 5.1, 5.2, and 5.3) directly to AI use in legal practice

  • 00:09:00 – Competence in the age of AI: why “I’m not a tech person” is no longer a safe answer 🧠

  • 00:09:30 – Confidentiality and public chatbots: how you can silently lose privilege by pasting client data into a text box

  • 00:10:30 – Supervision duties: why partners cannot safely claim ignorance of how their teams use AI

  • 00:11:00 – Candor to tribunals: the real ethics problem behind AI-generated fake cases and citations

  • 00:12:00 – From slogan to system: why “meaningful human engagement” must be operationalized, not just admired 

  • 00:12:30 – The key mindset shift: treating AI-assisted drafts as hypotheses, not answers 🧪

  • 00:13:00 – What reasonable human oversight looks like in practice: citations, quotes, and legal conclusions under stress test

  • 00:14:00 – You don’t need to be a computer scientist: the essential due diligence questions every lawyer can ask about AI 

  • 00:15:00 – Risk mapping: distinguishing administrative AI use from “safety-critical” lawyering tasks

  • 00:16:00 – High-stakes matters (freedom, immigration, finances, benefits, licenses) and heightened AI safeguards

  • 00:16:45 – Practical guardrails: access controls, narrow scoping, and periodic quality audits for AI use

  • 00:17:00 – Why governance is not “just for BigLaw” and how solos can implement checklists and simple documentation 📋

  • 00:17:45 – Updating engagement letters and talking to clients about AI use in their matters

  • 00:18:00 – Redefining the “human touch” as the safety mechanism that makes AI ethically usable at all 🤝

  • 00:19:00 – AI as power tool: why lawyers must remain the “captain of the ship” even when AI drafts at lightning speed 🚢

  • 00:20:00 – Rethinking value: if AI creates the first draft, what exactly are clients paying lawyers for?

  • 00:20:30 – Are we ready to bill for judgment, oversight, and safety instead of pure production time?

  • 00:21:00 – Final takeaways: building a practice where human judgment still has the final word over AI

RESOURCES

Mentioned in the episode

Software & Cloud Services mentioned in the conversation

🎙️ Ep. #131, Supercharging Litigation With AI: How StrongSuit Helps Lawyers Transform Research, Doc Review, and Drafting 💼⚖️

My next guest is Justin McCallan, founder of StrongSuit, an AI-powered litigation platform built to transform how litigators handle legal research, document review, and drafting while keeping lawyers firmly in control. In this episode, Justin and I dig into practical, real-world workflows that solos, small firms, and big-firm litigators can use today and over the next few years to change the economics, pace, and strategy of litigation—without sacrificing accuracy, ethics, or the quality of advocacy.

Join Justin and me as we discuss the following three questions and more!

  1. What are the top three ways litigators should be using AI tools like StrongSuit right now to change the economics and pace of litigation without sacrificing accuracy, ethics, or quality of advocacy?

  2. What are the top three mistakes lawyers make when adopting AI for litigation, and what practical workflows help lawyers stay in the loop and use AI as a force multiplier instead of a risk? 

  3. Looking ahead to 2026 and beyond, what are the top three AI-driven workflows every litigator should master to stay competitive, and how can platforms like StrongSuit help build those capabilities into day-to-day practice? 

In our conversation, we cover the following:

  • 00:00 – Welcome and guest introduction

    • Justin joins the show and shares his current tech setup at his desk. 

  • 00:00–01:00 – Justin’s current tech stack

    • Lenovo laptop, ultra-wide monitor, and regular use of StrongSuit, ChatGPT, and Gemini for different AI tasks.

    • Everyday tools: Microsoft Word and Power BI for analytics and fast decision-making.

  • 01:00–02:00 – Android vs. iPhone for AI use

    • Why Justin has been on Android for 17 years and how UI/UX familiarity often drives device choice more than AI capability.

  • 02:00–05:30 – Q1: Top three ways litigators should be using AI right now

    • Using AI for end-to-end legal research across 11 million precedential U.S. cases to build litigation outlines and identify key authorities.

    • Scaling document review so AI surfaces relevant documents and synthesizes insights while lawyers focus on strategy and judgment.

    • Leveraging AI for drafting and editing—improving style, clarity, and consistency beyond traditional spelling and grammar checks.

  • 05:30–07:30 – StrongSuit vs. basic tools like Word grammar check

    • How StrongSuit aims to “up-level” a lawyer’s writing, not just catch typos.

    • Stylistic improvements, clarity enhancements, and catching subtle inconsistencies in legal documents.

  • 06:00–08:00 – AI context limits and scaling doc review

    • Constraints of large models’ context windows (roughly 1M tokens, or about 750 pages).

    • How StrongSuit runs multiple AI agents in parallel, each handling small page sets with heuristics to maintain cohesion and share insights.

  • 08:00–09:00 – Handling tens of thousands of documents

    • How StrongSuit can handle between roughly 10,000–50,000 pages at a time, with the ability to scale further for enterprise matters.
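The chunking approach described above (splitting a matter into context-window-sized page sets that parallel agents review independently) can be sketched roughly as follows. This is an illustrative approximation only; the tokens-per-page figure, function names, and worker count are assumptions for the sketch, not StrongSuit’s actual implementation:

```python
# Illustrative sketch: split a large review set into chunks that fit a
# model's context window, then fan the chunks out to parallel workers.
from concurrent.futures import ThreadPoolExecutor

TOKENS_PER_PAGE = 1300          # rough average for dense legal text (assumption)
CONTEXT_WINDOW = 1_000_000      # ~1M tokens ≈ ~750 pages, per the episode
PAGES_PER_CHUNK = CONTEXT_WINDOW // TOKENS_PER_PAGE  # 769 pages per chunk

def chunk_pages(pages, size=PAGES_PER_CHUNK):
    """Yield lists of pages small enough for one model call."""
    for i in range(0, len(pages), size):
        yield pages[i:i + size]

def review_chunk(chunk):
    """Placeholder for one agent's pass over its page set."""
    # A real agent would call a model here and return extracted insights.
    return {"pages": len(chunk), "insights": []}

def parallel_review(pages, workers=8):
    with ThreadPoolExecutor(max_workers=workers) as pool:
        return list(pool.map(review_chunk, chunk_pages(pages)))

results = parallel_review([f"page-{i}" for i in range(50_000)])
print(len(results))  # 50,000 pages split into 66 chunks of at most 769 pages
```

The heuristics the episode mentions (cross-chunk cohesion and insight sharing) are the hard part a real platform adds on top of this basic fan-out.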

  • 09:00–11:30 – Origin story of StrongSuit

    • Why Justin saw a once-in-a-generation opportunity when large language models emerged and how law, with its precedent and text-heavy nature, is especially suited to AI.

    • StrongSuit’s focus on litigators: supporting lawyers from intake through trial while keeping them in the loop at every step.

  • 11:30–13:30 – From intake to brief drafting in minutes

    • Generating full litigation outlines, research, and analysis in about ten minutes, then moving directly into drafting memos, briefs, complaints, and motions.

    • StrongSuit’s long-term goal: automating 50–99% of major litigation workflows by the end of 2026 while preserving lawyer control and judgment.

  • 12:00–14:30 – How StrongSuit tackles hallucinations

    • Building a full database of all precedential U.S. cases enriched with metadata: parties, summaries, holdings, and more.

    • Validating citations by checking whether the Bluebook citation actually exists in StrongSuit’s case database before surfacing it to the user.

    • Why lawyers should still review cases on-platform before filing, even when AI has filtered out hallucinations.
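The citation-validation step described above can be sketched as a simple lookup against a local set of known citations. This is a hedged illustration, not StrongSuit’s actual logic: the citation set is tiny, and a real system would normalize Bluebook formats far more carefully:

```python
# Illustrative sketch: filter AI-generated citations against a database of
# known precedential cases before surfacing them to the user.
KNOWN_CITATIONS = {
    "347 U.S. 483",   # Brown v. Board of Education
    "384 U.S. 436",   # Miranda v. Arizona
}

def normalize(citation: str) -> str:
    """Collapse whitespace so trivially different formats still match."""
    return " ".join(citation.split())

def filter_hallucinations(ai_citations):
    """Split model output into verified citations and likely hallucinations."""
    verified, suspect = [], []
    for c in ai_citations:
        (verified if normalize(c) in KNOWN_CITATIONS else suspect).append(c)
    return verified, suspect

verified, suspect = filter_hallucinations(["347  U.S. 483", "123 F.4th 999"])
print(verified)  # ['347  U.S. 483']
print(suspect)   # ['123 F.4th 999']
```

Even with this kind of filter in place, the episode’s caution stands: a citation that exists is not necessarily a citation that supports the proposition, which is why on-platform human review before filing remains essential.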

  • 14:30–16:30 – Coverage and jurisdictions

    • Coverage of all U.S. jurisdictions, federal and state, focused on precedential cases.

    • Handling most regulations from administrative agencies, and limits around local ordinances.

    • Uploading your own case files and using complaints and prior research as inputs into StrongSuit workflows.

  • 15:00–17:00 – Security and confidentiality for litigators

    • SOC 2 compliance and industry-standard encryption at rest and in transit.

    • No model training on user data.

    • Optional end-to-end encryption that can even prevent developers from accessing case content, using local encryption keys.

  • 16:30–20:30 – Q2: Top mistakes lawyers make when adopting AI for litigation

    • Mistake #1: Talking about AI instead of diving in with structured experiments and sanitized documents.

    • Using a framework to identify high-impact tasks: high volume, repetitive work, and heavy data/analysis (e.g., doc review, research, contract drafting).

    • How to shortlist tools: look for SOC 2, real product depth, awards, and a focus on your specific workflows.

    • Mistake #2: Expecting immediate mastery instead of moving through predictable adoption stages—from learning the tool, to daily use, to stringing workflows together.

  • 20:30–22:30 – Building firm-wide AI workflows over time

    • Moving from isolated experiments to integrated, low-friction workflows, such as automatic intake-to-research pipelines.

    • Using client intake audio or transcripts to automatically extract facts, issues, and research paths.

  • 22:30–24:30 – Time constraints and “no-time” lawyers

    • Why lawyers don’t need to be “technical” to use StrongSuit.

    • Reframing AI as text-based tools where lawyers’ writing skills and analytical thinking are assets, not obstacles. 

  • 24:00–26:00 – Practical workflows beyond intake

    • Using AI to prepare for expert depositions, including reviewing valuation analyses, flagging departures from market consensus, and generating targeted questions.

    • Reinforcing the value of AI-enhanced legal research and drafting as core litigation workflows.

  • 26:00–29:30 – Q3: 2026 and beyond – AI-driven workflows every litigator should master

    • Rapid improvement of baseline models (e.g., jumping from single-digit to high double-digit performance on difficult benchmarks year over year). 

    • The idea of “tipping points,” where small performance gains turn AI from marginally useful to essential in specific tasks.

    • Why legal research is a great training ground for understanding where AI excels, where it falls short, and how to divide labor between human and machine.

    • The value of learning basic prompting skills to get more from AI systems, even when platforms offer visual workflows.

  • 29:30–32:30 – Will workflows actually change—or just get better?

    • Why Justin expects familiar litigation workflows (doc review, research, drafting) to remain structurally similar, but become far faster and more sophisticated.

    • AI agents handling the grind work while lawyers focus on synthesis, judgment, and strategy.

    • A future where “AI + lawyer vs. AI + lawyer” resembles high-level chess: same rules, but much deeper thinking on both sides.

  • 32:30–End – Where to find Justin and StrongSuit

    • How to connect with Justin and learn more about StrongSuit’s litigation tools.

Resources

Connect with Justin

Hardware mentioned in the conversation

Software & Cloud Services mentioned in the conversation

ANNOUNCEMENT: My Book, “The Lawyer’s Guide to Podcasting,” is Amazon #1 New Release (Law Office Technology)

I’m excited to report that The Lawyer’s Guide to Podcasting ranked #1 as a New Release in Amazon’s Law Office Technology category for the week of February 07, 2026, and sales have already doubled since last month. 🎙️📈

For lawyers with limited-to-moderate tech skills, the book focuses on practical, repeatable workflows for launching and sustaining a compliant podcast presence. ⚖️💡

As you plan content, remember ABA Model Rule 1.1 (technology competence) and the related duties of confidentiality (Rule 1.6) and communications about services (Rule 7.1): use secure tools, avoid accidental client disclosures, and ensure marketing statements are accurate. 🔐✅

Get your copy today! 📘🚀


Word of the week: “Legal AI institutional memory” engages core ethics duties under the ABA Model Rules, so it is not optional “nice to know” tech. ⚖️🤖

Institutional Memory Meets the ABA Model Rules

“Legal AI institutional memory” is AI that remembers how your firm actually practices law, not just what generic precedent says. It captures negotiation history, clause choices, outcomes, and client preferences across matters so each new assignment starts from experience instead of a blank page.

From an ethics perspective, this capability sits directly in the path of ABA Model Rule 1.1 on competence, Rule 1.6 on confidentiality, and Rule 5.3 on responsibilities regarding nonlawyer assistance (which now includes AI systems). Comment 8 to Rule 1.1 stresses that competent representation requires understanding the “benefits and risks associated with relevant technology,” which squarely includes institutional‑memory AI in 2026. Using or rejecting this technology blindly can itself create risk if your peers are using it to deliver more thorough, consistent, and efficient work.🧩

Rule 1.6 requires “reasonable efforts” to prevent unauthorized disclosure or access to information relating to representation. Because institutional memory centralizes past matters and sensitive patterns, it raises the stakes on vendor security, configuration, and firm governance. Rule 5.3 extends supervision duties to “nonlawyer assistance,” which ethics commentators and bar materials now interpret to include AI tools used in client work. In short, if your AI is doing work that would otherwise be done by a human assistant, you must supervise it as such.🛡️

Why Institutional Memory Matters (Competence and Client Service)

Tools like Luminance and Harvey now market institutional‑memory features that retain negotiation patterns, drafting preferences, and matter‑level context across time. They promise faster contract cycles, fewer errors, and better use of a firm’s accumulated know‑how. Used wisely, that aligns with Rule 1.1’s requirement that you bring “thoroughness and preparation” reasonably necessary for the representation, and Comment 8’s directive to keep abreast of relevant technology.

At the same time, ethical competence does not mean turning judgment over to the model. It means understanding how the system makes recommendations, what data it relies on, and how to validate outputs against your playbooks and client instructions. Ethics guidance on generative AI emphasizes that lawyers must review AI‑generated work product, verify sources, and ensure that technology does not substitute for legal judgment. Legal AI institutional memory can enhance competence only if you treat it as an assistant you supervise, not an oracle you obey.⚙️

Legal AI That Remembers Your Practice—Ethics Required, Not Optional

How Legal AI Institutional Memory Works (and Where the Rules Bite)

Institutional‑memory platforms typically:

  • Ingest a corpus of contracts or matters.

  • Track negotiation moves, accepted fall‑backs, and outcomes over time.

  • Expose that knowledge through natural‑language queries and drafting suggestions.

That design engages several ethics touchpoints🫆:

  • Rule 1.1 (Competence): You must understand at a basic level how the AI uses and stores client information, what its limitations are, and when it is appropriate to rely on its suggestions. This may require CLE, vendor training, or collaboration with more technical colleagues until you reach a reasonable level of comfort.

  • Rule 1.6 (Confidentiality): You must ensure that the vendor contract, configuration, and access controls provide “reasonable efforts” to protect confidentiality, including encryption, role‑based access, and breach‑notification obligations. Ethics guidance on cloud and AI use stresses the need to investigate provider security, retention practices, and rights to use or mine your data.

  • Rule 5.3 (Nonlawyer Assistance): Because AI tools are “non‑human assistance,” you must supervise their work as you would a contract review outsourcer, document vendor, or litigation support team. That includes selecting competent providers, giving appropriate instructions, and monitoring outputs for compliance with your ethical obligations.🤖

Governance Checklist: Turning Ethics into Action

For lawyers with limited to moderate tech skills, it helps to translate the ABA Model Rules into a short adoption checklist.✅

When evaluating or deploying legal AI institutional memory, consider:

  1. Define Scope (Rules 1.1 and 1.6): Start with a narrow use case such as NDAs or standard vendor contracts, and specify which documents the system may use to build its memory.

  2. Vet the Vendor (Rules 1.6 and 5.3): Ask about data segregation, encryption, access logs, regional hosting, subcontractors, and incident‑response processes; confirm clear contractual obligations to preserve confidentiality and notify you of incidents.

  3. Configure Access (Rules 1.6 and 5.3): Use role‑based permissions, client or matter scoping, and retention settings that match your existing information‑governance and legal‑hold policies.

  4. Supervise Outputs (Rules 1.1 and 5.3): Require that lawyers review AI suggestions, verify sources, and override recommendations where they conflict with client instructions or risk tolerance.

  5. Educate Your Team (Rule 1.1): Provide short trainings on how the system works, what it remembers, and how the Model Rules apply; document this as part of your technology‑competence efforts.

Educating Your Team Is Core to AI Competence

This approach respects the increasing bar on technological competence while protecting client information and maintaining human oversight.⚖️

🎙️Ep. 128, Building a Tech-Forward Law Firm: AI Intake, CRM Strategy & Client Experience with Colleen Joyce!

My next guest is Colleen Joyce, CEO of Lawyer.com, a leading legal marketplace that connects over one million consumers monthly with qualified attorneys nationwide. With nearly two decades of experience transforming how law firms leverage technology and marketing, Colleen has pioneered innovations including LawyerLine call intake services, AI-powered matching technology, and the Lawyer Growth Summit. She publishes the Fast Five newsletter every Tuesday, reaching over 20,000 legal professionals with insights on AI trends, business growth strategies, and practice management. In this episode, Colleen shares her expertise on the essential technologies modern law firms need to scale profitably, how AI is revolutionizing client intake processes, and the critical human touchpoints that should never be automated in legal practice.

💬 Join Colleen Joyce and me as we discuss the following three questions and more!

1.     Beyond the essential lead generation that Lawyer.com provides, you see thousands of firms succeed and fail based on their operational efficiency. If you were building a modern law firm from scratch today, what are the top three non-negotiable technologies (for example, specific CRM automations, financial analytics, or project management tools) you would implement immediately to ensure the firm scales profitably rather than just chaotically?

2.     We know AI is reshaping the top of the funnel for legal consumers. Based on the data you're seeing from your new AI initiatives, what are the top three specific intake bottlenecks that AI can now solve better than a human receptionist, allowing attorneys to focus primarily on high-value legal work rather than data entry or basic screening?

3.     Technology can handle logistics, but it can't handle the emotion of a legal crisis. From your experience overseeing millions of consumer connections, what are the top three human touchpoints in the client lifecycle that a lawyer should never automate, because they are crucial for building the trust and transparency that lead to long-term referrals?

In our conversation, we cover the following:

-      00:00:00 - Welcome and Introduction to Colleen Joyce

-      00:00:20 - Colleen's Current Tech Setup: MacBook Pro, iPhone 16, iPad, and Curved Monitor

-      00:01:00 - Discussion about iPhone Models and AppleCare Benefits

-      00:02:00 - Using Plaud AI for Recording Conversations

-      00:03:00 - MacBook Pro Specifications and Upgrade Recommendations

-      00:04:00 - Dell Curved Monitor Benefits for Focus and Productivity

-      00:05:00 - Question 1: Top Three Non-Negotiable Technologies for Modern Law Firms

-      00:06:00 - Intake Technology, CRM, and Practice Management Systems

-      00:07:00 - Balancing Cost and Technology for New Lawyers

-      00:08:00 - Leveraging Freemium Tools and AI for Budget-Conscious Firms

-      00:08:30 - Question 2: AI Solutions for Intake Bottlenecks

-      00:09:00 - Answering Phones with Empathetic AI Agents

-      00:10:00 - Importance of Legal-Specific AI Training

-      00:11:00 - Consumer Adoption and Resistance to AI vs. Human Agents

-      00:12:00 - Using Virtual Receptionists and Calendly for Scheduling

-      00:13:00 - Generational Differences in Technology Adoption

-      00:14:00 - The Evolution of Legal Technology Adoption Over 14 Years

-      00:15:00 - Question 3: Human Touchpoints That Should Never Be Automated

-      00:16:00 - Relationship Building and the Courting Period

-      00:17:00 - Screening Clients Through Your Tech Processes

-      00:18:00 - Where to Find Colleen: LinkedIn and the Fast Five Newsletter

-      00:18:30 - Closing Remarks and Gratitude

---

📚 Resources

🤝 Connect with Colleen Joyce

•  LinkedIn: https://www.linkedin.com/in/colleenjoyce

•  Lawyer.com: https://www.lawyer.com

•  Lawyer.com Services: https://services.lawyer.com

•  Fast Five Newsletter (Published Tuesdays): https://www.linkedin.com/newsletters/fast-five-fridays-7265815097552326656

•  Lawyer Growth Summit: https://lawyergrowthsummit.com

•  Lawyer.com Phone: 800-620-0900

•  Lawyer.com Address: 25 Mountainview Boulevard, Basking Ridge, NJ 07920

📖 Mentioned in the Episode

•  MacRumors Buyer's Guide: https://buyersguide.macrumors.com

•  LawyerLine (24-hour Intake Services): https://www.lawyerline.ai/

🖥 Hardware Mentioned in the Conversation

•  MacBook Pro: https://www.apple.com/macbook-pro/

•  MacBook Pro with M4/M5 Chips (Upgrade recommendation): https://www.apple.com/macbook-pro/

•  iPhone 16: https://www.apple.com/iphone-16/

•  iPad: https://www.apple.com/ipad/

•  Dell Curved Monitor (22-24 inch, white): https://www.dell.com/monitors

•  HP Printer (with automatic duplex printing): https://www.hp.com/printers

☁ Software & Cloud Services Mentioned in the Conversation

•  Plaud AI (Call Recording & Transcription): https://www.plaud.ai

•  Slack (Team Communication Platform): https://slack.com

•  iMessage (Apple Messaging): https://support.apple.com/en-us/104969

•  Calendly (Scheduling Software): https://calendly.com

•  Monday.com (Project Management & Team Organization): https://monday.com

•  ChatGPT (AI Assistant): https://openai.com/chatgpt

•  AppleCare (Apple Device Protection): https://www.apple.com/support/applecare/

MTC (Bonus): National Court Technology Rules: Finding Balance Between Guidance and Flexibility ⚖️

Standardizing Tech Guidelines in the Legal System

Lawyers and their staff need to know the standing orders and local rules governing AI use in the courtroom; their licenses could depend on it.

The legal profession stands at a critical juncture where technological capability has far outpaced judicial guidance. Nicole Black's recent commentary on the fragmented approach to technology regulation in our courts identifies a genuine problem—one that demands serious consideration from both proponents of modernization and cautious skeptics alike.

The core tension is understandable. Courts face legitimate concerns about technology misuse. The LinkedIn juror research incident in Judge Orrick's courtroom illustrates real risks: a consultant unknowingly violated a standing order, resulting in a $10,000 sanction despite the attorney's good-faith disclosure and remedial efforts. These aren't theoretical concerns—they reflect actual ethical boundaries that protect litigants and preserve judicial integrity. Yet the response to these concerns has created its own problems.

The current patchwork system places practicing attorneys in an impossible position. A lawyer handling cases across multiple federal districts cannot reasonably track the varying restrictions on artificial intelligence disclosure, social media evidence protocols, and digital research methodologies. When the safe harbor is simply avoiding technology altogether, the profession loses genuine opportunities to enhance accuracy and efficiency. Generative AI's citation hallucinations justify judicial scrutiny, but the ad hoc response by individual judges—ranging from simple guidance to outright bans—creates unpredictability that chills responsible innovation.

Should There Be a National Standard for AI Use in the Courtroom?

There are legitimate reasons to resist uniform national rules. Local courts understand their communities and case management needs better than distant regulatory bodies. A one-size-fits-all approach might impose burdensome requirements on rural jurisdictions with fewer tech-savvy practitioners. Furthermore, rapid technological evolution could render national rules obsolete within months, whereas individual judges retain flexibility to respond quickly to emerging problems.

Conversely, the current decentralized approach creates serious friction. The 2006 amendments to the Federal Rules of Civil Procedure addressing electronically stored information succeeded partly because they established predictability across jurisdictions. Lawyers knew what preservation obligations applied regardless of venue. That uniformity enabled the profession to invest in training, software, and processes. Today's lawyers lack that certainty. Practitioners must maintain lists tracking individual judges' standing orders, and smaller firms simply cannot sustain this administrative burden.

The answer likely lies between extremes. Rather than comprehensive national legislation, the profession would benefit from model standards developed collaboratively by the Federal Judicial Conference, state supreme courts, and bar associations. These guidelines could allow reasonable judicial discretion while establishing baseline expectations—defining when AI disclosure is mandatory, clarifying which social media research constitutes impermissible contact, and specifying preservation protocols that protect evidence without paralyzing litigation.

Such an approach acknowledges both legitimate judicial concerns and legitimate professional needs. It recognizes that judges require authority to protect courtroom procedures while recognizing that lawyers require predictability to serve clients effectively.

I basically agree with Nicole: The question is not whether courts should govern technology use. They must. The question is whether they govern wisely—with sufficient uniformity to enable compliance, sufficient flexibility to address local concerns, and sufficient clarity to encourage rather than discourage responsible innovation.

TSL Labs 🧪Bonus: 🎙️ From Cyber Compliance to Cyber Dominance: What VA's AI Revolution Means for Government Cybersecurity, Legal Ethics, and ABA Model Rule Compliance!

In this TSL Labs bonus episode, we examine this week’s editorial on how the Department of Veterans Affairs is leading a historic transformation from traditional compliance frameworks to a dynamic, AI-driven approach called "cyber dominance." This conversation unpacks what this seismic shift means for legal professionals across all practice areas—from procurement and contract law to privacy, FOIA, and litigation. Whether you're advising government agencies, representing contractors, or handling cases where data security matters, this discussion provides essential insights into how continuous monitoring, zero trust architecture, and AI-driven threat detection are redefining professional competence under ABA Model Rule 1.1. 💻⚖️🤖

Join our AI hosts and me as we discuss the following three questions and more!

  1. How has federal cybersecurity evolved from the compliance era to the cyber dominance paradigm? 🔒

  2. What are the three technical pillars—continuous monitoring, zero trust architecture, and AI-driven detection—and how do they interconnect? 🛡️

  3. What professional liability and ethical obligations do lawyers now face under ABA Model Rule 1.1 regarding technology competence? ⚖️

In our conversation, we cover the following:

  • [00:00:00] - Introduction: TSL Labs Bonus Podcast on VA's AI Revolution 🎯

  • [00:01:00] - Introduction to Federal Cybersecurity: The End of the Compliance Era 📋

  • [00:02:00] - Legal Implications and Professional Liability Under ABA Model Rules ⚖️

  • [00:03:00] - From Compliance to Continuous Monitoring: Understanding the Static Security Model 🔄

  • [00:04:00] - The False Comfort of Compliance-Only Approaches 🚨

  • [00:05:00] - The Shift to Cyber Dominance: Three Integrated Technical Pillars 💪

  • [00:06:00] - Zero Trust Architecture (ZTA) Explained: Verify Everything, Trust Nothing 🔐

  • [00:07:00] - AI-Driven Detection and Legal Challenges: Professional Competence Under Model Rule 1.1 🤖

  • [00:08:00] - The New Legal Questions: Real-Time Risk vs. Static Compliance 📊

  • [00:09:00] - Evolving Compliance: From Paper Checks to Dynamic Evidence 📈

  • [00:10:00] - Cybersecurity as Operational Discipline: DevSecOps and Security by Design 🔧

  • [00:11:00] - Litigation Risks: Discovery, Red Teaming, and Continuous Monitoring Data ⚠️

  • [00:12:00] - Cyber Governance with AI: Algorithmic Bias and Explainability 🧠

  • [00:13:00] - Synthesis and Future Outlook: Law Must Lead, Not Chase Technology 🚀

  • [00:14:00] - The Ultimate Question: Is Your Advice Ready for Real-Time Risk Management? 💡

  • [00:15:00] - Conclusion and Resources 📚

Resources

Mentioned in the Episode

Software & Cloud Services Mentioned in the Conversation

  • AI-Driven Detection Systems - Automated threat detection and response platforms

  • Automated Compliance Platforms - Dynamic evidence generation systems

  • Continuous Monitoring Systems - Real-time security assessment platforms

  • DevSecOps Tools - Automated security testing in software development pipelines

  • Firewalls - Network security hardware devices

  • Google Notebook AI - https://notebooklm.google.com/

  • Penetration Testing Software - Security vulnerability assessment tools

  • Zero Trust Architecture (ZTA) Solutions - Identity and access verification systems

🎙️Ep. 126: AI and Access to Justice With Pearl.com Associate General Counsel Nick Tiger

Our next guest is Nick Tiger, Associate General Counsel at Pearl.com, a company that champions pairing AI with human expertise for professional services. Nick shares insights on integrating AI into legal practice, outlining practical uses such as market research, content creation, intake automation, and improved billing efficiency, while stressing the need to avoid liability through robust human oversight.

Nick is a legal leader at Pearl.com, partnering on product design, technology, and consumer-protection compliance strategy. He previously served as Head of Product Legal at EarnIn, an earned-wage access pioneer, building practical guidance for responsible feature launches, and as Senior Counsel at Capital One, supporting consumer products and regulatory matters. Nick holds a J.D. from the University of Missouri–Kansas City, lives in Richmond, Virginia, and is especially interested in using technology to expand rural community access to justice.

During the conversation, Nick highlights emerging tools, such as conversation wizards and expert-matching systems, that enhance communication and case preparation. He also explains Pearl AI's unique model, which blends chatbot capabilities with human expert verification to ensure accuracy in high-stakes or subjective matters.

Nick encourages lawyers to adopt human-in-the-loop protocols and consider joining Pearl's expert network to support accessible, reliable legal services.

Join Nick and me as we discuss the following three questions and more!

  1. What are the top three most impactful ways lawyers can immediately implement AI technology in their practices while avoiding the liability pitfalls that have led to sanctions in recent high-profile cases?

  2. Beyond legal research and document review, what are the top three underutilized or emerging AI applications that could transform how lawyers deliver value to clients, and how should firms evaluate which technologies to adopt?

  3. What are the top three criteria Pearl uses to determine when human expert verification is essential versus when AI alone is sufficient? How can lawyers apply this framework to develop their own human-in-the-loop protocols for AI-assisted legal work, and how is Pearl different from its competitors?

In our conversation, we cover the following:

[00:56] Nick's Tech Setup

[07:28] Implementing AI in Legal Practices

[17:07] Emerging AI Applications in Legal Services

[26:06] Pearl AI's Unique Approach to AI and Legal Services

[31:42] Developing Human-in-the-Loop Protocols

[34:34] Pearl AI's Advantages Over Competitors

[36:33] Becoming an Expert on Pearl AI

Resources:

Connect with Nick:

Nick's LinkedIn: linkedin.com/in/nicktigerjd

Pearl.com Website: pearl.com

Pearl.com Expert Application Portal: era.justanswer.com/

Pearl.com LinkedIn: linkedin.com/company/pearl-com

Pearl.com X: x.com/Pearldotcom

ABA Resources:

ABA Formal Opinion 512: https://www.americanbar.org/content/dam/aba/administrative/professional_responsibility/ethics-opinions/aba-formal-opinion-512.pdf

Hardware mentioned in the conversation:

Anker Backup Battery / Power Bank: anker.com/collections/power-banks

Software & Cloud Services mentioned in the conversation:

TSL Labs Bonus Podcast: Google’s NotebookLM “Deep Dive” on Our December 1st, 2025, Editorial on the Lawyer’s Defense Against Holiday Scams and ‘Bargain’ Tech Traps!

Listen in as Google's NotebookLM provides an AI-powered conversation unpacking our December 1st, 2025, editorial, which examines how the holiday digital marketplace transforms into a lucrative hunting ground for device compromise and credential theft. We explore why attorneys and paralegals—trained to spot hidden clauses and anticipate risk—often abandon professional skepticism when faced with shiny gadgets bearing 70% off stickers. Our discussion arms you with actionable strategies to protect your practice, safeguard client confidentiality, and prevent the kind of security breaches that trigger bar complaints and operational shutdowns. Whether you're a solo practitioner or part of a large firm, this episode delivers the technical insights you need without the jargon.

Join Google's NotebookLM as we discuss the following three questions and more!

  1. How do bargain tech deals create hidden professional liabilities that extend far beyond wasted money, and what specific technical deficits should lawyers avoid in discount hardware?

  2. What free forensic tools can legal professionals use to distinguish genuine discounts from manipulated pricing schemes, and how do these tools apply procurement-level rigor to personal shopping decisions?

  3. Which three active scam vectors target high-value professionals during the holiday season, and what mandatory four-point protocol ensures comprehensive protection against credential theft and device compromise?

In our conversation, we cover the following:

  • [00:00:00] Welcome to TSL Labs Bonus Episode: AI-powered deep dive on holiday shopping risks

  • [00:01:00] Why legal professionals abandon professional skepticism during holiday sales

  • [00:02:00] The high stakes: credential theft, device compromise, and operational lockdown

  • [00:03:00] The bargain trap: understanding technical debt in cheap vs. inexpensive hardware

  • [00:04:00] Processor bottleneck red flags: older generation chips that consume billable time

  • [00:05:00] Screen resolution hazards: how 1366x768 displays create genuine error risks

  • [00:06:00] RAM deficits and security longevity: when devices become e-waste and compliance gaps

  • [00:07:00] Introduction to forensic price tracking tools for procurement-level shopping

  • [00:08:00] CamelCamelCamel, Keepa, and Honey: free tools that reveal true pricing history

  • [00:09:00] Malwarebytes 2025 holiday scam report: three attack vectors targeting professionals

  • [00:10:00] Scam #1: urgent delivery smishing attacks exploiting package expectations

  • [00:11:00] Scam #2: malvertising minefield—when legitimate ads redirect to cloned fraud sites

  • [00:12:00] Scam #3: gift card emergency scams posing as court clerks and government officials

  • [00:13:00] Bonus threat: social media marketplace fraud and payment protection gaps

  • [00:14:00] The mandatory four-point protocol for holiday shopping protection

  • [00:15:00] Final thoughts: applying contract-reading diligence to every link you click

Resources

Hardware Mentioned in the Conversation

Software & Cloud Services Mentioned in the Conversation

MTC: The End of Dial-Up Internet: A Digital Divide Crisis for Legal Practice 📡⚖️

Dial-up shutdown deepens rural legal digital divide.

The legal profession faces an unprecedented access-to-justice challenge: AOL officially terminated its dial-up internet service on September 30, 2025, after 34 years of operation. This closure affects approximately 163,401 American households that depended solely on dial-up connections as of 2023, creating barriers to legal services in an increasingly digital world. While other dial-up providers, such as NetZero, Juno, and DSLExtreme, continue operating, they may not cover all geographic areas previously served by AOL, and their long-term viability is limited.

While many view dial-up as obsolete, its elimination exposes critical technology gaps that disproportionately impact vulnerable populations requiring legal assistance. Rural residents, low-income individuals, and elderly clients who relied on this affordable connectivity option now face digital exclusion from essential legal services and court systems. The remaining dial-up options provide minimal relief as these smaller providers lack AOL's extensive infrastructure coverage.


Legal professionals must recognize that technology barriers create access-to-justice issues. When clients cannot afford high-speed internet or live in areas without broadband infrastructure, they lose the ability to participate in virtual court proceedings, access online legal resources, or communicate effectively with their attorneys. This digital divide effectively creates a two-tiered justice system in which technological capacity determines legal access.

The legal community faces an implicit ethical duty to address these technology barriers. While no specific ABA Model Rule mandates accommodating clients' internet limitations, the professional responsibility to ensure access to justice flows from fundamental ethical obligations.

This implicit duty derives from several ABA Model Rules that create relevant obligations. Rule 1.1 (Competence) requires attorneys to understand "the benefits and risks associated with relevant technology," including how technology barriers affect client representation. Rule 1.4 (Communication) mandates effective client communication, which encompasses understanding technology limitations that prevent meaningful attorney-client interaction. Rule 1.6 (Confidentiality) requires reasonable efforts to protect client information, necessitating awareness of technology security implications. Additionally, 41 jurisdictions have adopted technology competence requirements that obligate lawyers to stay current with technological developments affecting legal practice.

Lawyers are leaders when it comes to calls for action to help narrow the access-to-justice divide!

The legal community must advocate for affordable internet solutions and develop technology-inclusive practices to fulfill these professional responsibilities and ensure equal access to justice for all clients.

MTC