📖 Word of the Week: “Cross‑Tenant” Learning in Legal Practice

Cross-tenant learning helps law firms improve AI tools without exposing data

If your firm uses cloud‑based tools, you are already living in a multi‑tenant world. In that world, cross‑tenant learning is quickly becoming a key concept that every lawyer and legal operations professional should understand. 🧠⚖️

In simple terms, a “tenant” is your firm’s logically separate space inside a cloud platform: your own users, matters, documents, and settings, isolated from everyone else’s. Cross‑tenant learning refers to techniques in which a vendor’s system learns from patterns across multiple tenants (for example, many law firms) to improve its features—such as search, drafting suggestions, or document classification—without exposing any other firm’s confidential data to you or yours to them.

Why cross‑tenant learning matters for law firms

Cross‑tenant learning is especially relevant as generative AI and machine‑learning tools become embedded in e‑discovery platforms, contract review tools, legal research systems, and practice‑management software. Vendors may use aggregated and anonymized usage data to:

  • Improve relevance of search results and recommendations.

  • Enhance clause and issue spotting in contracts and briefs.

  • Reduce false positives in e‑discovery or compliance alerts.

  • Optimize workflows based on how similar firms use the product.

For lawyers, the value proposition is straightforward: your tools can become “smarter” faster, based on lessons learned across many organizations, not just your own firm’s experience. Done properly, cross‑tenant learning can raise the baseline quality and efficiency of technology available to your practice. ⚙️📈

ABA Model Rules: Confidentiality and Competence

Any discussion of cross‑tenant learning for law firms must start with confidentiality and competence.

  • Model Rule 1.6 (Confidentiality of Information) requires lawyers to safeguard information relating to the representation of a client. That obligation extends to how your vendors collect, store, and use your data. You must understand whether and how client data may be used for cross‑tenant learning and ensure that any such use preserves confidentiality through anonymization, aggregation, and strong technical and contractual controls. 🔐

  • Model Rule 1.1 (Competence), including Comment 8, emphasizes that lawyers should keep abreast of the benefits and risks associated with relevant technology. Understanding cross‑tenant learning is now part of that duty. You do not need to become a data scientist, but you should be comfortable asking vendors precise questions and recognizing red flags.

  • Model Rule 5.3 (Responsibilities Regarding Nonlawyer Assistance) applies when you rely on vendors as nonlawyer assistants. You must make reasonable efforts to ensure that their conduct is compatible with your professional obligations, including how they use your data for cross‑tenant learning. 🧾

Key questions to ask your vendors


When evaluating a product that relies on cross‑tenant learning, consider asking:

  1. What data is used?

    • Is it only metadata or usage logs, or are actual document contents included?

    • Is the data aggregated and anonymized before it is used to train shared models?

  2. How is confidentiality protected?

    • Can other tenants ever see prompts, documents, or client‑identifying information from our firm?

    • What technical measures (encryption, access controls, tenant isolation) are in place?

  3. Can cross‑tenant learning be limited or disabled?

    • Do we have opt‑out or configuration controls?

    • Is there a dedicated model or environment for our firm if needed?

  4. What do the contract and policies say?

    • Does the MSA or DPA clearly limit use of client data to defined purposes?

    • How long is data retained, and how is it deleted if we leave?

These questions are not merely IT concerns; they go directly to your obligations under the ABA Model Rules and your firm’s risk profile.

Practical examples in law practice

Consider a cloud‑based contract‑analysis platform used by hundreds of firms. Over time, the provider can see which clauses lawyers routinely flag as risky, which edits are typically made, and what becomes the “preferred” language for certain issues. Through cross‑tenant learning, the system can use that aggregated knowledge to highlight problematic clauses and suggest alternatives more accurately for everyone.

Another example is an e‑discovery platform that uses cross‑tenant learning to distinguish between truly relevant documents and common “noise” such as automatically generated emails. The more matters the system processes across different tenants, the better it gets at ranking documents and reducing review burdens. This can be a material efficiency gain for litigation teams. ⚖️💼

In both scenarios, your ethical comfort depends on whether underlying data is appropriately anonymized, compartmentalized, and contractually protected.

Governance steps for your firm

To align cross‑tenant learning with professional obligations, firms can:

  • Update vendor‑due‑diligence checklists to include explicit questions about cross‑tenant learning, training data use, and model isolation.

  • Involve a cross‑functional team—lawyers, IT, information security, and risk management—in vendor selection and review.

  • Document your analysis of vendor practices and how they satisfy confidentiality, competence, and supervision obligations under the ABA Model Rules.

  • Educate lawyers and staff about how AI‑enabled tools work, what kinds of data they send into the system, and how to avoid unnecessary exposure of client‑identifying details.

Takeaway for busy practitioners


You do not need to reject cross‑tenant learning to protect your clients. Instead, you should approach it as a powerful capability that demands informed oversight. When well‑implemented, cross‑tenant learning can help your firm deliver faster, more consistent, and more cost‑effective legal services, while still honoring confidentiality and ethical duties. When poorly explained or loosely governed, it becomes an unnecessary and avoidable risk.

Understanding how your tools learn—and from whom—is now part of competent, modern legal practice. ⚖️💡

📢 Your Tech-Savvy Lawyer Blogger and Podcaster, Michael D.J. Eisenberg, Announces His Upcoming Talk on Ethical AI Use in Legal Practice at the 2026 AI Legal Practice Summit!

Saturday, April 18, 2026 | Capital University Law School

As technology continues to transform legal practice, I’m honored to announce that I’ll be speaking at the 2026 AI Legal Practice Summit, hosted by my alma mater, Capital University Law School, in Columbus, Ohio. This event brings together attorneys, educators, and technologists to explore how artificial intelligence is reshaping the legal field — not just operationally, but ethically and professionally as well.

My presentation, “Smart Practice, Smarter Ethics: Navigating AI Tools Under the ABA Model Rules,” focuses on a topic that’s both timely and critically important: how lawyers can use emerging AI technologies responsibly while meeting their professional obligations under the ABA Model Rules of Professional Conduct.

👉 Learn more and view the full schedule at law-capital.libguides.com/2026_AI_Legal_Practice_Summit.
🎟️ Register today through Eventbrite: eventbrite.com/e/ai-legal-practice-summit-tickets-1986544900273.

Through my work on The Tech-Savvy Lawyer.Page blog and podcast, I’ve had countless conversations with practitioners who want to use AI to streamline tasks such as research, document drafting, and client management — yet remain uncertain about compliance, bias, and confidentiality. Law practice is evolving rapidly, but our ethical foundations must remain strong.

In my session, I’ll walk through key aspects of how the ABA Model Rules, including Rules 1.1 (Competence), 1.6 (Confidentiality of Information), and 5.3 (Responsibilities Regarding Nonlawyer Assistance), apply in an age of intelligent automation. These rules guide us in assessing not just what technology can do, but how and when it should be used.


We’ll discuss:

  • Reviewing the tech stack you already own;

  • How to vet and implement AI-powered tools while maintaining confidentiality;

  • Questions to ask vendors about data handling and bias;

  • How to document best practices for firm-wide ethical compliance;

  • Ways to blend human legal judgment with algorithmic assistance; and

  • Managing client expectations about AI-enabled legal work.

My goal is to help attorneys approach technology with confidence — to experiment, adopt, and adapt responsibly. Being a “tech‑savvy lawyer” isn’t about mastering every gadget or platform; it’s about understanding how technology fits within the ethical framework of our profession.

The conversation around technological competence has matured since Comment 8 to Rule 1.1 was introduced. It’s no longer optional. Attorneys must understand the benefits, risks, and limitations of relevant technology to provide competent representation. Artificial intelligence highlights that reality better than any emerging tool before it.

Whether you’re a solo practitioner looking to automate administrative tasks, working for a government agency, or part of a large firm implementing AI-assisted legal research or document review, I’ll share specific practices you can adopt immediately.

If you’re attending and seeking Ohio CLE credit, please contact Jenny Wondracek at jwondracek@law.capital.edu for details.

Program description of my presentation.

The 2026 AI Legal Practice Summit will feature leading scholars, ethics experts, and seasoned practitioners. I’m looking forward to exchanging ideas, testing assumptions, and continuing a dialogue that helps ensure AI becomes a responsible partner—never a replacement—in the practice of law.

Let’s move forward together, with competence, curiosity, and care.

Learn more about the Summit at law-capital.libguides.com/2026_AI_Legal_Practice_Summit.
Register today: eventbrite.com/e/ai-legal-practice-summit-tickets-1986544900273.

I look forward to seeing you there! ⚖️

📌 Too Busy to Read This Week’s Editorial: “Lawyers and AI Oversight: What the VA’s Patient Safety Warning Teaches About Ethical Law Firm Technology Use!” ⚖️🤖

Join us for an AI-powered deep dive into the ethical challenges facing legal professionals in the age of generative AI. 🤖 In this episode, we discuss our February 16, 2026, editorial, “Lawyers and AI Oversight: What the VA’s Patient Safety Warning Teaches About Ethical Law Firm Technology Use! ⚖️🤖” and explore why treating AI-generated drafts as hypotheses—not answers—is quickly becoming a survival skill for law firms of every size. We connect a real-world AI failure risk at the Department of Veterans Affairs to the everyday ways lawyers are using tools like chatbots, and we translate ABA Model Rules into practical oversight steps any practitioner can implement without becoming a programmer.

In our conversation, we cover the following:

  • 00:00:00 – Why conversations about the future of law default to Silicon Valley, and why that’s a problem ⚖️

  • 00:01:00 – How a crisis at the U.S. Department of Veterans Affairs became a “mirror” for the legal profession 🩺➡️⚖️

  • 00:03:00 – “Speed without governance”: what the VA Inspector General actually warned about, and why it matters to your practice

  • 00:04:00 – From patient safety risk to client safety and justice risk: the shared AI failure pattern in healthcare and law

  • 00:06:00 – Shadow AI in law firms: staff “just trying out” public chatbots on live matters and the unseen risk this creates

  • 00:07:00 – Why not tracking hallucinations, data leakage, or bias turns risk management into wishful thinking

  • 00:08:00 – Applying existing ABA Model Rules (1.1, 1.6, 5.1, 5.2, and 5.3) directly to AI use in legal practice

  • 00:09:00 – Competence in the age of AI: why “I’m not a tech person” is no longer a safe answer 🧠

  • 00:09:30 – Confidentiality and public chatbots: how you can silently lose privilege by pasting client data into a text box

  • 00:10:30 – Supervision duties: why partners cannot safely claim ignorance of how their teams use AI

  • 00:11:00 – Candor to tribunals: the real ethics problem behind AI-generated fake cases and citations

  • 00:12:00 – From slogan to system: why “meaningful human engagement” must be operationalized, not just admired 

  • 00:12:30 – The key mindset shift: treating AI-assisted drafts as hypotheses, not answers 🧪

  • 00:13:00 – What reasonable human oversight looks like in practice: citations, quotes, and legal conclusions under stress test

  • 00:14:00 – You don’t need to be a computer scientist: the essential due diligence questions every lawyer can ask about AI 

  • 00:15:00 – Risk mapping: distinguishing administrative AI use from “safety-critical” lawyering tasks

  • 00:16:00 – High-stakes matters (freedom, immigration, finances, benefits, licenses) and heightened AI safeguards

  • 00:16:45 – Practical guardrails: access controls, narrow scoping, and periodic quality audits for AI use

  • 00:17:00 – Why governance is not “just for BigLaw” and how solos can implement checklists and simple documentation 📋

  • 00:17:45 – Updating engagement letters and talking to clients about AI use in their matters

  • 00:18:00 – Redefining the “human touch” as the safety mechanism that makes AI ethically usable at all 🤝

  • 00:19:00 – AI as power tool: why lawyers must remain the “captain of the ship” even when AI drafts at lightning speed 🚢

  • 00:20:00 – Rethinking value: if AI creates the first draft, what exactly are clients paying lawyers for?

  • 00:20:30 – Are we ready to bill for judgment, oversight, and safety instead of pure production time?

  • 00:21:00 – Final takeaways: building a practice where human judgment still has the final word over AI


🎙️ Ep. #131, Supercharging Litigation With AI: How StrongSuit Helps Lawyers Transform Research, Doc Review, and Drafting 💼⚖️

My next guest is Justin McCallan, founder of StrongSuit, an AI-powered litigation platform built to transform how litigators handle legal research, document review, and drafting while keeping lawyers firmly in control. In this episode, Justin and I dig into practical, real-world workflows that solos, small firms, and big-firm litigators can use today and over the next few years to change the economics, pace, and strategy of litigation—without sacrificing accuracy, ethics, or the quality of advocacy.

Join Justin and me as we discuss the following three questions and more!

  1. What are the top three ways litigators should be using AI tools like StrongSuit right now to change the economics and pace of litigation without sacrificing accuracy, ethics, or quality of advocacy?

  2. What are the top three mistakes lawyers make when adopting AI for litigation, and what practical workflows help lawyers stay in the loop and use AI as a force multiplier instead of a risk? 

  3. Looking ahead to 2026 and beyond, what are the top three AI-driven workflows every litigator should master to stay competitive, and how can platforms like StrongSuit help build those capabilities into day-to-day practice? 

In our conversation, we cover the following:

  • 00:00 – Welcome and guest introduction

    • Justin joins the show and shares his current tech setup at his desk. 

  • 00:00–01:00 – Justin’s current tech stack

    • Lenovo laptop, ultra-wide monitor, and regular use of StrongSuit, ChatGPT, and Gemini for different AI tasks.

    • Everyday tools: Microsoft Word and Power BI for analytics and fast decision-making.

  • 01:00–02:00 – Android vs. iPhone for AI use

    • Why Justin has been on Android for 17 years and how UI/UX familiarity often drives device choice more than AI capability.

  • 02:00–05:30 – Q1: Top three ways litigators should be using AI right now

    • Using AI for end-to-end legal research across 11 million precedential U.S. cases to build litigation outlines and identify key authorities.

    • Scaling document review so AI surfaces relevant documents and synthesizes insights while lawyers focus on strategy and judgment.

    • Leveraging AI for drafting and editing—improving style, clarity, and consistency beyond traditional spelling and grammar checks.

  • 05:30–07:30 – StrongSuit vs. basic tools like Word grammar check

    • How StrongSuit aims to “up-level” a lawyer’s writing, not just catch typos.

    • Stylistic improvements, clarity enhancements, and catching subtle inconsistencies in legal documents.

  • 06:00–08:00 – AI context limits and scaling doc review

    • Constraints of large models’ context windows (roughly 1M tokens, or about 750 pages).

    • How StrongSuit runs multiple AI agents in parallel, each handling small page sets with heuristics to maintain cohesion and share insights.

  • 08:00–09:00 – Handling tens of thousands of documents

    • How StrongSuit can handle roughly 10,000 to 50,000 pages at a time, with the ability to scale further for enterprise matters.

  • 09:00–11:30 – Origin story of StrongSuit

    • Why Justin saw a once-in-a-generation opportunity when large language models emerged and how law, with its precedent and text-heavy nature, is especially suited to AI.

    • StrongSuit’s focus on litigators: supporting lawyers from intake through trial while keeping them in the loop at every step.

  • 11:30–13:30 – From intake to brief drafting in minutes

    • Generating full litigation outlines, research, and analysis in about ten minutes, then moving directly into drafting memos, briefs, complaints, and motions.

    • StrongSuit’s long-term goal: automating 50–99% of major litigation workflows by the end of 2026 while preserving lawyer control and judgment.

  • 12:00–14:30 – How StrongSuit tackles hallucinations

    • Building a full database of all precedential U.S. cases enriched with metadata: parties, summaries, holdings, and more.

    • Validating citations by checking whether the Bluebook citation actually exists in StrongSuit’s case database before surfacing it to the user.

    • Why lawyers should still review cases on-platform before filing, even when AI has filtered out hallucinations.

  • 14:30–16:30 – Coverage and jurisdictions

    • Coverage of all U.S. jurisdictions, federal and state, focused on precedential cases.

    • Handling most regulations from administrative agencies, and limits around local ordinances.

    • Uploading your own case files and using complaints and prior research as inputs into StrongSuit workflows.

  • 15:00–17:00 – Security and confidentiality for litigators

    • SOC 2 compliance and industry-standard encryption at rest and in transit.

    • No model training on user data.

    • Optional end-to-end encryption that can even prevent developers from accessing case content, using local encryption keys.

  • 16:30–20:30 – Q2: Top mistakes lawyers make when adopting AI for litigation

    • Mistake #1: Talking about AI instead of diving in with structured experiments and sanitized documents.

    • Using a framework to identify high-impact tasks: high volume, repetitive work, and heavy data/analysis (e.g., doc review, research, contract drafting).

    • How to shortlist tools: look for SOC 2, real product depth, awards, and a focus on your specific workflows.

    • Mistake #2: Expecting immediate mastery instead of moving through predictable adoption stages—from learning the tool, to daily use, to stringing workflows together.

  • 20:30–22:30 – Building firm-wide AI workflows over time

    • Moving from isolated experiments to integrated, low-friction workflows, such as automatic intake-to-research pipelines.

    • Using client intake audio or transcripts to automatically extract facts, issues, and research paths.

  • 22:30–24:30 – Time constraints and “no-time” lawyers

    • Why lawyers don’t need to be “technical” to use StrongSuit.

    • Reframing AI as text-based tools where lawyers’ writing skills and analytical thinking are assets, not obstacles. 

  • 24:00–26:00 – Practical workflows beyond intake

    • Using AI to prepare for expert depositions, including reviewing valuation analyses, flagging departures from market consensus, and generating targeted questions.

    • Reinforcing the value of AI-enhanced legal research and drafting as core litigation workflows.

  • 26:00–29:30 – Q3: 2026 and beyond – AI-driven workflows every litigator should master

    • Rapid improvement of baseline models (e.g., jumping from single-digit to high double-digit performance on difficult benchmarks year over year). 

    • The idea of “tipping points,” where small performance gains turn AI from marginally useful to essential in specific tasks.

    • Why legal research is a great training ground for understanding where AI excels, where it falls short, and how to divide labor between human and machine.

    • The value of learning basic prompting skills to get more from AI systems, even when platforms offer visual workflows.

  • 29:30–32:30 – Will workflows actually change—or just get better?

    • Why Justin expects familiar litigation workflows (doc review, research, drafting) to remain structurally similar, but become far faster and more sophisticated.

    • AI agents handling the grind work while lawyers focus on synthesis, judgment, and strategy.

    • A future where “AI + lawyer vs. AI + lawyer” resembles high-level chess: same rules, but much deeper thinking on both sides.

  • 32:30–End – Where to find Justin and StrongSuit

    • How to connect with Justin and learn more about StrongSuit’s litigation tools.


ANNOUNCEMENT: My Book, “The Lawyer’s Guide to Podcasting,” is Amazon #1 New Release (Law Office Technology)

I’m excited to report that The Lawyer’s Guide to Podcasting ranked #1 as a New Release in Amazon’s Law Office Technology category for the week of February 07, 2026, and sales have already doubled since last month. 🎙️📈

For lawyers with limited-to-moderate tech skills, the book focuses on practical, repeatable workflows for launching and sustaining a compliant podcast presence. ⚖️💡

As you plan content, remember ABA Model Rule 1.1 (technology competence) and the related duties of confidentiality (Rule 1.6) and communications about services (Rule 7.1): use secure tools, avoid accidental client disclosures, and ensure marketing statements are accurate. 🔐✅

Get your copy today! 📘🚀


Word of the Week: “Legal AI institutional memory” engages core ethics duties under the ABA Model Rules, so it is not optional “nice‑to‑know” tech. ⚖️🤖

Institutional Memory Meets the ABA Model Rules

“Legal AI institutional memory” is AI that remembers how your firm actually practices law, not just what generic precedent says. It captures negotiation history, clause choices, outcomes, and client preferences across matters so each new assignment starts from experience instead of a blank page.

From an ethics perspective, this capability sits directly in the path of ABA Model Rule 1.1 on competence, Rule 1.6 on confidentiality, and Rule 5.3 on responsibilities regarding nonlawyer assistance (which now includes AI systems). Comment 8 to Rule 1.1 stresses that competent representation requires understanding the “benefits and risks associated with relevant technology,” which squarely includes institutional‑memory AI in 2026. Using or rejecting this technology blindly can itself create risk if your peers are using it to deliver more thorough, consistent, and efficient work. 🧩

Rule 1.6 requires “reasonable efforts” to prevent unauthorized disclosure of, or access to, information relating to a representation. Because institutional memory centralizes past matters and sensitive patterns, it raises the stakes on vendor security, configuration, and firm governance. Rule 5.3 extends supervision duties to “nonlawyer assistance,” which ethics commentators and bar materials now interpret to include AI tools used in client work. In short, if your AI is doing work that would otherwise be done by a human assistant, you must supervise it as such. 🛡️

Why Institutional Memory Matters (Competence and Client Service)

Tools like Luminance and Harvey now market institutional‑memory features that retain negotiation patterns, drafting preferences, and matter‑level context across time. They promise faster contract cycles, fewer errors, and better use of a firm’s accumulated know‑how. Used wisely, that aligns with Rule 1.1’s requirement that you bring “thoroughness and preparation” reasonably necessary for the representation, and Comment 8’s directive to keep abreast of relevant technology.

At the same time, ethical competence does not mean turning judgment over to the model. It means understanding how the system makes recommendations, what data it relies on, and how to validate outputs against your playbooks and client instructions. Ethics guidance on generative AI emphasizes that lawyers must review AI‑generated work product, verify sources, and ensure that technology does not substitute for legal judgment. Legal AI institutional memory can enhance competence only if you treat it as an assistant you supervise, not an oracle you obey. ⚙️


How Legal AI Institutional Memory Works (and Where the Rules Bite)

Institutional‑memory platforms typically:

  • Ingest a corpus of contracts or matters.

  • Track negotiation moves, accepted fall‑backs, and outcomes over time.

  • Expose that knowledge through natural‑language queries and drafting suggestions.

That design engages several ethics touchpoints:

  • Rule 1.1 (Competence): You must understand at a basic level how the AI uses and stores client information, what its limitations are, and when it is appropriate to rely on its suggestions. This may require CLE, vendor training, or collaboration with more technical colleagues until you reach a reasonable level of comfort.

  • Rule 1.6 (Confidentiality): You must ensure that the vendor contract, configuration, and access controls provide “reasonable efforts” to protect confidentiality, including encryption, role‑based access, and breach‑notification obligations. Ethics guidance on cloud and AI use stresses the need to investigate provider security, retention practices, and rights to use or mine your data.

  • Rule 5.3 (Nonlawyer Assistance): Because AI tools are “non‑human assistance,” you must supervise their work as you would a contract review outsourcer, document vendor, or litigation support team. That includes selecting competent providers, giving appropriate instructions, and monitoring outputs for compliance with your ethical obligations. 🤖

Governance Checklist: Turning Ethics into Action

For lawyers with limited to moderate tech skills, it helps to translate the ABA Model Rules into a short adoption checklist. ✅

When evaluating or deploying legal AI institutional memory, consider:

  1. Define Scope (Rules 1.1 and 1.6): Start with a narrow use case such as NDAs or standard vendor contracts, and specify which documents the system may use to build its memory.

  2. Vet the Vendor (Rules 1.6 and 5.3): Ask about data segregation, encryption, access logs, regional hosting, subcontractors, and incident‑response processes; confirm clear contractual obligations to preserve confidentiality and notify you of incidents.

  3. Configure Access (Rules 1.6 and 5.3): Use role‑based permissions, client or matter scoping, and retention settings that match your existing information‑governance and legal‑hold policies.

  4. Supervise Outputs (Rules 1.1 and 5.3): Require that lawyers review AI suggestions, verify sources, and override recommendations where they conflict with client instructions or risk tolerance.

  5. Educate Your Team (Rule 1.1): Provide short trainings on how the system works, what it remembers, and how the Model Rules apply; document this as part of your technology‑competence efforts.


This approach meets the rising bar for technological competence while protecting client information and maintaining human oversight. ⚖️


🎙️Ep. 128, Building a Tech-Forward Law Firm: AI Intake, CRM Strategy & Client Experience with Colleen Joyce!

My next guest is Colleen Joyce, CEO of Lawyer.com, a leading legal marketplace that connects over one million consumers monthly with qualified attorneys nationwide. With nearly two decades of experience transforming how law firms leverage technology and marketing, Colleen has pioneered innovations including LawyerLine call intake services, AI-powered matching technology, and the Lawyer Growth Summit. She publishes the Fast Five newsletter every Tuesday, reaching over 20,000 legal professionals with insights on AI trends, business growth strategies, and practice management. In this episode, Colleen shares her expertise on the essential technologies modern law firms need to scale profitably, how AI is revolutionizing client intake processes, and the critical human touchpoints that should never be automated in legal practice.

💬 Join Colleen Joyce and me as we discuss the following three questions and more!

1. Beyond the essential lead generation that Lawyer.com provides, you see thousands of firms succeed and fail based on their operational efficiency. If you were building a modern law firm from scratch today, what are the top three non-negotiable technologies (for example, specific CRM automations, financial analytics, or project management tools) you would implement immediately to ensure the firm scales profitably rather than just chaotically?

2. We know AI is reshaping the top of the funnel for legal consumers. Based on the data you're seeing from your new AI initiatives, what are the top three specific intake bottlenecks that AI can now solve better than a human receptionist, allowing attorneys to focus primarily on high-value legal work rather than data entry or basic screening?

3. Technology can handle logistics, but it can't handle the emotion of a legal crisis. From your experience overseeing millions of consumer connections, what are the top three human touchpoints in the client lifecycle that a lawyer should never automate, because they are crucial for building the trust and transparency that lead to long-term referrals?

In our conversation, we cover the following:

-      00:00:00 - Welcome and Introduction to Colleen Joyce

-      00:00:20 - Colleen's Current Tech Setup: MacBook Pro, iPhone 16, iPad, and Curved Monitor

-      00:01:00 - Discussion about iPhone Models and AppleCare Benefits

-      00:02:00 - Using Plaud AI for Recording Conversations

-      00:03:00 - MacBook Pro Specifications and Upgrade Recommendations

-      00:04:00 - Dell Curved Monitor Benefits for Focus and Productivity

-      00:05:00 - Question 1: Top Three Non-Negotiable Technologies for Modern Law Firms

-      00:06:00 - Intake Technology, CRM, and Practice Management Systems

-      00:07:00 - Balancing Cost and Technology for New Lawyers

-      00:08:00 - Leveraging Freemium Tools and AI for Budget-Conscious Firms

-      00:08:30 - Question 2: AI Solutions for Intake Bottlenecks

-      00:09:00 - Answering Phones with Empathetic AI Agents

-      00:10:00 - Importance of Legal-Specific AI Training

-      00:11:00 - Consumer Adoption and Resistance to AI vs. Human Agents

-      00:12:00 - Using Virtual Receptionists and Calendly for Scheduling

-      00:13:00 - Generational Differences in Technology Adoption

-      00:14:00 - The Evolution of Legal Technology Adoption Over 14 Years

-      00:15:00 - Question 3: Human Touchpoints That Should Never Be Automated

-      00:16:00 - Relationship Building and the Courting Period

-      00:17:00 - Screening Clients Through Your Tech Processes

-      00:18:00 - Where to Find Colleen: LinkedIn and the Fast Five Newsletter

-      00:18:30 - Closing Remarks and Gratitude

---

📚 Resources

🤝 Connect with Colleen Joyce

•  LinkedIn: https://www.linkedin.com/in/colleenjoyce

•  Lawyer.com: https://www.lawyer.com

•  Lawyer.com Services: https://services.lawyer.com

•  Fast Five Newsletter (Published Tuesdays): https://www.linkedin.com/newsletters/fast-five-fridays-7265815097552326656

•  Lawyer Growth Summit: https://lawyergrowthsummit.com

•  Lawyer.com Phone: 800-620-0900

•  Lawyer.com Address: 25 Mountainview Boulevard, Basking Ridge, NJ 07920

📖 Mentioned in the Episode

•  MacRumors Buyer's Guide: https://buyersguide.macrumors.com

•  LawyerLine (24-hour Intake Services): https://www.lawyerline.ai/

🖥 Hardware Mentioned in the Conversation

•  MacBook Pro: https://www.apple.com/macbook-pro/

•  MacBook Pro with M4/M5 Chips (Upgrade recommendation): https://www.apple.com/macbook-pro/

•  iPhone 16: https://www.apple.com/iphone-16/

•  iPad: https://www.apple.com/ipad/

•  Dell Curved Monitor (22-24 inch, white): https://www.dell.com/monitors

•  HP Printer (with automatic duplex printing): https://www.hp.com/printers

☁ Software & Cloud Services Mentioned in the Conversation

•  Plaud AI (Call Recording & Transcription): https://www.plaud.ai

•  Slack (Team Communication Platform): https://slack.com

•  iMessage (Apple Messaging): https://support.apple.com/en-us/104969

•  Calendly (Scheduling Software): https://calendly.com

•  Monday.com (Project Management & Team Organization): https://monday.com

•  ChatGPT (AI Assistant): https://openai.com/chatgpt

•  AppleCare (Apple Device Protection): https://www.apple.com/support/applecare/

MTC (Bonus): National Court Technology Rules: Finding Balance Between Guidance and Flexibility ⚖️

Standardizing Tech Guidelines in the Legal System

Lawyers and their staff need to know the standard and local rules of AI use in the courtroom; their license could depend on it.

The legal profession stands at a critical juncture where technological capability has far outpaced judicial guidance. Nicole Black's recent commentary on the fragmented approach to technology regulation in our courts identifies a genuine problem—one that demands serious consideration from both proponents of modernization and cautious skeptics alike.

The core tension is understandable. Courts face legitimate concerns about technology misuse. The LinkedIn juror research incident in Judge Orrick's courtroom illustrates real risks: a consultant unknowingly violated a standing order, resulting in a $10,000 sanction despite the attorney's good-faith disclosure and remedial efforts. These aren't theoretical concerns—they reflect actual ethical boundaries that protect litigants and preserve judicial integrity. Yet the response to these concerns has created its own problems.

The current patchwork system places practicing attorneys in an impossible position. A lawyer handling cases across multiple federal districts cannot reasonably track the varying restrictions on artificial intelligence disclosure, social media evidence protocols, and digital research methodologies. When the safe harbor is simply avoiding technology altogether, the profession loses genuine opportunities to enhance accuracy and efficiency. Generative AI's citation hallucinations justify judicial scrutiny, but the ad hoc response by individual judges—ranging from simple guidance to outright bans—creates unpredictability that chills responsible innovation.

Should There Be a National Standard for AI Use in the Courtroom?

There are legitimate reasons to resist uniform national rules. Local courts understand their communities and case management needs better than distant regulatory bodies. A one-size-fits-all approach might impose burdensome requirements on rural jurisdictions with fewer tech-savvy practitioners. Furthermore, rapid technological evolution could render national rules obsolete within months, whereas individual judges retain flexibility to respond quickly to emerging problems.

Conversely, the current decentralized approach creates serious friction. The 2006 amendments to Federal Rules of Civil Procedure for electronically stored information succeeded partly because they established predictability across jurisdictions. Lawyers knew what preservation obligations applied regardless of venue. That uniformity enabled the profession to invest in training, software, and processes. Today's lawyers lack that certainty. Practitioners must maintain contact lists tracking individual judge orders, and smaller firms simply cannot sustain this administrative burden.

The answer likely lies between extremes. Rather than comprehensive national legislation, the profession would benefit from model standards developed collaboratively by the Federal Judicial Conference, state supreme courts, and bar associations. These guidelines could allow reasonable judicial discretion while establishing baseline expectations—defining when AI disclosure is mandatory, clarifying which social media research constitutes impermissible contact, and specifying preservation protocols that protect evidence without paralyzing litigation.

Such an approach acknowledges both legitimate judicial concerns and legitimate professional needs. It recognizes that judges require authority to protect courtroom procedures while recognizing that lawyers require predictability to serve clients effectively.

I basically agree with Nicole: The question is not whether courts should govern technology use. They must. The question is whether they govern wisely—with sufficient uniformity to enable compliance, sufficient flexibility to address local concerns, and sufficient clarity to encourage rather than discourage responsible innovation.

TSL Labs 🧪Bonus: 🎙️ From Cyber Compliance to Cyber Dominance: What VA's AI Revolution Means for Government Cybersecurity, Legal Ethics, and ABA Model Rule Compliance!

In this TSL Labs bonus episode, we examine this week’s editorial on how the Department of Veterans Affairs is leading a historic transformation from traditional compliance frameworks to a dynamic, AI-driven approach called "cyber dominance." This conversation unpacks what this seismic shift means for legal professionals across all practice areas—from procurement and contract law to privacy, FOIA, and litigation. Whether you're advising government agencies, representing contractors, or handling cases where data security matters, this discussion provides essential insights into how continuous monitoring, zero trust architecture, and AI-driven threat detection are redefining professional competence under ABA Model Rule 1.1. 💻⚖️🤖

Join our AI hosts and me as we discuss the following three questions and more!

  1. How has federal cybersecurity evolved from the compliance era to the cyber dominance paradigm? 🔒

  2. What are the three technical pillars—continuous monitoring, zero trust architecture, and AI-driven detection—and how do they interconnect? 🛡️

  3. What professional liability and ethical obligations do lawyers now face under ABA Model Rule 1.1 regarding technology competence? ⚖️

In our conversation, we cover the following:

  • [00:00:00] - Introduction: TSL Labs Bonus Podcast on VA's AI Revolution 🎯

  • [00:01:00] - Introduction to Federal Cybersecurity: The End of the Compliance Era 📋

  • [00:02:00] - Legal Implications and Professional Liability Under ABA Model Rules ⚖️

  • [00:03:00] - From Compliance to Continuous Monitoring: Understanding the Static Security Model 🔄

  • [00:04:00] - The False Comfort of Compliance-Only Approaches 🚨

  • [00:05:00] - The Shift to Cyber Dominance: Three Integrated Technical Pillars 💪

  • [00:06:00] - Zero Trust Architecture (ZTA) Explained: Verify Everything, Trust Nothing 🔐

  • [00:07:00] - AI-Driven Detection and Legal Challenges: Professional Competence Under Model Rule 1.1 🤖

  • [00:08:00] - The New Legal Questions: Real-Time Risk vs. Static Compliance 📊

  • [00:09:00] - Evolving Compliance: From Paper Checks to Dynamic Evidence 📈

  • [00:10:00] - Cybersecurity as Operational Discipline: DevSecOps and Security by Design 🔧

  • [00:11:00] - Litigation Risks: Discovery, Red Teaming, and Continuous Monitoring Data ⚠️

  • [00:12:00] - Cyber Governance with AI: Algorithmic Bias and Explainability 🧠

  • [00:13:00] - Synthesis and Future Outlook: Law Must Lead, Not Chase Technology 🚀

  • [00:14:00] - The Ultimate Question: Is Your Advice Ready for Real-Time Risk Management? 💡

  • [00:15:00] - Conclusion and Resources 📚

Resources

Mentioned in the Episode

Software & Cloud Services Mentioned in the Conversation

  • AI-Driven Detection Systems - Automated threat detection and response platforms

  • Automated Compliance Platforms - Dynamic evidence generation systems

  • Continuous Monitoring Systems - Real-time security assessment platforms

  • DevSecOps Tools - Automated security testing in software development pipelines

  • Firewalls - Network security hardware devices

  • Google Notebook AI - https://notebooklm.google.com/

  • Penetration Testing Software - Security vulnerability assessment tools

  • Zero Trust Architecture (ZTA) Solutions - Identity and access verification systems

🎙️Ep. 126: AI and Access to Justice With Pearl.com Associate General Counsel Nick Tiger

Our next guest is Nick Tiger, Associate General Counsel at Pearl.com, who shares insights on integrating AI into legal practice. Pearl.com champions the combination of AI and human expertise for professional services. He outlines practical uses such as market research, content creation, intake automation, and improved billing efficiency, while stressing the need to avoid liability through robust human oversight.

Nick is a legal leader at Pearl.com, partnering on product design, technology, and consumer-protection compliance strategy. He previously served as Head of Product Legal at EarnIn, an earned-wage access pioneer, building practical guidance for responsible feature launches, and as Senior Counsel at Capital One, supporting consumer products and regulatory matters. Nick holds a J.D. from the University of Missouri–Kansas City, lives in Richmond, Virginia, and is especially interested in using technology to expand rural community access to justice.

During the conversation, Nick highlights emerging tools, such as conversation wizards and expert-matching systems, that enhance communication and case preparation. He also explains Pearl AI's unique model, which blends chatbot capabilities with human expert verification to ensure accuracy in high-stakes or subjective matters.

Nick encourages lawyers to adopt human-in-the-loop protocols and consider joining Pearl's expert network to support accessible, reliable legal services.

Join Nick and me as we discuss the following three questions and more!

  1. What are the top three most impactful ways lawyers can immediately implement AI technology in their practices while avoiding the liability pitfalls that have led to sanctions in recent high-profile cases?

  2. Beyond legal research and document review, what are the top three underutilized or emerging AI applications that could transform how lawyers deliver value to clients, and how should firms evaluate which technologies to adopt?

3.     What are the top three criteria Pearl uses to determine when human expert verification is essential versus when AI alone is sufficient? How can lawyers apply this framework to develop their own human-in-the-loop protocols for AI-assisted legal work, and how is Pearl different from its competitors?

In our conversation, we cover the following:

[00:56] Nick's Tech Setup

[07:28] Implementing AI in Legal Practices

[17:07] Emerging AI Applications in Legal Services

[26:06] Pearl AI's Unique Approach to AI and Legal Services

[31:42] Developing Human-in-the-Loop Protocols

[34:34] Pearl AI's Advantages Over Competitors

[36:33] Becoming an Expert on Pearl AI

Resources:

Connect with Nick:

Nick's LinkedIn: linkedin.com/in/nicktigerjd

Pearl.com Website: pearl.com

Pearl.com Expert Application Portal: era.justanswer.com/

Pearl.com LinkedIn: linkedin.com/company/pearl-com

Pearl.com X: x.com/Pearldotcom

ABA Resources:

ABA Formal Opinion 512: https://www.americanbar.org/content/dam/aba/administrative/professional_responsibility/ethics-opinions/aba-formal-opinion-512.pdf

Hardware mentioned in the conversation:

Anker Backup Battery / Power Bank: anker.com/collections/power-banks

Software & Cloud Services mentioned in the conversation: