MTC: Even Though AI Hallucinations Are Down, Lawyers STILL MUST Verify AI, Guard PII, and Follow ABA Ethics Rules ⚖️🤖

A Tech-Savvy Lawyer MUST REVIEW AI-Generated Legal Documents

AI hallucinations are reportedly down across many domains. Still, previous podcast guest Dorna Moini is right to warn that legal remains the unnerving exception—and that is where our professional duties truly begin, not end. Her article, “AI hallucinations are down 96%. Legal is the exception,” helpfully shifts the conversation from “AI is bad at law” to “lawyers must change how they use AI,” yet from the perspective of ethics and risk management, we need to push her three recommendations much further. This is not only a product‑design problem; it is a competence, confidentiality, and candor problem under the ABA Model Rules. ⚖️🤖

Her first point—“give AI your actual documents”—is directionally sound. When we anchor AI in contracts, playbooks, and internal standards, we move from free‑floating prediction to something closer to reading comprehension, and hallucinations usually fall. That is a genuine improvement, and Moini is right to emphasize it. But as soon as we start uploading real matter files, we are squarely inside Model Rule 1.6 territory: confidential information, privileged communications, trade secrets, and dense pockets of personally identifiable information. The article treats document‑grounding primarily as an accuracy-and-reliability upgrade, but lawyers and the legal profession must insist that it is first and foremost a data‑governance decision.

Before a single contract is uploaded, a lawyer must know where that data is stored, who can access it, how long it is retained, whether it is used to train shared models, and whether any cross‑border transfers could complicate privilege or regulatory compliance. That analysis should involve not just IT, but also risk management and, in many cases, outside vendors. “Give AI your actual documents” is safe only if your chosen platform offers strict access controls, clear no‑training guarantees, encryption in transit and at rest, and, ideally, firm‑controlled or on‑premise storage. Otherwise, you may be trading a marginal reduction in hallucinations for a major confidentiality incident or regulatory investigation. In other words, feeding AI your documents can be a smart move, but only after you read the terms, negotiate the data protection, and strip or tokenize unnecessary PII. 🔐

Lawyers need to monitor the AI data security and PII compliance policies of the AI platforms they use in their legal work.

Moini’s second point—“know which tasks your tool handles reliably”—is also excellent as far as it goes. Document‑grounded summarization, clause extraction, and playbook‑based redlines are indeed safer than open‑ended legal research, and she correctly notes that open‑ended research still demands heavy human verification. Reliability, however, cannot be left to vendor assurances, product marketing, or a single eye‑opening demo. For purposes of Model Rule 1.1 (competence) and 1.3 (diligence), the relevant question is not “Does this tool look impressive?” but “Have we independently tested it, in our own environment, on tasks that reflect our real matters?”

The corollary is that reliability has to be measured, not assumed. Firms should sandbox these tools on closed matters, compare AI outputs with known correct answers, and have experienced lawyers systematically review where the system fails. Certain categories of work—final cites in court filings, complex choice‑of‑law questions, nuanced procedural traps—should remain categorically off‑limits to unsupervised AI, because a hallucinated case there is not just an internal mistake; it can rise to misrepresentation to the court under Model Rule 3.3. Knowing what your tool does well is only half of the equation; you must also draw bright, documented lines around what it may never do without human review. 🧪

Her third point—“build verification into the workflow”—is where the article most clearly aligns with emerging ethics guidance from courts and bars, and it deserves strong validation. Judges are already sanctioning lawyers who submit AI‑fabricated authorities, and bar regulators are openly signaling that “the AI did it” will not excuse a lack of diligence. Verification, though, cannot remain an informal suggestion reserved for conscientious partners. It has to become a systematic, auditable process that satisfies the supervisory expectations in Model Rules 5.1 and 5.3.

That means written policies, checklists, training sessions, and oversight. Associates and staff should receive simple, non‑negotiable rules:

✅ Every citation generated with AI must be independently confirmed in a trusted legal research system;

✅ Every quoted passage must be checked against the original source; 

✅ Every factual assertion must be tied back to the record.

Supervising attorneys must periodically spot‑check AI‑assisted work for compliance with those rules. Moini is right that verification matters; the editorial extension is that verification must be embedded into the culture and procedures of the firm. It should be as routine as a conflict check.

Stepping back from her three‑point framework, the broader thesis—that legal hallucinations can be tamed by better tooling and smarter usage—is persuasive, but incomplete. Even as hallucination rates fall, our exposure is rising because more lawyers are quietly experimenting with AI on live matters. Model Rule 1.4 on communication reminds us that, in some contexts, clients may be entitled to know when significant aspects of their work product are generated or heavily assisted by AI, especially when it impacts cost, speed, or risk. Model Rule 1.2 on scope of representation looms in the background as we redesign workflows: shifting routine drafting to machines does not narrow the lawyer’s ultimate responsibility for the outcome.

Attorneys must verify AI-generated case law.

For practitioners with limited to moderate technology skills, the practical takeaway should be both empowering and sobering. Moini’s article offers a pragmatic starting structure—ground AI in your documents, match tasks to tools, and verify diligently. But you must layer ABA‑informed safeguards on top: treat every AI term of service as a potential ethics document; never drop client names, medical histories, addresses, Social Security numbers, or other PII into systems whose data‑handling you do not fully understand; and assume that regulators may someday scrutinize how your firm uses AI. Every AI‑assisted output must be reviewed line by line.

Legal AI is no longer optional; neither are ethics and PII protection. The right stance is both appreciative and skeptical: appreciative of Moini’s clear, practitioner‑friendly guidance, and skeptical enough to insist that we overlay her three points with robust, documented safeguards rooted in the ABA Model Rules. Use AI, ground it in your documents, and choose tasks wisely—but do so as a lawyer first and a technologist second. Above all, review your work, stay relentlessly wary of the terms that govern your tools, and treat PII and client confidences as if a bar investigator were reading over your shoulder. In this era, one might be. ⚖️🤖🔐

MTC

TSL Labs 🧪 Initiative: Attorney-Client Privilege vs. Public AI: The Hoeppner Decision Lawyers Need to Understand in 2026 ⚖️🤖

Join us for an AI-powered deep dive into the ethical challenges facing legal professionals in the age of generative AI. 🤖 We unpack the February 23, 2026, editorial “AI may not be your co‑counsel—and a recent SDNY decision just made that painfully clear.” ⚖️🤖 Our Google Notebook LLM hosts break down why a single click on a public AI tool’s Terms of Use can trigger a privilege waiver, and what “tech competence” really means in 2026—especially after United States v. Hoeppner and Judge Jed Rakoff’s wake-up-call analysis of confidentiality and third-party disclosure risk.

🔗 Read the full editorial on The Tech-Savvy Lawyer.Page and share this episode with a colleague who is experimenting with AI in client matters.

In our conversation, we cover the following:

  • 00:00 — The “superhuman assistant” promise, and the procedural nightmare risk. 🧠⚖️

  • 00:01 — The core warning: AI use can “blow a hole” in privilege.

  • 00:02 — Editorial overview: “The AI Privilege Trap” by Michael D.J. Eisenberg.

  • 00:02 — The case: United States v. Hoeppner (SDNY) and why it matters.

  • 00:03 — Why Judge Jed Rakoff’s opinion gets attention (tech-literate, influential).

  • 00:03 — The facts: defendant drafts with a public AI tool, then sends outputs to counsel.

  • 00:04 — The court’s conclusion: no attorney-client privilege, no work product protection.

  • 00:05 — Privilege basics applied to AI: “confidential + lawyer” and why AI fails that test.

  • 00:06 — The Terms-of-Use problem: inputs/outputs may be collected and shared. 🧾

  • 00:07 — The “stranger on the street” analogy: you can’t retroactively make it confidential.

  • 00:08 — PII and client facts: why pasting sensitive data into public AI is high-risk.

  • 00:08 — ABA Model Rule 1.1: competence includes understanding tech risks.

  • 00:09 — ABA Model Rule 1.6: confidentiality and waiver risk with public AI.

  • 00:10 — “Reasonable safeguards”: read policies, adjust settings, and know training/logging.

  • 00:11 — Public vs. enterprise AI: why contracts and “walled gardens” matter.

  • 00:11 — Legal research AI examples discussed: Lexis/Westlaw-style AI offerings.

  • 00:12 — ABA Model Rules 5.1 & 5.3: supervise AI like a nonlawyer assistant/vendor.

  • 00:13 — Redefining “tech-savvy lawyer” in 2026: judgment and restraint. 🧭

  • 00:14 — The “straight-face test”: could you defend confidentiality after a judge reads the policy?

  • 00:15 — Client-side risk: clients can sabotage privilege before contacting counsel.

  • 00:16 — Practical takeaway: check settings, read the fine print, keep true secrets offline (for now). 🔒

RESOURCES

Mentioned in the episode

Software & Cloud Services mentioned in the conversation

Word 📖 of the Week: Why Lawyers Need to Know the Term “Constitutional AI”

“Constitutional AI” is a design framework for artificial intelligence that aims to make AI systems helpful, harmless, and honest by training them to follow a defined set of higher‑level rules, much like a constitution. 🤖📜 For lawyers, this is not abstract theory; it connects directly to duties of technological competence, confidentiality, and supervision under the ABA Model Rules.

Most legal professionals now rely on AI‑enabled tools in research, drafting, e‑discovery, document automation, and client communication. These tools may use generative AI in the background even when the marketing materials do not emphasize “AI.” Constitutional AI gives you a practical way to evaluate those tools: are they structured to avoid hallucinations, protect confidential data, and resist being prompted into unethical behavior?

At a high level, a Constitutional AI system is trained to follow explicit principles, such as “do not fabricate legal citations,” “do not disclose confidential information,” and “do not assist in unlawful conduct.” The model learns to critique and revise its own outputs against those principles. For law firms, that aligns with the core expectations in ABA Model Rule 1.1 (competence) and its Comment 8, which require lawyers to understand the benefits and risks of relevant technology and stay current with changes in how these systems work. ⚖️

Constitutional AI also intersects with ABA Model Rule 1.6 on confidentiality. If an AI tool is not designed with strong guardrails, prompts and outputs can expose sensitive client information to external systems or vendors. When you evaluate an AI platform, you should ask where data is stored, how prompts are logged, whether training data will include your matters, and whether the provider has implemented “constitutional” safeguards against data leakage and unsafe uses.

Supervision is another critical angle. ABA Formal Opinion 512 and Model Rules 5.1 and 5.3 stress that supervising lawyers must set policies and training for how attorneys and staff use generative AI. Constitutional AI can reduce risk, yet it does not replace supervisory duties. You still must review AI‑generated work product, confirm citations, validate factual assertions, and ensure the output is consistent with Rules 3.1, 3.3, and 8.4(c) on meritorious claims, candor to the tribunal, and avoiding dishonesty or misrepresentation.

For practitioners with limited to moderate tech skills, the key is to treat Constitutional AI as a practical checklist rather than a buzzword. ✅ Ask three questions about any AI tool you use:

  1. Is this AI actually helpful to the client’s matter, or is it just saving time while adding risk?

  2. Could this output harm the client through inaccuracy, bias, or disclosure of confidential data?

  3. Is the AI acting honestly, meaning it is not hallucinating cases or claiming certainty where none exists?

If the answer to the first or third question is “no,” or to the second is “yes,” you must pause, verify, and revise before relying on the AI output.

In the AI era, your ethical risk often turns on how you select, supervise, and document the use of AI in your practice. Constitutional AI will not make you bulletproof, but it gives you a structured way to align your technology choices with ABA Model Rules while protecting your clients, your license, and your reputation. 

MTC: Clio–Alexi Legal Tech Fight: What CRM Vendor Litigation Means for Your Law Firm, Client Data and ABA Model Rule Compliance ⚖️💻

Competence, Confidentiality, Vendor Oversight!

When the companies behind your CRM and AI research tools start suing each other, the dispute is not just “tech industry drama” — it can reshape the practical and ethical foundations of your practice. At a basic to moderate level, the Clio–Alexi fight is about who controls valuable legal data, how that data can be used to power AI tools, and whether one side is using its market position unfairly. Clio (a major practice‑management and CRM platform) is tied to legal research tools and large legal databases. Alexi is a newer AI‑driven research company that depends on access to caselaw and related materials to train and deliver its products. In broad strokes, one side claims the other misused or improperly accessed data and technology; the other responds that the litigation is “sham” or anticompetitive, designed to limit a smaller rival and protect a dominant ecosystem. There are allegations around trade secrets, data licensing, and antitrust‑style behavior. None of that may sound like your problem — until you remember that your client data, workflows, and deadlines live inside tools these companies own, operate, or integrate with.

For lawyers with limited to moderate technology skills, you do not need to decode every technical claim in the complaints and counterclaims. You do, however, need to recognize that vendor instability, lawsuits, and potential regulatory scrutiny can directly touch: your access to client files and calendars, the confidentiality of matter information stored in the cloud, and the long‑term reliability of the systems you use to serve clients and get paid. Once you see the dispute in those terms, it becomes squarely an ethics, risk‑management, and governance issue — not just “IT.”

ABA Model Rule 1.1: Competence Now Includes Tech and Vendor Risk

Model Rule 1.1 requires “competent representation,” which includes the legal knowledge, skill, thoroughness, and preparation reasonably necessary for the representation. In the modern practice environment, that has been interpreted to include technology competence. That does not mean you must be a programmer. It does mean you must understand, in a practical way, the tools on which your work depends and the risks they bring.

If your primary CRM, practice‑management system, or AI research tool is operated by a company in serious litigation about data, licensing, or competition, that is a material fact about your environment. Competence today includes: knowing which mission‑critical workflows rely on that vendor (intake, docketing, conflicts, billing, research, etc.); having at least a baseline sense of how vendor instability could disrupt those workflows; and building and documenting a plan for continuity — how you would move or access data if the worst‑case scenario occurred (for example, a sudden outage, injunction, or acquisition). Failing to consider these issues can undercut the “thoroughness and preparation” the Rule expects. Even if your firm is small or mid‑sized, and even if you feel “non‑technical,” you are still expected to think through these risks at a reasonable level.

ABA Model Rule 1.6: Confidentiality in a Litigation Spotlight

Model Rule 1.6 is often front of mind when lawyers think about cloud tools, and the Clio–Alexi dispute reinforces why. When a technology company is sued, its systems may become part of discovery. That raises questions like: what types of client‑related information (names, contact details, matter descriptions, notes, uploaded files) reside on those systems; under what circumstances that information could be accessed, even in redacted or aggregate form, by litigants, experts, or regulators; and how quickly and completely you can remove or export client data if a risk materializes.

You remain the steward of client confidentiality, even when data is stored with a third‑party provider. A reasonable, non‑technical but diligent approach includes: understanding where your data is hosted (jurisdictions, major sub‑processors, data‑center regions); reviewing your contracts or terms of service for clauses about data access, subpoenas, law‑enforcement or regulatory requests, and notice to you; and ensuring you have clearly defined data‑export rights — not only if you voluntarily leave, but also if the vendor is sold, enjoined, or materially disrupted by litigation. You are not expected to eliminate all risk, but you are expected to show that you considered how vendor disputes intersect with your duty to protect confidential information.

ABA Model Rule 5.3: Treat Vendors as Supervised Non‑Lawyer Assistants

ABA Rules for Modern Legal Technology can be a factor when legal tech companies fight!

Model Rule 5.3 requires lawyers to make reasonable efforts to ensure that non‑lawyer assistants’ conduct is compatible with professional obligations. In 2026, core technology vendors — CRMs, AI research platforms, document‑automation tools — clearly fall into this category.

You are not supervising individual programmers, but you are responsible for: performing documented diligence before adopting a vendor (security posture, uptime, reputation, regulatory or litigation history); monitoring for material changes (lawsuits like the Clio–Alexi matter, mergers, new data‑sharing practices, or major product shifts); and reassessing risk when those changes occur and adjusting your tech stack or contracts accordingly. A litigation event is a signal that “facts have changed.” Reasonable supervision in that moment might mean: having someone (inside counsel, managing partner, or a trusted advisor) read high‑level summaries of the dispute; asking the vendor for an explanation of how the litigation affects uptime, data security, and long‑term support; and considering whether you need contractual amendments, additional audit rights, or a backup plan with another provider. Again, the standard is not perfection, but reasoned, documented effort.

How the Clio–Alexi Battle Can Create Problems for Users

A dispute at this scale can create practical, near‑term friction for everyday users, quite apart from any final judgment. Even if the platforms remain online, lawyers may see more frequent product changes, tightened integrations, shifting data‑sharing terms, or revised pricing structures as companies adjust to litigation costs and strategy. Any of these changes can disrupt familiar workflows, create confusion around where data actually lives, or complicate internal training and procedures.

There is also the possibility of more subtle instability. For example, if a product roadmap slows down or pivots under legal pressure, features that firms were counting on — for automation, AI‑assisted drafting, or analytics — may be delayed or re‑scoped. That can leave firms who invested heavily in a particular tool scrambling to fill functionality gaps with manual workarounds or additional software. None of this automatically violates any rule, but it can introduce operational risk that lawyers must understand and manage.

In edge cases, such as a court order that forces a vendor to disable key features on short notice or a rapid sale of part of the business, intense litigation can even raise questions about long‑term continuity. A company might divest a product line, change licensing models, or settle on terms that affect how data can be stored, accessed, or used for AI. Firms could then face tight timelines to accept new terms, migrate data, or re‑evaluate how integrated AI features operate on client materials. Without offering any legal advice about what an individual firm should do, it is fair to say that paying attention early — before options narrow — is usually more comfortable than reacting after a sudden announcement or deadline.

Practical Steps for Firms at a Basic–Moderate Tech Level

You do not need a CIO to respond intelligently. For most firms, a short, structured exercise will go a long way:

Practical Tech Steps for Today’s Law Firms

  1. Inventory your dependencies. List your core systems (CRM/practice management, document management, time and billing, conflicts, research/AI tools) and note which vendors are in high‑profile disputes or under regulatory or antitrust scrutiny.

  2. Review contracts for safety valves. Look for data‑export provisions, notice obligations if the vendor faces litigation affecting your data, incident‑response timelines, and business‑continuity commitments; capture current online terms.

  3. Map a contingency plan. Decide how you would export and migrate data if compelled by ethics, client demand, or operational need, and identify at least one alternative provider in each critical category.

  4. Document your diligence. Prepare a brief internal memo or checklist summarizing what you reviewed, what you concluded, and what you will monitor, so you can later show your decisions were thoughtful.

  5. Communicate without alarming. Most clients care about continuity and confidentiality, not vendor‑litigation details; you can honestly say you monitor providers, have export and backup options, and have assessed the impact of current disputes.

From “IT Problem” to Core Professional Skill

The Clio–Alexi litigation is a prominent reminder that law practice now runs on contested digital infrastructure. The real message for working lawyers is not to flee from technology but to fold vendor risk into ordinary professional judgment. If you understand, at a basic to moderate level, what the dispute is about — data, AI training, licensing, and competition — and you take concrete steps to evaluate contracts, plan for continuity, and protect confidentiality, you are already practicing technology competence in a way the ABA Model Rules contemplate. You do not have to be an engineer to be a careful, ethics‑focused consumer of legal tech. By treating CRM and AI providers as supervised non‑lawyer assistants, rather than invisible utilities, you position your firm to navigate future lawsuits, acquisitions, and regulatory storms with far less disruption. That is good risk management, sound ethics, and, increasingly, a core element of competent lawyering in the digital era. 💼⚖️

Word of the Week: "Constitutional AI" for Lawyers - What It Is, Why It Matters for ABA Rules, and How Solo & Small Firms Should Use It!

Constitutional AI’s ‘helpful, harmless, honest’ standard is a solid starting point for lawyers evaluating AI platforms.

The term “Constitutional AI” appeared this week in a Tech Savvy Lawyer post about the MTC/PornHub breach as a cybersecurity wake‑up call for lawyers 🚨. That article used it to highlight how AI systems (like those law firms now rely on) must be built and governed by clear, ethical rules — much like a constitution — to protect client data and uphold professional duties. This week’s Word of the Week unpacks what Constitutional AI really means and explains why it matters deeply for solo, small, and mid‑size law firms.

🔍 What is Constitutional AI?

Constitutional AI is a method for training large language models so they follow a written set of high‑level principles, called a “constitution” 📜. Those principles are designed to make the AI helpful, honest, and harmless in its responses.

As Claude AI from Anthropic explains:
“Constitutional AI refers to a set of techniques developed by researchers at Anthropic to align AI systems like myself with human values and make us helpful, harmless, and honest. The key ideas behind Constitutional AI are aligning an AI’s behavior with a ‘constitution’ defined by human principles, using techniques like self‑supervision and adversarial training, developing constrained optimization techniques, and designing training data and model architecture to encode beneficial behaviors.” — Claude AI, Anthropic (July 7, 2023).

In practice, Constitutional AI uses the model itself to critique and revise its own outputs against that constitution. For example, the model might be told: “Do not generate illegal, dangerous, or unethical content,” “Be honest about what you don’t know,” and “Protect user privacy.” It then evaluates its own answers against those rules before giving a final response.

Think of it like a junior associate who’s been given a firm’s internal ethics manual and told: “Before you send that memo, check it against these rules.” Constitutional AI does that same kind of self‑checking, but at machine speed.
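That self‑checking loop can be sketched in a few lines of code. This is purely illustrative: `generate`, `critique`, and `revise` are hypothetical stand‑ins for calls to a real model (no vendor’s actual API is shown), stubbed here so the control flow is runnable.

```python
# Illustrative sketch of a Constitutional AI critique-and-revise loop.
# generate(), critique(), and revise() are hypothetical stand-ins for
# real model calls; they are stubbed so the control flow is runnable.

PRINCIPLES = [
    "Do not fabricate legal citations.",
    "Be honest about what you don't know.",
    "Protect user privacy.",
]

def generate(prompt):
    # A real system would call the underlying model here.
    return "Draft answer to: " + prompt

def critique(answer, principle):
    # A real system would ask the model whether `answer` violates
    # `principle`; this stub never finds a violation.
    return None

def revise(answer, violation):
    # A real system would rewrite the answer to cure the violation.
    return answer

def constitutional_answer(prompt):
    """Generate an answer, then check and revise it against each principle."""
    answer = generate(prompt)
    for principle in PRINCIPLES:
        violation = critique(answer, principle)
        if violation is not None:
            answer = revise(answer, violation)
    return answer
```

The point of the sketch is the shape of the process, not the stubs: every draft is checked against every principle before anything leaves the system, which is exactly the “check it against these rules before you send it” habit lawyers already expect of junior associates.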

🤝 How Constitutional AI Relates to Lawyers

For lawyers, Constitutional AI is important because it directly shapes how AI tools behave when handling legal work 📚. Many legal AI tools are built on models that use Constitutional AI techniques, so understanding this concept helps lawyers:

  • Judge whether an AI assistant is likely to hallucinate, leak sensitive info, or give ethically problematic advice.

  • Choose tools whose underlying AI is designed to be more transparent, less biased, and more aligned with professional norms.

  • Better supervise AI use in the firm, which is a core ethical duty under the ABA Model Rules.

Solo and small firms, in particular, often rely on off‑the‑shelf AI tools (like chatbots or document assistants). Knowing that a tool is built on Constitutional AI principles can give more confidence that it’s designed to avoid harmful outputs and respect confidentiality.

⚖️ Why It Matters for ABA Model Rules

For solo and small firms, asking whether an AI platform aligns with Constitutional AI’s standards is a practical first step in choosing a trustworthy tool.

The ABA’s Formal Opinion 512 on generative AI makes clear that lawyers remain responsible for all work done with AI, even if an AI tool helped draft it 📝. Constitutional AI is relevant here because it’s one way that AI developers try to build in ethical guardrails that align with lawyers' obligations.

Key connections to the Model Rules:

  • Rule 1.1 (Competence): Lawyers must understand the benefits and risks of the technology they use. Knowing that a tool uses Constitutional AI helps assess whether it’s reasonably reliable for tasks like research, drafting, or summarizing.

  • Rule 1.6 (Confidentiality): Constitutional AI models are designed to refuse to disclose sensitive information and to avoid memorizing or leaking private data. This supports the lawyer’s duty to make “reasonable efforts” to protect client confidences.

  • Rule 5.1 / 5.3 (Supervision): Managing partners and supervising attorneys must ensure that AI tools used by staff are consistent with ethical rules. A tool built on Constitutional AI principles is more likely to support, rather than undermine, those supervisory duties.

  • Rule 3.3 (Candor to the Tribunal): Constitutional AI models are trained to admit uncertainty and avoid fabricating facts or cases, which helps reduce the risk of submitting false or misleading information to a court.

In short, Constitutional AI doesn’t relieve lawyers of their ethical duties, but it can make AI tools safer and more trustworthy when used under proper supervision.

🛡️ The “Helpful, Harmless, and Honest” Principle

The three pillars of Constitutional AI — helpful, harmless, and honest — are especially relevant for lawyers:

  • Helpful: The AI should provide useful, relevant information that advances the client’s matter, without unnecessary or irrelevant content.

  • Harmless: The AI should avoid generating illegal, dangerous, or unethical content, and should respect privacy and confidentiality.

  • Honest: The AI should admit when it doesn’t know something, avoid fabricating facts or cases, and not misrepresent its capabilities.

For law firms, this “helpful, harmless, and honest” standard is a useful mental checklist when using AI:

  • Is this AI output actually helpful to the client’s case?

  • Could this output harm the client (e.g., by leaking confidential info or suggesting an unethical strategy)?

  • Is the AI being honest (e.g., not hallucinating case law or pretending to know facts it can’t know)?

If the answer to the first or third question is “no,” or to the second is “yes,” the AI output should not be used without significant human review and correction.

🛠️ Practical Takeaways for Law Firms

For solo, small, and mid‑size firms, here’s how to put this into practice:

Lawyers need to screen AI tools and ensure they are aligned with ABA Model Rules.

  1. Know your tools. When evaluating a legal AI product, ask whether it’s built on a Constitutional AI–style model (e.g., Claude). That tells you it’s designed with explicit ethical constraints.

  2. Treat AI as a supervised assistant. Never let AI make final decisions or file work without a lawyer’s review. Constitutional AI reduces risk, but it doesn’t eliminate the need for human judgment.

  3. Train your team. Make sure everyone in the firm understands that AI outputs must be checked for accuracy, confidentiality, and ethical compliance — especially when using third‑party tools.

  4. Update your engagement letters and policies. Disclose to clients when AI is used in their matters, and explain how the firm supervises it. This supports transparency under Rule 1.4 and Rule 1.6.

  5. Focus on “helpful, honest, harmless.” Use Constitutional AI as a mental checklist: Is this AI being helpful to the client? Is it honest about its limits? Is it harmless (no bias, no privacy leaks)? If not, don’t rely on it.

🎙️Ep. 128, Building a Tech-Forward Law Firm: AI Intake, CRM Strategy & Client Experience with Colleen Joyce!

My next guest is Colleen Joyce, CEO of Lawyer.com, a leading legal marketplace that connects over one million consumers monthly with qualified attorneys nationwide. With nearly two decades of experience transforming how law firms leverage technology and marketing, Colleen has pioneered innovations including LawyerLine call intake services, AI-powered matching technology, and the Lawyer Growth Summit. She publishes the Fast Five newsletter every Tuesday, reaching over 20,000 legal professionals with insights on AI trends, business growth strategies, and practice management. In this episode, Colleen shares her expertise on the essential technologies modern law firms need to scale profitably, how AI is revolutionizing client intake processes, and the critical human touchpoints that should never be automated in legal practice.

💬 Join Colleen Joyce and me as we discuss the following three questions and more!

1.     Beyond the essential lead generation that Lawyer.com provides, you see thousands of firms succeed and fail based on their operational efficiency. If you were building a modern law firm from scratch today, what are the top three non-negotiable technologies (for example, specific CRM automations, financial analytics, or project management tools) you would implement immediately to ensure the firm scales profitably rather than just chaotically?

2.     We know AI is reshaping the top of the funnel for legal consumers. Based on the data you're seeing from your new AI initiatives, what are the top three specific intake bottlenecks that AI can now solve better than a human receptionist, allowing attorneys to focus primarily on high-value legal work rather than data entry or basic screening?

3.     Technology can handle logistics, but it can't handle the emotion of a legal crisis. From your experience overseeing millions of consumer connections, what are the top three human touchpoints in the client lifecycle that a lawyer should never automate, because they are crucial for building the trust and transparency that leads to long-term referrals?

In our conversation, we cover the following:

-      00:00:00 - Welcome and Introduction to Colleen Joyce

-      00:00:20 - Colleen's Current Tech Setup: MacBook Pro, iPhone 16, iPad, and Curved Monitor

-      00:01:00 - Discussion about iPhone Models and AppleCare Benefits

-      00:02:00 - Using Plaud AI for Recording Conversations

-      00:03:00 - MacBook Pro Specifications and Upgrade Recommendations

-      00:04:00 - Dell Curved Monitor Benefits for Focus and Productivity

-      00:05:00 - Question 1: Top Three Non-Negotiable Technologies for Modern Law Firms

-      00:06:00 - Intake Technology, CRM, and Practice Management Systems

-      00:07:00 - Balancing Cost and Technology for New Lawyers

-      00:08:00 - Leveraging Freemium Tools and AI for Budget-Conscious Firms

-      00:08:30 - Question 2: AI Solutions for Intake Bottlenecks

-      00:09:00 - Answering Phones with Empathetic AI Agents

-      00:10:00 - Importance of Legal-Specific AI Training

-      00:11:00 - Consumer Adoption and Resistance to AI vs. Human Agents

-      00:12:00 - Using Virtual Receptionists and Calendly for Scheduling

-      00:13:00 - Generational Differences in Technology Adoption

-      00:14:00 - The Evolution of Legal Technology Adoption Over 14 Years

-      00:15:00 - Question 3: Human Touchpoints That Should Never Be Automated

-      00:16:00 - Relationship Building and the Courting Period

-      00:17:00 - Screening Clients Through Your Tech Processes

-      00:18:00 - Where to Find Colleen: LinkedIn and the Fast Five Newsletter

-      00:18:30 - Closing Remarks and Gratitude

---

📚 Resources

🤝 Connect with Colleen Joyce

•  LinkedIn: https://www.linkedin.com/in/colleenjoyce

•  Lawyer.com: https://www.lawyer.com

•  Lawyer.com Services: https://services.lawyer.com

•  Fast Five Newsletter (Published Tuesdays): https://www.linkedin.com/newsletters/fast-five-fridays-7265815097552326656

•  Lawyer Growth Summit: https://lawyergrowthsummit.com

•  Lawyer.com Phone: 800-620-0900

•  Lawyer.com Address: 25 Mountainview Boulevard, Basking Ridge, NJ 07920

📖 Mentioned in the Episode

•  MacRumors Buyer's Guide: https://buyersguide.macrumors.com

•  LawyerLine (24-hour Intake Services): https://www.lawyerline.ai/

🖥 Hardware Mentioned in the Conversation

•  MacBook Pro : https://www.apple.com/macbook-pro/

•  MacBook Pro with M4/M5 Chips (Upgrade recommendation): https://www.apple.com/macbook-pro/

•  iPhone 16: https://www.apple.com/iphone-16/

•  iPad: https://www.apple.com/ipad/

•  Dell Curved Monitor (22-24 inch, white): https://www.dell.com/monitors

•  HP Printer (with automatic duplex printing): https://www.hp.com/printers

☁ Software & Cloud Services Mentioned in the Conversation

•  Plaud AI (Call Recording & Transcription): https://www.plaud.ai

•  Slack (Team Communication Platform): https://slack.com

•  iMessage (Apple Messaging): https://support.apple.com/en-us/104969

•  Calendly (Scheduling Software): https://calendly.com

•  Monday.com (Project Management & Team Organization): https://monday.com

•  ChatGPT (AI Assistant): https://openai.com/chatgpt

•  AppleCare (Apple Device Protection): https://www.apple.com/support/applecare/

📖 WORD OF THE YEAR 🥳: Verification: The 2025 Word of the Year for Legal Technology ⚖️💻

all lawyers need to remember to check ai-generated legal citations

After reviewing a year's worth of content from The Tech-Savvy Lawyer.Page blog and podcast, one word emerged to me as the defining concept for 2025: Verification. This term captures the essential duty that separates competent legal practice from dangerous shortcuts in the age of artificial intelligence.

Throughout 2025, The Tech-Savvy Lawyer consistently emphasized verification across multiple contexts. The blog covered proper redaction techniques following the Jeffrey Epstein files disaster. The podcast explored hidden AI in everyday legal tools. Every discussion returned to one central theme: lawyers must verify everything. 🔍

Verification means more than just checking your work. The concept encompasses multiple layers of professional responsibility. Attorneys must verify AI-generated legal research to prevent hallucinations. Courts have sanctioned lawyers who submitted fictitious case citations created by generative AI tools. One study found error rates of 33% in Westlaw AI and 17% in Lexis+ AI. Although that study dates to May 2024, a 2025 update confirms its findings remain current; the risk of not checking has not gone away. "Verification" cannot be ignored.

The duty extends beyond research. Lawyers must verify that redactions actually remove confidential information rather than simply hiding it under black boxes. The DOJ's failed redaction of the Epstein files demonstrated what happens when attorneys skip proper verification steps. Tech-savvy readers simply copied text from beneath the visual overlays. ⚠️
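The failure mode described above can be made concrete with a small sketch. This is illustrative only, not a real PDF library: it models a page as a text layer plus purely visual overlay rectangles, showing why drawing a black box over text (what the DOJ effectively did) leaves the underlying text fully extractable, while true redaction removes it from the text layer itself.

```python
# Conceptual sketch (not a real PDF API): a "page" is a text layer that
# copy/paste and extraction tools read, plus visual overlays they ignore.

class Page:
    def __init__(self, text):
        self.text_layer = text   # what text extraction actually sees
        self.overlays = []       # purely cosmetic black boxes

    def draw_black_box(self, start, end):
        """Looks redacted on screen, but the text layer is untouched."""
        self.overlays.append((start, end))

    def redact(self, start, end):
        """True redaction: the text is actually removed from the layer."""
        self.text_layer = self.text_layer[:start] + "[REDACTED]" + self.text_layer[end:]

    def extract_text(self):
        # Extraction reads the text layer and ignores visual overlays.
        return self.text_layer

page = Page("Client SSN: 123-45-6789, settlement amount $500,000")

page.draw_black_box(12, 23)
print("123-45-6789" in page.extract_text())  # True: the "hidden" SSN is still there

page.redact(12, 23)
print("123-45-6789" in page.extract_text())  # False: now it is actually gone
```

Verifying a redaction therefore means running text extraction (or simply select-all and copy) against the finished document and confirming the sensitive strings are no longer present, not merely invisible.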

use of ai-generated legal work requires “verification”, “Verification”, “Verification”!

ABA Model Rule 1.1 requires technological competence. Comment 8 specifically mandates that lawyers understand "the benefits and risks associated with relevant technology." Verification sits at the heart of this competence requirement. Attorneys cannot claim ignorance about AI features embedded in Microsoft 365, Zoom, Adobe, or legal research platforms. Each tool processes client data differently. Each requires verification of settings, outputs, and data handling practices. 🛡️

The verification duty also applies to cybersecurity. Zero Trust Architecture operates on the principle "never trust, always verify." This security model requires continuous verification of user identity, device health, and access context. Law firms can no longer trust that users inside their network perimeter are authorized. Remote work and cloud-based systems demand constant verification.
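A minimal sketch of that "never trust, always verify" posture follows. It is illustrative only: a real Zero Trust deployment delegates these checks to an identity provider, a device-posture service, and a policy engine, and the token set, device fields, and `mfa_passed` flag here are invented for the example.

```python
# Illustrative Zero Trust check: every request re-verifies identity,
# device health, and context; network location alone grants nothing.

def verify_request(user_token, device, context, valid_tokens):
    """Return (allowed, per-check detail) for a single access request."""
    checks = {
        "identity": user_token in valid_tokens,                        # who is asking?
        "device": bool(device.get("encrypted") and device.get("patched")),
        "context": context.get("mfa_passed") is True,                  # e.g., recent MFA
    }
    return all(checks.values()), checks

valid_tokens = {"tok-abc123"}

ok, detail = verify_request(
    "tok-abc123",
    {"encrypted": True, "patched": True},
    {"mfa_passed": True},
    valid_tokens,
)
print(ok)  # True: identity, device health, and context all verified

ok2, _ = verify_request(
    "tok-abc123",
    {"encrypted": True, "patched": False},  # unpatched laptop
    {"mfa_passed": True},
    valid_tokens,
)
print(ok2)  # False: a valid token is not enough when the device fails posture
```

The point for firms is the shape of the logic: access is a conjunction of fresh checks on every request, not a one-time decision made at the network perimeter.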

Hidden AI poses another verification challenge. Software updates automatically activate AI features in familiar tools. These invisible assistants process confidential client data by default. Lawyers must verify which AI systems operate in their technology stack. They must verify data retention policies. They must verify that AI processing does not waive attorney-client privilege. 🤖

ABA Formal Opinion 512 eliminates the "I didn't know" defense. Lawyers bear responsibility for understanding how their tools use AI. Rule 5.3 requires attorneys to supervise software with the same care they apply to supervising human staff members. Verification transforms from a good practice into an ethical mandate.

verify your ai-generated work like your bar license depends on it!

The year 2025 taught legal professionals that technology competence means verification competence. Attorneys must verify redactions work properly. They must verify AI outputs for accuracy. They must verify security settings protect confidential information. They must verify that hidden AI complies with ethical obligations. ✅

Verification protects clients, preserves attorney licenses, and maintains the integrity of legal practice. As The Tech-Savvy Lawyer demonstrated throughout 2025, every technological advancement creates new verification responsibilities. Attorneys who master verification will thrive in the AI era. Those who skip verification steps risk sanctions, malpractice claims, and disciplinary action.

The legal profession's 2025 Word of the Year is verification. Master it or risk everything. 💼⚖️

🎙️ Ep. 121: Iowa Personal Injury Lawyer Tim Semelroth on AI Expert Testimony Prep, Claude for Legal Research and Client Communications Tech!

My next guest is Tim Semelroth. Tim is an Iowa personal injury attorney from RSH Legal, who leverages cutting-edge AI tools, including Notebook LM for expert testimony preparation, Claude AI for dictation, and SIO for medical records analysis. He shares practical strategies for maintaining client relationships through e-signatures, texting integration, and automated birthday card systems while embracing legal technology. All this and more, enjoy.

Join Tim Semelroth and me as we discuss the following three questions and more!

  1. What are the top three ways lawyers can leverage AI tools like ChatGPT and Notebook LM to prepare for expert testimony or cross-examination? And how do you ensure client confidentiality when using these tools?

  2. What are the top three technology tools or systems that personal injury attorneys should implement to streamline their practice when handling cases involving trucking accidents, medical records analysis, and insurance negotiations?

  3. What are the top three strategies you recommend for attorneys to maintain personal relationships with clients and community involvement, while also embracing cutting-edge legal technology to improve practice efficiency?

In our conversation, we cover the following:

[00:01:00] Introduction and guest tech setup discussion

[00:02:00] Dell hardware specifications and IT outsourcing strategy

[00:03:00] Smartphone preferences - iPhone 16 and iPad Pro

[00:04:00] Cross-platform compatibility between Windows and Mac environments

[00:05:00] Web-based software solutions for remote work flexibility

[00:06:00] Plaud AI dictation hardware - features and use cases

[00:07:00] Dictation while exercising and driving - mobile workflows

[00:08:00] Essential software stack - File Vine, Lead Docket, and SIO

[00:09:00] AI tools for expert testimony preparation and HIPAA compliance

[00:10:00] Simplifying complex legal language for jury comprehension

[00:11:00] Using AI to brainstorm cross-examination topics and preparation

[00:12:00] Notebook LM audio overview feature for testimony preparation

[00:13:00] Client communication preferences - e-signatures and texting

[00:14:00] File Vine texting integration for client communications

[00:15:00] Case management alerts and notification systems

[00:17:00] Client preferences for phone vs. video communication

[00:18:00] Rural client challenges and electronic communication benefits

[00:20:00] SIO AI platform for medical records analysis

[00:21:00] Medical chronology automation and document management

[00:22:00] Jurisdiction-specific customization for demand letters

[00:23:00] Content repurposing strategy across multiple platforms

[00:24:00] LinkedIn marketing for lawyer referral relationships

[00:25:00] Multi-channel newsletter approach - digital and print

[00:26:00] Print newsletter effectiveness for legal professionals

[00:27:00] SEO benefits and peer recognition from content marketing

[00:28:00] Client communication policy - 30-day contact requirements

[00:29:00] Proactive client outreach through text messaging

[00:30:00] Automated birthday card system for client retention

[00:31:00] The Marv Stallman Rule - personal marketing through cards

[00:32:00] Technology-enabled client relationship management

[00:33:00] Contact information and social media presence

RESOURCES

Connect with Tim!

Hardware mentioned in the conversation

Software & Cloud Services mentioned in the conversation

Subscribe to The Tech-Savvy Lawyer.Page podcast on Apple Podcasts, Spotify, or wherever you get your podcasts. Don't forget to leave us a five-star review! ⭐️⭐️⭐️⭐️⭐️

📢 ANNOUNCEMENT: Tech-Savvy Saturdays Takes a Brief Hiatus - Continuing to Empower Lawyers with Legal Tech Insights Through Blogs and Podcasts.

Hey everyone!

My goal with Tech-Savvy Saturdays (TSS) is to consistently serve as a cornerstone resource for legal professionals seeking to navigate the evolving landscape of legal technology. Due to other obligations, I need to take a pause on TSS.  But fear not, TSS will return in several months. Meanwhile, you can still stay updated on all things legal tech through the Tech-Savvy Lawyer Blog and Podcast.

Stay safe and Tech-Savvy!

Your Friend,
Michael D.J.

📖 Word of the Week: RAG (Retrieval-Augmented Generation) - The Legal AI Breakthrough Reducing Hallucinations. 📚⚖️

What is RAG?

USEd responsibly, rag can be a great tool for lawyers!

Retrieval-Augmented Generation (RAG) is a groundbreaking artificial intelligence technique that combines information retrieval with text generation. Unlike traditional AI systems that rely solely on pre-trained data, RAG dynamically retrieves relevant information from external legal databases before generating responses.

Why RAG Matters for Legal Practice

RAG addresses the most significant concern with legal AI: fabricated citations and "hallucinations." By grounding AI responses in verified legal sources, RAG systems dramatically reduce the risk of generating fictional case law. Recent studies suggest RAG-powered legal tools can cut hallucination rates substantially, though they do not eliminate them, which is why attorney review remains essential.

Key Benefits

RAG technology offers several advantages for legal professionals:

Enhanced Accuracy: RAG systems pull from authoritative legal databases, ensuring responses are based on actual statutes, cases, and regulations rather than statistical patterns.

Real-Time Updates: Unlike static AI models, RAG can access current legal information, making it valuable for rapidly evolving areas of law.

Source Attribution: RAG provides clear citations and references, enabling attorneys to verify and build upon AI-generated research.
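The three benefits above come from the basic shape of a RAG pipeline: retrieve passages from a trusted corpus, then hand the model only those passages along with their citations. The sketch below is a toy version under stated assumptions: the two "sources," the keyword-overlap retriever, and the prompt wording are all invented for illustration; a production system would use vector search over a real legal database and pass the prompt to an actual LLM.

```python
# Toy RAG sketch: keyword retrieval over a tiny "verified" corpus,
# then a grounded prompt that demands source attribution.
# (Hypothetical data and scoring; real systems use vector search + an LLM.)

sources = [
    {"cite": "Rule 1.1 cmt. 8",
     "text": "Lawyers must keep abreast of the benefits and risks of relevant technology."},
    {"cite": "Rule 1.6(c)",
     "text": "Lawyers must make reasonable efforts to prevent unauthorized disclosure of client information."},
]

def tokens(s):
    """Lowercase word set with punctuation stripped (stand-in for embeddings)."""
    return {w.strip(".,?!;:") for w in s.lower().split()}

def retrieve(query, corpus, k=1):
    """Rank sources by keyword overlap with the query; return the top k."""
    q = tokens(query)
    return sorted(corpus, key=lambda s: len(q & tokens(s["text"])), reverse=True)[:k]

def build_prompt(query, passages):
    """Ground the model in retrieved text and require citations in the answer."""
    context = "\n".join(f"[{p['cite']}] {p['text']}" for p in passages)
    return f"Answer ONLY from these sources, citing each one used:\n{context}\n\nQuestion: {query}"

question = "What must lawyers do about technology risks?"
hits = retrieve(question, sources)
prompt = build_prompt(question, hits)

print(hits[0]["cite"])  # Rule 1.1 cmt. 8 — the answer is tied to a citable source
```

Note what the structure buys you: because every retrieved passage carries its citation into the prompt, the output can be checked against real authority, which is precisely the verification the Model Rules demand.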

Practical Applications

lawyers who don’t use ai technology like rag will be replaced by those who do!

Law firms are implementing RAG for case law research, contract analysis, and legal memo drafting. The technology excels at tasks requiring specific legal authorities and performs best when presented with clearly defined legal issues.

Professional Responsibility Under ABA Model Rules

ABA Model Rule 1.1 (Competence): Comment 8 requires lawyers to "keep abreast of changes in the law and its practice, including the benefits and risks associated with relevant technology." This mandates understanding RAG capabilities and limitations before use.

ABA Model Rule 1.6 (Confidentiality): Lawyers must "make reasonable efforts to prevent the inadvertent or unauthorized disclosure of, or unauthorized access to, information relating to the representation of a client." When using RAG systems, attorneys must verify data security measures and understand how client information is processed and stored.

ABA Model Rule 5.3 (Supervision of Nonlawyer Assistants): ABA Formal Opinion 512 clarifies that AI tools may be considered "nonlawyer assistants" requiring supervision. Lawyers must establish clear policies for RAG usage and ensure proper training on ethical obligations.

ABA Formal Opinion 512: This 2024 guidance emphasizes that lawyers cannot abdicate professional judgment to AI systems. While RAG systems offer improved reliability over general AI tools, attorneys remain responsible for verifying outputs and maintaining competent oversight.

Final Thoughts: Implementation Considerations

lawyers must consider their ethical responsibilities when using generative ai, large language models, and rag.

While RAG significantly improves AI reliability, attorneys must still verify outputs and exercise professional judgment. The technology enhances rather than replaces legal expertise. Lawyers should understand terms of service, consult technical experts when needed, and maintain "human-in-the-loop" oversight consistent with professional responsibility requirements.

RAG represents a crucial step toward trustworthy legal AI, offering attorneys powerful research capabilities while maintaining the accuracy standards essential to legal practice and compliance with ABA Model Rules. Just make sure you use it correctly and check your work!