MTC: Lawyers and AI Oversight: What the VA’s Patient Safety Warning Teaches About Ethical Law Firm Technology Use! ⚖️🤖

Human-in-the-loop is the point: Effective oversight happens where AI meets care—aligning clinical judgment, privacy, and compliance with real-world workflows.

The Department of Veterans Affairs’ experience with generative AI is not a distant government problem; it is a mirror held up to every law firm experimenting with AI tools for drafting, research, and client communication. I recently listened to Terry Gerton of the Federal News Network interview Cheryl Mason, Inspector General of the Department of Veterans Affairs, in “VA rolled out new AI tools quickly, but without a system to catch mistakes, patient safety is on the line,” and gained some insights into how lawyers can learn from this perhaps hastily implemented AI program. VA clinicians are using AI chatbots to document visits and support clinical decisions, yet a federal watchdog has warned that there is no formal mechanism to identify, track, or resolve AI‑related risks—a “potential patient safety risk” created by speed without governance. In law, that same pattern translates into “potential client safety and justice risk,” because the core failure is identical: deploying powerful systems without a structured way to catch and correct their mistakes.

The oversight gap at the VA is striking. There is no standardized process for reporting AI‑related concerns, no feedback loop to detect patterns, and no clearly assigned responsibility for coordinating safety responses across the organization. Clinicians may have helpful tools, but the institution lacks the governance architecture that turns “helpful” into “reliably safe.” When law firms license AI research platforms, enable generative tools in email and document systems, or encourage staff to “try out” chatbots on live matters without written policies, risk registers, or escalation paths, they recreate that same governance vacuum. If no one measures hallucinations, data leakage, or embedded bias in outputs, risk management has given way to wishful thinking.

Existing ethics rules already tell us why that is unacceptable. Under ABA Model Rule 1.1, competence now includes understanding the capabilities and limitations of AI tools used in practice, or associating with someone who does. Model Rule 1.6 requires lawyers to critically evaluate what client information is fed into self‑learning systems and whether informed consent is required, particularly when providers reuse inputs for training. Model Rules 5.1, 5.2, and 5.3 extend these obligations across partners, supervising lawyers, and non‑lawyer staff: if a supervised lawyer or paraprofessional relies on AI in a way that undermines client protection, firm leadership cannot plausibly claim ignorance. And rules on candor to tribunals make clear that “the AI drafted it” is never a defense to filing inaccurate or fictitious authority.

Explaining the algorithm to decision-makers: Oversight means making AI risks understandable to judges, boards, and the public—clearly and credibly.

What the VA story adds is a vivid reminder that effective AI oversight is a system, not a slogan. The inspector general emphasized that AI can be “a helpful tool” only if it is paired with meaningful human engagement: defined review processes, clear routes for reporting concerns, and institutional learning from near misses. For law practice, that points directly toward structured workflows. AI‑assisted drafts should be treated as hypotheses, not answers. Reasonable human oversight includes verifying citations, checking quotations against original sources, stress‑testing legal conclusions, and documenting that review—especially in high‑stakes matters involving liberty, benefits, regulatory exposure, or professional discipline.

For lawyers with limited to moderate tech skills, this should not be discouraging; done correctly, AI governance actually makes technology more approachable. You do not need to understand model weights or training architectures to ask practical questions: What data does this tool see? When has it been wrong in the past? Who is responsible for catching those errors before they reach a client, a court, or an opposing party? Thoughtful prompts, standardized checklists for reviewing AI output, and clear sign‑off requirements are all well within reach of every practitioner.

The VA’s experience also highlights the importance of mapping AI uses and classifying their risk. In health care, certain AI use cases are obviously safety‑critical; in law, the parallel category includes anything that could affect a person’s freedom, immigration status, financial security, public benefits, or professional license. Those use cases merit heightened safeguards: tighter access control, narrower scoping of AI tasks, periodic sampling of outputs for quality, and specific training for the lawyers who use them. Importantly, this is not a “big‑law only” discipline. Solo and small‑firm lawyers can implement proportionate governance with simple written policies, matter‑level notes showing how AI was used, and explicit conversations with clients where appropriate.

Critically, AI does not dilute core professional responsibility. If a generative system inserts fictitious cases into a brief or subtly mischaracterizes a statute, the duty of candor and competence still rests squarely on the attorney who signs the work product. The VA continues to hold clinicians responsible for patient care decisions, even when AI is used as a support tool; the law should be no different. That reality should inform how lawyers describe AI use in engagement letters, how they supervise junior lawyers and staff, and how they respond when AI‑related concerns arise. In some situations, meeting ethical duties may require forthright client communication, corrective filings, and revisions to internal policies.

AI oversight starts at the desk: Lawyers must be able to interrogate model outputs, data quality, and risk signals—before technology impacts client care.

The practical lesson from the VA’s AI warning is straightforward. The “human touch” in legal technology is not a nostalgic ideal; it is the safety mechanism that makes AI ethically usable at all. Lawyers who embrace AI while investing in governance—policies, training, and oversight calibrated to risk—will be best positioned to align with the ABA’s evolving guidance, satisfy courts and regulators, and preserve hard‑earned client trust. Those who treat AI as a magic upgrade and skip the hard work of oversight are, knowingly or not, accepting that their clients may become the test cases that reveal where the system fails. In a profession grounded in judgment, the real innovation is not adopting AI; it is designing a practice where human judgment still has the final word.

MTC

MTC: Everyday Tech, Extraordinary Evidence—Again: How Courts Are Punishing Fake Digital and AI Data ⚖️📱

Check your AI work - AI fraud can meet courtroom consequences.

In last month’s editorial, “Everyday Tech, Extraordinary Evidence,” we walked through how smartphones, dash cams, and wearables turned the Minnesota ICE shooting into a case study in modern evidence practice, from rapid preservation orders to multi‑angle video timelines.📱⚖️ We focused on the positive side: how deliberate intake, early preservation, and basic synchronization tools can turn ordinary devices into case‑winning proof.📹 This follow‑up tackles the other half of the equation—what happens when “evidence” itself is fake, AI‑generated, or simply unverified slop, and how courts are starting to respond with serious sanctions.⚠️

From Everyday Tech to Everyday Scrutiny

The original article urged you to treat phones and wearables as critical evidentiary tools, not afterthoughts: ask about devices at intake, cross‑reference GPS trails, and treat cars as rolling 360‑degree cameras.🚗⌚ We also highlighted the Minnesota Pretti shooting as an example of how rapid, court‑ordered preservation of video and other digital artifacts can stop crucial evidence from “disappearing” before the facts are fully understood.📹 Those core recommendations still stand—if anything, they are more urgent now that generative AI makes it easier to fabricate convincing “evidence” that never happened.🤖

The same tools that helped you build robust, data‑driven reconstructions—synchronized bystander clips, GPS logs, wearables showing movement or inactivity—are now under heightened scrutiny for authenticity.📊 Judges and opposing counsel are no longer satisfied with “the video speaks for itself”; they want to know who created it, how it was stored, whether metadata shows AI editing, and what steps counsel took to verify that the file is what it purports to be.📁

When “Evidence” Is Fake: Sanctions Arrive

We have moved past the hypothetical stage. Courts are now issuing sanctions—sometimes terminating sanctions—when parties present fake or AI‑generated “evidence” or unverified AI research.💥

These are not “techie” footnotes; they are vivid warnings that falsified or unverified digital and AI data can end careers and destroy cases.🚨

ABA Model Rules: The Safety Rails You Ignore at Your Peril

Train to verify—defend truth in the age of AI.

Your original everyday‑tech playbook already fits neatly within ABA Model Rule 1.1 and Comment 8’s duty of technological competence; the new sanctions landscape simply clarifies the stakes.📚

  • Rule 1.1 (Competence): You must understand the benefits and risks of relevant technology, which now clearly includes generative AI and deepfake tools.⚖️ Using AI to draft or “enhance” without checking the output is not a harmless shortcut—it is a competence problem.

  • Rule 1.6 (Confidentiality): Uploading client videos, wearable logs, or sensitive communications to consumer‑grade AI sites can expose them to unknown retention and training practices, risking confidentiality violations.🔐

  • Rule 3.3 (Candor to the Tribunal) and Rule 4.1 (Truthfulness): Presenting AI‑altered video or fake citations as if they were genuine is the very definition of misrepresentation, as the New York and California sanction orders make clear.⚠️ Even negligent failure to verify can be treated harshly once the court’s patience for AI excuses runs out.

  • Rules 5.1–5.3 (Supervision): Supervising lawyers must ensure that associates, law clerks, and vendors understand that AI outputs are starting points, not trustworthy final products, and that fake or manipulated digital evidence will not be tolerated.👥

Bridging Last Month’s Playbook With Today’s AI‑Risk Reality

In last month’s editorial, we urged three practical habits: ask about devices, move fast on preservation, and build a vendor bench for extraction and authentication.📱⌚🚗 This month, the job is to wrap those habits in explicit AI‑risk controls that lawyers with modest tech skills can realistically follow.🧠

  1. Never treat AI as a silent co‑counsel. If you use AI to draft, research, generate timelines, or “enhance” video, you must independently verify every factual assertion and citation, just as you would double‑check a new associate’s memo.📑 “The AI did it” is not a defense; courts have already said so.

  2. Preserve the original, disclose the enhancement. Our earlier advice to keep raw smartphone files and dash‑cam footage now needs one more step: if you use any enhancement (AI or otherwise), label it clearly and be prepared to explain what was done, why, and how you ensured that the content did not change.📹

  3. Use vendors and examiners as authenticity firewalls. Just as we suggested bringing in digital forensics vendors to extract phone and wearable data, you should now consider them for authenticity challenges as well—especially where the opposing side may have incentives or tools to create deepfakes.🔍 A simple expert declaration that a file shows signs of AI manipulation can be the difference between a credibility battle and a terminating sanction.

  4. Train your team using real sanction orders. Nothing clarifies the risk like reading Judge Castel’s order in the ChatGPT‑citation case or Judge Kolakowski’s deepfake ruling in Mendones.⚖️ Incorporate those cases into short internal trainings and CLEs; they translate abstract “AI ethics” into concrete, courtroom‑tested consequences.

  5. Document your verification steps. For everyday tech evidence, a simple log—what files you received, how you checked metadata, whether you compared against other sources, which AI tools (if any) you used, and what you did to confirm their outputs—can demonstrate good faith if a judge later questions your process.📋
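
If it helps to picture what such a log could look like, here is a minimal sketch in Python. It is illustrative only: the field names, the example clip, and the fingerprint helper are hypothetical choices rather than a prescribed format, and a simple spreadsheet or matter note can capture the same information just as well.

```python
# Minimal sketch of a digital-evidence verification log (illustrative only).
# Field names and the example entry are hypothetical; adapt them to your own workflow.
import hashlib
from dataclasses import dataclass, field, asdict
from datetime import date


@dataclass
class EvidenceVerification:
    file_name: str
    received_from: str
    date_received: str
    sha256: str                      # fingerprint of the file as received
    metadata_checked: bool = False   # e.g., creation time, device, edit history
    cross_checked_against: list = field(default_factory=list)
    ai_tools_used: list = field(default_factory=list)
    verification_notes: str = ""


def fingerprint(path: str) -> str:
    """Return a SHA-256 hash so later copies can be compared to the original file."""
    with open(path, "rb") as f:
        return hashlib.sha256(f.read()).hexdigest()


# Hypothetical example entry; in practice the hash would come from fingerprint().
entry = EvidenceVerification(
    file_name="bystander_clip_01.mp4",
    received_from="Client intake",
    date_received=str(date.today()),
    sha256="<hash computed with fingerprint('bystander_clip_01.mp4')>",
    metadata_checked=True,
    cross_checked_against=["dash-cam footage", "911 call timeline"],
    ai_tools_used=["none"],
    verification_notes="Creation timestamp matches GPS log; no edit history found.",
)
print(asdict(entry))
```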

Final Thoughts: Authenticity as a First‑Class Question

Be the rock star! Know how to use AI responsibly in your work!

In the first editorial, the core message was that everyday devices are quietly turning into your best witnesses.📱⌚ The new baseline is that every such “witness” will be examined for signs of AI contamination, and you will be expected to have an answer when the court asks, “What did you do to make sure this is real?”🔎

Lawyers with limited to moderate tech skills do not need to reverse‑engineer neural networks or master forensic software. Instead, they must combine the practical habits from January’s piece—asking, preserving, synchronizing—with a disciplined refusal to outsource judgment to AI.⚖️ In an era of deepfakes and hallucinated case law, authenticity is no longer a niche evidentiary issue; it is the moral center of digital advocacy.✨

Handled wisely, your everyday tech strategy can still deliver “extraordinary evidence.” Handled carelessly, it can just as quickly produce extraordinary sanctions.🚨

MTC

MTC: Clio–Alexi Legal Tech Fight: What CRM Vendor Litigation Means for Your Law Firm, Client Data and ABA Model Rule Compliance ⚖️💻

Competence, Confidentiality, Vendor Oversight!

When the companies behind your CRM and AI research tools start suing each other, the dispute is not just “tech industry drama” — it can reshape the practical and ethical foundations of your practice. At a basic to moderate level, the Clio–Alexi fight is about who controls valuable legal data, how that data can be used to power AI tools, and whether one side is using its market position unfairly. Clio (a major practice‑management and CRM platform) is tied to legal research tools and large legal databases. Alexi is a newer AI‑driven research company that depends on access to caselaw and related materials to train and deliver its products. In broad strokes, one side claims the other misused or improperly accessed data and technology; the other responds that the litigation is “sham” or anticompetitive, designed to limit a smaller rival and protect a dominant ecosystem. There are allegations around trade secrets, data licensing, and antitrust‑style behavior. None of that may sound like your problem — until you remember that your client data, workflows, and deadlines live inside tools these companies own, operate, or integrate with.

If you are a lawyer with limited to moderate technology skills, you do not need to decode every technical claim in the complaints and counterclaims. You do, however, need to recognize that vendor instability, lawsuits, and potential regulatory scrutiny can directly touch: your access to client files and calendars, the confidentiality of matter information stored in the cloud, and the long‑term reliability of the systems you use to serve clients and get paid. Once you see the dispute in those terms, it becomes squarely an ethics, risk‑management, and governance issue — not just “IT.”

ABA Model Rule 1.1: Competence Now Includes Tech and Vendor Risk

Model Rule 1.1 requires “competent representation,” which includes the legal knowledge, skill, thoroughness, and preparation reasonably necessary for the representation. In the modern practice environment, that has been interpreted to include technology competence. That does not mean you must be a programmer. It does mean you must understand, in a practical way, the tools on which your work depends and the risks they bring.

If your primary CRM, practice‑management system, or AI research tool is operated by a company in serious litigation about data, licensing, or competition, that is a material fact about your environment. Competence today includes: knowing which mission‑critical workflows rely on that vendor (intake, docketing, conflicts, billing, research, etc.); having at least a baseline sense of how vendor instability could disrupt those workflows; and building and documenting a plan for continuity — how you would move or access data if the worst‑case scenario occurred (for example, a sudden outage, injunction, or acquisition). Failing to consider these issues can undercut the “thoroughness and preparation” the Rule expects. Even if your firm is small or mid‑sized, and even if you feel “non‑technical,” you are still expected to think through these risks at a reasonable level.

ABA Model Rule 1.6: Confidentiality in a Litigation Spotlight

Model Rule 1.6 is often front of mind when lawyers think about cloud tools, and the Clio–Alexi dispute reinforces why. When a technology company is sued, its systems may become part of discovery. That raises questions like: what types of client‑related information (names, contact details, matter descriptions, notes, uploaded files) reside on those systems; under what circumstances that information could be accessed, even in redacted or aggregate form, by litigants, experts, or regulators; and how quickly and completely you can remove or export client data if a risk materializes.

You remain the steward of client confidentiality, even when data is stored with a third‑party provider. A reasonable, non‑technical but diligent approach includes: understanding where your data is hosted (jurisdictions, major sub‑processors, data‑center regions); reviewing your contracts or terms of service for clauses about data access, subpoenas, law‑enforcement or regulatory requests, and notice to you; and ensuring you have clearly defined data‑export rights — not only if you voluntarily leave, but also if the vendor is sold, enjoined, or materially disrupted by litigation. You are not expected to eliminate all risk, but you are expected to show that you considered how vendor disputes intersect with your duty to protect confidential information.

ABA Model Rule 5.3: Treat Vendors as Supervised Non‑Lawyer Assistants

The ABA Model Rules still apply to modern legal technology and can be a factor when legal tech companies fight!

Model Rule 5.3 requires lawyers to make reasonable efforts to ensure that non‑lawyer assistants’ conduct is compatible with professional obligations. In 2026, core technology vendors — CRMs, AI research platforms, document‑automation tools — clearly fall into this category.

You are not supervising individual programmers, but you are responsible for: performing documented diligence before adopting a vendor (security posture, uptime, reputation, regulatory or litigation history); monitoring for material changes (lawsuits like the Clio–Alexi matter, mergers, new data‑sharing practices, or major product shifts); and reassessing risk when those changes occur and adjusting your tech stack or contracts accordingly. A litigation event is a signal that “facts have changed.” Reasonable supervision in that moment might mean: having someone (inside counsel, managing partner, or a trusted advisor) read high‑level summaries of the dispute; asking the vendor for an explanation of how the litigation affects uptime, data security, and long‑term support; and considering whether you need contractual amendments, additional audit rights, or a backup plan with another provider. Again, the standard is not perfection, but reasoned, documented effort.

How the Clio–Alexi Battle Can Create Problems for Users

A dispute at this scale can create practical, near‑term friction for everyday users, quite apart from any final judgment. Even if the platforms remain online, lawyers may see more frequent product changes, tightened integrations, shifting data‑sharing terms, or revised pricing structures as companies adjust to litigation costs and strategy. Any of these changes can disrupt familiar workflows, create confusion around where data actually lives, or complicate internal training and procedures.

There is also the possibility of more subtle instability. For example, if a product roadmap slows down or pivots under legal pressure, features that firms were counting on — for automation, AI‑assisted drafting, or analytics — may be delayed or re‑scoped. That can leave firms who invested heavily in a particular tool scrambling to fill functionality gaps with manual workarounds or additional software. None of this automatically violates any rule, but it can introduce operational risk that lawyers must understand and manage.

In edge cases, such as a court order that forces a vendor to disable key features on short notice or a rapid sale of part of the business, intense litigation can even raise questions about long‑term continuity. A company might divest a product line, change licensing models, or settle on terms that affect how data can be stored, accessed, or used for AI. Firms could then face tight timelines to accept new terms, migrate data, or re‑evaluate how integrated AI features operate on client materials. Without offering any legal advice about what an individual firm should do, it is fair to say that paying attention early — before options narrow — is usually more comfortable than reacting after a sudden announcement or deadline.

Practical Steps for Firms at a Basic–Moderate Tech Level

You do not need a CIO to respond intelligently. For most firms, a short, structured exercise will go a long way:

Practical Tech Steps for Today’s Law Firms

  1. Inventory your dependencies. List your core systems (CRM/practice management, document management, time and billing, conflicts, research/AI tools) and note which vendors are in high‑profile disputes or under regulatory or antitrust scrutiny.

  2. Review contracts for safety valves. Look for data‑export provisions, notice obligations if the vendor faces litigation affecting your data, incident‑response timelines, and business‑continuity commitments; capture current online terms.

  3. Map a contingency plan. Decide how you would export and migrate data if compelled by ethics, client demand, or operational need, and identify at least one alternative provider in each critical category.

  4. Document your diligence. Prepare a brief internal memo or checklist summarizing what you reviewed, what you concluded, and what you will monitor, so you can later show your decisions were thoughtful.

  5. Communicate without alarming. Most clients care about continuity and confidentiality, not vendor‑litigation details; you can honestly say you monitor providers, have export and backup options, and have assessed the impact of current disputes.

From “IT Problem” to Core Professional Skill

The Clio–Alexi litigation is a prominent reminder that law practice now runs on contested digital infrastructure. The real message for working lawyers is not to flee from technology but to fold vendor risk into ordinary professional judgment. If you understand, at a basic to moderate level, what the dispute is about — data, AI training, licensing, and competition — and you take concrete steps to evaluate contracts, plan for continuity, and protect confidentiality, you are already practicing technology competence in a way the ABA Model Rules contemplate. You do not have to be an engineer to be a careful, ethics‑focused consumer of legal tech. By treating CRM and AI providers as supervised non‑lawyer assistants, rather than invisible utilities, you position your firm to navigate future lawsuits, acquisitions, and regulatory storms with far less disruption. That is good risk management, sound ethics, and, increasingly, a core element of competent lawyering in the digital era. 💼⚖️

MTC: Everyday Tech, Extraordinary Evidence: How Lawyers Can Turn Smartphones, Dash Cams, and Wearables Into Case‑Winning Proof After the Minnesota ICE Shooting 📱⚖️

Smartphone evidence: Phone as Proof!

The recent fatal shooting of ICU nurse Alex Pretti by a federal immigration officer in Minneapolis has become a defining example of how everyday technology can reshape a high‑stakes legal narrative. 📹 Federal officials claimed Pretti “brandished” a weapon, yet layered cellphone videos from bystanders, later analyzed by major news outlets, appear to show an officer disarming him moments before multiple shots were fired while he was already on the ground. In a world where such encounters are documented from multiple angles, lawyers who ignore ubiquitous tech risk missing powerful, and sometimes exonerating, evidence.

Smartphones: The New Star Witness

In the Minneapolis shooting, multiple smartphone videos captured the encounter from different perspectives, and a visual analysis highlighted discrepancies between official statements and what appears on camera. One video reportedly shows an officer reaching into Pretti’s waistband and emerging with a handgun; barely a second later, shots erupt as Pretti lies prone on the sidewalk, still being fired upon. For litigators, this is not just news; it is a case study in how to treat smartphones as critical evidentiary tools, not afterthoughts.

Practical ways to leverage smartphone evidence include:

  • Identifying and preserving bystander footage early through public calls, client outreach, and subpoenas to platforms when appropriate.

  • Synchronizing multiple clips to create a unified timeline, revealing who did what, when, and from where (see the brief sketch at the end of this section).

  • Using frame‑by‑frame analysis to test or challenge claims about “brandishing,” “aggressive resistance,” or imminent threat, as occurred in the Pretti shooting controversy.

In civil rights, criminal defense, and personal‑injury practice, this kind of video can undercut self‑defense narratives, corroborate witness accounts, or demonstrate excessive force, all using tech your clients already carry every day. 📲
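
For readers curious about the mechanics of the synchronization step mentioned above, here is a minimal sketch, assuming each clip’s capture start time has already been extracted from its metadata by a qualified examiner or forensic tool. The clip names and timestamps are invented for illustration; real matters call for documented chain of custody and expert validation.

```python
# Minimal sketch: align bystander clips on a common timeline (illustrative only).
# Assumes capture start times were already extracted from each file's metadata;
# the clip names and timestamps below are hypothetical.
from datetime import datetime

clip_start_times = {
    "bystander_clip_01.mp4": datetime(2026, 1, 24, 14, 3, 12),
    "bystander_clip_02.mp4": datetime(2026, 1, 24, 14, 3, 27),
    "dashcam_front.mp4":     datetime(2026, 1, 24, 14, 2, 58),
}

# Use the earliest clip as time zero so every other clip gets a positive offset.
time_zero = min(clip_start_times.values())

for name, start in sorted(clip_start_times.items(), key=lambda kv: kv[1]):
    offset = (start - time_zero).total_seconds()
    print(f"{name}: starts {offset:+.0f} seconds after time zero")
```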

GPS Data and Location Trails: Quiet but Powerful Proof

The same smartphones that record video also log location data, which can quietly become as important as any eyewitness. Modern phones can provide time‑stamped GPS histories that help confirm where a client was, how long they stayed, and in some instances approximate movement speed—details that matter in shootings, traffic collisions, and kidnapping cases. Lawyers increasingly use this location data to:

  • Corroborate or challenge alibis by matching GPS trails with claimed timelines.

  • Reconstruct movement patterns in protest‑related incidents, showing whether someone approached officers or was simply present, as contested in the Minneapolis shooting narrative.

  • Support or refute claims that a vehicle was fleeing, chasing, or unlawfully following another party.

In complex matters with multiple parties, cross‑referencing GPS from several phones, plus vehicle telematics, can create a robust, data‑driven reconstruction that a fact‑finder can understand without a computer science degree.

Dash cam / cameras: Dashcam Truth!

Dash Cams and 360‑Degree Vehicle Video: Replaying the Scene

Cars now function as rolling surveillance systems. Many new vehicles ship with factory cameras, and after‑market 360‑degree dash‑cam systems are increasingly common, capturing impacts, near‑misses, and police encounters in real time. In a Minneapolis‑style protest environment, vehicle‑mounted cameras can document:

  • How a crowd formed, whether officers announced commands, and whether a driver accelerated or braked before an alleged assault.

  • The precise position of pedestrians or officers relative to a car at the time of a contested shooting.

  • Sound cues (shouts of “he’s got a gun!” or “where’s the gun?”) that provide crucial context to the video, like those reportedly heard in footage of the Pretti shooting.

For injury and civil rights litigators, requesting dash‑cam footage from all involved vehicles—clients, third parties, and law enforcement—should now be standard practice. 🚗 A single 360‑degree recording might capture the angle that body‑worn police cameras miss or omit.

Wearables and Smartwatches: Biometrics as Evidence

GPS & wearables: Data Tells All!

Smartwatches and fitness trackers add a new dimension: heart‑rate, step counts, sleep data, and sometimes even blood‑oxygen metrics. In use‑of‑force incidents or violent encounters, this information can be unusually persuasive. Imagine:

  • A heart‑rate spike precisely at the time of an assault, followed by a sustained elevation that reinforces trauma testimony.

  • Step‑count and GPS data confirming that a client was running away, standing still, or immobilized as claimed.

  • Sleep‑pattern disruptions and activity changes supporting damages in emotional‑distress claims.

These devices effectively turn the body into a sensor network. When combined with phone video and location data, they help lawyers build narratives supported by objective, machine‑created logs rather than only human recollection. ⌚

Creative Strategies for Integrating Everyday Tech

To move from concept to courtroom, lawyers should adopt a deliberate strategy for everyday tech evidence:

  • Build intake questions that explicitly ask about phones, car cameras, smartwatches, home doorbell cameras, and even cloud backups.

  • Move quickly for preservation orders, as Minnesota officials did when a judge issued a temporary restraining order to prevent alteration or removal of shooting‑related evidence in the Pretti case.

  • Partner with reputable digital‑forensics professionals who can extract, authenticate, and, when needed, recover deleted or damaged files.

  • Prepare demonstrative exhibits that overlay video, GPS points, and timelines in a simple visual, so judges and juries understand the story without technical jargon.

The Pretti shooting also underscores the need to anticipate competing narratives: federal officials asserted he posed a threat, while video and witness accounts cast doubt on that framing, fueling protests and calls for accountability. Lawyers on all sides must learn to dissect everyday tech evidence critically—scrutinizing what it shows, what it omits, and how it fits with other proof.

Ethical and Practical Guardrails

Ethics-focused image: Ethics First!

With this power comes real ethical responsibility. Lawyers must align their use of everyday tech with core duties under the ABA Model Rules of Professional Conduct.

  • Competence (ABA Model Rule 1.1)
    Rule 1.1 requires “competent representation,” and Comment 8 now expressly includes a duty to keep abreast of the benefits and risks of relevant technology. When you rely on smartphone video, GPS logs, or wearable data, you must either develop sufficient understanding yourself or associate with or consult someone who does.

  • Confidentiality and Data Security (ABA Model Rule 1.6)
    Rule 1.6 obligates lawyers to make reasonable efforts to prevent unauthorized access to or disclosure of client information. This extends to sensitive video, location trails, and biometric data stored on phones, cloud accounts, or third‑party platforms. Lawyers should use secure storage, limit access, and, where appropriate, obtain informed consent about how such data will be used and shared.

  • Preservation and Integrity of Evidence (ABA Model Rules 3.4, 4.1, and related e‑discovery ethics)
    ABA ethics guidance and case law emphasize that lawyers must not unlawfully alter, destroy, or conceal evidence. That means clients should be instructed not to edit, trim, or “clean up” recordings, and that any forensic work should follow accepted chain‑of‑custody protocols.

  • Candor and Avoiding Cherry‑Picking (ABA Model Rules 3.3 and 4.1)
    Rule 3.3 requires candor toward the tribunal, and Rule 4.1 prohibits knowingly making false statements of fact. Lawyers should present digital evidence in context, avoiding selective clips that distort timing, perspective, or sound. A holistic, transparent approach builds credibility and protects both the client and the profession.

  • Respect for Privacy and Non‑Clients (ABA Model Rule 4.4 and related guidance)
    Rule 4.4 governs respect for the rights of third parties, including their privacy interests. When you obtain bystander footage or data from non‑clients, you should consider minimizing unnecessary exposure of their identities and, where feasible, seek consent or redact sensitive information.

FINAL THOUGHTS

Handled with these rules in mind, everyday tech can reduce factual ambiguity and support more just outcomes. Misused, it can undermine trust, compromise admissibility, and trigger disciplinary scrutiny. ⚖️

MTC: Why Lawyers Should Podcast in 2026: Human Connection, Authority Building, and Tech-Smart Growth for Your Law Practice 🎙️⚖️

For nearly six years, podcasting has been more than a business development tool to me; it has been a way to talk about topics that matter, in a format that feels natural, conversational, and—even for lawyers—fun. 🎧 Podcasting lets the public, and potential clients, get to know you as a person instead of just a name on a website or a face on a billboard, and that human connection is rapidly becoming the real differentiator in a crowded legal marketplace.

Podcasting can be a key to a lawyer’s marketing strategy and may even allow lawyers to have a little fun too!

At PODFEST EXPO 2026 in Orlando, I sat down with a remarkable panel of lawyers, former lawyers, and legal professionals for a “pop‑up” roundtable on why lawyers should podcast. My guests included Dennis “DM” Meador of The Legal Podcast Network, Louis Goodman of Love Thy Lawyer, previous podcast guest Robert Ingalls of LawPods, personal‑branding expert Wendi Weiner of The Writing Guru, and Elizabeth Gearhart of Passage to Profit. Together, we explored not just why lawyers should podcast, but how podcasting can support branding, authenticity, and even your visibility in search engines and large language models (LLMs).

Several themes emerged. First, podcasting is now a trusted medium for younger generations; DM noted that for Gen Z and Gen Alpha, podcasts and short-form video are top information sources, and if you “don’t want to dance on TikTok, get a podcast.” Second, a show can function as an “electronic résumé,” as Louis described, demonstrating your consistency, curiosity, and staying power far better than a static bio ever will. Third, a podcast is a powerful filter: by sharing your real voice—salty language and all, if that is true to you—your audience quickly learns whether you are “their” lawyer or not, which matters in multi‑year relationships such as injury or family law matters.

Podcasting is also a networking and authority engine. Elizabeth emphasized how Passage to Profit has grown from a radio show into a nationally syndicated podcast that not only builds trust with human listeners but also increases her firm’s presence in tools like ChatGPT, Gemini, and Perplexity. By repurposing podcast transcripts and show notes intelligently, she has observed measurable traffic from LLMs to the Gearhart Law website—proof that conversational content can improve your visibility in the emerging “language-based internet.” Wendi highlighted that podcasting dovetails perfectly with personal branding: it is a scalable way to tell your story, show your “superpower,” and convey your unique value beyond the four corners of a résumé.

Lawyers can gain invaluable insights from podcasting conferences like Podfest, enhancing their firms’ marketing, online visibility, and overall digital presence.

Of course, podcasting is not only about business. For many of us, it began as a hobby or a creative outlet that happened to support SEO, referrals, and professional relationships along the way. The lawyers on the panel repeatedly stressed that you do not have to talk exclusively about black‑letter law: you can focus on entrepreneurship, technology, careers, politics, or any niche that authentically reflects who you are and the clients you want to serve.

That balance between enjoyment and strategy is exactly why The Lawyer’s Tech Guide: The Lawyer’s Guide to Podcasting exists. 📘 This new book, just released today, breaks down the who, what, why, where, and how of podcasting for lawyers—from equipment and workflow to ethics, marketing, and monetization—so you can launch a show that is both sustainable and aligned with your practice and values. You can grab your copy on Amazon and start turning your expertise and personality into a discoverable, binge‑worthy asset for your clients, colleagues, and community.

📢 Stay tuned! The roundtable episode from Podfest 2026 drops tomorrow. You will hear directly from DM, Louis, Robert, Wendi, and Elizabeth as they share the candid, practical advice that every tech‑curious lawyer thinking about podcasting needs to hear. 🎙️

MTC: PornHub Breach: Cybersecurity Wake-Up Call for Lawyers

Lawyers are the first-line defenders of their clients’ PII.

It's the start of the New Year, and as good a time as any to remind the legal profession of its cybersecurity obligations! The recent PornHub data exposure reveals critical vulnerabilities every lawyer must address under ABA ethical obligations. Third-party analytics provider Mixpanel suffered a breach compromising user email addresses, triggering targeted sextortion campaigns. This incident illuminates three core security domains for legal professionals while highlighting specific duties under ABA Model Rules 1.1, 1.6, 5.1, 5.3, and Formal Opinion 483.

Understanding the Breach and Its Legal Implications

The PornHub incident demonstrates how failures by third-party vendors can lead to cascading security consequences. When Mixpanel's systems were compromised, attackers gained access to email addresses that now fuel sextortion schemes. Criminals threaten to expose purported adult site usage unless victims pay cryptocurrency ransoms. For law firms, this scenario is not hypothetical—your practice management software, cloud storage providers, and analytics tools present identical vulnerabilities. Each third-party vendor represents a potential entry point for attackers targeting your client data.

ABA Model Rule 1.1: The Foundation of Technology Competence

ABA Model Rule 1.1 requires lawyers to provide competent representation, and Comment 8 explicitly extends this duty to technology: "To maintain the requisite knowledge and skill, a lawyer should keep abreast of changes in the law and its practice, including the benefits and risks associated with relevant technology". This is not a suggestion—it is an ethical mandate. Thirty-one states have adopted this technology competence requirement into their professional conduct rules.

What does this mean practically? You must understand the security implications of every technology tool your firm uses. Before onboarding any platform, conduct due diligence on the vendor's security practices. Require SOC 2 compliance, cyber insurance verification, and detailed security questionnaires. The "reasonable efforts" standard does not demand perfection, but it does require informed decision-making. You cannot delegate technology competence entirely to IT consultants. You must understand enough to ask the right questions and evaluate the answers meaningfully.

ABA Model Rule 1.6: Safeguarding Client Information in Digital Systems

Rule 1.6 establishes your duty of confidentiality, and Comment 18 requires “reasonable efforts to prevent [the inadvertent or unauthorized] access to or disclosure of” information relating to the representation of a client. This duty extends beyond privileged communications to all client-related information stored digitally.

The PornHub breach illustrates why this matters. Your firm's email system, document management platform, and client portals contain information criminals actively target. The "reasonable efforts" analysis considers the sensitivity of information, likelihood of disclosure without additional safeguards, cost of safeguards, and difficulty of implementation. For most firms, this means mandatory multi-factor authentication (MFA) on all systems, encryption for data at rest and in transit, and secure file-sharing platforms instead of email attachments.

You must also address third-party vendor access under Rule 1.6. When you grant a case management platform access to client data, you remain ethically responsible for protecting that information. Your engagement letters should specify security expectations, and vendor contracts must include confidentiality obligations and breach notification requirements.

ABA Model Rules 5.1 and 5.3: Supervisory Responsibilities Extend to Technology

Lawyers need to stay up to date on the security protocols for their firm’s software!

Rule 5.1 imposes duties on partners and supervisory lawyers to ensure the firm has measures giving "reasonable assurance that all lawyers in the firm conform to the Rules of Professional Conduct". Rule 5.3 extends this duty to nonlawyer assistants, which courts and ethics opinions have interpreted to include technology vendors and cloud service providers.

If you manage a firm or supervise other lawyers, you must implement technology policies and training programs. This includes security awareness training, password management requirements, and incident reporting procedures. You cannot assume your younger associates understand cybersecurity best practices—they need explicit training and clear policies.

For nonlawyer assistance, you must "make reasonable efforts to ensure that the person's conduct is compatible with the professional obligations of the lawyer". This means vetting your IT providers, requiring them to maintain appropriate security certifications, and ensuring they understand their confidentiality obligations. Your vendor management program is an ethical requirement, not just a business best practice.

ABA Formal Opinion 483: Data Breach Response Requirements

ABA Formal Opinion 483 establishes clear obligations when a data breach occurs. Lawyers have a duty to monitor for breaches, stop and mitigate damage promptly, investigate what occurred, and notify affected clients. This duty arises from Rules 1.1 (competence), 1.6 (confidentiality), and 1.4 (communication).

The Opinion requires you to have a written incident response plan before a breach occurs. Your plan must identify who will coordinate the response, how you will communicate with affected clients (including backup communication methods if email is compromised), and what steps you will take to assess and remediate the breach. You must document what data was accessed, whether malware was used, and whether client information was taken, altered, or destroyed.

Notification to clients is mandatory when a breach involves material client confidential information. The notification must be prompt and include what happened, what information was involved, what you are doing in response, and what clients should do to protect themselves. This duty extends to former clients in many circumstances, as their files may still contain sensitive information subject to state data breach laws.

Three Security Domains: Personal, Practice, and Client Protection

Your Law Practice's Security
Under Rules 5.1 and 5.3, you must implement reasonable security measures throughout your firm. Conduct annual cybersecurity risk assessments. Require MFA on all systems. Implement data minimization principles—only share what vendors absolutely need. Establish incident response protocols before breaches occur. Your supervisory duties require you to ensure that all firm personnel, including non-lawyer staff, understand and follow the firm's security policies.

Client Security Obligations
Rule 1.4 requires you to keep clients reasonably informed, which includes advising them on security matters relevant to their representation. Clients experiencing sextortion need immediate, informed guidance. Preserve all threatening emails with headers intact. Document timestamps and demands. Advise clients never to pay or respond—payment confirms active monitoring and often leads to additional demands. Report incidents to the FBI's IC3 unit and local cybercrime divisions. For family law practitioners, understand that sextortion often targets vulnerable individuals during contentious proceedings. Criminal defense attorneys must recognize these threats as extortion, not embarrassment issues. Your competence under Rule 1.1 requires you to understand these threats well enough to provide effective guidance.

Personal Digital Hygiene
Your personal email account is your digital identity's master key. Enable MFA on all professional and personal accounts. Use unique, complex passwords managed through a password manager. Consider pseudonymous email addresses for sensitive subscriptions. Separate your litigation communications from personal browsing activities. The STOP framework applies: Slow down, Test suspicious contacts, Opt out of high-pressure conversations, and Prove identities through independent channels. Your personal security failures can compromise your professional obligations under Rule 1.6.

Practical Implementation Steps

There are five practical implementation steps lawyers can take today to get their practice cyber compliant!

First, conduct a technology audit to map every system that stores or accesses client information. Identify all third-party vendors and assess their security practices against industry standards. (A minimal inventory sketch appears after these five steps.)

Second, implement MFA across all systems immediately—this is one of the most effective and cost-efficient security controls available.

Third, develop written security policies covering password management, device encryption, remote work procedures, and incident response.

Fourth, train all firm personnel on these policies and conduct simulated phishing exercises to test awareness.

Fifth, review and update your engagement letters to include technology provisions and breach notification procedures.
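
As a rough illustration of the first step, the sketch below shows one way a small firm might record its technology inventory and flag open items. The system names, vendors, and fields are hypothetical; a shared spreadsheet that asks the same questions of every system works just as well.

```python
# Minimal sketch of a firm technology audit (illustrative only).
# System names, vendors, and fields are hypothetical; the point is simply to
# ask the same security questions of every tool that touches client data.
systems = [
    {"name": "Practice management / CRM", "vendor": "ExampleCRM",
     "holds_client_data": True, "mfa_enforced": True, "soc2_report_reviewed": True,
     "data_export_tested": False},
    {"name": "Cloud document storage", "vendor": "ExampleDrive",
     "holds_client_data": True, "mfa_enforced": False, "soc2_report_reviewed": True,
     "data_export_tested": True},
    {"name": "AI research tool", "vendor": "ExampleAI",
     "holds_client_data": True, "mfa_enforced": True, "soc2_report_reviewed": False,
     "data_export_tested": False},
]

for s in systems:
    gaps = []
    if s["holds_client_data"] and not s["mfa_enforced"]:
        gaps.append("enable MFA")
    if not s["soc2_report_reviewed"]:
        gaps.append("request and review SOC 2 report")
    if not s["data_export_tested"]:
        gaps.append("test data export")
    status = "; ".join(gaps) if gaps else "no open items"
    print(f'{s["name"]} ({s["vendor"]}): {status}')
```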

Conclusion

The PornHub breach is not an isolated incident—it is a template for how modern attacks occur through third-party vendors. Your ethical duties under ABA Model Rules require proactive cybersecurity measures, not reactive responses after a breach. Technology competence under Rule 1.1, confidentiality protection under Rule 1.6, supervisory responsibilities under Rules 5.1 and 5.3, and breach response obligations under Formal Opinion 483 together create a comprehensive framework for protecting your practice and your clients. Cybersecurity is no longer an IT issue delegated to consultants; it is a core professional competency that affects your license to practice law. The time to act is before your firm appears in a breach notification headline.

MTC🪙🪙: When Reputable Databases Fail: What Lawyers Must Do After AI Hallucinations Reach the Court

What should a lawyer do when they inadvertently use a hallucinated cite?

In a sobering December 2025 filing in Integrity Investment Fund, LLC v. Raoul, plaintiff's counsel disclosed what many in the legal profession feared: even reputable legal research platforms can generate hallucinated citations. The Motion to Amend Complaint revealed that "one of the cited cases in the pending Amended Complaint could not be found," along with other miscited cases, despite the legal team using LexisNexis and LEXIS+ Document Analysis tools rather than general-purpose AI like ChatGPT. The attorney expressed being "horrified" by these inexcusable errors, but horror alone does not satisfy ethical obligations.

This case crystallizes a critical truth for the legal profession: artificial intelligence remains a tool requiring rigorous human oversight, not a substitute for attorney judgment. When technology fails—and Stanford research confirms it fails at alarming rates—lawyers must understand their ethical duties and remedial obligations.

The Scope of the Problem: Even Premium Tools Hallucinate

Legal AI vendors marketed their products as hallucination-resistant, leveraging retrieval-augmented generation (RAG) technology to ground responses in authoritative legal databases. Yet as reported in our 📖 WORD OF THE WEEK YEAR🥳:  Verification: The 2025 Word of the Year for Legal Technology ⚖️💻, independent testing by Stanford's Human-Centered Artificial Intelligence program and RegLab reveals persistent accuracy problems. Lexis+ AI produced incorrect information 17% of the time, while Westlaw's AI-Assisted Research hallucinated at nearly double that rate—34% of queries.

These statistics expose a dangerous misconception: that specialized legal research platforms eliminate fabrication risks. The Integrity Investment Fund case demonstrates that attorneys using established, subscription-based legal databases still face citation failures. Courts nationwide have documented hundreds of cases involving AI-generated hallucinations, with 324 incidents in U.S. federal, state, and tribal courts as of late 2025. Legal professionals can no longer claim ignorance about AI limitations.

The consequences extend beyond individual attorneys. As one federal court warned, hallucinated citations that infiltrate judicial opinions create precedential contamination, potentially "sway[ing] an actual dispute between actual parties"—an outcome the court described as "scary". Each incident erodes public confidence in the justice system and, as one commentator noted, "sets back the adoption of AI in law".

The Ethical Framework: Three Foundational Rules

When attorneys discover AI-generated errors in court filings, three Model Rules of Professional Conduct establish clear obligations.

ABA Model Rule 1.1 mandates technological competence. The 2012 amendment to Comment 8 requires lawyers to "keep abreast of changes in the law and its practice, including the benefits and risks associated with relevant technology". Forty-one jurisdictions have adopted this technology competence requirement. This duty is ongoing and non-delegable. Attorneys cannot outsource their responsibility to understand the tools they deploy, even when those tools carry premium price tags and prestigious brand names.

Technological competence means understanding that current AI legal research tools hallucinate at rates ranging from 17% to 34%. It means recognizing that longer AI-generated responses contain more falsifiable propositions and therefore pose a greater risk of hallucination. It means implementing verification protocols rather than accepting AI output as authoritative.

ABA Model Rule 3.3 requires candor toward the tribunal. This rule prohibits knowingly making false statements of law or fact to a court and imposes an affirmative duty to correct false statements previously made. The duty continues until the conclusion of the proceeding. Critically, courts have held that the standard under Federal Rule of Civil Procedure 11 is objective reasonableness, not subjective good faith. As one court stated, "An attorney who acts with 'an empty head and a pure heart' is nonetheless responsible for the consequences of his actions".

When counsel in Integrity Investment Fund discovered the miscitations, filing a Motion to Amend Complaint fulfilled this corrective duty. The attorney took responsibility and sought to rectify the record before the court relied on fabricated authority. This represents the ethical minimum. Waiting for opposing counsel or the court to discover errors invites sanctions and disciplinary referrals.

The duty of candor applies regardless of how the error originated. In Kaur v. Desso, a Northern District of New York court rejected an attorney's argument that time pressure justified inadequate verification, stating that "the need to check whether the assertions and quotations generated were accurate trumps all". Professional obligations do not yield to convenience or deadline stress.

ABA Model Rules 5.1 and 5.3 establish supervisory responsibilities. Managing attorneys must ensure that subordinate lawyers and non-lawyer staff comply with the Rules of Professional Conduct. When a supervising attorney has knowledge of specific misconduct and ratifies it, the supervisor bears responsibility. This principle extends to AI-assisted work product.

The Integrity Investment Fund matter reportedly involved an experienced attorney assisting with drafting. Regardless of delegation, the signing attorney retains ultimate accountability. Law firms must implement training programs on AI limitations, establish mandatory review protocols for AI-generated research, and create policies governing which tools may be used and under what circumstances. Partners reviewing junior associate work must apply heightened scrutiny to AI-assisted documents, treating them as first drafts requiring comprehensive validation.

Federal Rule of Civil Procedure 11: The Litigation Hammer

Reputable databases can hallucinate too!

Beyond professional responsibility rules, Federal Rule of Civil Procedure 11 authorizes courts to impose sanctions on attorneys who submit documents without a reasonable inquiry into the facts and law. Courts may sanction the attorney, the party, or both. Sanctions range from monetary penalties paid to the court or opposing party to non-monetary directives, including mandatory continuing legal education, public reprimands, and referrals to disciplinary authorities.

Rule 11 contains a 21-day safe harbor provision. Before filing a sanctions motion, the moving party must serve the motion on opposing counsel, who has 21 days to withdraw or correct the challenged filing. If counsel promptly corrects the error during this window, sanctions may be avoided. This procedural protection rewards attorneys who implement monitoring systems to catch mistakes early.

Courts have imposed escalating consequences as AI hallucination cases proliferate. Early cases resulted in warnings or modest fines. Recent sanctions have grown more severe. A Colorado attorney received a 90-day suspension after admitting in text messages that he failed to verify ChatGPT-generated citations. An Arizona federal judge sanctioned an attorney and required her to personally notify three federal judges whose names appeared on fabricated opinions, revoked her pro hac vice admission, and referred her to the Washington State Bar Association. A California appellate court issued a historic fine after discovering 21 of 23 quotes in an opening brief were fake.

Morgan & Morgan—the 42nd largest law firm by headcount—faced a $5,000 sanction when attorneys filed a motion citing eight nonexistent cases generated by an internal AI platform. The court divided the sanction among three attorneys, with the signing attorney bearing the largest portion. The firm's response acknowledged "great embarrassment" and promised reforms, but the reputational damage extends beyond the individual case.

What Attorneys Must Do: A Seven-Step Protocol

Legal professionals who discover AI-generated errors in filed documents must act decisively. The following protocol aligns with ethical requirements and minimizes sanctions risk:

First, immediately cease relying on the affected research. Do not file additional briefs or make oral arguments based on potentially fabricated citations. If a hearing is imminent, notify the court that you are withdrawing specific legal arguments pending verification.

Second, conduct a comprehensive audit. Review every citation in the affected filing. Retrieve and read the full text of each case or statute cited. Verify that quoted language appears in the source and that the legal propositions match the authority's actual holding. Check citation accuracy using Shepard's or KeyCite to confirm cases remain good law. This process cannot be delegated to the AI tool that generated the original errors.

Third, assess the materiality of errors. Determine whether fabricated citations formed the basis for legal arguments or appeared as secondary support. In Integrity Investment Fund, counsel noted that "the main precedents...and the...statutory citations are correct, and none of the Plaintiffs' claims were based on the mis-cited cases". This distinction affects the appropriate remedy but does not eliminate the obligation to correct the record.

Fourth, notify opposing counsel immediately. Candor extends to adversaries. Explain that you have discovered citation errors and are taking corrective action. This transparency may forestall sanctions motions and demonstrates good faith to the court.

Fifth, file a corrective pleading or motion. In Integrity Investment Fund, counsel filed a Motion to Amend Complaint under Federal Rule of Civil Procedure 15(a)(2). Alternative vehicles include motions to correct the record, errata sheets, or supplemental briefs. The filing should acknowledge the errors explicitly, explain how they occurred without shifting blame to technology, take personal responsibility, and specify the corrections being made.

Sixth, notify the court in writing. Even if opposing counsel does not move for sanctions, attorneys have an independent duty to inform the tribunal of material misstatements. The notification should be factual and direct. In cases where fabricated citations attributed opinions to real judges, courts have required attorneys to send personal letters to those judges clarifying that the citations were fictitious.

Seventh, implement systemic reforms. Review firm-wide AI usage policies. Provide training on verification requirements. Establish mandatory review checkpoints for AI-assisted work product. Consider technology solutions such as citation validation software that flags cases not found in authoritative databases. Document these reforms in any correspondence with the court or bar authorities to demonstrate that the incident prompted institutional change.
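The specifics of such tooling will vary by vendor, but to make the idea concrete, here is a minimal sketch of a citation-flagging script. It uses the open-source eyecite library from the Free Law Project to pull citations out of a draft, and CourtListener's citation-lookup API to see whether each one resolves to a real opinion. The endpoint URL, response fields, token, and file name shown are assumptions and placeholders to check against current documentation, not a turnkey solution.

```python
# Minimal sketch: extract citations from a draft brief and flag any that an
# authoritative database cannot resolve. Assumes the open-source "eyecite"
# library and CourtListener's citation-lookup endpoint; the endpoint URL and
# response fields are assumptions to verify against current documentation.
import requests
from eyecite import get_citations
from eyecite.models import FullCaseCitation

LOOKUP_URL = "https://www.courtlistener.com/api/rest/v4/citation-lookup/"  # assumed endpoint

def flag_suspect_citations(brief_text: str, api_token: str) -> list[str]:
    """Return citation strings the database could not match.
    A human must still read every case; this only surfaces candidates."""
    suspects = []
    for cite in get_citations(brief_text):
        if not isinstance(cite, FullCaseCitation):
            continue  # skip short-form and id. citations in this sketch
        citation_string = cite.corrected_citation()  # e.g., "574 U.S. 528"
        resp = requests.post(
            LOOKUP_URL,
            headers={"Authorization": f"Token {api_token}"},
            data={"text": citation_string},
            timeout=30,
        )
        resp.raise_for_status()
        results = resp.json()
        # Hypothetical response handling: no matched clusters -> red flag.
        if not any(item.get("clusters") for item in results):
            suspects.append(citation_string)
    return suspects

if __name__ == "__main__":
    draft = open("draft_brief.txt", encoding="utf-8").read()  # placeholder file
    for bad in flag_suspect_citations(draft, api_token="YOUR_COURTLISTENER_TOKEN"):
        print("VERIFY BY HAND:", bad)
```

Even when every citation resolves, a script like this has only confirmed that the cited case exists. The second step above, reading the authority and confirming it says what the brief claims, remains work only a lawyer can do.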

The Duty to Supervise: Training the Humans and the Machines

The Integrity Investment Fund case involved an experienced attorney assisting with drafting, yet errors reached the court. This pattern appears throughout AI hallucination cases. In the Chicago Housing Authority litigation, the responsible attorney had previously published an article on ethical considerations of AI in legal practice, yet still submitted a brief citing the nonexistent case Mack v. Anderson. Knowledge about AI risks does not automatically translate into effective verification practices.

Law firms must treat AI tools as they would junior associates—competent at discrete tasks but requiring supervision. Partners should review AI-generated research as they would first-year associate work, assuming errors exist and exercising vigilant attention to detail. Unlike human associates who learn from corrections, AI systems may perpetuate errors across multiple matters until their underlying models are retrained.

Training programs should address specific hallucination patterns. AI tools frequently fabricate case citations with realistic-sounding names, accurate-appearing citation formats, and plausible procedural histories. They misrepresent legal holdings, confuse arguments made by litigants with court rulings, and fail to respect the hierarchy of legal authority. They cite proposed legislation as enacted law and rely on overturned precedents as current authority. Attorneys must learn to identify these red flags.

Supervisory duties extend to non-lawyer staff. If a paralegal uses an AI grammar checker on a document containing confidential case strategy, the supervising attorney bears responsibility for any confidentiality breach. When legal assistants use AI research tools, attorneys must verify their work with the same rigor applied to traditional research methods.

Client Communication and Informed Consent

Watch out for AI hallucinations!

Ethical obligations to clients intersect with AI usage in multiple ways. ABA Model Rule 1.4 requires attorneys to keep clients reasonably informed and to explain matters to the extent necessary for clients to make informed decisions. Several state bar opinions suggest that attorneys should obtain informed consent before inputting confidential client information into AI tools, particularly those that use data for model training.

The confidentiality analysis turns on the AI tool's data-handling practices. Many general-purpose AI platforms explicitly state in their terms of service that they use input data for model training and improvement. This creates significant privilege and confidentiality risks. Even legal-specific platforms may share data with third-party vendors or retain information on servers outside the firm's control. Attorneys must review vendor agreements, understand data flow, and ensure adequate safeguards exist before using AI tools on client matters.

When AI-generated errors reach a court filing, clients deserve prompt notification. The errors may affect litigation strategy, settlement calculations, or case outcome predictions. In extreme cases, such as when a court dismisses claims or imposes sanctions, malpractice liability may arise. Transparent communication preserves the attorney-client relationship and demonstrates that the lawyer prioritizes the client's interests over protecting their reputation.

Jurisdictional Variations: Illinois Sets the Standard

While the ABA Model Rules provide a national framework, individual jurisdictions have begun addressing AI-specific issues. Illinois, where the Integrity Investment Fund case was filed, has taken proactive steps.

The Illinois Supreme Court adopted a Policy on Artificial Intelligence effective January 1, 2025. The policy recognizes that AI presents challenges for protecting private information, avoiding bias and misrepresentation, and maintaining judicial integrity. The court emphasized "upholding the highest ethical standards in the administration of justice" as a primary concern.

In September 2025, Judge Sarah D. Smith of Madison County Circuit Court issued a Standing Order on Use of Artificial Intelligence in Civil Cases, later extended to other Madison County courtrooms. The order "embraces the advancement of AI" while mandating that tools "remain consistent with professional responsibilities, ethical standards and procedural rules". Key provisions include requirements for human oversight and legal judgment, verification of all AI-generated citations and legal statements, disclosure of expert reliance on AI to formulate opinions, and potential sanctions for submissions including "case law hallucinations, [inappropriate] statements of law, or ghost citations".

Arizona has been particularly active given the high number of AI hallucination cases in the state—second only to the Southern District of Florida. The State Bar of Arizona issued guidance calling on lawyers to verify all AI-generated research before submitting it to courts or clients. The Arizona Supreme Court's Steering Committee on AI and the Courts issued similar guidance emphasizing that judges and attorneys, not AI tools, are responsible for their work product.

Other states are following suit. California issued Formal Opinion 2015-193 interpreting technological competence requirements. The District of Columbia Bar issued Ethics Opinion 388 in April 2024, specifically addressing generative artificial intelligence in client matters. These opinions converge on several principles: competence includes understanding AI technology sufficiently to be confident it advances client interests, all AI output requires verification before use, and technology assistance does not diminish attorney accountability.

The Path Forward: Responsible AI Integration

The legal profession stands at a crossroads. AI tools offer genuine efficiency gains—automated document review, pattern recognition in discovery, preliminary legal research, and jurisdictional surveys. Rejecting AI entirely would place practitioners at a competitive disadvantage and potentially violate the duty to provide competent, efficient representation.

Yet uncritical adoption invites the disasters documented in hundreds of cases nationwide. The middle path provided by the Illinois courts requires human oversight and legal judgment at every stage.

Attorneys should adopt a "trust but verify" approach. Use AI for initial research, document drafting, and analytical tasks, but implement mandatory verification protocols before any work product leaves the firm. Treat AI-generated citations as provisional until independently confirmed. Read cases rather than relying on AI summaries. Check the currency of legal authorities. Confirm that quotations appear in the cited sources.

Law firms should establish tiered AI usage policies. Low-risk applications such as document organization or calendar management may require minimal oversight. High-risk applications, including legal research, brief writing, and client advice, demand multiple layers of human review. Some uses—such as inputting highly confidential information into general-purpose AI platforms—should be prohibited entirely.
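What counts as low or high risk will differ from firm to firm, but the tiers are easier to enforce once they are written down somewhere a computer can read them. Here is an illustrative sketch of a policy encoded as data and checked by a simple function; the tool names and tier assignments are hypothetical examples, not recommendations.

```python
# Illustrative sketch of a tiered AI-usage policy encoded as data so an intake
# form or document-management hook can check it. Tool names and tier
# assignments are hypothetical examples, not recommendations.
from enum import Enum

class Tier(Enum):
    LOW = "minimal oversight"
    HIGH = "mandatory attorney review before anything leaves the firm"
    PROHIBITED = "do not use"

POLICY = {
    "calendar_assistant": Tier.LOW,
    "document_organizer": Tier.LOW,
    "legal_research_assistant": Tier.HIGH,
    "brief_drafting_tool": Tier.HIGH,
    "public_chatbot": Tier.HIGH,
}

def check_use(tool: str, contains_client_confidences: bool) -> str:
    """Default unknown tools to the stricter tier; block confidential input to public tools."""
    tier = POLICY.get(tool, Tier.HIGH)
    if tier is Tier.PROHIBITED or (contains_client_confidences and tool == "public_chatbot"):
        return "BLOCKED: prohibited use under firm AI policy"
    return f"ALLOWED, oversight required: {tier.value}"

print(check_use("legal_research_assistant", contains_client_confidences=False))
print(check_use("public_chatbot", contains_client_confidences=True))
```

The value is less in the code than in the exercise: it forces the firm to decide, tool by tool, which tier each use falls into before the tool is in front of staff.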

Billing practices must evolve. If AI reduces the time required for legal research from eight hours to two hours, the efficiency gain should benefit clients through lower fees rather than inflating attorney profits. Clients should not pay both for AI tool subscriptions and for the same number of billable hours as traditional research methods would require. Transparent billing practices build client trust and align with fiduciary obligations.

Lessons from Integrity Investment Fund

The Integrity Investment Fund case offers several instructive elements. First, the attorney used a reputable legal database rather than a general-purpose AI, yet fabricated citations still reached the court; brand names and subscription fees do not guarantee accuracy. Second, the attorney discovered the errors and voluntarily sought to amend the complaint rather than waiting for opposing counsel or the court to raise the issue. This proactive approach likely mitigated potential sanctions. Third, the attorney took personal responsibility, describing himself as "horrified" rather than deflecting blame to the technology.

The court's response also merits attention. Rather than immediately imposing sanctions, the court directed defendants to respond to the motion to amend and address the effect on pending motions to dismiss. This measured approach recognizes that not all AI-related errors warrant the most severe consequences, particularly when counsel acts promptly to correct the record. Defendants agreed that "the striking of all miscited and non-existent cases [is] proper", suggesting that cooperation and candor can lead to reasonable resolutions.

The fact that "the main precedents...and the...statutory citations are correct" and "none of the Plaintiffs' claims were based on the mis-cited cases" likely influenced the court's analysis. This underscores the importance of distinguishing between errors in supporting citations versus errors in primary authorities. Both require correction, but the latter carries greater risk of case-dispositive consequences and sanctions.

The Broader Imperative: Preserving Professional Judgment

Lawyers must verify their AI work!

Judge Castel's observation in Mata v. Avianca that "many harms flow from the submission of fake opinions" captures the stakes. Beyond individual case outcomes, AI hallucinations threaten systemic values: judicial efficiency, precedential reliability, adversarial fairness, and public confidence in legal institutions.

Attorneys serve as officers of the court with special obligations to the administration of justice. This role cannot be automated. AI lacks the judgment to balance competing legal principles, to assess the credibility of factual assertions, to understand client objectives in their full context, or to exercise discretion in ways that advance both client interests and systemic values.

The attorney in Integrity Investment Fund learned a costly lesson that the profession must collectively absorb: reputable databases, sophisticated algorithms, and expensive subscriptions do not eliminate the need for human verification. AI remains a tool—powerful, useful, and increasingly indispensable—but still just a tool. The attorney who signs a pleading, who argues before a court, and who advises a client bears professional responsibility that technology cannot assume.

As AI capabilities expand and integration deepens, the temptation to trust automated output will intensify. The profession must resist that temptation. Every citation requires verification. Every legal proposition demands confirmation. Every AI-generated document needs human review. These are not burdensome obstacles to efficiency but essential guardrails protecting clients, courts, and the justice system itself.

When errors occur—and the statistics confirm they will occur with disturbing frequency—attorneys must act immediately to correct the record, accept responsibility, and implement reforms preventing recurrence. Horror at one's mistakes, while understandable, satisfies no ethical obligation. Action does.

MTC

MTC: 2025 Year in Review: The "AI Squeeze," Redaction Disasters, and the Return of Hardware!

As we close the book on 2025, the legal profession finds itself in a dramatically different landscape than the one we predicted back in January. If 2023 was the year of "AI Hype" and 2024 was the year of "AI Experimentation," 2025 has undeniably been the year of the "AI Reality Check."

Here at The Tech-Savvy Lawyer.Page, we have spent the last twelve months documenting the friction between rapid innovation and the stubborn realities of legal practice. From our podcast conversations with industry leaders like Seth Price and Chris Dralla to our deep dives into the ethics of digital practice, one theme has remained constant: Competence is no longer optional; it is survival.

Looking back at our coverage from this past year, three specific highlights stand out as defining moments for legal technology in 2025. These aren't just news items; they are signals of where our profession is heading.

Highlight #1: The "Black Box" Redaction Wake-Up Call

Just days ago, on December 23, 2025, the legal world learned of a catastrophic failure of basic technological competence. As we covered in our recent post, How To: Redact PDF Documents Properly and Recover Data from Failed Redactions: A Guide for Lawyers After the DOJ Epstein Files Release “Leak”, the Department of Justice’s release of the Jeffrey Epstein files became a case study in what not to do.

The failure was simple but devastating: relying on visual "masks" rather than true data sanitization. Tech-savvy readers—and let’s be honest, anyone with a basic knowledge of copy-paste—were able to lift the "redacted" names of associates and victims directly from the PDF.

Why this matters for you: This event shattered the illusion that "good enough" tech skills are acceptable in high-stakes litigation. In 2025, we learned that the duty of confidentiality (Model Rule 1.6) is inextricably linked to the duty of technical competence (Model Rule 1.1 and its Comment 8). As we move into 2026, firms must move beyond basic PDF tools and invest in purpose-built redaction software that "burns in" changes and scrubs metadata. If the DOJ can fail this publicly, your firm is not immune.
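To see the difference between a drawn box and a true redaction, here is a minimal sketch using the open-source PyMuPDF library as a stand-in for whatever purpose-built tool your firm adopts. The method names reflect PyMuPDF's documented API, but treat the snippet as an illustration to verify against the current version, not a certified workflow; the file names and search terms are placeholders.

```python
# Minimal sketch of "burned-in" redaction with PyMuPDF (import name "fitz").
# A stand-in for purpose-built redaction software; verify against the current
# PyMuPDF documentation before relying on it. File names/terms are placeholders.
import fitz  # pip install pymupdf

SENSITIVE = ["Jane Doe", "555-01-2345"]

doc = fitz.open("filing.pdf")
for page in doc:
    for term in SENSITIVE:
        for rect in page.search_for(term):        # locate every occurrence on the page
            page.add_redact_annot(rect, fill=(0, 0, 0))
    page.apply_redactions()                       # removes the underlying text, not just covers it

doc.scrub()                                       # strip metadata and other hidden document data
doc.save("filing_redacted.pdf", garbage=4, deflate=True)

# Self-check: the sensitive strings should no longer be extractable by copy-paste.
check = fitz.open("filing_redacted.pdf")
leaked = [t for t in SENSITIVE if any(t in page.get_text() for page in check)]
print("Leaked terms:", leaked or "none")
```

That final self-check, reopening the file and trying to extract the supposedly redacted text, is precisely the step the Epstein-files release evidently skipped.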

Highlight #2: The "AI Squeeze" on Hardware

Throughout the year, we’ve heard complaints about sluggish laptops and crashing applications. In our December 22nd post, The 2026 Hardware Hike: Why Law Firms Must Budget for the 'AI Squeeze' Now, we identified the culprit. It isn’t just your imagination—it’s the supply chain.

We are currently facing a global shortage of DRAM (Dynamic Random Access Memory), driven by the insatiable appetite of data centers powering the very AI models we use daily. Memory makers are prioritizing those high-profit data-center customers, leaving PC builders like Dell and Lenovo, and the consumer and business laptops they sell, facing a supply deficit.

Why this matters for you: The era of the 16GB RAM laptop for lawyers is dead. Running local, privacy-focused AI models (a major trend in 2025) and heavy eDiscovery platforms now makes 32GB of RAM the baseline and 64GB the comfortable choice, and you may well want more than that. The "AI Squeeze" means that in 2026, hardware will be 15-20% more expensive and harder to find. The lesson? Buy now. If your firm has a hardware refresh cycle planned for Q2 2026, accelerate it to Q1. Budgeting for technology is no longer just about software subscriptions; it's about securing the physical silicon needed to do your job.

Highlight #3: From "Chat" to "Doing" (The Rise of Agentic AI)

Earlier this year, on the Tech-Savvy Lawyer Podcast, we spoke with Chris Dralla of TypeLaw and discussed the evolution of AI tools. 2025 marked the shift from "Chatbot AI" (asking a bot a question) to "Agentic AI" (telling a bot to do a job).

Tools like TypeLaw didn't just "summarize" cases this year; they actively formatted briefs, checked citations against local court rules, and built tables of authorities with minimal human intervention. This is the "boring" automation we have always advocated for—technology that doesn't try to be a robot lawyer, but acts as a tireless paralegal.

Why this matters for you: The novelty of chatting with an LLM has worn off. The firms winning in 2025 were the ones adopting tools that integrated directly into Microsoft Word and Outlook to automate specific, repetitive workflows. The "Generalist AI" is being replaced by the "Specialist Agent."

Moving Forward: What We Can Learn Today for 2026

As we look toward the new year, the profession must internalize a critical lesson: Technology is a supply chain risk.

Whether it is the supply of affordable memory chips or the supply of secure software that properly handles redactions, you are dependent on your tools. The "Tech-Savvy" lawyer of 2026 is not just a user of technology but a manager of technology risk.

What to Expect in 2026:

Is your firm budgeted for the anticipated 2026 hardware price hike?

  1. The Rise of the "Hybrid Builder": I predict that mid-sized firms will stop waiting for vendors to build the perfect tool and start building their own "micro-apps" on top of secure, private AI models.

  2. Mandatory Tech Competence CLEs: Rigorous enforcement of tech competence rules will likely follow the high-profile data breaches and redaction failures of 2025.

  3. The Death of the Billable Hour (Again?): With "Agentic AI" handling the grunt work of drafting and formatting, clients will aggressively push back on bills for "document review" or "formatting." 2026 will force firms to bill for judgment, not just time.

As we sign off for the last time in 2025, remember our motto: Technology should make us better lawyers, not lazier ones. Check your redactions, upgrade your RAM, and we’ll see you in 2026.

Happy Lawyering and Happy New Year!

MTC: The 2026 Hardware Hike: Why Law Firms Must Budget for the "AI Squeeze" Now!

Lawyers need to be ready for tech prices to go up next year due to increased AI use!

A perfect storm is brewing in the hardware market. It will hit law firm budgets harder than expected in 2026. Reports from December 2025 confirm that major manufacturers like Dell, Lenovo, and HP are preparing to raise PC and laptop prices by 15% to 20% early next year. The catalyst is a global shortage of DRAM (Dynamic Random Access Memory). This shortage is driven by the insatiable appetite of AI servers.

While recent headlines note that giants like Apple and Samsung have the supply chain power to weather this surge, the average law firm does not. This creates a critical strategic challenge for managing partners and legal administrators.

The timing is unfortunate. Legal professionals are adopting AI tools at a record pace. Tools for eDiscovery, contract analysis, and generative drafting require significant computing power to run smoothly. In 2024, a laptop with 16GB of RAM was standard. Today, running local privacy-focused AI models or heavy eDiscovery platforms makes 32GB the new baseline. 64GB is becoming the standard for power users.
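The 32GB figure is not arbitrary. A rough rule of thumb (a simplification that ignores vendor-specific overhead) is that a locally run model needs its parameter count times the bytes per parameter, plus a few gigabytes of working memory, on top of whatever the operating system, browser, and your eDiscovery platform already consume. A quick back-of-the-envelope sketch:

```python
# Back-of-the-envelope RAM estimate for running a local AI model alongside
# ordinary law-office workloads. All figures are rough rules of thumb, not
# vendor specifications.
def model_ram_gb(params_billions: float, bytes_per_param: float, overhead_gb: float = 2.0) -> float:
    """Model weights plus a small allowance for context and caching."""
    return params_billions * bytes_per_param + overhead_gb

os_and_office_gb = 8   # operating system, browser, Office suite, PDF tools
ediscovery_gb = 6      # assumed working set for a document-review platform

for label, params, bpp in [("7B model, 4-bit", 7, 0.5),
                           ("13B model, 8-bit", 13, 1.0),
                           ("70B model, 4-bit", 70, 0.5)]:
    total = os_and_office_gb + ediscovery_gb + model_ram_gb(params, bpp)
    verdict = "fits in 32 GB" if total <= 32 else "needs 64 GB or more"
    print(f"{label}: ~{total:.0f} GB total -> {verdict}")
```

The mid-size scenario already brushes up against 32GB once everything else on the machine is counted, and the larger model blows past it, which is why 64GB is the safer specification for power users.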

💡 PRO TIP: Future-Proof Your Firm's Hardware Now
Don’t just meet today’s AI demands—exceed them. Upgrade to 32GB or 64GB of RAM now, not later. AI adoption in legal practice is accelerating exponentially. The memory you think is “enough” today will be the bottleneck tomorrow. Firms that overspec their hardware now will avoid costly mid-cycle replacements and gain a competitive edge in speed and efficiency.

We face a paradox. We need more memory to remain competitive, but that memory is becoming scarce and expensive. The "AI Squeeze" is real. Chipmakers are prioritizing high-profit memory for data center AI over the standard memory used in law firm laptops. This supply shift drives up the bill of materials for every new workstation you plan to buy, because law firm machines are low-margin orders compared with the high-profit memory sold to data centers.

Update your firm’s tech budget for 2026 by prioritizing RAM in your next technology upgrade.

Law firms should act immediately. First, audit your hardware refresh cycles. If you planned to upgrade machines in Q1 or Q2 of 2026, accelerate those purchases to the current quarter. You could save up to 20% per unit by buying before the price hikes take full effect.

Second, adjust your 2026 technology budget. A flat budget will buy you less power next year. You cannot afford to downgrade specifications. Buying underpowered laptops will frustrate fee earners and throttle the efficiency gains you expect from your AI investments.

Finally, prioritize RAM over storage. Cloud storage is cheap and abundant. Memory is not. When configuring new machines, allocate your budget to 32GB or 64GB (or more) of RAM rather than a larger hard drive.
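To put the "buy now" advice in dollar terms, here is a quick sketch with hypothetical numbers; the fleet size and unit price are placeholders, and the 20% figure is the upper end of the increases reported above.

```python
# Rough savings estimate for accelerating a hardware refresh ahead of the
# projected 2026 price increases. All inputs are hypothetical placeholders.
laptops = 25                # machines in the planned Q1/Q2 2026 refresh
unit_price_today = 2200.00  # illustrative price for a 32GB-RAM business laptop
price_increase = 0.20       # upper end of the reported 15-20% hike

cost_now = laptops * unit_price_today
cost_after_hike = cost_now * (1 + price_increase)

print(f"Buy this quarter:   ${cost_now:,.0f}")
print(f"Buy after the hike: ${cost_after_hike:,.0f}")
print(f"Estimated savings:  ${cost_after_hike - cost_now:,.0f}")
```

Scale the inputs to your own fleet; the point is simply that the same machines bought two quarters later will cost meaningfully more.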

The hardware market is shifting. The cost of innovation is rising. Smart firms will plan for this reality today rather than paying the premium tomorrow.

🧪🎧 TSL Labs Bonus Podcast: Open vs. Closed AI — The Hidden Liability Trap in Your Firm ⚖️🤖

Welcome to TSL Labs Podcast Experiment. 🧪🎧 In this special "Deep Dive" bonus episode, we strip away the hype surrounding Generative AI to expose a critical operational risk hiding in plain sight: the dangerous confusion between "Open" and "Closed" AI systems.

Featuring an engaging discussion between our Google Notebook AI hosts, this episode unpacks the "Swiss Army Knife vs. Scalpel" analogy that every managing partner needs to understand. We explore why the "Green Light" tools you pay for are fundamentally different from the "Red Light" public models your staff might be using—and why treating them the same could trigger an immediate breach of ABA Model Rule 5.3. From the "hidden crisis" of AI embedded in Microsoft 365 to the non-negotiable duty to supervise, this is the essential briefing for protecting client confidentiality in the age of algorithms.

In our conversation, we cover the following:

  • [00:00] – Introduction: The hidden danger of AI in law firms.

  • [01:00] – The "AI Gap": Why staff confuse efficiency with confidentiality.

  • [02:00] – The Green Light Zone: Defining secure, "Closed" AI systems (The Scalpel).

  • [03:45] – The Red Light Zone: Understanding "Open" Public LLMs (The Swiss Army Knife).

  • [04:45] – "Feeding the Beast": How public queries actively train the model for everyone else.

  • [05:45] – The Duty to Supervise: ABA Model Rules 5.3 and 1.1, Comment 8, implications.

  • [07:00] – The Hidden Crisis: AI embedded in ubiquitous tools (Microsoft 365, Adobe, Zoom).

  • [09:00] – The Training Gap: Why digital natives assume all prompt boxes are safe.

  • [10:00] – Actionable Solutions: Auditing tools and the "Elevator vs. Private Room" analogy.

  • [12:00] – Hallucinations: Vendor liability vs. Professional negligence.

  • [14:00] – Conclusion: The final provocative thought on accidental breaches.

RESOURCES

Mentioned in the episode

Software & Cloud Services mentioned in the conversation