MTC: Even Though AI Hallucinations Are Down, Lawyers STILL MUST Verify AI Output, Guard PII, and Follow ABA Ethics Rules ⚖️🤖

A tech-savvy lawyer must review AI-generated legal documents.

AI hallucinations are reportedly down across many domains. Still, previous podcast guest Dorna Moini is right to warn that legal remains the unnerving exception—and that is where our professional duties truly begin, not end. Her article, “AI hallucinations are down 96%. Legal is the exception,” helpfully shifts the conversation from “AI is bad at law” to “lawyers must change how they use AI,” yet from the perspective of ethics and risk management, we need to push her three recommendations much further. This is not only a product‑design problem; it is a competence, confidentiality, and candor problem under the ABA Model Rules. ⚖️🤖

Her first point—“give AI your actual documents”—is directionally sound. When we anchor AI in contracts, playbooks, and internal standards, we move from free‑floating prediction to something closer to reading comprehension, and hallucinations usually fall. That is a genuine improvement, and Moini is right to emphasize it. But as soon as we start uploading real matter files, we are squarely inside Model Rule 1.6 territory: confidential information, privileged communications, trade secrets, and dense pockets of personally identifiable information. The article treats document‑grounding primarily as an accuracy-and-reliability upgrade, but lawyers and the legal profession must insist that it is first and foremost a data‑governance decision.

Before a single contract is uploaded, a lawyer must know where that data is stored, who can access it, how long it is retained, whether it is used to train shared models, and whether any cross‑border transfers could complicate privilege or regulatory compliance. That analysis should involve not just IT, but also risk management and, in many cases, outside vendors. “Give AI your actual documents” is safe only if your chosen platform offers strict access controls, clear no‑training guarantees, encryption in transit and at rest, and, ideally, firm‑controlled or on‑premise storage. Otherwise, you may be trading a marginal reduction in hallucinations for a major confidentiality incident or regulatory investigation. In other words, feeding AI your documents can be a smart move, but only after you read the terms, negotiate the data protection, and strip or tokenize unnecessary PII. 🔐
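One way to make "strip or tokenize unnecessary PII" concrete is a pre-upload scrub pass. The sketch below is purely illustrative and assumes nothing about any particular platform: the `PII_PATTERNS` dictionary and `redact` helper are hypothetical names, the regexes cover only a few common US formats, and a simple script like this is a supplement to, never a substitute for, vetted redaction tooling and human review.

```python
import re

# Hypothetical illustration: regex patterns for a few common US PII formats.
# These are deliberately simplistic and will miss many PII variants; real
# redaction requires vetted tools and attorney review before any upload.
PII_PATTERNS = {
    "SSN": re.compile(r"\b\d{3}-\d{2}-\d{4}\b"),
    "PHONE": re.compile(r"\b\(?\d{3}\)?[ .-]?\d{3}[ .-]?\d{4}\b"),
    "EMAIL": re.compile(r"\b[\w.+-]+@[\w-]+\.[\w.]+\b"),
}

def redact(text: str) -> str:
    """Replace matched PII with labeled placeholders before any upload."""
    for label, pattern in PII_PATTERNS.items():
        text = pattern.sub(f"[{label} REDACTED]", text)
    return text

sample = "Contact John at john.doe@example.com or 612-555-0182; SSN 123-45-6789."
print(redact(sample))
```

Tokenization (replacing each value with a reversible placeholder keyed to a secure lookup table kept inside the firm) follows the same shape; redaction is simply the irreversible version.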

Lawyers need to monitor the data security and PII compliance policies of the AI platforms they use in their legal work.

Moini’s second point—“know which tasks your tool handles reliably”—is also excellent as far as it goes. Document‑grounded summarization, clause extraction, and playbook‑based redlines are indeed safer than open‑ended legal research, and she correctly notes that open‑ended research still demands heavy human verification. Reliability, however, cannot be left to vendor assurances, product marketing, or a single eye‑opening demo. For purposes of Model Rules 1.1 (competence) and 1.3 (diligence), the relevant question is not “Does this tool look impressive?” but “Have we independently tested it, in our own environment, on tasks that reflect our real matters?”

The corollary is that reliability has to be measured, not assumed. Firms should sandbox these tools on closed matters, compare AI outputs with known correct answers, and have experienced lawyers systematically review where the system fails. Certain categories of work—final cites in court filings, complex choice‑of‑law questions, nuanced procedural traps—should remain categorically off‑limits to unsupervised AI, because a hallucinated case there is not just an internal mistake; it can rise to misrepresentation to the court under Model Rule 3.3. Knowing what your tool does well is only half of the equation; you must also draw bright, documented lines around what it may never do without human review. 🧪
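A closed-matter audit of this kind can be as simple as a script that scores a tool against lawyer-prepared ground truth. The sketch below is hypothetical end to end: `ai_summarize` stands in for whatever vendor tool is under test, and the test set is a placeholder for answers your own lawyers prepared from closed files. The score is only the starting point; failures still need experienced-lawyer review.

```python
# Illustrative sketch of a reliability audit. `ai_summarize` is a stand-in
# for the vendor tool under test; in practice it would call the real product.
def ai_summarize(document: str) -> str:
    return "lease terminates on 2025-12-31"  # placeholder output

# Ground truth prepared by lawyers from closed matters (placeholder data).
test_set = [
    {"document": "closed-matter lease text A", "expected_fact": "2025-12-31"},
    {"document": "closed-matter lease text B", "expected_fact": "2026-06-30"},
]

def audit(cases):
    """Return the share of cases where the expected fact appears in the output."""
    hits = sum(1 for c in cases if c["expected_fact"] in ai_summarize(c["document"]))
    return hits / len(cases)

print(f"Accuracy on closed-matter test set: {audit(test_set):.0%}")
```

Even this crude containment check surfaces the key discipline: the benchmark is built from your matters, scored in your environment, and rerun whenever the vendor ships an update.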

Her third point—“build verification into the workflow”—is where the article most clearly aligns with emerging ethics guidance from courts and bars, and it deserves strong validation. Judges are already sanctioning lawyers who submit AI‑fabricated authorities, and bar regulators are openly signaling that “the AI did it” will not excuse a lack of diligence. Verification, though, cannot remain an informal suggestion reserved for conscientious partners. It has to become a systematic, auditable process that satisfies the supervisory expectations in Model Rules 5.1 and 5.3.

That means written policies, checklists, training sessions, and oversight. Associates and staff should receive simple, non‑negotiable rules:

✅ Every citation generated with AI must be independently confirmed in a trusted legal research system;

✅ Every quoted passage must be checked against the original source; 

✅ Every factual assertion must be tied back to the record.

Supervising attorneys must periodically spot‑check AI‑assisted work for compliance with those rules. Moini is right that verification matters; the editorial extension is that verification must be embedded into the culture and procedures of the firm. It should be as routine as a conflict check.
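One small aid to those spot checks is a script that pulls reporter-style citations out of a draft so each one can be confirmed in a trusted research system. This is an illustrative sketch only: `citations_to_verify` is a hypothetical helper, and the regex covers just a few common reporter abbreviations, so it flags candidates for human verification rather than verifying anything itself.

```python
import re

# Illustrative: match a few common reporter-style citations, e.g. "410 U.S. 113"
# or "999 F.3d 123". Deliberately narrow; a real checklist tool would cover
# far more reporters and parallel-citation formats.
CITATION_RE = re.compile(
    r"\b\d{1,4}\s+(?:U\.S\.|S\. Ct\.|F\.(?:2d|3d|4th)?|N\.W\.2d)\s+\d{1,5}\b"
)

def citations_to_verify(draft: str) -> list[str]:
    """Return every reporter-style citation found, for independent confirmation."""
    return CITATION_RE.findall(draft)

draft = "See Roe v. Wade, 410 U.S. 113 (1973); cf. 999 F.3d 123."
print(citations_to_verify(draft))
```

The point is procedural, not technical: extraction produces a checklist, and a human confirms every entry in Westlaw, Lexis, or another trusted system before filing.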

Stepping back from her three‑point framework, the broader thesis—that legal hallucinations can be tamed by better tooling and smarter usage—is persuasive, but incomplete. Even as hallucination rates fall, our exposure is rising because more lawyers are quietly experimenting with AI on live matters. Model Rule 1.4 on communication reminds us that, in some contexts, clients may be entitled to know when significant aspects of their work product are generated or heavily assisted by AI, especially when it impacts cost, speed, or risk. Model Rule 1.2 on scope of representation looms in the background as we redesign workflows: shifting routine drafting to machines does not narrow the lawyer’s ultimate responsibility for the outcome.

Attorneys must verify AI-generated case law.

For practitioners with limited to moderate technology skills, the practical takeaway should be both empowering and sobering. Moini’s article offers a pragmatic starting structure—ground AI in your documents, match tasks to tools, and verify diligently. But you must layer ABA‑informed safeguards on top: treat every AI term of service as a potential ethics document; never drop client names, medical histories, addresses, Social Security numbers, or other PII into systems whose data‑handling you do not fully understand; and assume that regulators may someday scrutinize how your firm uses AI. Every AI‑assisted output must be reviewed line by line.

Legal AI is no longer optional, and neither are ethics and PII protection. The right stance is both appreciative and skeptical: appreciative of Moini’s clear, practitioner‑friendly guidance, and skeptical enough to insist that we overlay her three points with robust, documented safeguards rooted in the ABA Model Rules. Use AI, ground it in your documents, and choose tasks wisely—but do so as a lawyer first and a technologist second. Above all, review your work, stay relentlessly wary of the terms that govern your tools, and treat PII and client confidences as if a bar investigator were reading over your shoulder. In this era, one might be. ⚖️🤖🔐

MTC

MTC: Everyday Tech, Extraordinary Evidence: How Lawyers Can Turn Smartphones, Dash Cams, and Wearables Into Case‑Winning Proof After the Minnesota ICE Shooting 📱⚖️


The recent fatal shooting of ICU nurse Alex Pretti by a federal immigration officer in Minneapolis has become a defining example of how everyday technology can reshape a high‑stakes legal narrative. 📹 Federal officials claimed Pretti “brandished” a weapon, yet layered cellphone videos from bystanders, later analyzed by major news outlets, appear to show an officer disarming him moments before multiple shots were fired while he was already on the ground. In a world where such encounters are documented from multiple angles, lawyers who ignore ubiquitous tech risk missing powerful, and sometimes exonerating, evidence.

Smartphones: The New Star Witness

In the Minneapolis shooting, multiple smartphone videos captured the encounter from different perspectives, and a visual analysis highlighted discrepancies between official statements and what appears on camera. One video reportedly shows an officer reaching into Pretti’s waistband, emerging with a handgun, and then, barely a second later, shots erupting while he lies prone on the sidewalk. For litigators, this is not just news; it is a case study in how to treat smartphones as critical evidentiary tools, not afterthoughts.

Practical ways to leverage smartphone evidence include:

  • Identifying and preserving bystander footage early through public calls, client outreach, and subpoenas to platforms when appropriate.

  • Synchronizing multiple clips to create a unified timeline, revealing who did what, when, and from where.

  • Using frame‑by‑frame analysis to test or challenge claims about “brandishing,” “aggressive resistance,” or imminent threat, as occurred in the Pretti shooting controversy.

In civil rights, criminal defense, and personal‑injury practice, this kind of video can undercut self‑defense narratives, corroborate witness accounts, or demonstrate excessive force, all using tech your clients already carry every day. 📲
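The synchronization step above can be sketched in a few lines: given each clip's start time and the offsets of notable events within it, merge everything into one absolute-time timeline. Everything here is hypothetical (the clip names, timestamps, offsets, and event labels are invented for illustration); in real practice, start times come from authenticated file metadata and forensic analysis, not guesswork.

```python
from datetime import datetime, timedelta

# Hypothetical clip metadata: start time plus (offset_seconds, label) events.
# Real values must come from authenticated metadata and forensic review.
clips = [
    {"source": "bystander_phone_A",
     "start": datetime(2026, 1, 24, 14, 3, 10),
     "events": [(2.0, "officer reaches toward waistband"), (3.1, "shots heard")]},
    {"source": "bystander_phone_B",
     "start": datetime(2026, 1, 24, 14, 3, 11),
     "events": [(1.5, "subject on ground"), (2.2, "shots heard")]},
]

def unified_timeline(clips):
    """Flatten per-clip events into one absolute-time, sorted timeline."""
    timeline = [
        (clip["start"] + timedelta(seconds=offset), clip["source"], label)
        for clip in clips
        for offset, label in clip["events"]
    ]
    return sorted(timeline)

for t, source, label in unified_timeline(clips):
    print(t.strftime("%H:%M:%S.%f")[:-3], source, label)
```

A merged timeline like this is also the natural backbone for a demonstrative exhibit: each row becomes a tick on a single visual axis the fact-finder can follow.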

GPS Data and Location Trails: Quiet but Powerful Proof

The same smartphones that record video also log location data, which can quietly become as important as any eyewitness. Modern phones can provide time‑stamped GPS histories that help confirm where a client was, how long they stayed, and in some instances approximate movement speed—details that matter in shootings, traffic collisions, and kidnapping cases. Lawyers increasingly use this location data to:


  • Corroborate or challenge alibis by matching GPS trails with claimed timelines.

  • Reconstruct movement patterns in protest‑related incidents, showing whether someone approached officers or was simply present, as contested in the Minneapolis shooting narrative.

  • Support or refute claims that a vehicle was fleeing, chasing, or unlawfully following another party.

In complex matters with multiple parties, cross‑referencing GPS from several phones, plus vehicle telematics, can create a robust, data‑driven reconstruction that a fact‑finder can understand without a computer science degree.
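The "approximate movement speed" point reduces to simple arithmetic on two time-stamped GPS fixes: great-circle distance divided by elapsed time. The sketch below uses the standard haversine formula; the coordinates and timestamps are invented for illustration, and any real analysis must account for GPS error margins and rely on forensic tooling for extraction and authentication.

```python
from datetime import datetime
from math import radians, sin, cos, asin, sqrt

def haversine_m(lat1, lon1, lat2, lon2):
    """Great-circle distance in meters between two lat/lon points."""
    R = 6371000  # mean Earth radius, meters
    dlat, dlon = radians(lat2 - lat1), radians(lon2 - lon1)
    a = sin(dlat / 2) ** 2 + cos(radians(lat1)) * cos(radians(lat2)) * sin(dlon / 2) ** 2
    return 2 * R * asin(sqrt(a))

# Hypothetical fixes 20 seconds apart near downtown Minneapolis.
fix1 = (datetime(2026, 1, 24, 14, 2, 50), 44.9778, -93.2650)
fix2 = (datetime(2026, 1, 24, 14, 3, 10), 44.9779, -93.2648)

dist = haversine_m(fix1[1], fix1[2], fix2[1], fix2[2])
secs = (fix2[0] - fix1[0]).total_seconds()
print(f"~{dist:.0f} m in {secs:.0f} s -> {dist / secs:.2f} m/s")
```

Here the fixes are roughly twenty meters apart over twenty seconds, about one meter per second: walking pace, not flight. That is exactly the kind of plain-language conclusion a fact-finder can absorb without a computer science degree.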

Dash Cams and 360‑Degree Vehicle Video: Replaying the Scene

Cars now function as rolling surveillance systems. Many new vehicles ship with factory cameras, and after‑market 360‑degree dash‑cam systems are increasingly common, capturing impacts, near‑misses, and police encounters in real time. In a Minneapolis‑style protest environment, vehicle‑mounted cameras can document:

  • How a crowd formed, whether officers announced commands, and whether a driver accelerated or braked before an alleged assault.

  • The precise position of pedestrians or officers relative to a car at the time of a contested shooting.

  • Sound cues (shouts of “he’s got a gun!” or “where’s the gun?”) that provide crucial context to the video, like those reportedly heard in footage of the Pretti shooting.

For injury and civil rights litigators, requesting dash‑cam footage from all involved vehicles—clients, third parties, and law‑enforcement—should now be standard practice. 🚗 A single 360‑degree recording might capture the angle that police‑worn cameras miss or omit.

Wearables and Smartwatches: Biometrics as Evidence


Smartwatches and fitness trackers add a new dimension: heart rate, step counts, sleep data, and sometimes even blood‑oxygen metrics. In use‑of‑force incidents or violent encounters, this information can be unusually persuasive. Imagine:

  • A heart‑rate spike precisely at the time of an assault, followed by a sustained elevation that reinforces trauma testimony.

  • Step‑count and GPS data confirming that a client was running away, standing still, or immobilized as claimed.

  • Sleep‑pattern disruptions and activity changes supporting damages in emotional‑distress claims.

These devices effectively turn the body into a sensor network. When combined with phone video and location data, they help lawyers build narratives supported by objective, machine‑created logs rather than only human recollection. ⌚

Creative Strategies for Integrating Everyday Tech

To move from concept to courtroom, lawyers should adopt a deliberate strategy for everyday tech evidence:

  • Build intake questions that explicitly ask about phones, car cameras, smartwatches, home doorbell cameras, and even cloud backups.

  • Move quickly for preservation orders, as Minnesota officials did when a judge issued a temporary restraining order to prevent alteration or removal of shooting‑related evidence in the Pretti case.

  • Partner with reputable digital‑forensics professionals who can extract, authenticate, and, when needed, recover deleted or damaged files.

  • Prepare demonstrative exhibits that overlay video, GPS points, and timelines in a simple visual, so judges and juries understand the story without technical jargon.

The Pretti shooting also underscores the need to anticipate competing narratives: federal officials asserted he posed a threat, while video and witness accounts cast doubt on that framing, fueling protests and calls for accountability. Lawyers on all sides must learn to dissect everyday tech evidence critically—scrutinizing what it shows, what it omits, and how it fits with other proof.

Ethical and Practical Guardrails


With this power comes real ethical responsibility. Lawyers must align their use of everyday tech with core duties under the ABA Model Rules of Professional Conduct.

  • Competence (ABA Model Rule 1.1)
    Rule 1.1 requires “competent representation,” and Comment 8 now expressly includes a duty to keep abreast of the benefits and risks of relevant technology. When you rely on smartphone video, GPS logs, or wearable data, you must either develop sufficient understanding yourself or associate with or consult someone who does.

  • Confidentiality and Data Security (ABA Model Rule 1.6)
    Rule 1.6 obligates lawyers to make reasonable efforts to prevent unauthorized access to or disclosure of client information. This extends to sensitive video, location trails, and biometric data stored on phones, cloud accounts, or third‑party platforms. Lawyers should use secure storage, limit access, and, where appropriate, obtain informed consent about how such data will be used and shared.

  • Preservation and Integrity of Evidence (ABA Model Rules 3.4, 4.1, and related e‑discovery ethics)
    ABA ethics guidance and case law emphasize that lawyers must not unlawfully alter, destroy, or conceal evidence. That means clients should be instructed not to edit, trim, or “clean up” recordings, and that any forensic work should follow accepted chain‑of‑custody protocols.

  • Candor and Avoiding Cherry‑Picking (ABA Model Rules 3.3 and 4.1)
    Rule 3.3 requires candor toward the tribunal, and Rule 4.1 prohibits knowingly making false statements of fact. Lawyers should present digital evidence in context, avoiding selective clips that distort timing, perspective, or sound. A holistic, transparent approach builds credibility and protects both the client and the profession.

  • Respect for Privacy and Non‑Clients (ABA Model Rule 4.4 and related guidance)
    Rule 4.4 governs respect for the rights of third parties, including their privacy interests. When you obtain bystander footage or data from non‑clients, you should consider minimizing unnecessary exposure of their identities and, where feasible, seek consent or redact sensitive information.

FINAL THOUGHTS

Handled with these rules in mind, everyday tech can reduce factual ambiguity and support more just outcomes. Misused, it can undermine trust, compromise admissibility, and trigger disciplinary scrutiny. ⚖️