MTC: Lawyers and AI Oversight: What the VA’s Patient Safety Warning Teaches About Ethical Law Firm Technology Use! ⚖️🤖

Human-in-the-loop is the point: Effective oversight happens where AI meets care—aligning clinical judgment, privacy, and compliance with real-world workflows.

The Department of Veterans Affairs’ experience with generative AI is not a distant government problem; it is a mirror held up to every law firm experimenting with AI tools for drafting, research, and client communication. I recently listened to an interview by Terry Gerton of the Federal News Network with Charyl Mason, Inspector General of the Department of Veterans Affairs, “VA rolled out new AI tools quickly, but without a system to catch mistakes, patient safety is on the line,” and gained some insights into how lawyers can learn from this perhaps hastily implemented AI program. VA clinicians are using AI chatbots to document visits and support clinical decisions, yet a federal watchdog has warned that there is no formal mechanism to identify, track, or resolve AI‑related risks—a “potential patient safety risk” created by speed without governance. In law, that same pattern translates into “potential client safety and justice risk,” because the core failure is identical: deploying powerful systems without a structured way to catch and correct their mistakes.

The oversight gap at the VA is striking. There is no standardized process for reporting AI‑related concerns, no feedback loop to detect patterns, and no clearly assigned responsibility for coordinating safety responses across the organization. Clinicians may have helpful tools, but the institution lacks the governance architecture that turns “helpful” into “reliably safe.” When law firms license AI research platforms, enable generative tools in email and document systems, or encourage staff to “try out” chatbots on live matters without written policies, risk registers, or escalation paths, they recreate that same governance vacuum. If no one measures hallucinations, data leakage, or embedded bias in outputs, risk management has given way to wishful thinking.

Existing ethics rules already tell us why that is unacceptable. Under ABA Model Rule 1.1, competence now includes understanding the capabilities and limitations of AI tools used in practice, or associating with someone who does. Model Rule 1.6 requires lawyers to critically evaluate what client information is fed into self‑learning systems and whether informed consent is required, particularly when providers reuse inputs for training. Model Rules 5.1, 5.2, and 5.3 extend these obligations across partners, supervising lawyers, and non‑lawyer staff: if a supervised lawyer or paraprofessional relies on AI in a way that undermines client protection, firm leadership cannot plausibly claim ignorance. And rules on candor to tribunals make clear that “the AI drafted it” is never a defense to filing inaccurate or fictitious authority.

Explaining the algorithm to decision-makers: Oversight means making AI risks understandable to judges, boards, and the public—clearly and credibly.

What the VA story adds is a vivid reminder that effective AI oversight is a system, not a slogan. The inspector general emphasized that AI can be “a helpful tool” only if it is paired with meaningful human engagement: defined review processes, clear routes for reporting concerns, and institutional learning from near misses. For law practice, that points directly toward structured workflows. AI‑assisted drafts should be treated as hypotheses, not answers. Reasonable human oversight includes verifying citations, checking quotations against original sources, stress‑testing legal conclusions, and documenting that review—especially in high‑stakes matters involving liberty, benefits, regulatory exposure, or professional discipline.

For lawyers with limited to moderate tech skills, this should not be discouraging; done correctly, AI governance actually makes technology more approachable. You do not need to understand model weights or training architectures to ask practical questions: What data does this tool see? When has it been wrong in the past? Who is responsible for catching those errors before they reach a client, a court, or an opposing party? Thoughtful prompts, standardized checklists for reviewing AI output, and clear sign‑off requirements are all well within reach of every practitioner.

The VA’s experience also highlights the importance of mapping AI uses and classifying their risk. In health care, certain AI use cases are obviously safety‑critical; in law, the parallel category includes anything that could affect a person’s freedom, immigration status, financial security, public benefits, or professional license. Those use cases merit heightened safeguards: tighter access control, narrower scoping of AI tasks, periodic sampling of outputs for quality, and specific training for the lawyers who use them. Importantly, this is not a “big‑law only” discipline. Solo and small‑firm lawyers can implement proportionate governance with simple written policies, matter‑level notes showing how AI was used, and explicit conversations with clients where appropriate.

Critically, AI does not dilute core professional responsibility. If a generative system inserts fictitious cases into a brief or subtly mischaracterizes a statute, the duty of candor and competence still rests squarely on the attorney who signs the work product. The VA continues to hold clinicians responsible for patient care decisions, even when AI is used as a support tool; the law should be no different. That reality should inform how lawyers describe AI use in engagement letters, how they supervise junior lawyers and staff, and how they respond when AI‑related concerns arise. In some situations, meeting ethical duties may require forthright client communication, corrective filings, and revisions to internal policies.

AI oversight starts at the desk: Lawyers must be able to interrogate model outputs, data quality, and risk signals—before the technology impacts client outcomes.

The practical lesson from the VA’s AI warning is straightforward. The “human touch” in legal technology is not a nostalgic ideal; it is the safety mechanism that makes AI ethically usable at all. Lawyers who embrace AI while investing in governance—policies, training, and oversight calibrated to risk—will be best positioned to align with the ABA’s evolving guidance, satisfy courts and regulators, and preserve hard‑earned client trust. Those who treat AI as a magic upgrade and skip the hard work of oversight are, knowingly or not, accepting that their clients may become the test cases that reveal where the system fails. In a profession grounded in judgment, the real innovation is not adopting AI; it is designing a practice where human judgment still has the final word.

MTC

Words of the Week: “ANTHROPIC” VS. “AGENTIC”: UNDERSTANDING THE DISTINCTION IN LEGAL TECHNOLOGY 🔍

Lawyers need to know the difference between Anthropic and agentic.

The terms "Anthropic" and "agentic" circulate frequently in legal technology discussions. They sound similar. They appear in the same articles. Yet they represent fundamentally different concepts. Understanding the distinction matters deeply for legal practitioners seeking to leverage artificial intelligence effectively.

Anthropic is a company—specifically, an AI safety-focused organization that develops large language models, most notably Claude. Think of Anthropic as a technology provider. The company pioneered "Constitutional AI," a training methodology that embeds explicit principles into AI systems to guide their behavior toward helpfulness, harmlessness, and honesty. When you use Claude for legal research or document drafting, you are using a product built by Anthropic.

Agentic describes a category of AI system architecture and capability—not a company or product. Agentic systems operate autonomously, plan multi-step tasks, make decisions dynamically, and execute workflows with minimal human intervention. An agentic system can break down complex assignments, gather information, refine outputs, and adjust its approach based on changing circumstances. It exercises judgment about which tools to deploy and when to escalate matters to human oversight.

"Constitutional AI" is an AI training methodology promoting helpfulness, harmlessness, and honesty in AI programming.

The relationship between these concepts becomes clearer through a practical scenario. Imagine you task an AI system with analyzing merger agreements from a target company. A non-agentic approach requires you to provide explicit instructions for each step: search the database, extract key clauses, compare terms against templates, and prepare a summary. You guide the process throughout. An agentic approach allows you to assign a goal—“Review these contracts, flag risks, and prepare a risk summary”—and the AI system formulates its own research plan, prioritizes which documents to examine first, identifies gaps requiring additional information, and works through the analysis independently, pausing only when human judgment becomes necessary.
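The control-flow difference described above can be sketched in a few lines of Python. This is a toy illustration under stated assumptions—the function names and rule-based "planner" are hypothetical stand-ins (a real agentic system would use an LLM to choose its steps), not any vendor's API:

```python
# Toy contrast between a directed (non-agentic) workflow, where the human
# dictates every step, and an agentic loop, where the system picks its own
# next step until the goal is met.

def search_database(state):
    # Hypothetical tool: find the relevant merger agreements.
    state["documents"] = ["merger_agreement_1", "merger_agreement_2"]
    return state

def extract_clauses(state):
    # Hypothetical tool: pull key clauses from each document.
    state["clauses"] = [f"{d}:change_of_control" for d in state["documents"]]
    return state

def summarize_risks(state):
    # Hypothetical tool: produce the risk summary.
    state["summary"] = f"{len(state['clauses'])} clauses flagged for review"
    return state

# Non-agentic: the human specifies each step explicitly, in order.
state = {}
for step in (search_database, extract_clauses, summarize_risks):
    state = step(state)

def agent(goal, tools):
    """Minimal planner loop: choose whichever tool's precondition is met
    until the goal is satisfied. Real agents plan with an LLM; this
    rule-based loop only illustrates who controls the sequencing."""
    agent_state = {}
    while "summary" not in agent_state:
        if "documents" not in agent_state:
            agent_state = tools["search"](agent_state)
        elif "clauses" not in agent_state:
            agent_state = tools["extract"](agent_state)
        else:
            agent_state = tools["summarize"](agent_state)
    return agent_state

# Agentic: the human states only the goal; the loop sequences the steps.
agentic_state = agent(
    "Review these contracts, flag risks, and prepare a risk summary",
    {"search": search_database, "extract": extract_clauses, "summarize": summarize_risks},
)
print(agentic_state["summary"])
```

Both paths reach the same result; what changes is who decides the next step—and, for lawyers, where the human checkpoints must sit.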

Anthropic builds AI models capable of agentic behavior. Claude, Anthropic's flagship model, can function as an agentic system when configured appropriately. However, Anthropic's models can also operate in simpler, non-agentic modes. You might use Claude to answer a direct question or draft a memo without any agentic capability coming into play. The capability exists within Anthropic's models, but agentic functionality remains optional depending on your implementation.

They work together as follows: Anthropic provides the underlying AI model and the training methodology emphasizing constitutional principles. That foundation becomes the engine powering agentic systems. The Constitutional AI approach matters specifically for agentic applications because autonomous systems require robust safeguards. As AI systems operate more independently, explicit principles embedded during training help ensure they remain aligned with human values and institutional requirements. Legal professionals cannot simply deploy an autonomous AI agent without trust in its underlying decision-making framework.

Agentic vs. Anthropic: Know the Difference. Shape the Future of Law!

For legal practitioners, the distinction carries practical implications. You evaluate Anthropic as a vendor when selecting which AI provider's tools to adopt. You evaluate agentic architecture when deciding whether your specific use case requires autonomous task execution or whether simpler, more directed AI assistance suffices. Many legal workflows benefit from direct AI support without requiring full autonomy. Others—such as high-volume contract analysis during due diligence—leverage agentic capabilities to move work forward rapidly.

Both elements represent genuine advances in legal technology. Recognizing the difference positions you to make informed decisions about tool adoption and appropriate implementation for your practice. ✅

📖 Word of the Week: RAG (Retrieval-Augmented Generation) - The Legal AI Technique Reducing Hallucinations. 📚⚖️

What is RAG?

Used responsibly, RAG can be a great tool for lawyers!

Retrieval-Augmented Generation (RAG) is a groundbreaking artificial intelligence technique that combines information retrieval with text generation. Unlike traditional AI systems that rely solely on pre-trained data, RAG dynamically retrieves relevant information from external legal databases before generating responses.
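The retrieve-then-generate pattern can be sketched in plain Python. This is a deliberately tiny illustration: the three-entry corpus is invented, and keyword overlap stands in for the vector-similarity search a production RAG system would use:

```python
# Toy RAG pipeline: retrieve the most relevant sources first, then build a
# prompt that instructs the model to answer only from those sources.

# Invented mini-corpus standing in for a real legal database.
CORPUS = {
    "Smith v. Jones (2019)": "Holds that a contractual arbitration clause survives termination.",
    "State v. Doe (2021)": "Addresses the admissibility of digital evidence at trial.",
    "17 U.S.C. § 107": "Sets out the four statutory factors for fair use.",
}

def retrieve(query, corpus, k=2):
    """Rank sources by crude keyword overlap with the query
    (a stand-in for embedding similarity in a real system)."""
    q_words = set(query.lower().split())
    scored = sorted(
        corpus.items(),
        key=lambda item: len(q_words & set(item[1].lower().split())),
        reverse=True,
    )
    return scored[:k]

def build_grounded_prompt(query, corpus):
    """Assemble a prompt that grounds the model in retrieved sources and
    demands citations—this grounding step is what curbs hallucination."""
    sources = retrieve(query, corpus)
    context = "\n".join(f"[{name}] {text}" for name, text in sources)
    return (
        "Answer using only the sources below and cite each one used.\n"
        f"Sources:\n{context}\n\nQuestion: {query}"
    )

prompt = build_grounded_prompt(
    "Does an arbitration clause survive contract termination?", CORPUS
)
print(prompt)
```

The prompt that results carries the retrieved authorities alongside the question, so the generation step is anchored to named, verifiable sources rather than the model's statistical memory.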

Why RAG Matters for Legal Practice

RAG addresses the most significant concern with legal AI: fabricated citations and "hallucinations." By grounding AI responses in verified legal sources, RAG systems dramatically reduce the risk of generating fictional case law. Recent studies suggest RAG-powered legal tools hallucinate far less often than general-purpose chatbots, though they do not eliminate errors entirely—verification remains essential.

Key Benefits

RAG technology offers several advantages for legal professionals:

Enhanced Accuracy: RAG systems pull from authoritative legal databases, ensuring responses are based on actual statutes, cases, and regulations rather than statistical patterns.

Real-Time Updates: Unlike static AI models, RAG can access current legal information, making it valuable for rapidly evolving areas of law.

Source Attribution: RAG provides clear citations and references, enabling attorneys to verify and build upon AI-generated research.

Practical Applications

Lawyers who don’t use AI technology like RAG may be replaced by those who do!

Law firms are implementing RAG for case law research, contract analysis, and legal memo drafting. The technology excels at tasks requiring specific legal authorities and performs best when presented with clearly defined legal issues.

Professional Responsibility Under ABA Model Rules

ABA Model Rule 1.1 (Competence): Comment 8 requires lawyers to "keep abreast of changes in the law and its practice, including the benefits and risks associated with relevant technology." This mandates understanding RAG capabilities and limitations before use.

ABA Model Rule 1.6 (Confidentiality): Lawyers must "make reasonable efforts to prevent the inadvertent or unauthorized disclosure of, or unauthorized access to, information relating to the representation of a client." When using RAG systems, attorneys must verify data security measures and understand how client information is processed and stored.

ABA Model Rule 5.3 (Supervision of Nonlawyer Assistants): ABA Formal Opinion 512 clarifies that AI tools may be considered "nonlawyer assistants" requiring supervision. Lawyers must establish clear policies for RAG usage and ensure proper training on ethical obligations.

ABA Formal Opinion 512: This 2024 guidance emphasizes that lawyers cannot abdicate professional judgment to AI systems. While RAG systems offer improved reliability over general AI tools, attorneys remain responsible for verifying outputs and maintaining competent oversight.

Final Thoughts: Implementation Considerations

Lawyers must consider their ethical responsibilities when using generative AI, large language models, and RAG.

While RAG significantly improves AI reliability, attorneys must still verify outputs and exercise professional judgment. The technology enhances rather than replaces legal expertise. Lawyers should understand terms of service, consult technical experts when needed, and maintain "human-in-the-loop" oversight consistent with professional responsibility requirements.

RAG represents a crucial step toward trustworthy legal AI, offering attorneys powerful research capabilities while maintaining the accuracy standards essential to legal practice and compliance with ABA Model Rules. Just make sure you use it correctly and check your work!

🎙️ Ep. 117: Legal Tech Revolution: How Dorna Moini Built Gavel.ai to Transform the Practice of Law with AI and Automation.

Dorna Moini, CEO and Founder of Gavel, discusses how generative AI is transforming the way legal professionals work. She explains how Gavel helps lawyers automate their work, save time, and reach more clients without needing to know how to code. In the conversation, she shares the top three ways AI has improved Gavel's tools and operations. She also highlights the most significant security risks that lawyers should be aware of when using AI tools. Lastly, she provides simple tips to ensure AI-generated results are accurate and reliable, as well as how to avoid false or misleading information.

Join Dorna and me as we discuss the following three questions and more!

  1. What are the top three ways generative AI has transformed Gavel's offerings and operations?

  2. What are the top three most critical security concerns legal professionals should be aware of when using AI-integrated products like Gavel?

  3. What are the top three ways to ensure the accuracy and reliability of AI-generated results, including measures to prevent false or misleading information or hallucinations?

In our conversation, we cover the following:

[01:16] Dorna's Tech Setup and Upgrades

[03:56] Discussion on Computer and Smartphone Upgrades

[08:31] Exploring Additional Tech and Sleeping Technology

[09:32] Generative AI's Impact on Gavel's Operations

[13:13] Critical Security Concerns in AI-Integrated Products

[16:44] Playbooks and Redline Capabilities in Gavel Exec

[20:45] Contact Information

Resources

Connect with Dorna:

Websites & SaaS Products:

  • Apple Podcasts — Podcast platform (for reviews)

  • ChatGPT — AI conversational assistant by OpenAI

  • Gavel — AI-powered legal automation platform (formerly Documate)

  • Gavel Exec — AI assistant for legal document review and redlining (part of Gavel)

  • MacRumors — Apple news and product cycle information

  • Notion — Workspace for notes, databases, and project management

  • Slack — Team communication and collaboration platform 
