Word of the Week: "Constitutional AI" for Lawyers - What It Is, Why It Matters for ABA Rules, and How Solo & Small Firms Should Use It!

Constitutional AI’s ‘helpful, harmless, honest’ standard is a solid starting point for lawyers evaluating AI platforms.

The term “Constitutional AI” appeared this week in a Tech-Savvy Lawyer MTC post on the PornHub breach, a cybersecurity wake‑up call for lawyers 🚨. That article used it to highlight how AI systems (like those law firms now rely on) must be built and governed by clear, ethical rules — much like a constitution — to protect client data and uphold professional duties. This week’s Word of the Week unpacks what Constitutional AI really means and explains why it matters for solo, small, and mid‑size law firms.

🔍 What is Constitutional AI?

Constitutional AI is a method for training large language models so they follow a written set of high‑level principles, called a “constitution” 📜. Those principles are designed to make the AI helpful, harmless, and honest in its responses.

As Claude AI from Anthropic explains:
“Constitutional AI refers to a set of techniques developed by researchers at Anthropic to align AI systems like myself with human values and make us helpful, harmless, and honest. The key ideas behind Constitutional AI are aligning an AI’s behavior with a ‘constitution’ defined by human principles, using techniques like self‑supervision and adversarial training, developing constrained optimization techniques, and designing training data and model architecture to encode beneficial behaviors.” — Claude AI, Anthropic (July 7th, 2023).

In practice, Constitutional AI uses the model itself to critique and revise its own outputs against that constitution. For example, the model might be told: “Do not generate illegal, dangerous, or unethical content,” “Be honest about what you don’t know,” and “Protect user privacy.” It then evaluates its own answers against those rules before giving a final response.

Think of it like a junior associate who’s been given a firm’s internal ethics manual and told: “Before you send that memo, check it against these rules.” Constitutional AI does that same kind of self‑checking, but at machine speed.
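To make that mechanism concrete, here is a minimal Python sketch of the critique‑and‑revise loop. It is illustrative only: the `generate()` helper is a hypothetical stand‑in for any LLM API call, and the three principles are the examples from above, not Anthropic’s actual constitution.

```python
# Minimal sketch of a Constitutional AI-style critique-and-revise loop.
# `generate(prompt)` is a hypothetical placeholder for an LLM call; the
# constitution below is illustrative, not Anthropic's actual text.

CONSTITUTION = [
    "Do not generate illegal, dangerous, or unethical content.",
    "Be honest about what you don't know.",
    "Protect user privacy.",
]

def generate(prompt: str) -> str:
    """Placeholder: wire in your model's API here."""
    raise NotImplementedError

def constitutional_answer(question: str) -> str:
    draft = generate(question)
    for principle in CONSTITUTION:
        # Ask the model to critique its own draft against one principle...
        critique = generate(
            f"Principle: {principle}\nDraft: {draft}\n"
            "Does the draft violate this principle? Explain briefly."
        )
        # ...then revise the draft in light of that critique.
        draft = generate(
            f"Principle: {principle}\nDraft: {draft}\nCritique: {critique}\n"
            "Rewrite the draft so it fully complies with the principle."
        )
    return draft
```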

🤝 How Constitutional AI Relates to Lawyers

For lawyers, Constitutional AI is important because it directly shapes how AI tools behave when handling legal work 📚. Many legal AI tools are built on models that use Constitutional AI techniques, so understanding this concept helps lawyers:

  • Judge whether an AI assistant is likely to hallucinate, leak sensitive info, or give ethically problematic advice.

  • Choose tools whose underlying AI is designed to be more transparent, less biased, and more aligned with professional norms.

  • Better supervise AI use in the firm, which is a core ethical duty under the ABA Model Rules.

Solo and small firms, in particular, often rely on off‑the‑shelf AI tools (like chatbots or document assistants). Knowing that a tool is built on Constitutional AI principles can give more confidence that it’s designed to avoid harmful outputs and respect confidentiality.

⚖️ Why It Matters for ABA Model Rules

For solo and small firms, asking whether an AI platform aligns with Constitutional AI’s standards is a practical first step in choosing a trustworthy tool.

The ABA’s Formal Opinion 512 on generative AI makes clear that lawyers remain responsible for all work done with AI, even if an AI tool helped draft it 📝. Constitutional AI is relevant here because it’s one way that AI developers try to build in ethical guardrails that align with lawyers' obligations.

Key connections to the Model Rules:

  • Rule 1.1 (Competence): Lawyers must understand the benefits and risks of the technology they use. Knowing that a tool uses Constitutional AI helps assess whether it’s reasonably reliable for tasks like research, drafting, or summarizing.

  • Rule 1.6 (Confidentiality): Constitutional AI models are designed to refuse to disclose sensitive information and to avoid memorizing or leaking private data. This supports the lawyer’s duty to make “reasonable efforts” to protect client confidences.

  • Rule 5.1 / 5.3 (Supervision): Managing partners and supervising attorneys must ensure that AI tools used by staff are consistent with ethical rules. A tool built on Constitutional AI principles is more likely to support, rather than undermine, those supervisory duties.

  • Rule 3.3 (Candor to the Tribunal): Constitutional AI models are trained to admit uncertainty and avoid fabricating facts or cases, which helps reduce the risk of submitting false or misleading information to a court.

In short, Constitutional AI doesn’t relieve lawyers of their ethical duties, but it can make AI tools safer and more trustworthy when used under proper supervision.

🛡️ The “Helpful, Harmless, and Honest” Principle

The three pillars of Constitutional AI — helpful, harmless, and honest — are especially relevant for lawyers:

  • Helpful: The AI should provide useful, relevant information that advances the client’s matter, without unnecessary or irrelevant content.

  • Harmless: The AI should avoid generating illegal, dangerous, or unethical content, and should respect privacy and confidentiality.

  • Honest: The AI should admit when it doesn’t know something, avoid fabricating facts or cases, and not misrepresent its capabilities.

For law firms, this “helpful, harmless, and honest” standard is a useful mental checklist when using AI:

  • Is this AI output actually helpful to the client’s case?

  • Could this output harm the client (e.g., by leaking confidential info or suggesting an unethical strategy)?

  • Is the AI being honest (e.g., not hallucinating case law or pretending to know facts it can’t know)?

If the answer to any of those questions is “no,” the AI output should not be used without significant human review and correction.
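For firms that want to formalize this review, the checklist can be written down literally. Below is an illustrative sketch (the `OutputReview` class and `cleared_for_use` helper are hypothetical, not part of any product) of how the three‑question gate might be encoded in a firm’s AI‑review workflow:

```python
# Illustrative only: the "helpful, harmless, honest" checklist encoded as a
# simple gate that a reviewing lawyer fills in before an AI output is used.
from dataclasses import dataclass

@dataclass
class OutputReview:
    helpful: bool   # Does the output actually advance the client's matter?
    harmless: bool  # No confidentiality leaks or unethical suggestions?
    honest: bool    # No fabricated cases, facts, or overstated certainty?

def cleared_for_use(review: OutputReview) -> bool:
    # A "no" on any pillar means a lawyer must revise before the output is used.
    return review.helpful and review.harmless and review.honest

# Example: an output that hallucinated a case fails the gate.
print(cleared_for_use(OutputReview(helpful=True, harmless=True, honest=False)))  # False
```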

🛠️ Practical Takeaways for Law Firms

For solo, small, and mid‑size firms, here’s how to put this into practice:

Lawyers need to screen AI tools and ensure they are aligned with ABA Model Rules.

  1. Know your tools. When evaluating a legal AI product, ask whether it’s built on a Constitutional AI–style model (e.g., Claude). A yes signals that the underlying model was trained with explicit ethical constraints.

  2. Treat AI as a supervised assistant. Never let AI make final decisions or file work without a lawyer’s review. Constitutional AI reduces risk, but it doesn’t eliminate the need for human judgment.

  3. Train your team. Make sure everyone in the firm understands that AI outputs must be checked for accuracy, confidentiality, and ethical compliance — especially when using third‑party tools.

  4. Update your engagement letters and policies. Disclose to clients when AI is used in their matters, and explain how the firm supervises it. This supports transparency under Rule 1.4 and Rule 1.6.

  5. Focus on “helpful, honest, harmless.” Use Constitutional AI as a mental checklist: Is this AI being helpful to the client? Is it honest about its limits? Is it harmless (no bias, no privacy leaks)? If not, don’t rely on it.

📖 WORD OF THE YEAR 🥳: Verification: The 2025 Word of the Year for Legal Technology ⚖️💻

all lawyers need to remember to check ai-generated legal citations

After reviewing a year's worth of content from The Tech-Savvy Lawyer.Page blog and podcast, one word emerged as the defining concept for 2025: Verification. This term captures the essential duty that separates competent legal practice from dangerous shortcuts in the age of artificial intelligence.

Throughout 2025, The Tech-Savvy Lawyer consistently emphasized verification across multiple contexts. The blog covered proper redaction techniques following the Jeffrey Epstein files disaster. The podcast explored hidden AI in everyday legal tools. Every discussion returned to one central theme: lawyers must verify everything. 🔍

Verification means more than just checking your work. The concept encompasses multiple layers of professional responsibility. Attorneys must verify AI-generated legal research to prevent hallucinations. Courts have sanctioned lawyers who submitted fictitious case citations created by generative AI tools. One study found error rates of 33% in Westlaw AI and 17% in Lexis+ AI. The underlying study dates to May 2024, but a 2025 update confirms the findings remain current: the risk of not checking has not gone away. "Verification" cannot be ignored.
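As a concrete example of what one verification step can look like, here is a minimal sketch: a simplified regex pulls citation‑like strings out of an AI‑generated draft so a human can check each one in Westlaw or Lexis. The pattern is illustrative and covers only a few reporters; real citation formats vary far more.

```python
# Minimal sketch of one verification step: surface every citation-like string
# in an AI-generated draft for manual checking. Simplified, illustrative regex.
import re

REPORTERS = r"(?:U\.S\.|S\. ?Ct\.|F\.[234]d|F\. ?Supp\. ?[23]d)"
CITATION = re.compile(rf"\b\d+\s+{REPORTERS}\s+\d+\b")

def citations_to_verify(draft: str) -> list[str]:
    """Return every citation-like string for manual verification."""
    return CITATION.findall(draft)

draft = "See Smith v. Jones, 123 F.3d 456, and Doe v. Roe, 789 U.S. 12."
for cite in citations_to_verify(draft):
    print("VERIFY:", cite)  # a lawyer must confirm each case actually exists
```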

The duty extends beyond research. Lawyers must verify that redactions actually remove confidential information rather than simply hiding it under black boxes. The DOJ's failed redaction of the Epstein files demonstrated what happens when attorneys skip proper verification steps. Tech-savvy readers simply copied text from beneath the visual overlays. ⚠️
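Verifying a redaction can be as simple as asking whether the text is still extractable. The sketch below assumes the open‑source pypdf library and a hypothetical file name and search terms; if `extract_text()` still returns the “redacted” words, the black boxes are visual overlays only and the underlying text was never removed.

```python
# Quick redaction check using the pypdf library (pip install pypdf).
# File name and sensitive terms below are hypothetical examples.
from pypdf import PdfReader

def redaction_survives(pdf_path: str, sensitive_terms: list[str]) -> list[str]:
    """Return any sensitive term still present in the PDF's text layer."""
    text = ""
    for page in PdfReader(pdf_path).pages:
        text += page.extract_text() or ""
    return [term for term in sensitive_terms if term.lower() in text.lower()]

leaks = redaction_survives("redacted_filing.pdf", ["Jane Doe", "123-45-6789"])
if leaks:
    print("REDACTION FAILED - still extractable:", leaks)
```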

use of ai-generated legal work requires “verification”, “Verification”, “Verification”!

ABA Model Rule 1.1 requires technological competence. Comment 8 specifically mandates that lawyers understand "the benefits and risks associated with relevant technology." Verification sits at the heart of this competence requirement. Attorneys cannot claim ignorance about AI features embedded in Microsoft 365, Zoom, Adobe, or legal research platforms. Each tool processes client data differently. Each requires verification of settings, outputs, and data handling practices. 🛡️

The verification duty also applies to cybersecurity. Zero Trust Architecture operates on the principle "never trust, always verify." This security model requires continuous verification of user identity, device health, and access context. Law firms can no longer trust that users inside their network perimeter are authorized. Remote work and cloud-based systems demand constant verification.

Hidden AI poses another verification challenge. Software updates automatically activate AI features in familiar tools. These invisible assistants process confidential client data by default. Lawyers must verify which AI systems operate in their technology stack. They must verify data retention policies. They must verify that AI processing does not waive attorney-client privilege. 🤖

ABA Formal Opinion 512 eliminates the "I didn't know" defense. Lawyers bear responsibility for understanding how their tools use AI. Rule 5.3 requires attorneys to supervise software with the same care they supervise human staff members. Verification transforms from a good practice into an ethical mandate.

verify your ai-generated work like your bar license depends on it!

The year 2025 taught legal professionals that technology competence means verification competence. Attorneys must verify redactions work properly. They must verify AI outputs for accuracy. They must verify security settings protect confidential information. They must verify that hidden AI complies with ethical obligations. ✅

Verification protects clients, preserves attorney licenses, and maintains the integrity of legal practice. As The Tech-Savvy Lawyer demonstrated throughout 2025, every technological advancement creates new verification responsibilities. Attorneys who master verification will thrive in the AI era. Those who skip verification steps risk sanctions, malpractice claims, and disciplinary action.

The legal profession's 2025 Word of the Year is verification. Master it or risk everything. 💼⚖️

🎙️ Ep. # 124: AI Governance Expert Nikki Mehrpoo Shares the Triple-E Protocol for Implementing Responsible AI in Legal Practice While Maintaining Ethical Compliance and Protecting Client Data.

My next guest is Nikki Mehrpoo. She is a nationally recognized leader in AI governance for law practices, known for her practical, ethical, and innovation-focused strategies. Today, she details her Triple-E Protocol and shares key steps for safely leveraging AI in legal work.

Join Nikki Mehrpoo and me as we discuss the following three questions and more!

  1. Based on your pioneering work with “Govern Before You Automate,” what are the top three foundational steps every lawyer should take to implement AI responsibly, and what are the top three mistakes lawyers make with AI?

  2. What are your top three tips or tricks when using AI in your work?

  3. When assessing the next AI platform from a service provider, what are the top three questions lawyers should be asking?

In our conversation, we cover the following:

  • 00:00:00 – Welcome and guest’s background 🌟

  • 00:01:00 – Current tech setup and cloud-based workflows ☁️

  • 00:02:00 – Privacy and IP management, not client confidentiality 🔐

  • 00:03:00 – Document deduplication with Effingo 📄

  • 00:04:00 – Hardware: HP OmniBook 7 laptop, HP monitors, iPhone 💻📱

  • 00:05:00 – Efficiency tools: Text Expander, personal workflow shortcuts ⌨️

  • 00:06:00 – Balancing technology innovation and risk management ⚖️

  • 00:07:00 – Adapting to change, ongoing legal tech education 🧑‍💻

  • 00:08:00 – Triple-E Framework: Educate, Empower, Elevate 🚀

  • 00:09:00 – Governance, supervision duties, policy setting 🛡️

  • 00:10:00 – Human verification as a standard for all legal AI output 🧑‍⚖️

  • 00:12:00 – Real-world examples: AI hallucinations, bias, and due diligence ⚠️

  • 00:13:00 – IT vs. AI expertise, communicating across teams 🛠️

  • 00:14:00 – Chief AI Governance Officer, governance in legal innovation 🏛️

  • 00:15:00 – Global compliance, EU AI Act, international standards 🌐

  • 00:16:00 – Hidden AI in legacy software, policy gaps 🔎

  • 00:17:00 – Education as continuous legal responsibility 📚

  • 00:18:00 – Better results through prompt engineering 🔤

  • 00:19:00 – Verify, verify, verify: never trust without review ✔️

  • 00:20:00 – ABA Formal Opinion 512: standards for responsible legal AI 📜

  • 00:21:00 – Nikki’s Triple-E Protocol, governance best practices 📊

  • 00:22:00 – Data origin, bias, and auditability in legal AI systems 🧩

  • 00:23:00 – Frameworks for “govern before you automate” in legal workflows 🔒

  • 00:24:00 – Importance of internal hosting and zero retention policies 🏢

  • 00:25:00 – Maintaining confidentiality with third-party AI and HIPAA compliance 🤫

  • 00:26:00 – Where to find Nikki and connect 🌐

Resources

Connect with Nikki Mehrpoo

Mentioned in the episode

Hardware mentioned in the conversation

Software & Cloud Services mentioned in the conversation

🎙️ Bonus Episode: TSL Lab’s Notebook.AI Commentary on June 23, 2025, TSL Editorial!

Hey everyone, welcome to this bonus episode!

As you know, in this podcast we explore the future of law through engaging interviews with lawyers, judges, and legal tech professionals on the cutting edge of legal innovation. As part of our Labs initiative, I am experimenting with AI-generated discussions—this episode features two Google Notebook.AI hosts who dive deep into our latest Editorial: "Lawyers, Generative AI, and the Right to Privacy: Navigating Ethics, Client Confidentiality, and Public Data in the Digital Age." If you’re a busy legal professional, join us for an insightful, AI-powered conversation that unpacks the editorial’s key themes, ethical challenges, and practical strategies for safeguarding privacy in the digital era.

Enjoy!

In our conversation, the "Bots" covered the following:

  • 00:00 – Introduction to the Bonus Episode

  • 01:01 – Exploring Generative AI in Law

  • 01:24 – Ethical Challenges and Client Confidentiality

  • 01:42 – Deep Dive into the Editorial

  • 09:31 – Practical Strategies for Lawyers

  • 13:03 – Conclusion and Final Thoughts

Resources:

Google Notebook.AI - https://notebooklm.google/