📢 Your Tech-Savvy Lawyer Blogger and Podcaster, Michael D.J. Eisenberg, Announces His Upcoming Talk on Ethical AI Use in Legal Practice at the 2026 AI Legal Practice Summit!

Saturday, April 18, 2026 | Capital University Law School

As technology continues to transform legal practice, I’m honored to announce that I’ll be speaking at the 2026 AI Legal Practice Summit, hosted by my alma mater, Capital University Law School, in Columbus, Ohio. This event brings together attorneys, educators, and technologists to explore how artificial intelligence is reshaping the legal field — not just operationally, but ethically and professionally as well.

My presentation, “Smart Practice, Smarter Ethics: Navigating AI Tools Under the ABA Model Rules,” focuses on a topic that’s both timely and critically important: how lawyers can use emerging AI technologies responsibly while meeting their professional obligations under the ABA Model Rules of Professional Conduct.

👉 Learn more and view the full schedule at law-capital.libguides.com/2026_AI_Legal_Practice_Summit.
🎟️ Register today through Eventbrite: eventbrite.com/e/ai-legal-practice-summit-tickets-1986544900273.

Through my work on The Tech-Savvy Lawyer.Page blog and podcast, I’ve had countless conversations with practitioners who want to use AI to streamline tasks such as research, document drafting, and client management — yet remain uncertain about compliance, bias, and confidentiality. Law practice is evolving rapidly, but our ethical foundations must remain strong.

In my session, I’ll walk through key aspects of how the ABA Model Rules, including Rules 1.1 (Competence), 1.6 (Confidentiality of Information), and 5.3 (Responsibilities Regarding Nonlawyer Assistance), apply in an age of intelligent automation. These rules guide us in assessing not just what technology can do, but how and when it should be used.

Your faculty!

We’ll discuss:

  • Reviewing the tech stack you already own;

  • How to vet and implement AI-powered tools while maintaining confidentiality;

  • Questions to ask vendors about data handling and bias;

  • How to document best practices for firm-wide ethical compliance;

  • Ways to blend human legal judgment with algorithmic assistance; and

  • Managing client expectations about AI-enabled legal work.

My goal is to help attorneys approach technology with confidence — to experiment, adopt, and adapt responsibly. Being a “tech‑savvy lawyer” isn’t about mastering every gadget or platform; it’s about understanding how technology fits within the ethical framework of our profession.

The conversation around technological competence has matured since Comment 8 to Rule 1.1 was introduced. It’s no longer optional. Attorneys must understand the benefits, risks, and limitations of relevant technology to provide competent representation. Artificial intelligence highlights that reality better than any emerging tool before it.

Whether you’re a solo practitioner looking to automate administrative tasks, working for a government agency, or part of a large firm implementing AI-assisted legal research or document review, I’ll share specific practices you can adopt immediately.

If you’re attending and seeking Ohio CLE credit, please contact Jenny Wondracek at jwondracek@law.capital.edu for details.

Program description of my presentation.

The 2026 AI Legal Practice Summit will feature leading scholars, ethics experts, and seasoned practitioners. I’m looking forward to exchanging ideas, testing assumptions, and continuing a dialogue that helps ensure AI becomes a responsible partner—never a replacement—in the practice of law.

Let’s move forward together, with competence, curiosity, and care.

Learn more about the Summit at law-capital.libguides.com/2026_AI_Legal_Practice_Summit.
Register today: eventbrite.com/e/ai-legal-practice-summit-tickets-1986544900273.

I look forward to seeing you there! ⚖️

Word(s) of the Week: Understanding the Evolution of Artificial Intelligence: From AI to Generative AI to AI LLMs — and Why It Matters for Today’s Legal Professionals ⚖️🤖

Lawyers need to understand what AI LLMs can and can't do!

Artificial Intelligence (AI) is transforming the legal industry, yet confusion still exists about what different terms mean — and why they matter. Terms like AI, Generative AI, and AI LLM (Large Language Model) are often used interchangeably, but they describe very different levels of capability. Understanding these distinctions is essential for attorneys navigating new professional responsibilities and compliance expectations under the ABA Model Rules. Let’s break down what each term means, why the progression matters, and what the next step—AI LLMs—means for legal practice.

AI: The Foundation of Machine Intelligence

Traditional AI refers to systems designed to perform tasks that require human-like intelligence. These tasks include pattern recognition, data sorting, predictive analytics, and document classification. For example, early e-discovery tools that identify relevant documents in large datasets use AI algorithms to flag patterns.

In legal practice, this type of AI boosted efficiency but remained narrow in function. Lawyers controlled the inputs and closely supervised the outcomes. Under ABA Model Rule 1.1 (Competence), using such tools responsibly required understanding their purpose and reliability, not their coding. Attorneys had to ensure that outputs were accurate and ethically sound.

Generative AI: Creating, Not Just Sorting

As technology evolved, so did AI’s capabilities. Generative AI differs from basic AI because it creates content instead of just classifying it. These models generate text, images, code, and even legal-style drafts based on training data. Tools like ChatGPT, which fall under this category, can draft letters, summarize cases, or brainstorm argument strategies.

Generative AI introduces profound efficiency benefits. A solo practitioner, for example, can use AI to prepare first drafts of client letters or marketing content quickly. The risk, however, is accuracy. Because these models generate content probabilistically, they can “hallucinate” — producing incorrect or fabricated information that sounds authoritative.

Generative AI is great at creating content - just watch out for hallucinations!

Under ABA Model Rule 5.3 (Responsibilities Regarding Nonlawyer Assistance), lawyers must exercise oversight over tools like these since they function similarly to an assistant. Lawyers must verify all AI-generated output before use, maintaining professional independence and ethical standards.

AI LLMs: The Next Step in Practice Intelligence

AI LLMs — large language models — represent the next and most transformative step. Unlike earlier forms of AI, LLMs process massive datasets and can understand nuance, intent, and context in human language. This allows them to perform legal research, summarize filings, analyze contracts, and even simulate case strategies.

The key difference is scale and sophistication. LLMs learn not only from pre-set instructions but also by understanding the relationships between words and concepts. This contextual learning enables attorneys to interact with these systems conversationally. For example, an LLM-based research assistant can respond to a query such as, “Find Illinois cases interpreting non-compete clauses after 2023,” and then produce accurate summaries or citations.

Yet with great capability comes heightened responsibility. ABA Model Rule 1.6 (Confidentiality) applies when attorneys input client data into online tools. If the platform is public or cloud-based, lawyers must assess data handling, encryption, and privacy policies. Additionally, per Model Rule 1.1, competence now includes understanding how LLMs generate and manage information.

Why the Distinction Matters

The distinction between AI, Generative AI, and AI LLMs matters because it affects how attorneys use the technology within ethical, secure boundaries. A misstep in understanding can result in breached confidentiality, inaccurate filings, or ethical violations.

✅ AI assists.
✅ Generative AI creates.
✅ AI LLMs reason and interact.

In practical terms, lawyers need to update policies, train staff, and disclose use of these tools when appropriate. Law firms that adopt LLM-based platforms responsibly will gain a competitive advantage through increased efficiency and improved client service — without compromising professional duties.

Looking Ahead

Lawyers who use AI LLMs can save hours of menial work - but always check your work!

AI LLMs are not replacing lawyers; they are amplifying their insight and reach. Attorneys who stay informed and practice technological competence will thrive in this next phase of digital legal service. The evolution from AI to Generative AI to LLMs represents not just a technological shift, but a professional one — requiring careful balance between innovation, ethics, and human judgment. ⚖️

Words of the Week: “ANTHROPIC” VS. “AGENTIC”: UNDERSTANDING THE DISTINCTION IN LEGAL TECHNOLOGY 🔍

Lawyers need to know the difference: Anthropic v. agentic!

The terms "Anthropic" and "agentic" circulate frequently in legal technology discussions. They sound similar. They appear in the same articles. Yet they represent fundamentally different concepts. Understanding the distinction matters deeply for legal practitioners seeking to leverage artificial intelligence effectively.

Anthropic is a company—specifically, an AI safety-focused organization that develops large language models, most notably Claude. Think of Anthropic as a technology provider. The company pioneered "Constitutional AI," a training methodology that embeds explicit principles into AI systems to guide their behavior toward helpfulness, harmlessness, and honesty. When you use Claude for legal research or document drafting, you are using a product built by Anthropic.

Agentic describes a category of AI system architecture and capability—not a company or product. Agentic systems operate autonomously, plan multi-step tasks, make decisions dynamically, and execute workflows with minimal human intervention. An agentic system can break down complex assignments, gather information, refine outputs, and adjust its approach based on changing circumstances. It exercises judgment about which tools to deploy and when to escalate matters to human oversight.

"Constitutional AI" is an AI training methodology promoting helpfulness, harmlessness, and honesty in AI programming.

The relationship between these concepts becomes clearer through a practical scenario. Imagine you task an AI system with analyzing merger agreements from a target company. A non-agentic approach requires you to provide explicit instructions for each step: search the database, extract key clauses, compare terms against templates, and prepare a summary. You guide the process throughout. An agentic approach allows you to assign a goal ("Review these contracts, flag risks, and prepare a risk summary") and the AI system formulates its own research plan, prioritizes which documents to examine first, identifies gaps requiring additional information, and works through the analysis independently, pausing only when human judgment becomes necessary.
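For readers who like to see the contrast concretely, here is a minimal sketch of the two workflows above in Python. Everything in it is hypothetical: the document names, the clause "checklist," and the `extract_clauses` stand-in are invented for illustration and do not represent any real legal-AI product or API. The point is only the shape of the two approaches: the directed version scripts every step, while the agentic version receives a goal, follows its own plan, and escalates gaps to a human.

```python
# Toy illustration only -- stand-in data and functions, not a real legal-AI API.
CONTRACTS = {
    "msa.pdf": ["non-compete", "indemnification"],
    "nda.pdf": ["confidentiality"],
}
RISKY_CLAUSES = {"non-compete", "indemnification"}


def extract_clauses(doc):
    """Stand-in for an AI clause-extraction step."""
    return CONTRACTS.get(doc, [])


def directed_review(docs):
    """Non-agentic: the lawyer scripts every step explicitly."""
    flagged = []
    for doc in docs:                          # step 1: search the document set
        for clause in extract_clauses(doc):   # step 2: extract key clauses
            if clause in RISKY_CLAUSES:       # step 3: compare against checklist
                flagged.append((doc, clause))
    return flagged                            # step 4: hand back a summary


def agentic_review(goal, docs):
    """Agentic: given only a goal, the system works its own plan and
    pauses (escalates) when human judgment becomes necessary."""
    plan = ["search", "extract", "flag", "summarize"]  # self-formulated plan
    flagged, escalations = [], []
    for doc in docs:
        clauses = extract_clauses(doc)
        if not clauses:
            escalations.append(doc)           # gap found: defer to the lawyer
            continue
        flagged += [(doc, c) for c in clauses if c in RISKY_CLAUSES]
    return {"goal": goal, "plan": plan, "flags": flagged,
            "escalations": escalations}
```

In the directed version, the human supplies the procedure; in the agentic version, the human supplies only the objective and reviews whatever the system escalates. That difference in who holds the procedure is exactly what raises the supervision questions under Model Rule 5.3.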

Anthropic builds AI models capable of agentic behavior. Claude, Anthropic's flagship model, can function as an agentic system when configured appropriately. However, Anthropic's models can also operate in simpler, non-agentic modes. You might use Claude to answer a direct question or draft a memo without any agentic capability coming into play. The capability exists within Anthropic's models, but agentic functionality remains optional depending on your implementation.

They work together as follows: Anthropic provides the underlying AI model and the training methodology emphasizing constitutional principles. That foundation becomes the engine powering agentic systems. The Constitutional AI approach matters specifically for agentic applications because autonomous systems require robust safeguards. As AI systems operate more independently, explicit principles embedded during training help ensure they remain aligned with human values and institutional requirements. Legal professionals cannot simply deploy an autonomous AI agent without trust in its underlying decision-making framework.

Agentic vs. Anthropic: Know the Difference. Shape the Future of Law!

For legal practitioners, the distinction carries practical implications. You evaluate Anthropic as a vendor when selecting which AI provider's tools to adopt. You evaluate agentic architecture when deciding whether your specific use case requires autonomous task execution or whether simpler, more directed AI assistance suffices. Many legal workflows benefit from direct AI support without requiring full autonomy. Others—such as high-volume contract analysis during due diligence—leverage agentic capabilities to move work forward rapidly.

Both elements represent genuine advances in legal technology. Recognizing the difference positions you to make informed decisions about tool adoption and appropriate implementation for your practice. ✅