🎙️ Ep. 117: Legal Tech Revolution - How Dorna Moini Built Gavel.ai to Transform the Practice of Law with AI and Automation.

Dorna Moini, CEO and Founder of Gavel, discusses how generative AI is transforming the way legal professionals work. She explains how Gavel helps lawyers automate their work, save time, and reach more clients without needing to know how to code. In the conversation, she shares the top three ways AI has improved Gavel's tools and operations. She also highlights the most significant security risks that lawyers should be aware of when using AI tools. Lastly, she provides simple tips to ensure AI-generated results are accurate and reliable, as well as how to avoid false or misleading information.

Join Dorna and me as we discuss the following three questions and more!

  1. What are the top three ways generative AI has transformed Gavel's offerings and operations?

  2. What are the top three most critical security concerns legal professionals should be aware of when using AI-integrated products like Gavel?

  3. What are the top three ways to ensure the accuracy and reliability of AI-generated results, including measures to prevent false or misleading information or hallucinations?

In our conversation, we cover the following:

[01:16] Dorna's Tech Setup and Upgrades

[03:56] Discussion on Computer and Smartphone Upgrades

[08:31] Exploring Additional Tech and Sleep Technology

[09:32] Generative AI's Impact on Gavel's Operations

[13:13] Critical Security Concerns in AI-Integrated Products

[16:44] Playbooks and Redline Capabilities in Gavel Exec

[20:45] Contact Information

Resources

Connect with Dorna:

Websites & SaaS Products:

  • Apple Podcasts — Podcast platform (for reviews)

  • ChatGPT — AI conversational assistant by OpenAI

  • Gavel — AI-powered legal automation platform (formerly Documate)

  • Gavel Exec — AI assistant for legal document review and redlining (part of Gavel)

  • MacRumors — Apple news and product cycle information

  • Notion — Workspace for notes, databases, and project management

  • Slack — Team communication and collaboration platform

Hardware:

Other:

MTC: AI Governance Crisis - What Every Law Firm Must Learn from 1Password's Eye-Opening Security Research

The legal profession stands at a crossroads. Recent research commissioned by 1Password reveals four critical security challenges that should serve as a wake-up call for every law firm embracing artificial intelligence. With 79% of legal professionals now using AI tools in some capacity while only 10% of law firms have formal AI governance policies, the disconnect between adoption and oversight has created unprecedented vulnerabilities that could compromise client confidentiality and professional liability.

The Invisible AI Problem in Law Firms

The 1Password study's most alarming finding mirrors what law firms are experiencing daily: only 21% of security leaders have full visibility into AI tools used in their organizations. This visibility gap is particularly dangerous for law firms, where attorneys and staff may be uploading sensitive client information to unauthorized AI platforms without proper oversight.

Dave Lewis, Global Advisory CISO at 1Password, captured the essence of this challenge perfectly: "We have closed the door to AI tools and projects, but they keep coming through the window!" This sentiment resonates strongly with legal technology experts who observe attorneys gravitating toward consumer AI tools like ChatGPT for legal research and document drafting, often without understanding the data security implications.

The parallel to law firm experiences is striking. Recent Stanford HAI research revealed that even professional legal AI tools produce concerning hallucination rates—Westlaw AI-Assisted Research showed a 34% error rate, while Lexis+ AI exceeded 17%. (Remember my earlier editorial, MTC/🚨BOLO🚨: Lexis+ AI™️ Falls Short for Legal Research!) These aren't consumer chatbots but professional tools marketed to law firms as reliable research platforms.

Four Critical Lessons for Legal Professionals

First, establish comprehensive visibility protocols. The 1Password research shows that 54% of security leaders admit their AI governance enforcement is weak, with 32% believing up to half of employees continue using unauthorized AI applications. Law firms must implement SaaS governance tools to identify AI usage across their organization and document how employees are actually using AI in their workflows.
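
For illustration, here is a minimal sketch of that first step, assuming the firm can export proxy or SSO logs to a CSV with user and domain columns. The file name, column names, and watchlist of AI domains are assumptions for the example, not any particular vendor's format.

```python
import csv
from collections import Counter

# Illustrative watchlist of consumer AI domains; extend it to match the
# firm's own approved-tool policy.
AI_DOMAINS = {"chatgpt.com", "chat.openai.com", "gemini.google.com",
              "claude.ai", "perplexity.ai"}

def summarize_ai_usage(log_path: str) -> Counter:
    """Count hits to known AI domains in a CSV log export with
    'user' and 'domain' columns (adjust to your actual log schema)."""
    hits = Counter()
    with open(log_path, newline="") as f:
        for row in csv.DictReader(f):
            domain = row.get("domain", "").lower()
            if any(domain == d or domain.endswith("." + d) for d in AI_DOMAINS):
                hits[(row.get("user", "unknown"), domain)] += 1
    return hits

if __name__ == "__main__":
    for (user, domain), count in summarize_ai_usage("proxy_export.csv").most_common():
        print(f"{user}\t{domain}\t{count} requests")
```

Even a rough tally like this turns "we think people are using ChatGPT" into a documented baseline the firm can act on.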

Second, recognize that good intentions create dangerous exposures. The study found that 63% of security leaders believe the biggest internal threat is employees unknowingly giving AI access to sensitive data. For law firms handling privileged attorney-client communications, this risk is exponentially greater. Staff may innocently paste confidential case details into AI tools, potentially violating client confidentiality rules and creating malpractice liability.
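
One practical guardrail is a pre-submission check that flags obvious identifiers before text ever leaves the firm. The sketch below is illustrative only; the patterns are my own examples, and no pattern matcher substitutes for a human confidentiality review.

```python
import re

# Illustrative patterns only — real confidentiality review must catch far
# more than a regex ever will (names, facts, privileged strategy).
PII_PATTERNS = {
    "SSN": re.compile(r"\b\d{3}-\d{2}-\d{4}\b"),
    "email": re.compile(r"\b[\w.+-]+@[\w-]+\.[\w.-]+\b"),
    "phone": re.compile(r"\b\d{3}[-.]\d{3}[-.]\d{4}\b"),
}

def flag_sensitive_text(text: str) -> dict[str, list[str]]:
    """Flag obvious identifiers before text is pasted into an external AI tool."""
    return {label: matches for label, pattern in PII_PATTERNS.items()
            if (matches := pattern.findall(text))}

prompt = "Client John Doe (SSN 123-45-6789, jdoe@example.com) asks whether..."
for label, matches in flag_sensitive_text(prompt).items():
    print(f"Remove or anonymize {label}: {matches}")
```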

Third, address the unmanaged AI crisis immediately. More than half of security leaders estimate that 26-50% of their AI tools and agents are unmanaged. In legal practice, this could mean AI agents are interacting with case management systems, client databases, or billing platforms without proper access controls or audit trails—a compliance nightmare waiting to happen.

Fourth, understand that traditional security models are inadequate. The research emphasizes that conventional identity and access management systems weren't designed for AI agents. Law firms must evolve their access governance strategies to include AI tools and create clear guidelines for how these systems should be provisioned, tracked, and audited.
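
As a starting point, even a simple inventory that records who owns each AI tool or agent, what data it can touch, and when it was last reviewed goes a long way. This is a minimal sketch; the field names, the 90-day review window, and the "DraftAssist" entry are illustrative assumptions, not any standard.

```python
from dataclasses import dataclass
from datetime import date

@dataclass
class AIToolRecord:
    """One entry in a firm's AI tool/agent inventory (fields are illustrative)."""
    name: str
    owner: str                # attorney or staff member responsible
    data_access: list[str]    # e.g., ["case management", "billing"]
    approved: bool = False
    last_reviewed: date | None = None

class AIToolRegistry:
    def __init__(self) -> None:
        self._records: dict[str, AIToolRecord] = {}

    def provision(self, record: AIToolRecord) -> None:
        """Register a tool or agent before it touches firm systems."""
        self._records[record.name] = record

    def overdue_reviews(self, as_of: date, max_age_days: int = 90) -> list[AIToolRecord]:
        """Tools never reviewed, or not reviewed within max_age_days."""
        return [r for r in self._records.values()
                if r.last_reviewed is None
                or (as_of - r.last_reviewed).days > max_age_days]

registry = AIToolRegistry()
registry.provision(AIToolRecord("DraftAssist", owner="J. Doe",
                                data_access=["document management"],
                                approved=True, last_reviewed=date(2025, 4, 1)))
for rec in registry.overdue_reviews(date.today()):
    print(f"Review overdue: {rec.name} (owner: {rec.owner})")
```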

Beyond Compliance: Strategic Imperatives

The American Bar Association's Formal Opinion 512 established clear ethical frameworks for AI use, but compliance requires more than policy documents. Law firms need proactive strategies that enable AI benefits while protecting client interests.

Effective AI governance starts with education. Most legal professionals aren't thinking about AI security risks in these terms. Firms should conduct workshops and tabletop exercises to walk through potential scenarios and develop incident response protocols before problems arise.

The path forward doesn't require abandoning AI innovation. Instead, it demands extending trust-based security frameworks to cover both human and machine identities. Law firms must implement guardrails that protect confidential information without slowing productivity—user-friendly systems that attorneys will actually follow.

Final Thoughts: The Competitive Advantage of Responsible AI Adoption

Firms that proactively address these challenges will gain significant competitive advantages. Clients increasingly expect their legal counsel to use technology responsibly while maintaining the highest security standards. Demonstrating comprehensive AI governance builds trust and differentiates firms in a crowded marketplace.

The research makes clear that security leaders are aware of AI risks but under-equipped to address them. For law firms, this awareness gap represents both a challenge and an opportunity. Practices that invest in proper AI governance now will be positioned to leverage these powerful tools confidently while their competitors struggle with ad hoc approaches.

The legal profession's relationship with AI has fundamentally shifted from experimental adoption to enterprise-wide transformation. The 1Password research provides a roadmap for navigating this transition securely. Law firms that heed these lessons will thrive in the AI-augmented future of legal practice.

MTC

📖 Word of the Week: Travel Converter vs. Travel Adapter

Lawyers need to know the difference between travel adapters and travel converters when they go overseas!

If your legal practice takes you beyond borders, understanding the difference between a travel converter and a travel adapter can protect both your tech investments and casework productivity. In many law offices, especially those with moderate technology exposure, these small devices often seem interchangeable; yet, their functions are quite different and critical for global legal engagements.

A travel adapter lets you plug your device into a foreign socket by reshaping your plug to fit the local outlet type. Adapters, however, do not change the local voltage. That means your laptop or phone charger will connect, but the electricity passing through remains at the voltage standard of the country you are in. Since most modern electronics, such as laptops, smartphones, and tablets, are dual-voltage (able to handle 100–240V), attorneys typically need only an appropriate adapter for these everyday tech tools.

A travel converter steps in when you need to change the actual voltage from the wall. American devices, such as hair dryers or some older law office equipment, may only be rated for 110V. If you plug these into a 220V outlet abroad with only an adapter, you risk damaging both the device and possibly your law firm’s reputation for being detail-oriented. A converter safely transforms the foreign voltage to match your device’s needs, ensuring you avoid costly mishaps.

Lawyers, be your firm's travel warriors: know the type of electrical plug you need when traveling abroad!

How do you know which you need? Check the voltage label on each device. If it lists a range (like 100–240V), an adapter will suffice. If it’s fixed (like “120V only”), you must use a converter in countries with higher voltages, which is common across Europe and Asia. For attorneys on the move, a universal adapter set and a small, reliable converter can prevent technical disruptions during critical casework, presentations, or evidentiary document reviews.
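
The decision rule is simple enough to write down. Here is a minimal sketch; the voltages in the usage example are illustrative only.

```python
def travel_gear_needed(device_min_volts: int, device_max_volts: int,
                       destination_volts: int) -> str:
    """Return which accessory the device label implies for the destination.

    Dual-voltage devices (e.g., 100-240V) only need a plug adapter;
    fixed-voltage devices (e.g., 120V only) also need a converter where
    the local supply falls outside their rating.
    """
    if device_min_volts <= destination_volts <= device_max_volts:
        return "adapter only"
    return "adapter + voltage converter"

# A 100-240V laptop charger in a 230V country needs only an adapter:
print(travel_gear_needed(100, 240, 230))   # adapter only
# A "120V only" device in the same country also needs a converter:
print(travel_gear_needed(120, 120, 230))   # adapter + voltage converter
```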

Law office takeaway: Adapters make devices fit; converters make power safe. Read your device labels before you leave and never assume one solution works everywhere. Bring both if uncertain—being overprepared is a legal virtue. Safe travels and seamless connectivity! ✈️⚖️

Happy Lawyering!

MTC: Trump's 28-Page AI Action Plan - Reshaping Legal Practice, Client Protection, and Risk Management in 2025 ⚖️🤖

The July 23, 2025, release of President Trump's comprehensive "Winning the Race: America's AI Action Plan" represents a watershed moment for the legal profession, fundamentally reshaping how attorneys will practice law, protect client interests, and navigate the complex landscape of AI-enabled legal services. This 28-page blueprint, containing over 90 federal policy actions across three strategic pillars, promises to accelerate AI adoption while creating new challenges for legal professionals who must balance innovation with ethical responsibility.

What does Trump's AI Action Plan mean for the practice of law?

Accelerated AI Integration and Deregulatory Impact

The Action Plan's aggressive deregulatory stance will dramatically accelerate AI adoption across law firms by removing federal barriers that previously constrained AI development and deployment. The Administration's directive to "identify, revise, or repeal regulations, rules, memoranda, administrative orders, guidance documents, policy statements, and interagency agreements that unnecessarily hinder AI development" will create a more permissive environment for legal technology innovation. This deregulatory approach extends to federal funding decisions, with the plan calling for limiting AI-related federal funding to states whose regulatory climates are deemed "burdensome" to AI development.

For legal practitioners, this means faster access to sophisticated AI tools for document review, legal research, contract analysis, and predictive litigation analytics. The plan's endorsement of open-source and open-weight AI models will particularly benefit smaller firms that previously lacked access to expensive proprietary systems. However, this rapid deployment environment places greater responsibility on individual attorneys to implement proper oversight and verification protocols.

Enhanced Client Protection Obligations

The Action Plan's emphasis on "truth-seeking" AI models that are "free from top-down ideological bias" creates new client protection imperatives for attorneys. Under the plan's framework, lawyers (at least those practicing in federal matters) must now ensure that AI tools used in client representation meet federal standards for objectivity and accuracy. This requirement aligns with existing ABA Formal Opinion 512, which mandates that attorneys maintain competence in understanding AI capabilities and limitations.

Legal professionals face continued, and now heightened, obligations to protect client confidentiality when using AI systems, particularly as the plan encourages broader AI adoption without corresponding privacy safeguards. Attorneys must implement robust data security protocols and carefully evaluate third-party AI providers' confidentiality protections before integrating these tools into client representations.

Critical Error Prevention and Professional Liability

What are the pros and cons of Trump's new AI plan?

The Action Plan's deregulatory approach paradoxically increases attorneys' responsibility for preventing AI-driven errors and hallucinations. Recent Stanford research reveals that even specialized legal AI tools produce incorrect information 17-34% of the time, with some systems generating fabricated case citations that appear authoritative but are entirely fictitious. The plan's call to adapt the Federal Rules of Evidence for AI-generated material means courts will increasingly encounter authenticity and reliability challenges.

Legal professionals must establish comprehensive verification protocols to prevent the submission of AI-generated false citations or legal authorities, which have already resulted in sanctions and malpractice claims across multiple jurisdictions. The Action Plan's emphasis on rapid AI deployment without corresponding safety frameworks makes attorney oversight more critical than ever for preventing professional misconduct and protecting client interests.
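
As one piece of such a protocol, a firm can automatically pull citation-like strings out of an AI-assisted draft and route each one to a human for verification against the actual reporter or docket. The regex below is a deliberately rough sketch of my own, not a complete citation parser, and it does not check whether a case exists; that remains the reviewer's job.

```python
import re

# Simplified pattern for reporter citations like "410 U.S. 113" or
# "123 F.3d 456" — intentionally rough; real citation formats vary widely.
CITATION_RE = re.compile(
    r"\b\d{1,4}\s+(?:U\.S\.|S\.\s?Ct\.|F\.(?:2d|3d|4th)?|F\.\s?Supp\.\s?(?:2d|3d)?)\s+\d{1,4}\b"
)

def flag_citations_for_review(draft_text: str) -> list[str]:
    """Pull citation-like strings out of an AI-assisted draft so a human
    can verify each one against the actual reporter or docket."""
    return sorted(set(CITATION_RE.findall(draft_text)))

draft = "As held in 410 U.S. 113 and again in 999 F.3d 1234, ..."
for cite in flag_citations_for_review(draft):
    print(f"VERIFY BEFORE FILING: {cite}")
```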

Federal Preemption and Compliance Complexity

Perhaps most significantly, the Action Plan's aggressive stance against state AI regulation creates unprecedented compliance challenges for legal practitioners operating across multiple jurisdictions. President Trump's declaration that "we need one common-sense federal standard that supersedes all states" signals potential federal legislation to preempt state authority over AI governance. This federal-state tension could lead to prolonged legal battles that create uncertainty for attorneys serving clients nationwide.

The plan's directive for agencies to factor state-level AI regulatory climates into federal funding decisions adds another layer of complexity, potentially creating a fractured regulatory landscape until federal preemption is resolved. Attorneys must navigate between conflicting federal deregulatory objectives and existing state AI protection laws, particularly in areas affecting employment, healthcare, and criminal justice, where AI bias concerns remain paramount, all while following their state bar ethics rules.

Strategic Implications for Legal Practice

Lawyers must remain vigilant when using AI in their work!

The Action Plan fundamentally transforms the legal profession's relationship with AI technology, moving from cautious adoption to aggressive implementation. While this creates opportunities for enhanced efficiency and client service, it also demands that attorneys develop new competencies in AI oversight, bias detection, and error prevention. Legal professionals who successfully adapt to this new environment will gain competitive advantages, while those who fail to implement proper safeguards face increased malpractice exposure and professional liability risks.

The plan's vision of AI-powered legal services requires attorneys to become sophisticated technology managers while maintaining their fundamental duty to provide competent, ethical representation. Success in this new landscape will depend on lawyers' ability to harness AI's capabilities while implementing robust human oversight and quality control measures to protect both client interests and professional integrity.

MTC

🎙️ Ep. #116: Free Versus Paid Legal AI: Conversation with DC Court of Appeals Head Law Librarian Laura Moorer

Laura Moorer, the Law Librarian for the DC Court of Appeals, brings over two decades of experience in legal research, including nearly 14 years with the Public Defender Service for DC. In this conversation, Laura shares her top three tips for crafting precise prompts when using generative AI, emphasizing clarity, specificity, and structure. She also offers insights on how traditional legal research methods—like those used with LexisNexis and Westlaw—can enhance AI-driven inquiries. Finally, Laura offers practical strategies for utilizing generative AI to help lawyers identify and locate physical legal resources, thereby bridging the gap between digital tools and tangible materials.

Join Laura and me as we discuss the following three questions and more!

  • What are your top three tips when it comes to precise prompt engineering for your generative AI inquiries?

  • What are your top three tips or tricks when it comes to using old-school prompts like those from the days of LexisNexis and Westlaw in your generative AI inquiries?

  • What are your top three tips or tricks for using generative AI to help lawyers pinpoint actual physical resources?

In our conversation, we cover the following:

[00:40] Laura's Current Tech Setup

[03:27] Top 3 Tips for Crafting Precise Prompts in Generative AI

[13:44] Bringing Old-School Legal Research Tactics to Generative AI Prompting

[20:42] Using Generative AI to Help Lawyers Locate Physical Legal Resources

[24:38] Contact Information

Resources:

Connect with Laura:

Mentioned in the episode:

Software & Cloud Services mentioned in the conversation:

MTC: Is Puerto Rico’s Professional Responsibility Rule 1.19 Really Necessary? A Technology Competence Perspective.

Is PR’s Rule 1.19 necessary?

The legal profession stands at a crossroads regarding technological competence requirements. With forty states already adopting Comment 8 to Model Rule 1.1, which mandates lawyers "keep abreast of changes in the law and its practice, including the benefits and risks associated with relevant technology," the question emerges: do we need additional rules like PR Rule 1.19?

Comment 8 to Rule 1.1 establishes clear parameters for technological competence. This amendment, adopted by the ABA in 2012, expanded the traditional duty of competence beyond legal knowledge to encompass technological proficiency. The Rule requires lawyers to understand the "benefits and risks associated with relevant technology" in their practice areas.

The existing framework appears comprehensive. Comment 8 already addresses core technological competencies, including e-discovery, cybersecurity, and client communication systems. Under Rule 1.1 (Comment 5), legal professionals must evaluate whether their technological skills meet "the standards of competent practitioners" without requiring additional regulatory layers.

However, implementation challenges persist. Many attorneys struggle with the vague standard of "relevant technology". The rule's elasticity means that competence requirements continuously evolve in response to technological advancements. Some jurisdictions, like Puerto Rico (see the Puerto Rico Supreme Court's Order ER-2025-02 approving adoption of its full set of Rules of Professional Conduct), have created dedicated technology competence rules (Rule 1.19) to provide clearer guidance.

The verdict: redundancy without added value. Rather than creating overlapping rules, the legal profession should focus on robust implementation of existing Comment 8 requirements. Enhanced continuing legal education mandates, clearer interpretive guidance, and practical competency frameworks would better serve practitioners than additional regulatory complexity.

Technology competence is essential, but regulatory efficiency should guide our approach. 🚀

TSS: Meet Our Next Tech-Savvy Saturday (July 19, 2025) Guest: Mathew Kerbis, The Subscription Attorney

Join us this Saturday, July 19, 2025, at 12 PM EST for Tech-Savvy Saturday!

Legal innovation meets technology with Matt Kerbis, founder of Subscription Attorney LLC. Matt leads the charge to end the billable hour by offering predictable, affordable legal services starting at $19.99/month. With expertise in AI tools like Paxton, NotebookLM Pro, Perplexity, and Descript, Matt empowers attorneys to embrace efficient, client-centered models. Matt hosts the Law Subscribed podcast and is celebrated by the ABA for innovation. Don’t miss this industry-changing session—unlock how to become your own “subscription attorney”!

👉 Sign up here for the free webinar!

👉 Sign up here to stay abreast of Tech-Savvy Saturdays News!

SEE YOU THIS SATURDAY!!! 🎙️

MTC: Why Courts Hesitate to Adopt AI - A Crisis of Trust in Legal Technology

Despite facing severe staffing shortages and mounting operational pressures, America's courts remain cautious about embracing artificial intelligence technologies that could provide significant relief. While 68% of state courts report staff shortages and 48% of court professionals lack sufficient time to complete their work, only 17% currently use generative AI tools. This cautious approach reflects deeper concerns about AI reliability, particularly in light of recent, and unnecessarily continuing, high-profile errors by attorneys using AI-generated content in court documents.

The Growing Evidence of AI Failures in Legal Practice

Recent cases demonstrate why courts' hesitation may be justified. In Colorado, two attorneys representing MyPillow CEO Mike Lindell were fined $3,000 each after submitting a court filing containing nearly 30 AI-generated errors, including citations to nonexistent cases and misquoted legal authorities. The attorneys admitted to using artificial intelligence without properly verifying the output, violating Federal Rule of Civil Procedure 11.

Similarly, a federal judge in California sanctioned attorneys from Ellis George LLP and K&L Gates LLP $31,000 after they submitted briefs containing fabricated citations generated by AI tools including CoCounsel, Westlaw Precision, and Google Gemini. The attorneys had used AI to create an outline that was shared with colleagues who incorporated the fabricated authorities into their final brief without verification.

These incidents are part of a broader pattern of AI hallucinations in legal documents. The June 16, 2025, Order to Show Cause from the Oregon federal court case Sullivan v. Wisnovsky, No. 1:21-cv-00157-CL, D. Or. (June 16, 2025) demonstrates another instance where plaintiffs cited "fifteen non-existent cases and misrepresented quotations from seven real cases" after relying on what they claimed was "an automated legal citation tool". The court found this explanation insufficient to avoid sanctions.

The Operational Dilemma Facing Courts

Lawyers need to balance legal tradition with ethical AI innovation.

The irony is stark: courts desperately need technological solutions to address their operational challenges, yet recent AI failures have reinforced their cautious approach. Court professionals predict that generative AI could save them an average of three hours per week initially, growing to nearly nine hours within five years. These time savings could be transformative for courts struggling with increased caseloads and staff shortages.

However, the profession's experience with AI-generated hallucinations has created significant trust issues. Currently, 70% of courts prohibit employees from using AI-based tools for court business, and 75% have not provided any AI training to their staff. This reluctance stems from legitimate concerns about accuracy, bias, and the potential for AI to undermine the integrity of judicial proceedings.

The Technology Adoption Paradox

Courts have successfully adopted other technologies, with 86% implementing case management systems, 85% using e-filing, and 88% conducting virtual hearings. This suggests that courts are not inherently resistant to technology. But they are specifically cautious about AI due to its propensity for generating false information.

The legal profession's relationship with AI reflects broader challenges in implementing emerging technologies. While 55% of court professionals recognize AI as having transformational potential over the next five years, the gap between recognition and adoption remains significant. This disconnect highlights the need for more reliable AI systems and better training for legal professionals.

The Path Forward: Measured Implementation

The solution is not to abandon AI but to implement it more carefully. Legal professionals must develop better verification protocols. As one expert noted, "AI verification isn't optional—it's a professional obligation." This means implementing systematic citation checking, mandatory human review, and clear documentation of AI use in legal documents. Lawyers must stay up to date on the technology available to them, as required by the American Bar Association Model Rule of Professional Conduct 1.1[8], including the expectation that they use the best available technology currently accessible. Thus, courts too need comprehensive governance frameworks that address data handling, disclosure requirements, and decision-making oversight before evaluating AI tools. The American Bar Association's Formal Opinion 512 on Generative Artificial Intelligence Tools provides essential guidance, emphasizing that lawyers must fully consider their ethical obligations when using AI.
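
A lightweight way to make those protocols concrete is a pre-filing gate that refuses to treat a draft as ready until every human-review step is recorded. This is a minimal sketch; the checklist items are illustrative, not drawn from any court rule.

```python
def filing_gate(checks: dict[str, bool]) -> list[str]:
    """Return the unmet review steps for an AI-assisted draft; an empty
    list means every human-review gate has been cleared (the steps are
    illustrative, not drawn from any court rule)."""
    return [step for step, done in checks.items() if not done]

pre_filing_checks = {
    "every citation verified against the reporter or docket": False,
    "quotations checked against the cited source": True,
    "AI use documented per firm and court policy": True,
    "supervising attorney reviewed the final draft": False,
}

outstanding = filing_gate(pre_filing_checks)
if outstanding:
    print("Do not file yet. Outstanding steps:")
    for step in outstanding:
        print(" -", step)
```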

Final Thoughts

The Future of Law: AI and Justice in Harmony!

Despite the risks, courts and legal professionals cannot afford to ignore AI indefinitely. The technology's potential to address staffing shortages, reduce administrative burdens, and improve access to justice makes it essential for the future of the legal system. However, successful implementation requires acknowledging AI's limitations while developing robust safeguards to prevent the types of errors that have already damaged trust in the technology.

The current hesitation reflects a profession learning to balance innovation with reliability. As AI systems improve and legal professionals develop better practices for using them, courts will likely become more willing to embrace these tools. Until then, the cautious approach may be prudent, even if it means forgoing potential efficiency gains.

The legal profession's experience with AI serves as a reminder that technological adoption in critical systems requires more than just recognizing potential benefits—it demands building the infrastructure, training, and governance necessary to use these powerful tools responsibly.

MTC

🎙️ Ep. #115: Legal Technology Mastery with Law Librarian Jennifer Wondracek – Essential AI Tools and Skills for Modern Lawyers.

Our next guest is Jennifer Wondracek, Director of the Law Library and Professor of Legal Research and Writing at Capital University Law School. Jennifer shares her expertise as a legal technologist and ABA Women of Legal Tech Honoree. She addresses three vital questions: the top technological tools law students and lawyers should leverage, strategies to help new attorneys adapt to firm technologies, and ways law firms can automate routine tasks to prioritize high-value legal work. Drawing on her extensive experience in legal education and technology, Jennifer emphasizes practical solutions, the importance of transferable skills, and the increasing role of generative AI in modern legal practice.

Join Jennifer and me as we discuss the following three questions and more!

  1. As Head Librarian at Capital University Law School, what are the top three technological tools or resources that you believe law students and practicing lawyers should be leveraging right now to enhance legal research and client service?

  2. What are the top three strategies that lawyers can use to help law students clerking for a firm, or new attorneys, quickly adapt to become proficient with the technology platforms and tools used in their practice, particularly when these tools differ from what they learned in law school?

  3. Beyond legal research, what are the top three ways law firms and solo practitioners can use technology to automate routine tasks and create more time for high-value legal work?

In our conversation, we cover the following:

[01:03] Jennifer’s Current Tech Setup

[06:27] Top Technological Tools for Law Students and Practicing Lawyers

[11:23] Case Management Systems and Generative AI

[23:15] Strategies for Law Students and New Attorneys to Adapt to Technology

[31:03] Permissions and Backup Practices

[34:20] Automating Routine Tasks with Technology

[39:41] Favorite Non-Legal AI Tools

Resources:

Connect with Jennifer:

Mentioned in the episode:

Hardware mentioned in the conversation:

Software & Cloud Services mentioned in the conversation: