🎙️ Ep. #127: Mastering Legal Storytelling and AI Automation with Joshua Altman 🎙️⚖️

In Episode 127, I sit down with Joshua Altman, Managing Director of Beltway.Media, to decode the intersection of legal expertise and narrative strategy. 🏛️ We dive deep into the tech stack that powers a modern communications firm and explore how lawyers can leverage AI without losing their unique professional voice. Joshua shares actionable insights on using tools like Gumloop and Abacus.AI to automate workflows, the critical mistakes to avoid during high-stakes crisis management, and the real metrics you need to track to prove marketing ROI. 📊 Whether you are a solo practitioner or part of a large firm, this conversation bridges the gap between complex legal work and compelling public communication.

Join Joshua Altman and me as we discuss the following three questions and more!

  1. What are the top three technology tools or platforms you recommend that would help attorneys transform a single piece of thought leadership into multiple content formats across channels, and how can they use AI to accelerate this process without sacrificing their professional voice?

  2. What are the top three mistakes attorneys and law firms make when communicating during high-stakes situations—whether that’s managing negative publicity, navigating a client crisis, or pitching to potential investors—and how can technology help them avoid these pitfalls while maintaining their ethical obligations?

  3. What are the top three metrics attorneys should actually be tracking to demonstrate the return on their online marketing technology investments, and what affordable technology solutions would you recommend to help them capture and analyze this data?

In our conversation, we cover the following:

  • [00:00] Introduction to Joshua Altman and Beltway.Media.

  • [01:06] Joshua’s current secure tech stack: From Mac setups to encrypted communications.

  • [03:52] Strategic content repurposing: Using AI as a tool, not a replacement for your voice.

  • [05:30] The "Human in the Loop" necessity: Why lawyers must proofread AI content.

  • [10:00] Tech Recommendation #1: Using Abacus.AI and Root LLM for model routing.

  • [11:00] Tech Recommendation #2: Automating workflows with Gumloop.

  • [15:43] Tech Recommendation #3: The "Low Tech" solution of human editors.

  • [16:47] Crisis Communications: Navigating the Court of Public Opinion vs. the Court of Law.

  • [20:00] Using social listening tools for litigation support and witness tracking.

  • [24:30] Metric #1: Analyzing Meaningful Engagement (comments vs. likes).

  • [26:40] Metric #2: Understanding Impressions and network reach (1st vs. 2nd degree).

  • [28:40] Metric #3: Tracking Clicks to validate interest and sales funnels.

  • [31:15] How to connect with Joshua.

RESOURCES:

Connect with Joshua Altman

Mentioned in the episode

Hardware mentioned in the conversation

Software & Cloud Services mentioned in the conversation

  • Abacus.AI - AI platform mentioned for its "Root LLM" model routing feature.

  • ChatGPT - AI language model.

  • Claude - AI language model.

  • Constant Contact - Email marketing platform.

  • Gumloop - AI automation platform for newsletters and social listening.

  • LinkedIn - Professional social networking.

  • MailChimp - Email marketing platform.

  • Proton Mail - Encrypted email service.

  • Tresorit - End-to-end encrypted file sharing (secure Dropbox alternative).

MTC: The Hidden Danger in Your Firm: Why We Must Teach the Difference Between “Open” and “Closed” AI!

Does your staff understand the difference between “free” and “paid” AI? Your license could depend on it!

I sit on an advisory board for a school that trains paralegals. We meet to discuss curriculum. We talk about the future of legal support. In a recent meeting, a presentation by a private legal research company caught my attention. It stopped me cold. The topic was Artificial Intelligence. The focus was on use and efficiency. But something critical was missing.

The lesson did not distinguish between public-facing and private tools. It treated AI as a monolith. This is a dangerous oversimplification. It is a liability waiting to happen.

We are in a new era of legal technology. It is exciting. It is also perilous. The peril comes from confusion. Specifically, the confusion between paid, closed-system legal research tools and public-facing generative AI.

Your paralegals, law clerks, and staff use these tools. They use them to draft emails. They use them to summarize depositions. Do they know where that data goes? Do you?

The Two Worlds of AI

There are two distinct worlds of AI in our profession.

First, there is the world of "Closed" AI. These are the tools we pay for, e.g., Lexis+/Protege, Westlaw Precision, Co-Counsel, Harvey, vLex Vincent, etc. These platforms are built for lawyers. They are walled gardens. You pay a premium for them. (Always check the terms and conditions of your providers.) That premium buys you more than just access. It buys you privacy. It buys you security. When you upload a case file to Westlaw, it stays there. The AI analyzes it. It does not use it to train a public model. It does not share your client’s secrets with the world. The data remains yours. The confidentiality is baked in.

Then, there is the world of "Open" or "Public" AI. This is ChatGPT. This is Perplexity. This is Claude. These tools are miraculous. But they are also voracious learners.

When you type a query into the free version of ChatGPT, you are not just asking a question. You are training the model. You are feeding the beast. If a paralegal types, "Draft a motion to dismiss for John Doe, who is accused of embezzlement at [Specific Company]," that information leaves your firm. It enters a public dataset. It is no longer confidential.

This is the distinction that was missing from the lesson plan. It is the distinction that could cost you your license.

The Duty to Supervise

Do you and your staff know when you can and can’t use free AI in your legal work?

You might be thinking, "I don't use ChatGPT for client work, so I'm safe." You are wrong.

You are not the only one doing the work. Your staff is doing the work. Your paralegals are doing the work.

Under the ABA Model Rules of Professional Conduct, you are responsible for them. Look at Rule 5.3. It covers "Responsibilities Regarding Nonlawyer Assistance." It is unambiguous. You must make reasonable efforts to ensure your staff's conduct is compatible with your professional obligations.

If your paralegal breaches confidentiality using AI, it is your breach. If your associate hallucinates a case citation using a public LLM, it is your hallucination.

This connects directly to Rule 1.1, Comment 8. This represents the duty of technology competence. You cannot supervise what you do not understand. You must understand the risks associated with relevant technology. Today, that means understanding how Large Language Models (LLMs) handle data.

The "Hidden AI" Problem

I have discussed this on The Tech-Savvy Lawyer.Page Podcast. We call it the "Hidden AI" crisis. AI is creeping into tools we use every day. It is in Adobe. It is in Zoom. It is in Microsoft 365.

Public-facing AI is useful. I use it. I love it for marketing. I use it for brainstorming generic topics. I use it to clean up non-confidential text. But I never trust it with a client's name. I never trust it with a very specific fact pattern.

A paid legal research tool is different. It is a scalpel. It is precise. It is sterile. A public chatbot is a Swiss Army knife found on the sidewalk. It might work. But you don't know where it's been.

The Training Gap

The advisory board meeting revealed a gap. Schools are teaching students how to use AI. They are teaching prompts. They are teaching speed. They are not emphasizing the where.

The "where" matters. Where does the data go?

We must close this gap in our own firms. You cannot assume your staff knows the difference. To a digital native, a text box is a text box. They see a prompt window in Westlaw. They see a prompt window in ChatGPT. They look the same. They act the same.

They are not the same.

One protects you. The other exposes you.

A Practical Solution

I have written about this in my blog posts regarding AI ethics. The solution is not to ban AI. That is impossible. It is also foolish. AI is a competitive advantage.

* Always check the terms of use in your agreements with private platforms to determine whether your clients’ confidential data and PII are protected.

The solution is policies and training.

  1. Audit Your Tools. Know what you have. Do you have an enterprise license for ChatGPT? If so, your data might be private. If not, assume it is public.

  2. Train on the "Why." Don't just say "No." Explain the mechanism. Explain that public AI learns from inputs. Use the analogy of a confidential conversation in a crowded elevator versus a private conference room.

  3. Define "Open" vs. "Closed." Create a visual guide. List your "Green Light" tools (Westlaw, Lexis, etc.). List your "Red Light" tools for client data (Free ChatGPT, personal Gmail, etc.). A minimal sketch of such a list follows this checklist.

  4. Supervise Output. Review the work. AI hallucinates. Even paid tools can make mistakes. Public tools make up cases entirely. We have all seen the headlines. Don't be the next headline.
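To make the "Green Light" / "Red Light" idea in step 3 concrete, here is a minimal, illustrative sketch of how a firm might keep that list in one place and check a tool against it. The tool names and the `check_tool` helper are hypothetical examples, not a complete policy; every firm must build its own list from its actual licenses and terms of service.

```python
# Illustrative only: a firm's "Green Light" / "Red Light" AI tool list kept as data,
# so it can be posted, audited, and updated in one place. The entries are examples.

POLICY = {
    "green": {  # closed, firm-licensed tools: approved for client-related work
        "westlaw precision",
        "lexis+ protege",
        "cocounsel",
    },
    "red": {  # public-facing tools: never for client names, facts, or PII
        "chatgpt (free)",
        "perplexity (free)",
        "personal gmail",
    },
}


def check_tool(name: str) -> str:
    """Return the firm's guidance for a given tool name."""
    key = name.strip().lower()
    if key in POLICY["green"]:
        return "Green light: approved for client-related work."
    if key in POLICY["red"]:
        return "Red light: do NOT enter client names, facts, or PII."
    return "Unknown tool: check with the supervising attorney before using it."


if __name__ == "__main__":
    print(check_tool("ChatGPT (free)"))  # -> Red light: do NOT enter client names, facts, or PII.
```

Even if no one at the firm ever runs the script, the exercise of writing the two lists down is the training step that matters.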

The Expert Advantage

The line between “free” and “paid” AI could be a matter of keeping your bar license!

On The Tech-Savvy Lawyer.Page, I often say that technology should make us better lawyers, not lazier ones.

Using Lexis+/Protege, Westlaw Precision, Co-Counsel, Harvey, vLex Vincent, etc. is about leveraging a curated, verified database. It is about relying on authority. Using a public LLM for legal research is about rolling the dice.

Your license is hard-earned. Your reputation is priceless. Do not risk them on a free chatbot.

The lesson from the advisory board was clear. The schools are trying to keep up. But the technology moves faster than the curriculum. It is up to us. We are the supervisors. We are the gatekeepers.

Take time this week. Gather your team. Ask them what tools they use. You might be surprised. Then, teach them the difference. Show them the risks.

Be the tech-savvy lawyer your clients deserve. Be the supervisor the Rules require.

The tools are here to stay. Let’s use them effectively. Let’s use them ethically. Let’s use them safely.

MTC

📢 Shout Out! ILTACON 2025 Recap: AI Revolution, Cybersecurity Imperatives, and the Exciting Legal Tech Future!

🎉 Three Game-Changing Highlights from Legal Technology's Premier Event!

ILTACON - The only peer-created and led conference for legal technology professionals.

The corridors of the Gaylord National Resort & Convention Center just outside Washington, DC, were buzzing with an energy that, as one fellow reporter aptly put it, was the most excitement he’d seen at ILTACON in years – and the catalyst was undeniably artificial intelligence.

With over 4,000 legal professionals from 30 different countries converging in National Harbor, Maryland, from August 10-14, ILTACON 2025 delivered an unprecedented showcase of innovation. The numbers tell the story: over 225 vendors and over 80 educational sessions created a treasure trove of legal technology advancements that had attorneys and IT professionals equally captivated.

🚀 Highlight #1: AI Takes Center Stage – From Pilots to Production

The shift from AI experimentation to implementation was unmistakable. Harvey, iManage, Thomson Reuters, and Litera weren't just talking about AI anymore – they were demonstrating working solutions and real-world results.

AI agents emerged as the breakout stars. These sophisticated systems move beyond simple chatbots to become "digital colleagues" that can plan, reason, and execute complex legal tasks autonomously. The "Orchestrating Intelligence: AI Agents in the Legal Space" session showcased how these tools amplify human capabilities rather than replace them, with speakers noting that agents will be able to do much more while delivering better-quality output.
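For readers who want a feel for what "plan, reason, and execute" looks like mechanically, here is a bare-bones, illustrative agent loop in Python. The `plan_next_step` and `run_tool` functions are hypothetical stand-ins for a real language model and real tools; no vendor demonstrated at ILTACON builds its agents this way verbatim.

```python
# Illustrative only: a bare-bones "plan -> act -> observe" agent loop.
# plan_next_step() and run_tool() are hypothetical stand-ins, not any vendor's API.

def plan_next_step(history: list) -> dict:
    """Stand-in for a model call that decides the next action from the transcript."""
    if any(line.startswith("Observation:") for line in history):
        return {"done": True, "answer": "Summary drafted; route to an attorney for review."}
    return {"done": False, "action": "search_dms", "argument": "indemnification clauses"}


def run_tool(action: str, argument: str) -> str:
    """Stand-in for a tool call (document search, summarizer, drafting helper, etc.)."""
    return f"{action} returned 3 documents matching '{argument}'."


def run_agent(task: str, max_steps: int = 5) -> str:
    history = [f"Task: {task}"]
    for _ in range(max_steps):
        step = plan_next_step(history)      # the agent plans its next move
        if step["done"]:                    # the agent decides the task is finished
            return step["answer"]
        observation = run_tool(step["action"], step["argument"])  # the agent executes a tool
        history.append(f"Action: {step['action']}({step['argument']})")
        history.append(f"Observation: {observation}")
    return "Step limit reached; escalate to a human."


if __name__ == "__main__":
    print(run_agent("Summarize our standard indemnification language."))
```

The point of the loop is the feedback: each tool's observation goes back into the next planning step, which is what separates an agent from a single chatbot prompt.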

ILTACON was ready for it: 4,000+ attendees from 30+ countries!

Knowledge Management experienced a renaissance. The "KM Roundtable: Embracing the New Wave of Knowledge Management" revealed that KM professionals have become the unsung heroes of AI implementation. Without proper content governance and data structure, even the most advanced AI tools fall flat. KM teams are shifting from maintaining knowledge bases to orchestrating AI workflows and ensuring data quality.

Interoperability standards like the Model Context Protocol (MCP) are breaking down data silos. These developments signal a future where AI tools can seamlessly integrate across platforms without costly custom development.
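To make that concrete, here is a minimal, hypothetical sketch of publishing a firm capability as an MCP tool, assuming the open-source MCP Python SDK (installed with `pip install mcp`). The `search_dms` function and its canned results are placeholders, not a real document management system integration.

```python
# Minimal MCP sketch (assumes the open-source MCP Python SDK: pip install mcp).
# It publishes a hypothetical document-search capability as an MCP "tool" that any
# MCP-aware AI assistant could discover and call without custom integration code.
from mcp.server.fastmcp import FastMCP

mcp = FastMCP("firm-dms")  # the server name clients will see


@mcp.tool()
def search_dms(query: str, limit: int = 5) -> list[str]:
    """Search the firm's document management system and return matching titles."""
    # Placeholder: a real implementation would query the actual DMS here.
    return [f"Placeholder result {i + 1} for '{query}'" for i in range(limit)]


if __name__ == "__main__":
    mcp.run()  # serves the tool over stdio by default
```

Once a capability is published this way, swapping the AI assistant on the front end does not require rebuilding the integration, which is the silo-breaking promise these sessions described.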

Real-world applications dominated discussions. Sessions demonstrated concrete time savings: customers reported 50-70% time savings reaching early drafts with better consistency, and legal research showed 60%+ time savings while surfacing new arguments in cross-jurisdictional litigation. The "Charting Your Search Journey in the Age of AI" session emphasized how precedent research has evolved from "finding a needle in a haystack" to having a "haystack full of needles."

🔒 Highlight #2: Cybersecurity Rises to Critical Priority

The cybersecurity focus was evident throughout the conference, with sessions like "Emerging Cybersecurity Threats in Legal Tech" and "The Yin & Yang of Cybersecurity in eDiscovery" drawing significant attendance. These sessions addressed how sophisticated cybersecurity threats present new challenges for legal organizations, from AI-driven attacks to vulnerabilities in emerging technologies.

Reporting on ILTACON 2025 from the Gaylord National Resort & Convention Center just outside Washington, DC!

AI Ethics in Legal Writing emerged as a critical intersection between technology adoption and professional responsibility. Ivy Grey of WordRake, recognized as an Influential Woman in Legal Tech by ILTA, led compelling discussions about the ethical implications of using generative AI in legal writing. Her panel explored how lawyers can maintain ethical obligations while leveraging AI tools for document creation, emphasizing the importance of verification, maintaining independent judgment, and ensuring client confidentiality when using AI-assisted writing tools.

Security-AI integration discussions addressed prompt injection attacks, data leakage prevention, and the challenge of educating clients about AI security measures. The "Getting the Most from M365 Copilot: The Do's & Don'ts" session provided practical frameworks for rolling out AI tools while maintaining security protocols.

Document management security revealed concerning trends. Sessions highlighted how firm knowledge is scattered across OneDrive, SharePoint, Teams, and personal folders, making it difficult to locate and use effectively. Security by obscurity no longer works, as AI tools like Copilot can surface documents that were previously hidden by poor organization rather than true security measures.

🔮 Highlight #3: The Future-Forward Mindset Revolution

Keynote speaker Reena SenGupta challenged the industry with her "seven evolutions" framework, urging legal professionals to think of law firms as living organisms rather than rigid hierarchies. Her fungal network metaphor resonated deeply – emphasizing how technology professionals serve as the connective tissue enabling knowledge flow throughout organizations.

Predictive capabilities are replacing reactive approaches. SenGupta showcased how firms are moving from precedent to prediction, with examples like DLA Piper's "Compliance-as-a-Service" product that uses AI to spot minor compliance issues before they become major problems, and Paul Hastings restructuring their white-collar investigations practice around AI-powered anomaly detection.

ILTACON 2025 is celebrating 45 years!

The billable hour debate intensified. The "Bill(AI)ble Hours: The Debate Continues" session explored how AI's efficiency gains might fundamentally alter legal economics, with the audience showing more support for alternative fee arrangements (AFAs) than opposition. The discussion centered on capturing value creation rather than time tracking, though the majority agreed the billable hour wouldn't disappear within the next five years.

Multidisciplinary integration emerged as essential rather than optional. SenGupta described the breakdown of the divide between legal and non-legal roles, citing examples like White & Case's integration of project managers into client teams and DLA Piper's consulting unit working hand-in-glove with lawyers. These cross-functional teams are becoming critical for delivering client value.

🎯 Strategic Takeaways for Legal Professionals

For Solo and Small Firms: While ILTACON traditionally targets larger firms, this year's vendor presentations often included scalable solutions. The key insight? Start with AI tools that integrate with existing workflows rather than requiring complete system overhauls.

For Mid-Size Firms: Investment in knowledge management infrastructure emerged as the critical success factor. The KM Roundtable revealed that firms implementing AI without proper data governance struggle to achieve meaningful results.

For Large Firms: Change management and user adoption dominated discussions. Technical capability matters less than organizational readiness to embrace new workflows. The takeaway from these sessions is that robust workflows and a positive organizational culture are essential building blocks for effective AI adoption.

🔧 Practical Implementation Insights

The most valuable sessions provided actionable frameworks rather than theoretical discussions. The "Actionable AI Strategy & Policy" session offered specific methodologies for balancing governance with flexibility, with speakers emphasizing the need for a malleable but strong foundational governance policy.

Vendor interactions proved particularly valuable. The exhibit hall's "Pirate's Bounty" theme encouraged exploration, and many attendees reported discovering solutions through peer recommendations rather than vendor pitches.

Technology evaluation challenges were evident. The KM Roundtable revealed "POC fatigue" as teams try to evaluate numerous AI tools while managing regular workloads, with general skepticism about which tools will have longevity.

🚢 Looking Ahead: Charting the Course

It was great catching up with The Tech-Savvy Lawyer.Page Podcast Guest (Ep. 109) Jacqueline Schafer, Founder and CEO of Clearbrief!

ILTACON 2025 demonstrated that legal technology has moved from experimental to operational. The question is no longer "Can AI help lawyers?" but rather "How do we implement AI responsibly and effectively?"

The excitement was palpable – and justified. For technology professionals in law, this represents a career-defining moment where their expertise directly impacts firm competitiveness and client service quality.

As we navigate these transformative waters, remember that the real treasure isn't the technology itself. It's the enhanced client service, improved efficiency, and competitive advantages these tools provide when properly implemented.

Next year's ILTACON promises to build on this momentum. Mark your calendars now – this is where the legal profession's technological future gets written, one innovation at a time.

Ready to implement what you learned at ILTACON 2025? Subscribe to The Tech-Savvy Lawyer.Page for ongoing insights and practical guidance on legal technology adoption.