MTC: 2025 Year in Review: The "AI Squeeze," Redaction Disasters, and the Return of Hardware!

As we close the book on 2025, the legal profession finds itself in a dramatically different landscape than the one we predicted back in January. If 2023 was the year of "AI Hype" and 2024 was the year of "AI Experimentation," 2025 has undeniably been the year of the "AI Reality Check."

Here at The Tech-Savvy Lawyer.Page, we have spent the last twelve months documenting the friction between rapid innovation and the stubborn realities of legal practice. From our podcast conversations with industry leaders like Seth Price and Chris Dralla to our deep dives into the ethics of digital practice, one theme has remained constant: Competence is no longer optional; it is survival.

Looking back at our coverage from this past year, three specific highlights stand out as defining moments for legal technology in 2025. These aren't just news items; they are signals of where our profession is heading.

Highlight #1: The "Black Box" Redaction Wake-Up Call

Just days ago, on December 23, 2025, the legal world learned of a catastrophic failure of basic technological competence. As we covered in our recent post, How To: Redact PDF Documents Properly and Recover Data from Failed Redactions: A Guide for Lawyers After the DOJ Epstein Files Release “Leak”, the Department of Justice’s release of the Jeffrey Epstein files became a case study in what not to do.

The failure was simple but devastating: relying on visual "masks" rather than true data sanitization. Tech-savvy readers—and let’s be honest, anyone with a basic knowledge of copy-paste—were able to lift the "redacted" names of associates and victims directly from the PDF.
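To see just how shallow a visual mask is, consider a minimal sketch in Python (assuming the pypdf library and a hypothetical file name): any text "hidden" behind a drawn-on rectangle is still sitting in the PDF's text layer, waiting to be extracted.

```python
# Minimal sketch, assuming Python with pypdf installed ("pip install pypdf").
# "masked_release.pdf" is a hypothetical file "redacted" with drawn-on black boxes.
from pypdf import PdfReader

reader = PdfReader("masked_release.pdf")
for page in reader.pages:
    # extract_text() reads the underlying text layer; a black rectangle drawn
    # on top of a name does not remove the name from that layer.
    print(page.extract_text())
```

If a supposedly redacted name appears in that output, the "redaction" is nothing more than decoration.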

Why this matters for you: This event shattered the illusion that "good enough" tech skills are acceptable in high-stakes litigation. In 2025, we learned that the duty of confidentiality (Model Rule 1.6) is inextricably linked to the duty of technical competence (Model Rule 1.1 and its Comment 8). As we move into 2026, firms must move beyond basic PDF tools and invest in purpose-built redaction software that "burns in" changes and scrubs metadata. If the DOJ can fail this publicly, your firm is not immune.
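For illustration, here is a minimal sketch of what "burning in" looks like under the hood, assuming the PyMuPDF library (imported as fitz) and hypothetical file and search terms. It is a teaching aid, not a substitute for vetted redaction software:

```python
# Minimal sketch using PyMuPDF ("pip install pymupdf"). File names and the
# search term are hypothetical; always verify the output before release.
import fitz  # PyMuPDF

doc = fitz.open("masked_release.pdf")
for page in doc:
    for rect in page.search_for("John Doe"):        # locate every occurrence
        page.add_redact_annot(rect, fill=(0, 0, 0))
    page.apply_redactions()                         # "burn in": deletes the text under each box

doc.set_metadata({})                                # scrub author/title/producer metadata
doc.save("redacted_release.pdf", garbage=4)         # garbage-collect the removed objects
```

After saving, re-run a text-extraction pass (as above) to confirm the names are actually gone, not just hidden.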

Highlight #2: The "AI Squeeze" on Hardware

Throughout the year, we’ve heard complaints about sluggish laptops and crashing applications. In our December 22nd post, The 2026 Hardware Hike: Why Law Firms Must Budget for the 'AI Squeeze' Now, we identified the culprit. It isn’t just your imagination—it’s the supply chain.

We are currently facing a global shortage of DRAM (Dynamic Random Access Memory), driven by the insatiable appetite of data centers powering the very AI models we use daily. Manufacturers like Dell and Lenovo are pivoting their supply to these high-profit enterprise clients, leaving consumer and business laptops with a supply deficit.

Why this matters for you: The era of the 16GB RAM laptop for lawyers is dead. Running local, privacy-focused AI models (a major trend in 2025) and heavy eDiscovery platforms now requires 32GB of RAM as a baseline, and 64GB if you want headroom beyond it. The "AI Squeeze" means that in 2026, hardware will be 15-20% more expensive and harder to find. The lesson? Buy now. If your firm has a hardware refresh cycle planned for Q2 2026, accelerate it to Q1. Budgeting for technology is no longer just about software subscriptions; it’s about securing the physical silicon needed to do your job.
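A quick back-of-the-envelope calculation shows why. The sketch below estimates the RAM a local model needs from its parameter count and precision; the 20% overhead factor for caches and activations is a rough rule of thumb, not a vendor figure.

```python
# Rough RAM estimate for running a language model locally. The 1.2 overhead
# factor (KV cache, activations, runtime) is an assumption, not a vendor spec.
def model_ram_gb(params_billions: float, bytes_per_param: float,
                 overhead: float = 1.2) -> float:
    return params_billions * bytes_per_param * overhead

print(model_ram_gb(7, 2.0))   # ~16.8 GB: a 7B model at fp16 alone swamps a 16GB laptop
print(model_ram_gb(7, 0.5))   # ~4.2 GB: 4-bit quantized it fits, with room for Word and eDiscovery
```

Stack a browser, Outlook, and an eDiscovery platform on top, and 32GB stops looking like a luxury.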

Highlight #3: From "Chat" to "Doing" (The Rise of Agentic AI)

Earlier this year, on the Tech-Savvy Lawyer Podcast, we spoke with Chris Dralla of TypeLaw and discussed the evolution of AI tools. 2025 marked the shift from "Chatbot AI" (asking a bot a question) to "Agentic AI" (telling a bot to do a job).

Tools like TypeLaw didn't just "summarize" cases this year; they actively formatted briefs, checked citations against local court rules, and built tables of authorities with minimal human intervention. This is the "boring" automation we have always advocated for—technology that doesn't try to be a robot lawyer, but acts as a tireless paralegal.

Why this matters for you: The novelty of chatting with an LLM has worn off. The firms winning in 2025 were the ones adopting tools that integrated directly into Microsoft Word and Outlook to automate specific, repetitive workflows. The "Generalist AI" is being replaced by the "Specialist Agent."

Moving Forward: What We Can Learn Today for 2026

As we look toward the new year, the profession must internalize a critical lesson: Technology is a supply chain risk.

Whether it is the supply of affordable memory chips or the supply of secure software that properly handles redactions, you are dependent on your tools. The "Tech-Savvy" lawyer of 2026 is not just a user of technology but a manager of technology risk.

What to Expect in 2026:

Is your firm budgeted for the anticipated 2026 hardware price hike?

  1. The Rise of the "Hybrid Builder": I predict that mid-sized firms will stop waiting for vendors to build the perfect tool and start building their own "micro-apps" on top of secure, private AI models.

  2. Mandatory Tech Competence CLEs: Rigorous enforcement of tech competence rules will likely follow the high-profile data breaches and redaction failures of 2025.

  3. The Death of the Billable Hour (Again?): With "Agentic AI" handling the grunt work of drafting and formatting, clients will aggressively push back on bills for "document review" or "formatting." 2026 will force firms to bill for judgment, not just time.

As we sign off for the last time in 2025, remember our motto: Technology should make us better lawyers, not lazier ones. Check your redactions, upgrade your RAM, and we’ll see you in 2026.

Happy Lawyering and Happy New Year!

🧪🎧 TSL Labs Bonus Podcast: Open vs. Closed AI — The Hidden Liability Trap in Your Firm ⚖️🤖

Welcome to the TSL Labs Podcast Experiment. 🧪🎧 In this special "Deep Dive" bonus episode, we strip away the hype surrounding Generative AI to expose a critical operational risk hiding in plain sight: the dangerous confusion between "Open" and "Closed" AI systems.

Featuring an engaging discussion between our Google Notebook AI hosts, this episode unpacks the "Swiss Army Knife vs. Scalpel" analogy that every managing partner needs to understand. We explore why the "Green Light" tools you pay for are fundamentally different from the "Red Light" public models your staff might be using—and why treating them the same could trigger an immediate breach of ABA Model Rule 5.3. From the "hidden crisis" of AI embedded in Microsoft 365 to the non-negotiable duty to supervise, this is the essential briefing for protecting client confidentiality in the age of algorithms.

In our conversation, we cover the following:

  • [00:00] – Introduction: The hidden danger of AI in law firms.

  • [01:00] – The "AI Gap": Why staff confuse efficiency with confidentiality.

  • [02:00] – The Green Light Zone: Defining secure, "Closed" AI systems (The Scalpel).

  • [03:45] – The Red Light Zone: Understanding "Open" Public LLMs (The Swiss Army Knife).

  • [04:45] – "Feeding the Beast": How public queries actively train the model for everyone else.

  • [05:45] – The Duty to Supervise: ABA Model Rules 5.3 and 1.1[8] implications.

  • [07:00] – The Hidden Crisis: AI embedded in ubiquitous tools (Microsoft 365, Adobe, Zoom).

  • [09:00] – The Training Gap: Why digital natives assume all prompt boxes are safe.

  • [10:00] – Actionable Solutions: Auditing tools and the "Elevator vs. Private Room" analogy.

  • [12:00] – Hallucinations: Vendor liability vs. Professional negligence.

  • [14:00] – Conclusion: The final provocative thought on accidental breaches.


MTC: The Hidden Danger in Your Firm: Why We Must Teach the Difference Between “Open” and “Closed” AI!

Does your staff understand the difference between “free” and “paid” AI? Your license could depend on it!

I sit on an advisory board for a school that trains paralegals. We meet to discuss curriculum. We talk about the future of legal support. In a recent meeting, a presentation by a private legal research company caught my attention. It stopped me cold. The topic was Artificial Intelligence. The focus was on use and efficiency. But something critical was missing.

The lesson did not distinguish between public-facing and private tools. It treated AI as a monolith. This is a dangerous oversimplification. It is a liability waiting to happen.

We are in a new era of legal technology. It is exciting. It is also perilous. The peril comes from confusion. Specifically, the confusion between paid, closed-system legal research tools and public-facing generative AI.

Your paralegals, law clerks, and staff use these tools. They use them to draft emails. They use them to summarize depositions. Do they know where that data goes? Do you?

The Two Worlds of AI

There are two distinct worlds of AI in our profession.

First, there is the world of "Closed" AI. These are the tools we pay for, e.g., Lexis+/Protege, Westlaw Precision, Co-Counsel, Harvey, vLex Vincent, and the like. These platforms are built for lawyers. They are walled gardens. You pay a premium for them. (Always check the terms and conditions of your providers.) That premium buys you more than just access. It buys you privacy. It buys you security. When you upload a case file to Westlaw, it stays there. The AI analyzes it. It does not learn from it for the public. It does not share your client’s secrets with the world. The data remains yours. The confidentiality is baked in.

Then, there is the world of "Open" or "Public" AI. This is ChatGPT. This is Perplexity. This is Claude. These tools are miraculous. But they are also voracious learners.

When you type a query into the free version of ChatGPT, you are not just asking a question. You are training the model. You are feeding the beast. If a paralegal types, "Draft a motion to dismiss for John Doe, who is accused of embezzlement at [Specific Company]," that information leaves your firm. It enters a public dataset. It is no longer confidential.

This is the distinction that was missing from the lesson plan. It is the distinction that could cost you your license.

The Duty to Supervise

Do you and your staff know when you can and can’t use free AI in your legal work?

You might be thinking, "I don't use ChatGPT for client work, so I'm safe." You are wrong.

You are not the only one doing the work. Your staff is doing the work. Your paralegals are doing the work.

Under the ABA Model Rules of Professional Conduct, you are responsible for them. Look at Rule 5.3. It covers "Responsibilities Regarding Nonlawyer Assistance." It is unambiguous. You must make reasonable efforts to ensure your staff's conduct is compatible with your professional obligations.

If your paralegal breaches confidentiality using AI, it is your breach. If your associate hallucinates a case citation using a public LLM, it is your hallucination.

This connects directly to Rule 1.1, Comment 8. This represents the duty of technology competence. You cannot supervise what you do not understand. You must understand the risks associated with relevant technology. Today, that means understanding how Large Language Models (LLMs) handle data.

The "Hidden AI" Problem

I have discussed this on The Tech-Savvy Lawyer.Page Podcast. We call it the "Hidden AI" crisis. AI is creeping into tools we use every day. It is in Adobe. It is in Zoom. It is in Microsoft 365.

Public-facing AI is useful. I use it. I love it for marketing. I use it for brainstorming generic topics. I use it to clean up non-confidential text. But I never trust it with a client's name. I never trust it with a very specific fact pattern.

A paid legal research tool is different. It is a scalpel. It is precise. It is sterile. A public chatbot is a Swiss Army knife found on the sidewalk. It might work. But you don't know where it's been.

The Training Gap

The advisory board meeting revealed a gap. Schools are teaching students how to use AI. They are teaching prompts. They are teaching speed. They are not emphasizing the where.

The "where" matters. Where does the data go?

We must close this gap in our own firms. You cannot assume your staff knows the difference. To a digital native, a text box is a text box. They see a prompt window in Westlaw. They see a prompt window in ChatGPT. They look the same. They act the same.

They are not the same.

One protects you. The other exposes you.

A Practical Solution

I have written about this in my blog posts regarding AI ethics. The solution is not to ban AI. That is impossible. It is also foolish. AI is a competitive advantage.

* Always check the terms of use in your agreements with private platforms to determine if your client confidential data and PII are protected.

The solution is policies and training.

  1. Audit Your Tools. Know what you have. Do you have an enterprise license for ChatGPT? If so, your data might be private. If not, assume it is public.

  2. Train on the "Why." Don't just say "No." Explain the mechanism. Explain that public AI learns from inputs. Use the analogy of a confidential conversation in a crowded elevator versus a private conference room.

  3. Define "Open" vs. "Closed." Create a visual guide. List your "Green Light" tools (Westlaw, Lexis, etc.). List your "Red Light" tools for client data (Free ChatGPT, personal Gmail, etc.).

  4. Supervise Output. Review the work. AI hallucinates. Even paid tools can make mistakes. Public tools make up cases entirely. We have all seen the headlines. Don't be the next headline.

The Expert Advantage

The line between “free” and “paid” AI could determine whether you keep your bar license!

On The Tech-Savvy Lawyer.Page, I often say that technology should make us better lawyers, not lazier ones.

Using Lexis+/Protege, Westlaw Precision, Co-Counsel, Harvey, vLex Vincent, etc. is about leveraging a curated, verified database. It is about relying on authority. Using a public LLM for legal research is about rolling the dice.

Your license is hard-earned. Your reputation is priceless. Do not risk them on a free chatbot.

The lesson from the advisory board was clear. The schools are trying to keep up. But the technology moves faster than the curriculum. It is up to us. We are the supervisors. We are the gatekeepers.

Take time this week. Gather your team. Ask them what tools they use. You might be surprised. Then, teach them the difference. Show them the risks.

Be the tech-savvy lawyer your clients deserve. Be the supervisor the Rules require.

The tools are here to stay. Let’s use them effectively. Let’s use them ethically. Let’s use them safely.

MTC

MTC: Balancing Digital Transparency and Government Employee Safety: The Legal Profession's Ethical Crossroads in the Age of ICE Tracking Apps

The balance between government employee safety and the public’s right to know is always in flux.

The intersection of technology, government transparency, and employee safety has created an unprecedented ethical challenge for the legal profession. Recent developments surrounding ICE tracking applications like ICEBlock, People Over Papers, and similar platforms have thrust lawyers into a complex moral and professional landscape where the traditional principle of "sunlight as the best disinfectant" collides with legitimate security concerns for government employees.

The Technology Landscape: A New Era of Crowdsourced Monitoring

The proliferation of ICE tracking applications represents a significant shift in how citizens monitor government activities. ICEBlock, developed by Joshua Aaron, allows users to anonymously report ICE agent sightings within a five-mile radius, functioning essentially as "Waze for immigration enforcement". People Over Papers, created by TikTok user Celeste, operates as a web-based platform using Padlet technology to crowdsource and verify ICE activity reports with photographs and timestamps. Additional platforms include Islip Forward, which provides real-time push notifications for Suffolk County residents, and Coquí, offering mapping and alert systems for ICE activities.

These applications exist within a broader ecosystem of similar technologies. Traditional platforms like Waze, Google Maps, and Apple Maps have long enabled police speed trap reporting. More controversial surveillance tools include Fog Reveal, which allows law enforcement to track civilian movements using advertising IDs from popular apps. The distinction between citizen-initiated transparency tools and government surveillance technologies highlights the complex ethical terrain lawyers must navigate.

The Ethical Framework: ABA Guidelines and Professional Responsibilities

Legal professionals face multiple competing ethical obligations when addressing these technological developments. ABA Model Rule 1.1 requires lawyers to maintain technological competence, understanding both the benefits and risks associated with relevant technology. This competence requirement extends beyond mere familiarity to encompass the ethical implications of technology use in legal practice.

Rule 1.6's confidentiality obligations create additional complexity when lawyers handle cases involving government employees, ICE agents, or immigration-related matters. The duty to protect client information becomes particularly challenging when technology platforms may compromise attorney-client privilege or expose sensitive personally identifiable information to third parties.

The tension between advocacy responsibilities and ethical obligations becomes acute when lawyers represent clients on different sides of immigration enforcement. Attorneys representing undocumented immigrants may view transparency tools as legitimate safety measures, while those representing government employees may consider the same applications as security threats that endanger their clients.

Balancing Transparency and Safety: The Core Dilemma

Who watches whom? Exploring transparency limits in democracy.

The principle of transparency in government operations serves as a cornerstone of democratic accountability. However, the safety of government employees, including ICE agents, presents legitimate counterbalancing concerns. Federal officials have reported significant increases in assaults against ICE agents, citing these tracking applications as contributing factors.

The challenge for legal professionals lies in advocating for their clients while maintaining ethical standards that protect all parties' legitimate interests. This requires nuanced understanding of both technology capabilities and legal boundaries. Lawyers must recognize that the same transparency tools that may protect their immigrant clients could potentially endanger government employees who are simply performing their lawful duties.

Technology Ethics in Legal Practice: Professional Standards

The legal profession's approach to technology ethics must evolve to address these emerging challenges. Lawyers working with sensitive immigration cases must implement robust cybersecurity measures, understand the privacy implications of various communication platforms, and maintain clear boundaries between personal advocacy and professional obligations.

The ABA's guidance on generative AI and technology use provides relevant frameworks for addressing these issues. Legal professionals must ensure that their technology choices do not inadvertently compromise client confidentiality or create security vulnerabilities that could harm any party to legal proceedings.

Jurisdictional and Regulatory Considerations

The removal of ICEBlock from Apple's App Store and People Over Papers from Padlet demonstrates how private platforms exercise content moderation that can significantly impact government transparency tools. These actions raise important questions about the role of technology companies in mediating between transparency advocates and security concerns.

Legal professionals must understand the complex regulatory environment governing these technologies. Federal agencies like CISA recommend encrypted communications for high-value government targets while acknowledging the importance of government transparency. This creates a nuanced landscape where legitimate security measures must coexist with accountability mechanisms.

Professional Recommendations and Best Practices

Legal practitioners working in this environment should adopt several key practices. First, maintain clear separation between personal political views and professional obligations. Second, implement comprehensive cybersecurity measures that protect all client information, regardless of a client's position in legal proceedings. Third, stay informed about technological developments and their legal implications through continuing education focused on technology law and ethics.

Lawyers should also engage in transparent communication with clients about the risks and benefits of various technology platforms. This includes obtaining informed consent when using technologies that may impact privacy or security, and maintaining awareness of how different platforms handle data security and user privacy.

The legal profession must also advocate for balanced regulatory approaches that protect both government transparency and employee safety. This may involve supporting legislation that creates appropriate oversight mechanisms while maintaining necessary security protections for government workers.

The Path Forward: Ethical Technology Advocacy

The future of legal practice will require increasingly sophisticated approaches to balancing competing interests in our digital age. Legal professionals must serve as informed advocates who understand both the technological landscape and the ethical obligations that govern their profession. This includes recognizing that technology platforms designed for legitimate transparency purposes can be misused, while also acknowledging that government accountability remains essential to democratic governance.

Transparency is a balancing act that all lawyers need to be aware of in their practice!

The legal profession's response to ICE tracking applications and similar technologies will establish important precedents for how lawyers navigate future ethical challenges in our increasingly connected world. By maintaining focus on professional ethical standards while advocating effectively for their clients, legal professionals can help ensure that technological advances serve justice rather than undermining it.

Success in this environment requires lawyers to become technologically literate advocates who understand both the promise and perils of digital transparency tools. Only through this balanced approach can the legal profession effectively serve its clients while maintaining the ethical standards that define professional practice in the digital age.

MTC

MTC: AI Hallucinated Cases Are Now Shaping Court Decisions - What Every Lawyer, Legal Professional and Judge Must Know in 2025!

AI-hallucinated cases are now shaping court decisions - what every lawyer and judge needs to know in 2025.

Artificial intelligence has transformed legal research, but a dangerous new threat has reached judges' chambers: hallucinated case law. On June 30, 2025, the Georgia Court of Appeals delivered a landmark ruling in Shahid v. Esaam that should serve as a wake-up call to every member of the legal profession: AI hallucinations are no longer just embarrassing mistakes—they are actively influencing court decisions and undermining the integrity of our judicial system.

The Georgia Court of Appeals Ruling: A Watershed Moment

The Shahid v. Esaam decision represents the first documented case where a trial court's order was based entirely on non-existent case law, likely generated by AI tools. The Georgia Court of Appeals found that the trial court's order denying a motion to reopen a divorce case relied upon two fictitious cases, and the appellee's brief contained an astounding 11 bogus citations out of 15 total citations. The court imposed a $2,500 penalty on attorney Diana Lynch—the maximum allowed under GA Court of Appeals Rule 7(e)(2)—and vacated the trial court's order entirely.

What makes this case particularly alarming is not just the volume of fabricated citations, but the fact that these AI-generated hallucinations were adopted wholesale without verification by the trial court. The court specifically referenced Chief Justice John Roberts' 2023 warning that "any use of AI requires caution and humility".

The Explosive Growth of AI Hallucination Cases

The Shahid case is far from isolated. Legal researcher Damien Charlotin has compiled a comprehensive database tracking over 120 cases worldwide where courts have identified AI-generated hallucinations in legal filings. The data reveals an alarming acceleration: while there were only 10 cases documented in 2023, that number jumped to 37 in 2024, and an astounding 73 cases have already been reported in just the first five months of 2025.

Perhaps most concerning is the shift in responsibility. In 2023, seven of the ten documented hallucination cases were filed by pro se litigants, with only three attributed to lawyers. However, by May 2025, legal professionals were found to be at fault in at least 13 of 23 cases where AI errors were discovered. This trend indicates that trained attorneys—who should know better—are increasingly falling victim to AI's deceptive capabilities.

High-Profile Cases and Escalating Sanctions

Always check your research - you don’t want to get in trouble with your client, the judge or the bar!

The crisis has intensified with high-profile sanctions. In May 2025, a special master in California imposed a staggering $31,100 sanction against law firms K&L Gates and Ellis George for what was termed a "collective debacle" involving AI-generated research. The case involved attorneys who used multiple AI tools including CoCounsel, Westlaw Precision, and Google Gemini to generate a brief, with approximately nine of the 27 legal citations proving to be incorrect.

Even more concerning was the February 2025 case involving Morgan & Morgan—the largest personal injury firm in the United States—where attorneys were sanctioned for a motion citing eight nonexistent cases. The firm subsequently issued an urgent warning to its more than 1,000 lawyers that using fabricated AI information could result in termination.

The Tech-Savvy Lawyer.Page: Years of Warnings

The risks of AI hallucinations in legal practice have been extensively documented by experts in legal technology. I’ve been sounding the alarm at The Tech-Savvy Lawyer.Page Blog and Podcast about these issues for years. In a blog post titled "Why Are Lawyers Still Failing at AI Legal Research? The Alarming Rise of AI Hallucinations in Courtrooms," I detailed how even advanced legal AI platforms can generate plausible but fake authorities.

My comprehensive coverage has included reviews of specific platforms, such as the November 2024 analysis "Lexis+ AI™️ Falls Short for Legal Research," which documented how even purpose-built legal AI tools can cite non-existent legislation. This site's consistent message has been clear: AI is a collaborator, not an infallible expert.

International Recognition of the Crisis

The problem has gained international attention, with the London High Court issuing a stark warning in June 2025 that attorneys who use AI to cite non-existent cases could face contempt of court charges or even criminal prosecution. Justice Victoria Sharp warned that "in the most severe instances, intentionally submitting false information to the court with the aim of obstructing the course of justice constitutes the common law criminal offense of perverting the course of justice".

The Path Forward: Critical Safeguards

Based on extensive research and mounting evidence, several key recommendations emerge for legal professionals:

For Individual Lawyers:

Lawyers need to be diligent and make sure their case citations are not only accurate but real!

  • Never use general-purpose AI tools like ChatGPT for legal research without extensive verification

  • Implement mandatory verification protocols for all AI-generated content (one automated check is sketched after this list)

  • Obtain specialized training on AI limitations and best practices

  • Consider using only specialized legal AI platforms with built-in verification mechanisms
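To make the verification protocol flagged in the list above concrete, one option is to batch-check a draft's citations against a public case-law database before filing. The Python sketch below assumes the requests library and CourtListener's citation-lookup REST endpoint; the endpoint path, auth scheme, and response fields are assumptions to confirm against the current API documentation, and an unmatched citation still demands human review.

```python
# Hedged sketch: POST a brief's text to CourtListener's citation-lookup API
# and flag anything the database cannot match. The endpoint path, auth scheme,
# and response fields are assumptions -- confirm against the current API docs.
import requests

API = "https://www.courtlistener.com/api/rest/v3/citation-lookup/"

def flag_unmatched_citations(brief_text: str, api_token: str) -> None:
    resp = requests.post(
        API,
        data={"text": brief_text},
        headers={"Authorization": f"Token {api_token}"},
        timeout=30,
    )
    resp.raise_for_status()
    for hit in resp.json():
        matched = bool(hit.get("clusters"))  # matched cases, per the assumed schema
        status = "matched" if matched else "NOT FOUND - verify by hand before filing"
        print(f"{hit.get('citation')}: {status}")
```

An unmatched citation is not proof of fabrication, but it is precisely the red flag that sank the briefs in Shahid and the K&L Gates matter.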

For Courts:

  • Implement consistent disclosure requirements for AI use in court filings

  • Develop verification procedures for detecting potential AI hallucinations

  • Provide training for judges and court staff on AI technology recognition

FINAL THOUGHTS

The legal profession is at a crossroads. AI can enhance efficiency, but unchecked use can undermine the integrity of the justice system. The solution is not to abandon AI, but to use it wisely with appropriate oversight and verification. The warnings from The Tech-Savvy Lawyer.Page and other experts have proven prescient—the question now is whether the profession will heed these warnings before the crisis deepens further.

MTC

Happy Lawyering!

🚨 BOLO: Android Ad Fraud Malware and Your ABA Ethical Duties – What Every Lawyer Must Know in 2025 🚨

Defend Client Data from Malware!

The discovery of the “Kaleidoscope” ad fraud malware targeting Android devices is a wake-up call for legal professionals. This threat, which bombards users with unskippable ads and exploits app permissions, is not just an annoyance - it is a direct risk to client confidentiality, law firm operations, and compliance with the ABA Model Rules of Professional Conduct. Lawyers must recognize that cybersecurity is not optional; it is an ethical mandate under the ABA Model Rules, including Rules 1.1, 1.3, 1.4, 1.6, 5.1, and 5.3.

Why the ABA Model Rules Matter

  • Rule 1.6 (Confidentiality): Lawyers must make reasonable efforts to prevent unauthorized disclosure of client information. A compromised device can leak confidential data, violating this core duty.

  • Rule 1.1 (Competence): Competence now includes understanding and managing technological risks. Lawyers must stay abreast of threats like Kaleidoscope and take appropriate precautions.

  • Rule 1.3 (Diligence): Prompt action is required to investigate and remediate breaches, protecting client interests.

  • Rule 1.4 (Communication): Lawyers must communicate risks and safeguards to clients, including the potential for data breaches and the steps being taken to secure information.

  • Rules 5.1 & 5.3 (Supervision): Law firm leaders must ensure all personnel, including non-lawyers, adhere to cybersecurity protocols.

Practical Steps for Lawyers – Backed by Ethics and The Tech-Savvy Lawyer.Page

Lawyers: Secure Your Practice Now!

  • Download Only from Trusted Sources: Only install apps from the Google Play Store, leveraging its built-in protections. Avoid third-party stores, the main source of Kaleidoscope infections.

  • Review App Permissions: Be vigilant about apps requesting broad permissions, such as “Display over other apps.” These can enable malware to hijack your device.

  • Secure Devices: Use strong, unique passwords, enable multi-factor authentication, and encrypt devices. These are simple but essential steps emphasized in our blog posts on VPNs and in ABA guidance.

  • Update Regularly: Keep your operating system and apps up to date to patch vulnerabilities.

  • Educate and Audit: Train your team about mobile threats and run regular security audits, as highlighted in Cybersecurity Awareness Month posts on The Tech-Savvy Lawyer.Page.

  • Incident Response: Have a plan for responding to breaches, as required by ABA Formal Opinion 483 and best practices.

  • Communicate with Clients: Discuss with clients how their information is protected and notify them promptly in the event of a breach, as required by Rule 1.4 and ABA opinions.

  • Label Confidential Communications: Mark sensitive communications as “privileged” or “confidential,” per ABA guidance.

Advanced Strategies

Lawyers need to have security measures in place to protect client data!

  • Leverage AI-Powered Security: Use advanced tools for real-time threat detection, as recommended by The Tech-Savvy Lawyer.Page.

  • VPN and Secure Networks: Avoid public Wi-Fi; when you must use it, connect through a VPN (see The Tech-Savvy Lawyer.Page articles on VPNs) to protect data in transit.

  • Regular Backups: Back up data to mitigate ransomware and other attacks.

By following these steps, lawyers fulfill their ethical duties, protect client data, and safeguard their practice against evolving threats like Kaleidoscope.

🎙️ Ep. 109: Building Trust in Legal AI - Clearbrief's Jacqueline Schafer on Security, Citations, and the Future of Law.

I'm joined by Jacqueline Schafer, Founder and CEO of Clearbrief.ai. Jacqueline shares key insights into how legal professionals can effectively leverage AI. She outlines three essential expectations from legal AI assistants: robust security, accurate and verifiable citations, and seamless integration into legal workflows. Jacqueline addresses common misconceptions about AI, encourages responsible use, and highlights Clearbrief's unique features, including its seamless integration with Microsoft Word and AI-driven document analysis. With a focus on ethics, usability, and innovation, Jacqueline also provides a clear vision for the future of AI in legal practice.

Join Jacqueline and me as we discuss the following three questions and more!

  1. What are the top three things lawyers should expect from their legal AI assistant?

  2. What are the top three ways clearbrief.ai differentiates from its competitors?

  3. Regardless of what a lawyer uses, what are the top three things lawyers need to be mindful of regarding their legal and ethical responsibilities?

In our conversation, we cover the following:

[01:09] Jacqueline's Tech Setup

[02:33] Top Three Expectations from Legal AI Assistants

[06:58] Top Three Things Lawyers Should Not Expect from Legal AI Assistants

[08:36] Clearbrief's Unique Features and Differentiators

[17:37] Ethical Responsibilities and Training Staff
