MTC: PornHub Breach: Cybersecurity Wake-Up Call for Lawyers

Lawyers are the first-line defenders of their clients’ PII.

It's the start of the New Year, and as good a time as any to remind the legal profession of its cybersecurity obligations! The recent PornHub data exposure reveals critical vulnerabilities every lawyer has an ethical duty to address. Third-party analytics provider Mixpanel suffered a breach compromising user email addresses, triggering targeted sextortion campaigns. This incident illuminates three core security domains for legal professionals while highlighting specific duties under ABA Model Rules 1.1, 1.6, 5.1, 5.3, and Formal Opinion 483.

Understanding the Breach and Its Legal Implications

The PornHub incident demonstrates how failures by third-party vendors can lead to cascading security consequences. When Mixpanel's systems were compromised, attackers gained access to email addresses that now fuel sextortion schemes. Criminals threaten to expose purported adult site usage unless victims pay cryptocurrency ransoms. For law firms, this scenario is not hypothetical—your practice management software, cloud storage providers, and analytics tools present identical vulnerabilities. Each third-party vendor represents a potential entry point for attackers targeting your client data.

ABA Model Rule 1.1: The Foundation of Technology Competence

ABA Model Rule 1.1 requires lawyers to provide competent representation, and Comment 8 explicitly extends this duty to technology: "To maintain the requisite knowledge and skill, a lawyer should keep abreast of changes in the law and its practice, including the benefits and risks associated with relevant technology". This is not a suggestion—it is an ethical mandate. Forty-one jurisdictions have adopted this technology competence requirement into their professional conduct rules.

What does this mean practically? You must understand the security implications of every technology tool your firm uses. Before onboarding any platform, conduct due diligence on the vendor's security practices. Require SOC 2 compliance, cyber insurance verification, and detailed security questionnaires. The "reasonable efforts" standard does not demand perfection, but it does require informed decision-making. You cannot delegate technology competence entirely to IT consultants. You must understand enough to ask the right questions and evaluate the answers meaningfully.

ABA Model Rule 1.6: Safeguarding Client Information in Digital Systems

Rule 1.6 establishes your duty of confidentiality: a lawyer must "make reasonable efforts to prevent the inadvertent or unauthorized disclosure of, or unauthorized access to, information relating to the representation of a client" (Rule 1.6(c), elaborated in Comment 18). This duty extends beyond privileged communications to all client-related information stored digitally.

The PornHub breach illustrates why this matters. Your firm's email system, document management platform, and client portals contain information criminals actively target. The "reasonable efforts" analysis considers the sensitivity of information, likelihood of disclosure without additional safeguards, cost of safeguards, and difficulty of implementation. For most firms, this means mandatory multi-factor authentication (MFA) on all systems, encryption for data at rest and in transit, and secure file-sharing platforms instead of email attachments.

You must also address third-party vendor access under Rule 1.6. When you grant a case management platform access to client data, you remain ethically responsible for protecting that information. Your engagement letters should specify security expectations, and vendor contracts must include confidentiality obligations and breach notification requirements.

ABA Model Rules 5.1 and 5.3: Supervisory Responsibilities Extend to Technology

Lawyers need to stay up to date on the security protocols for their firm’s software!

Rule 5.1 imposes duties on partners and supervisory lawyers to ensure the firm has measures giving "reasonable assurance that all lawyers in the firm conform to the Rules of Professional Conduct". Rule 5.3 extends this duty to nonlawyer assistants, which courts and ethics opinions have interpreted to include technology vendors and cloud service providers.

If you manage a firm or supervise other lawyers, you must implement technology policies and training programs. This includes security awareness training, password management requirements, and incident reporting procedures. You cannot assume your younger associates understand cybersecurity best practices—they need explicit training and clear policies.

For nonlawyer assistance, you must "make reasonable efforts to ensure that the person's conduct is compatible with the professional obligations of the lawyer". This means vetting your IT providers, requiring them to maintain appropriate security certifications, and ensuring they understand their confidentiality obligations. Your vendor management program is an ethical requirement, not just a business best practice.

ABA Formal Opinion 483: Data Breach Response Requirements

ABA Formal Opinion 483 establishes clear obligations when a data breach occurs. Lawyers have a duty to monitor for breaches, stop and mitigate damage promptly, investigate what occurred, and notify affected clients. This duty arises from Rules 1.1 (competence), 1.6 (confidentiality), and 1.4 (communication).

The Opinion requires you to have a written incident response plan before a breach occurs. Your plan must identify who will coordinate the response, how you will communicate with affected clients (including backup communication methods if email is compromised), and what steps you will take to assess and remediate the breach. You must document what data was accessed, whether malware was used, and whether client information was taken, altered, or destroyed.

Notification to clients is mandatory when a breach involves material client confidential information. The notification must be prompt and include what happened, what information was involved, what you are doing in response, and what clients should do to protect themselves. This duty extends to former clients in many circumstances, as their files may still contain sensitive information subject to state data breach laws.

Three Security Domains: Personal, Practice, and Client Protection

Your Law Practice's Security
Under Rules 5.1 and 5.3, you must implement reasonable security measures throughout your firm. Conduct annual cybersecurity risk assessments. Require MFA on all systems. Implement data minimization principles—only share what vendors absolutely need. Establish incident response protocols before breaches occur. Your supervisory duties require you to ensure that all firm personnel, including non-lawyer staff, understand and follow the firm's security policies.

Client Security Obligations
Rule 1.4 requires you to keep clients reasonably informed, which includes advising them on security matters relevant to their representation. Clients experiencing sextortion need immediate, informed guidance. Preserve all threatening emails with headers intact. Document timestamps and demands. Advise clients never to pay or respond—payment confirms active monitoring and often leads to additional demands. Report incidents to the FBI's Internet Crime Complaint Center (IC3) and local cybercrime divisions. For family law practitioners, understand that sextortion often targets vulnerable individuals during contentious proceedings. Criminal defense attorneys must recognize these threats as extortion, not mere embarrassment. Your competence under Rule 1.1 requires you to understand these threats well enough to provide effective guidance.

Personal Digital Hygiene
Your personal email account is your digital identity's master key. Enable MFA on all professional and personal accounts. Use unique, complex passwords managed through a password manager. Consider pseudonymous email addresses for sensitive subscriptions. Separate your litigation communications from personal browsing activities. The STOP framework applies: Slow down, Test suspicious contacts, Opt out of high-pressure conversations, and Prove identities through independent channels. Your personal security failures can compromise your professional obligations under Rule 1.6.

Practical Implementation Steps

Here are five practical implementation steps lawyers can take today to make their practice cyber compliant!

First, conduct a technology audit to map every system that stores or accesses client information. Identify all third-party vendors and assess their security practices against industry standards.

Second, implement MFA across all systems immediately—this is one of the most effective and cost-efficient security controls available.
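For readers who want to see what happens behind that six-digit prompt, here is a minimal sketch of time-based one-time passwords (TOTP), the mechanism most authenticator apps use. It assumes the open-source pyotp Python library; the account and firm names are hypothetical placeholders.

# A minimal sketch of TOTP-based MFA, assuming the open-source
# pyotp library (pip install pyotp). Names are hypothetical.
import pyotp

# Enrollment: generate a per-user secret and a provisioning URI that
# an authenticator app imports (usually via QR code).
secret = pyotp.random_base32()
totp = pyotp.TOTP(secret)
print(totp.provisioning_uri(name="associate@examplefirm.com",
                            issuer_name="Example Law Firm"))

# Login: the user submits the six-digit code from the app, and the
# server verifies it against the shared secret and current time window.
code = totp.now()  # stands in for the code the user types
print("MFA check passed:", totp.verify(code))

The takeaway for the non-coder: the code rotates every thirty seconds and derives from a secret stored on the user's device, which is why MFA defeats a stolen password.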

Third, develop written security policies covering password management, device encryption, remote work procedures, and incident response.

Fourth, train all firm personnel on these policies and conduct simulated phishing exercises to test awareness.

Fifth, review and update your engagement letters to include technology provisions and breach notification procedures.

Conclusion

The PornHub breach is not an isolated incident—it is a template for how modern attacks occur through third-party vendors. Your ethical duties under ABA Model Rules require proactive cybersecurity measures, not reactive responses after a breach. Technology competence under Rule 1.1, confidentiality protection under Rule 1.6, supervisory responsibilities under Rules 5.1 and 5.3, and breach response obligations under Formal Opinion 483 together create a comprehensive framework for protecting your practice and your clients. Cybersecurity is no longer an IT issue delegated to consultants; it is a core professional competency that affects your license to practice law. The time to act is before your firm appears in a breach notification headline.

Words of the Week: “ANTHROPIC” VS. “AGENTIC”: UNDERSTANDING THE DISTINCTION IN LEGAL TECHNOLOGY 🔍

Lawyers need to know the difference: Anthropic vs. agentic.

The terms "Anthropic" and "agentic" circulate frequently in legal technology discussions. They sound similar. They appear in the same articles. Yet they represent fundamentally different concepts. Understanding the distinction matters deeply for legal practitioners seeking to leverage artificial intelligence effectively.

Anthropic is a company—specifically, an AI safety-focused organization that develops large language models, most notably Claude. Think of Anthropic as a technology provider. The company pioneered "Constitutional AI," a training methodology that embeds explicit principles into AI systems to guide their behavior toward helpfulness, harmlessness, and honesty. When you use Claude for legal research or document drafting, you are using a product built by Anthropic.

Agentic describes a category of AI system architecture and capability—not a company or product. Agentic systems operate autonomously, plan multi-step tasks, make decisions dynamically, and execute workflows with minimal human intervention. An agentic system can break down complex assignments, gather information, refine outputs, and adjust its approach based on changing circumstances. It exercises judgment about which tools to deploy and when to escalate matters to human oversight.

"Constitutional AI" is an ai training methodology promoting helpfulness, harmlessness, and honesty in ai programing

The relationship between these concepts becomes clearer through a practical scenario. Imagine you task an AI system with analyzing merger agreements from a target company. A non-agentic approach requires you to provide explicit instructions for each step: search the database, extract key clauses, compare terms against templates, and prepare a summary. You guide the process throughout. An agentic approach allows you to assign a goal—"Review these contracts, flag risks, and prepare a risk summary"—and the AI system formulates its own research plan, prioritizes which documents to examine first, identifies gaps requiring additional information, and works through the analysis independently, pausing only when human judgment becomes necessary.
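For the technically curious, the difference is visible in the control flow itself. Below is a minimal, hypothetical Python sketch of an agentic loop; none of these functions belong to any vendor's actual API, they merely stand in for model and tool calls. The point is that the system, not the user, sequences the steps and decides when to pause for human review.

# A minimal, hypothetical sketch of an agentic control loop.
# Every function is a stand-in for calls to a model and its tools.

def plan(goal):
    # Ask the model to break the goal into ordered steps.
    return ["gather contracts", "extract key clauses",
            "flag risks", "draft risk summary"]

def execute(step, context):
    # Run one step with whatever tool fits (search, extraction, drafting).
    return context + ["completed: " + step]

def needs_human(step):
    # Escalate when judgment, privilege, or client impact is involved.
    return "risk" in step

def run_agent(goal):
    context = []
    for step in plan(goal):
        if needs_human(step):
            print("Pausing for attorney review before:", step)
        context = execute(step, context)
    return context

print(run_agent("Review these contracts, flag risks, and prepare a risk summary"))

A non-agentic workflow, by contrast, is you calling each of those functions yourself, in the order you choose.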

Anthropic builds AI models capable of agentic behavior. Claude, Anthropic's flagship model, can function as an agentic system when configured appropriately. However, Anthropic's models can also operate in simpler, non-agentic modes. You might use Claude to answer a direct question or draft a memo without any agentic capability coming into play. The capability exists within Anthropic's models, but agentic functionality remains optional depending on your implementation.

They work together as follows: Anthropic provides the underlying AI model and the training methodology emphasizing constitutional principles. That foundation becomes the engine powering agentic systems. The Constitutional AI approach matters specifically for agentic applications because autonomous systems require robust safeguards. As AI systems operate more independently, explicit principles embedded during training help ensure they remain aligned with human values and institutional requirements. Legal professionals cannot simply deploy an autonomous AI agent without trust in its underlying decision-making framework.

Agentic vs. Anthropic: Know the Difference. Shape the Future of Law!

For legal practitioners, the distinction carries practical implications. You evaluate Anthropic as a vendor when selecting which AI provider's tools to adopt. You evaluate agentic architecture when deciding whether your specific use case requires autonomous task execution or whether simpler, more directed AI assistance suffices. Many legal workflows benefit from direct AI support without requiring full autonomy. Others—such as high-volume contract analysis during due diligence—leverage agentic capabilities to move work forward rapidly.

Both elements represent genuine advances in legal technology. Recognizing the difference positions you to make informed decisions about tool adoption and appropriate implementation for your practice. ✅

🎙️Ep. 128, Building a Tech-Forward Law Firm: AI Intake, CRM Strategy & Client Experience with Colleen Joyce!

My next guest is Colleen Joyce, CEO of Lawyer.com, a leading legal marketplace that connects over one million consumers monthly with qualified attorneys nationwide. With nearly two decades of experience transforming how law firms leverage technology and marketing, Colleen has pioneered innovations including LawyerLine call intake services, AI-powered matching technology, and the Lawyer Growth Summit. She publishes the Fast Five newsletter every Tuesday, reaching over 20,000 legal professionals with insights on AI trends, business growth strategies, and practice management. In this episode, Colleen shares her expertise on the essential technologies modern law firms need to scale profitably, how AI is revolutionizing client intake processes, and the critical human touchpoints that should never be automated in legal practice.

💬 Join Colleen Joyce and me as we discuss the following three questions and more!

1.     Beyond the essential lead generation that Lawyer.com provides, you see thousands of firms succeed and fail based on their operational efficiency. If you were building a modern law firm from scratch today, what are the top three non-negotiable technologies (for example, specific CRM automations, financial analytics, or project management tools) you would implement immediately to ensure the firm scales profitably rather than just chaotically?

2.     We know AI is reshaping the top of the funnel for legal consumers. Based on the data you're seeing from your new AI initiatives, what are the top three specific intake bottlenecks that AI can now solve better than a human receptionist, freeing attorneys to focus on high-value legal work rather than data entry or basic screening?

3.     Technology can handle logistics, but it can't handle the emotion of a legal crisis. From your experience overseeing millions of consumer connections, what are the top three human touchpoints in the client lifecycle that a lawyer should never automate because they are crucial for building the trust and transparency that lead to long-term referrals?

In our conversation, we cover the following:

-      00:00:00 - Welcome and Introduction to Colleen Joyce

-      00:00:20 - Colleen's Current Tech Setup: MacBook Pro, iPhone 16, iPad, and Curved Monitor

-      00:01:00 - Discussion about iPhone Models and AppleCare Benefits

-      00:02:00 - Using Plaud AI for Recording Conversations

-      00:03:00 - MacBook Pro Specifications and Upgrade Recommendations

-      00:04:00 - Dell Curved Monitor Benefits for Focus and Productivity

-      00:05:00 - Question 1: Top Three Non-Negotiable Technologies for Modern Law Firms

-      00:06:00 - Intake Technology, CRM, and Practice Management Systems

-      00:07:00 - Balancing Cost and Technology for New Lawyers

-      00:08:00 - Leveraging Freemium Tools and AI for Budget-Conscious Firms

-      00:08:30 - Question 2: AI Solutions for Intake Bottlenecks

-      00:09:00 - Answering Phones with Empathetic AI Agents

-      00:10:00 - Importance of Legal-Specific AI Training

-      00:11:00 - Consumer Adoption and Resistance to AI vs. Human Agents

-      00:12:00 - Using Virtual Receptionists and Calendly for Scheduling

-      00:13:00 - Generational Differences in Technology Adoption

-      00:14:00 - The Evolution of Legal Technology Adoption Over 14 Years

-      00:15:00 - Question 3: Human Touchpoints That Should Never Be Automated

-      00:16:00 - Relationship Building and the Courting Period

-      00:17:00 - Screening Clients Through Your Tech Processes

-      00:18:00 - Where to Find Colleen: LinkedIn and the Fast Five Newsletter

-      00:18:30 - Closing Remarks and Gratitude

---

📚 Resources

🤝 Connect with Colleen Joyce

•  LinkedIn: https://www.linkedin.com/in/colleenjoyce

•  Lawyer.com: https://www.lawyer.com

•  Lawyer.com Services: https://services.lawyer.com

•  Fast Five Newsletter (Published Tuesdays): https://www.linkedin.com/newsletters/fast-five-fridays-7265815097552326656

•  Lawyer Growth Summit: https://lawyergrowthsummit.com

•  Lawyer.com Phone: 800-620-0900

•  Lawyer.com Address: 25 Mountainview Boulevard, Basking Ridge, NJ 07920

📖 Mentioned in the Episode

•  MacRumors Buyer's Guide: https://buyersguide.macrumors.com

•  LawyerLine (24-hour Intake Services): https://www.lawyerline.ai/

🖥 Hardware Mentioned in the Conversation

•  MacBook Pro: https://www.apple.com/macbook-pro/

•  MacBook Pro with M4/M5 Chips (Upgrade recommendation): https://www.apple.com/macbook-pro/

•  iPhone 16: https://www.apple.com/iphone-16/

•  iPad: https://www.apple.com/ipad/

•  Dell Curved Monitor (22-24 inch, white): https://www.dell.com/monitors

•  HP Printer (with automatic duplex printing): https://www.hp.com/printers

☁ Software & Cloud Services Mentioned in the Conversation

•  Plaud AI (Call Recording & Transcription): https://www.plaud.ai

•  Slack (Team Communication Platform): https://slack.com

•  iMessage (Apple Messaging): https://support.apple.com/en-us/104969

•  Calendly (Scheduling Software): https://calendly.com

•  Monday.com (Project Management & Team Organization): https://monday.com

•  ChatGPT (AI Assistant): https://openai.com/chatgpt

•  AppleCare (Apple Device Protection): https://www.apple.com/support/applecare/

🎙️📘 Quick reminder: The Lawyer’s Guide to Podcasting releases NEXT WEEK!

Inside title page of The Lawyer’s Guide to Podcasting, releasing January 12, 2026.

If you want a podcast that sounds professional without turning your week into a production project, this book is built for you. It’s practical. It’s workflow-first. It keeps ethics and confidentiality in view. 🔐⚖️

✅ Inside you’ll learn:

  • How to choose a podcast format that fits your goals 🎯

  • A simple, reliable setup that sounds credible 🎤

  • Recording habits that reduce editing time ⏱️

  • Repurposing steps so one episode powers your content plan ♻️

📩 Want the release link the moment it’s live? Email Admin@TheTechSavvyLawyer.Page with subject “Book Link.” I’ll send it on launch day. 🚀

MTC🪙🪙:  When Reputable Databases Fail: What Lawyers Must Do After AI Hallucinations Reach the Court

What should a lawyer do when they inadvertENTLY USE A HALLUCINATED CITE?

In a sobering December 2025 filing in Integrity Investment Fund, LLC v. Raoul, plaintiff's counsel disclosed what many in the legal profession feared: even reputable legal research platforms can generate hallucinated citations. The Motion to Amend Complaint revealed that "one of the cited cases in the pending Amended Complaint could not be found," along with other miscited cases, despite the legal team using LexisNexis and Lexis+ Document Analysis tools rather than general-purpose AI like ChatGPT. The attorney expressed being "horrified" by these inexcusable errors, but horror alone does not satisfy ethical obligations.

This case crystallizes a critical truth for the legal profession: artificial intelligence remains a tool requiring rigorous human oversight, not a substitute for attorney judgment. When technology fails—and Stanford research confirms it fails at alarming rates—lawyers must understand their ethical duties and remedial obligations.

The Scope of the Problem: Even Premium Tools Hallucinate

Legal AI vendors marketed their products as hallucination-resistant, leveraging retrieval-augmented generation (RAG) technology to ground responses in authoritative legal databases. Yet as reported in our 📖 WORD OF THE WEEK YEAR🥳:  Verification: The 2025 Word of the Year for Legal Technology ⚖️💻, independent testing by Stanford's Human-Centered Artificial Intelligence program and RegLab reveals persistent accuracy problems. Lexis+ AI produced incorrect information 17% of the time, while Westlaw's AI-Assisted Research hallucinated at nearly double that rate—34% of queries.

These statistics expose a dangerous misconception: that specialized legal research platforms eliminate fabrication risks. The Integrity Investment Fund case demonstrates that attorneys using established, subscription-based legal databases still face citation failures. Courts nationwide have documented hundreds of cases involving AI-generated hallucinations, with 324 incidents in U.S. federal, state, and tribal courts as of late 2025. Legal professionals can no longer claim ignorance about AI limitations.

The consequences extend beyond individual attorneys. As one federal court warned, hallucinated citations that infiltrate judicial opinions create precedential contamination, potentially "sway[ing] an actual dispute between actual parties"—an outcome the court described as "scary". Each incident erodes public confidence in the justice system and, as one commentator noted, "sets back the adoption of AI in law".

The Ethical Framework: Three Foundational Rules

When attorneys discover AI-generated errors in court filings, three Model Rules of Professional Conduct establish clear obligations.

ABA Model Rule 1.1 mandates technological competence. The 2012 amendment to Comment 8 requires lawyers to "keep abreast of changes in the law and its practice, including the benefits and risks associated with relevant technology". Forty-one jurisdictions have adopted this technology competence requirement. This duty is ongoing and non-delegable. Attorneys cannot outsource their responsibility to understand the tools they deploy, even when those tools carry premium price tags and prestigious brand names.

Technological competence means understanding that current AI legal research tools hallucinate at rates ranging from 17% to 34%. It means recognizing that longer AI-generated responses contain more falsifiable propositions and therefore pose a greater risk of hallucination. It means implementing verification protocols rather than accepting AI output as authoritative.

ABA Model Rule 3.3 requires candor toward the tribunal. This rule prohibits knowingly making false statements of law or fact to a court and imposes an affirmative duty to correct false statements previously made. The duty continues until the conclusion of the proceeding. Critically, courts have held that the standard under Federal Rule of Civil Procedure 11 is objective reasonableness, not subjective good faith. As one court stated, "An attorney who acts with 'an empty head and a pure heart' is nonetheless responsible for the consequences of his actions".

When counsel in Integrity Investment Fund discovered the miscitations, filing a Motion to Amend Complaint fulfilled this corrective duty. The attorney took responsibility and sought to rectify the record before the court relied on fabricated authority. This represents the ethical minimum. Waiting for opposing counsel or the court to discover errors invites sanctions and disciplinary referrals.

The duty of candor applies regardless of how the error originated. In Kaur v. Desso, a Northern District of New York court rejected an attorney's argument that time pressure justified inadequate verification, stating that "the need to check whether the assertions and quotations generated were accurate trumps all". Professional obligations do not yield to convenience or deadline stress.

ABA Model Rules 5.1 and 5.3 establish supervisory responsibilities. Managing attorneys must ensure that subordinate lawyers and non-lawyer staff comply with the Rules of Professional Conduct. When a supervising attorney has knowledge of specific misconduct and ratifies it, the supervisor bears responsibility. This principle extends to AI-assisted work product.

The Integrity Investment Fund matter reportedly involved an experienced attorney assisting with drafting. Regardless of delegation, the signing attorney retains ultimate accountability. Law firms must implement training programs on AI limitations, establish mandatory review protocols for AI-generated research, and create policies governing which tools may be used and under what circumstances. Partners reviewing junior associate work must apply heightened scrutiny to AI-assisted documents, treating them as first drafts requiring comprehensive validation.

Federal Rule of Civil Procedure 11: The Litigation Hammer

Reputable databases can hallucinate too!

Beyond professional responsibility rules, Federal Rule of Civil Procedure 11 authorizes courts to impose sanctions on attorneys who submit documents without a reasonable inquiry into the facts and law. Courts may sanction the attorney, the party, or both. Sanctions range from monetary penalties paid to the court or opposing party to non-monetary directives, including mandatory continuing legal education, public reprimands, and referrals to disciplinary authorities.

Rule 11 contains a 21-day safe harbor provision. Before filing a sanctions motion, the moving party must serve the motion on opposing counsel, who has 21 days to withdraw or correct the challenged filing. If counsel promptly corrects the error during this window, sanctions may be avoided. This procedural protection rewards attorneys who implement monitoring systems to catch mistakes early.

Courts have imposed escalating consequences as AI hallucination cases proliferate. Early cases resulted in warnings or modest fines. Recent sanctions have grown more severe. A Colorado attorney received a 90-day suspension after admitting in text messages that he failed to verify ChatGPT-generated citations. An Arizona federal judge sanctioned an attorney and required her to personally notify three federal judges whose names appeared on fabricated opinions, revoked her pro hac vice admission, and referred her to the Washington State Bar Association. A California appellate court issued a historic fine after discovering 21 of 23 quotes in an opening brief were fake.

Morgan & Morgan—the 42nd largest law firm by headcount—faced a $5,000 sanction when attorneys filed a motion citing eight nonexistent cases generated by an internal AI platform. The court divided the sanction among three attorneys, with the signing attorney bearing the largest portion. The firm's response acknowledged "great embarrassment" and promised reforms, but the reputational damage extends beyond the individual case.

What Attorneys Must Do: A Seven-Step Protocol

Legal professionals who discover AI-generated errors in filed documents must act decisively. The following protocol aligns with ethical requirements and minimizes sanctions risk:

First, immediately cease relying on the affected research. Do not file additional briefs or make oral arguments based on potentially fabricated citations. If a hearing is imminent, notify the court that you are withdrawing specific legal arguments pending verification.

Second, conduct a comprehensive audit. Review every citation in the affected filing. Retrieve and read the full text of each case or statute cited. Verify that quoted language appears in the source and that the legal propositions match the authority's actual holding. Check citation accuracy using Shepard's or KeyCite to confirm cases remain good law. This process cannot be delegated to the AI tool that generated the original errors.
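Part of this audit can be scripted. As one illustration (a checklist builder, not a substitute for reading the cases), the Free Law Project's open-source eyecite library can extract every citation from a filing so that nothing gets skipped during manual verification; the file name below is a hypothetical placeholder.

# Build a manual-verification checklist of every citation in a brief,
# assuming the open-source eyecite library (pip install eyecite).
# "draft_brief.txt" is a hypothetical file name.
from eyecite import get_citations

with open("draft_brief.txt", encoding="utf-8") as f:
    text = f.read()

for cite in get_citations(text):
    # Every extracted citation still must be pulled and read by a human;
    # the script only guarantees the checklist is complete.
    print(cite.matched_text())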

Third, assess the materiality of errors. Determine whether fabricated citations formed the basis for legal arguments or appeared as secondary support. In Integrity Investment Fund, counsel noted that "the main precedents...and the...statutory citations are correct, and none of the Plaintiffs' claims were based on the mis-cited cases". This distinction affects the appropriate remedy but does not eliminate the obligation to correct the record.

Fourth, notify opposing counsel immediately. Candor extends to adversaries. Explain that you have discovered citation errors and are taking corrective action. This transparency may forestall sanctions motions and demonstrates good faith to the court.

Fifth, file a corrective pleading or motion. In Integrity Investment Fund, counsel filed a Motion to Amend Complaint under Federal Rule of Civil Procedure 15(a)(2). Alternative vehicles include motions to correct the record, errata sheets, or supplemental briefs. The filing should acknowledge the errors explicitly, explain how they occurred without shifting blame to technology, take personal responsibility, and specify the corrections being made.

Sixth, notify the court in writing. Even if opposing counsel does not move for sanctions, attorneys have an independent duty to inform the tribunal of material misstatements. The notification should be factual and direct. In cases where fabricated citations attributed opinions to real judges, courts have required attorneys to send personal letters to those judges clarifying that the citations were fictitious.

Seventh, implement systemic reforms. Review firm-wide AI usage policies. Provide training on verification requirements. Establish mandatory review checkpoints for AI-assisted work product. Consider technology solutions such as citation validation software that flags cases not found in authoritative databases. Document these reforms in any correspondence with the court or bar authorities to demonstrate that the incident prompted institutional change.
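Citation validation can also lean on public infrastructure. The sketch below assumes CourtListener's citation-lookup endpoint, a Free Law Project service built for exactly this kind of hallucination check; the endpoint path and payload are recalled from memory, so confirm them against CourtListener's current API documentation before relying on the output.

# Hedged sketch: flag citations that do not resolve to a real case,
# assuming CourtListener's citation-lookup endpoint. Verify the path,
# payload, and response shape against the current API docs.
import requests

brief_text = 'See Smith v. Jones, 123 F.4th 456 (7th Cir. 2024).'  # sample text

resp = requests.post(
    "https://www.courtlistener.com/api/rest/v3/citation-lookup/",
    data={"text": brief_text},
    timeout=30,
)
resp.raise_for_status()
for result in resp.json():
    # A citation that matches no real opinion is a red flag to investigate.
    print(result)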

The Duty to Supervise: Training the Humans and the Machines

The Integrity Investment Fund case involved an experienced attorney assisting with drafting, yet errors reached the court. This pattern appears throughout AI hallucination cases. In the Chicago Housing Authority litigation, the responsible attorney had previously published an article on ethical considerations of AI in legal practice, yet still submitted a brief citing the nonexistent case Mack v. Anderson. Knowledge about AI risks does not automatically translate into effective verification practices.

Law firms must treat AI tools as they would junior associates—competent at discrete tasks but requiring supervision. Partners should review AI-generated research as they would first-year associate work, assuming errors exist and exercising vigilant attention to detail. Unlike human associates who learn from corrections, AI systems may perpetuate errors across multiple matters until their underlying models are retrained.

Training programs should address specific hallucination patterns. AI tools frequently fabricate case citations with realistic-sounding names, accurate-appearing citation formats, and plausible procedural histories. They misrepresent legal holdings, confuse arguments made by litigants with court rulings, and fail to respect the hierarchy of legal authority. They cite proposed legislation as enacted law and rely on overturned precedents as current authority. Attorneys must learn to identify these red flags.

Supervisory duties extend to non-lawyer staff. If a paralegal uses an AI grammar checker on a document containing confidential case strategy, the supervising attorney bears responsibility for any confidentiality breach. When legal assistants use AI research tools, attorneys must verify their work with the same rigor applied to traditional research methods.

Client Communication and Informed Consent

Watch out for AI hallucinations!

Ethical obligations to clients intersect with AI usage in multiple ways. ABA Model Rule 1.4 requires attorneys to keep clients reasonably informed and to explain matters to the extent necessary for clients to make informed decisions. Several state bar opinions suggest that attorneys should obtain informed consent before inputting confidential client information into AI tools, particularly those that use data for model training.

The confidentiality analysis turns on the AI tool's data-handling practices. Many general-purpose AI platforms explicitly state in their terms of service that they use input data for model training and improvement. This creates significant privilege and confidentiality risks. Even legal-specific platforms may share data with third-party vendors or retain information on servers outside the firm's control. Attorneys must review vendor agreements, understand data flow, and ensure adequate safeguards exist before using AI tools on client matters.

When AI-generated errors reach a court filing, clients deserve prompt notification. The errors may affect litigation strategy, settlement calculations, or case outcome predictions. In extreme cases, such as when a court dismisses claims or imposes sanctions, malpractice liability may arise. Transparent communication preserves the attorney-client relationship and demonstrates that the lawyer prioritizes the client's interests over protecting their reputation.

Jurisdictional Variations: Illinois Sets the Standard

While the ABA Model Rules provide a national framework, individual jurisdictions have begun addressing AI-specific issues. Illinois, where the Integrity Investment Fund case was filed, has taken proactive steps.

The Illinois Supreme Court adopted a Policy on Artificial Intelligence effective January 1, 2025. The policy recognizes that AI presents challenges for protecting private information, avoiding bias and misrepresentation, and maintaining judicial integrity. The court emphasized "upholding the highest ethical standards in the administration of justice" as a primary concern.

In September 2025, Judge Sarah D. Smith of Madison County Circuit Court issued a Standing Order on Use of Artificial Intelligence in Civil Cases, later extended to other Madison County courtrooms. The order "embraces the advancement of AI" while mandating that tools "remain consistent with professional responsibilities, ethical standards and procedural rules". Key provisions include requirements for human oversight and legal judgment, verification of all AI-generated citations and legal statements, disclosure of expert reliance on AI to formulate opinions, and potential sanctions for submissions including "case law hallucinations, [inappropriate] statements of law, or ghost citations".

Arizona has been particularly active given the high number of AI hallucination cases in the state—second only to the Southern District of Florida. The State Bar of Arizona issued guidance calling on lawyers to verify all AI-generated research before submitting it to courts or clients. The Arizona Supreme Court's Steering Committee on AI and the Courts issued similar guidance emphasizing that judges and attorneys, not AI tools, are responsible for their work product.

Other states are following suit. California issued Formal Opinion No. 2015-193 interpreting technological competence requirements. The District of Columbia Bar issued Ethics Opinion 388 in April 2024, specifically addressing generative artificial intelligence in client matters. These opinions converge on several principles: competence includes understanding AI technology sufficiently to be confident it advances client interests, all AI output requires verification before use, and technology assistance does not diminish attorney accountability.

The Path Forward: Responsible AI Integration

The legal profession stands at a crossroads. AI tools offer genuine efficiency gains—automated document review, pattern recognition in discovery, preliminary legal research, and jurisdictional surveys. Rejecting AI entirely would place practitioners at a competitive disadvantage and potentially violate the duty to provide competent, efficient representation.

Yet uncritical adoption invites the disasters documented in hundreds of cases nationwide. The middle path, reflected in the Illinois courts' guidance, requires human oversight and legal judgment at every stage.

Attorneys should adopt a "trust but verify" approach. Use AI for initial research, document drafting, and analytical tasks, but implement mandatory verification protocols before any work product leaves the firm. Treat AI-generated citations as provisional until independently confirmed. Read cases rather than relying on AI summaries. Check the currency of legal authorities. Confirm that quotations appear in the cited sources.

Law firms should establish tiered AI usage policies. Low-risk applications such as document organization or calendar management may require minimal oversight. High-risk applications, including legal research, brief writing, and client advice, demand multiple layers of human review. Some uses—such as inputting highly confidential information into general-purpose AI platforms—should be prohibited entirely.

Billing practices must evolve. If AI reduces the time required for legal research from eight hours to two hours, the efficiency gain should benefit clients through lower fees rather than inflating attorney profits. Clients should not pay both for AI tool subscriptions and for the same number of billable hours as traditional research methods would require. Transparent billing practices build client trust and align with fiduciary obligations.

Lessons from Integrity Investment Fund

The Integrity Investment Fund case offers several instructive elements. First, the attorney used a reputable legal database rather than a general-purpose AI. This demonstrates that brand name and subscription fees do not guarantee accuracy. Second, the attorney discovered the errors and voluntarily sought to amend the complaint rather than waiting for opposing counsel or the court to raise the issue. This proactive approach likely mitigated potential sanctions. Third, the attorney took personal responsibility, describing himself as "horrified" rather than deflecting blame to the technology.

The court's response also merits attention. Rather than immediately imposing sanctions, the court directed defendants to respond to the motion to amend and address the effect on pending motions to dismiss. This measured approach recognizes that not all AI-related errors warrant the most severe consequences, particularly when counsel acts promptly to correct the record. Defendants agreed that "the striking of all miscited and non-existent cases [is] proper", suggesting that cooperation and candor can lead to reasonable resolutions.

The fact that "the main precedents...and the...statutory citations are correct" and "none of the Plaintiffs' claims were based on the mis-cited cases" likely influenced the court's analysis. This underscores the importance of distinguishing between errors in supporting citations versus errors in primary authorities. Both require correction, but the latter carries greater risk of case-dispositive consequences and sanctions.

The Broader Imperative: Preserving Professional Judgment

Lawyers must verify their AI work!

Judge Castel's observation in Mata v. Avianca that "many harms flow from the submission of fake opinions" captures the stakes. Beyond individual case outcomes, AI hallucinations threaten systemic values: judicial efficiency, precedential reliability, adversarial fairness, and public confidence in legal institutions.

Attorneys serve as officers of the court with special obligations to the administration of justice. This role cannot be automated. AI lacks the judgment to balance competing legal principles, to assess the credibility of factual assertions, to understand client objectives in their full context, or to exercise discretion in ways that advance both client interests and systemic values.

The attorney in Integrity Investment Fund learned a costly lesson that the profession must collectively absorb: reputable databases, sophisticated algorithms, and expensive subscriptions do not eliminate the need for human verification. AI remains a tool—powerful, useful, and increasingly indispensable—but still just a tool. The attorney who signs a pleading, who argues before a court, and who advises a client bears professional responsibility that technology cannot assume.

As AI capabilities expand and integration deepens, the temptation to trust automated output will intensify. The profession must resist that temptation. Every citation requires verification. Every legal proposition demands confirmation. Every AI-generated document needs human review. These are not burdensome obstacles to efficiency but essential guardrails protecting clients, courts, and the justice system itself.

When errors occur—and the statistics confirm they will occur with disturbing frequency—attorneys must act immediately to correct the record, accept responsibility, and implement reforms preventing recurrence. Horror at one's mistakes, while understandable, satisfies no ethical obligation. Action does.

MTC

“How To” Happy New Year 2026 Edition! 🎉 Future-Proof Your Firm: The Essential Guide to Law Firm Technology for 2026

Future-proof your firm and make sure you have the right technology to get your legal work done in 2026!

The year 2025 was a wake-up call for the legal industry. We watched Artificial Intelligence move from a shiny toy to a serious business tool. We saw cybersecurity threats evolve faster than our firewalls. And we faced the reality of aging infrastructure as the "Windows 10 era" officially ended in October.

Now we look toward 2026. The theme for the coming year is not just adoption. It is integration and security. You do not need to be a coder to run a modern law firm. You just need to make smart, practical decisions.

This guide aggregates lessons from 2025, including insights from my blog, The Tech-Savvy Lawyer.Page, and top legal tech reporters. Here is how to prepare your firm for 2026.

1. The Hardware Reality Check: Windows 11 or Bust

The most critical lesson from 2025 was the "End of Life" for Windows 10. Microsoft stopped supporting it on October 14, 2025. If your firm is still running Windows 10 in 2026, you are driving a car without brakes. You have no security updates. You are non-compliant with most client data protection mandates.

The Action Plan:

  • Audit Your Fleet: Check every laptop and desktop. If it cannot run Windows 11, replace it. Do not try to bypass the requirements.

  • The 2026 Standard Spec: When buying new computers, ignore the "minimum" requirements. You need longevity.

    • Processor: Intel Core i7 (13th gen or newer) or AMD Ryzen 7.

    • RAM: 32GB is the new 16GB. AI tools built into Windows (like Copilot) consume significant memory. 32GB is the practical minimum; 64GB is future-proof.

    • Storage: 1TB NVMe SSD at a minimum. Cloud storage is great, but local speed still matters for caching large case files. 2TB gives you breathing room; 4TB will serve you for years to come.

  • Monitors: Dual monitors are standard. But for 2026, consider a single 34-inch ultrawide curved monitor. It eliminates the bezel gap. It simplifies cable management. Or consider a three-monitor setup with the center monitor a little better than the other two.

2. Software: The Shift from "Open" to "Closed" AI

In 2025, we learned the hard way about "shadow AI." This happens when staff paste client data into public tools like the free version of ChatGPT. That is a major ethics violation.

For 2026, you must pivot to "Closed" AI systems.

The Action Plan:

  • Define "Closed" AI: These are tools where your data is not used to train the public model. Microsoft 365 Copilot is a prime example. Most practice management platforms (like Clio or MyCase) now have embedded AI features. These are generally safe "closed" environments.

  • Enable Copilot (Carefully): Microsoft 365 Copilot is likely already in your subscription options. It can summarize email threads. It can draft initial responses. Turn it on, but train your team on "The Review Rule."

  • The Review Rule: The Tech-Savvy Lawyer.Page emphasizes this constantly. AI is a drafter, not a lawyer. Every output must be verified. Human verification is the standard for 2026.

3. Security: The "Triple-E" Framework

Cybersecurity is no longer just for the IT department. It is a core competency for every lawyer. The "Triple-E Framework" is perfect for 2026 planning: Educate, Empower, Elevate.

The Action Plan:

Be confident with your technology and make sure everything is up to date for 2026!

  • Educate: Run phishing simulations monthly. The attacks are getting smarter. AI is being used to write convincing phishing emails. Your team needs to see examples of these AI-generated scams.

  • Empower: Force the use of password managers (like 1Password or Bitwarden). Stop letting partners save passwords in their browsers - browser-stored passwords are not secure.

  • Elevate: Implement "Zero Trust" access. This means verifying identity at every step, not just at the front door. Multi-Factor Authentication (MFA) must be on everything. No exceptions for senior partners.

4. The Cloud Ecosystem: Consolidation

In 2024 and 2025, firms bought too many separate apps. One app for billing. One for intake. One for signatures. This created "subscription fatigue."

The trend for 2026 is Platformization.

The Action Plan:

  • Audit Your Subscriptions: Look at your credit card statement. Do you have three tools that do the same thing?

  • Lean on Your Core Platform: If you use a major practice management system, check their new features. They likely added texting, e-signatures, or payments recently. Use the built-in tools. It is cheaper. It keeps your data in one place. It reduces security risks.

5. Mobile Lawyering: Professionalism Anywhere

Remote work is not "new" anymore. It is just "work." But looking unprofessional on Zoom is no longer acceptable.

The Action Plan:

  • Audio: Buy noise-canceling headsets for everyone. Laptop microphones are not good enough for court records. There are plenty of wired and Bluetooth noise-canceling headsets on the market - find the one that is best for you. Most Bluetooth headphones will work with any operating system (Windows, Apple, Android, etc.) - yes, Apple AirPods will work on Windows and Android devices.

  • Connectivity: Stop relying on public Wi-Fi. It is dangerous. Equip your lawyers with mobile hotspots or 5G-enabled laptops. Consider having phones/hotspots from two different providers in case one provider is down or simply doesn't have the signal strength you need at a particular location.

  • The "ScanSnap" Standard: Every remote lawyer needs a dedicated scanner. The Ricoh (fka “Fujitsu”) ScanSnap remains the gold standard. It is reliable. It is fast. It keeps your paperless office actually paperless. But don’t forget about your smart device. Our phones’ cameras take great pictures, and there is plenty of scanning software that lets you capture a few pages easily when you are on the go.

Final Thoughts

Advances in technology are going to require some tech updates for your law practice - are you ready?

Preparing for 2026 is not about buying the most expensive futuristic gadgets. It is about solidifying your foundation. Upgrade your hardware to handle Windows 11. Move your AI use into secure, paid channels. Consolidate your software.

Technology is the nervous system of your firm. It can get out of control and become overly expensive. Treat it with the same care you treat your case files.

📖 WORD OF THE WEEK YEAR🥳:  Verification: The 2025 Word of the Year for Legal Technology ⚖️💻

All lawyers need to remember to check AI-generated legal citations.

After reviewing a year's worth of content from The Tech-Savvy Lawyer.Page blog and podcast, one word emerged as the defining concept for 2025: Verification. This term captures the essential duty that separates competent legal practice from dangerous shortcuts in the age of artificial intelligence.

Throughout 2025, The Tech-Savvy Lawyer consistently emphasized verification across multiple contexts. The blog covered proper redaction techniques following the Jeffrey Epstein files disaster. The podcast explored hidden AI in everyday legal tools. Every discussion returned to one central theme: lawyers must verify everything. 🔍

Verification means more than just checking your work. The concept encompasses multiple layers of professional responsibility. Attorneys must verify AI-generated legal research to prevent hallucinations. Courts have sanctioned lawyers who submitted fictitious case citations created by generative AI tools. One study found error rates of 34% in Westlaw's AI-Assisted Research and 17% in Lexis+ AI. Note the study's foundation is from May 2024, but a 2025 update confirms these findings remain current—the risk of not checking has not gone away. "Verification" cannot be ignored.

The duty extends beyond research. Lawyers must verify that redactions actually remove confidential information rather than simply hiding it under black boxes. The DOJ's failed redaction of the Epstein files demonstrated what happens when attorneys skip proper verification steps. Tech-savvy readers simply copied text from beneath the visual overlays. ⚠️

Use of AI-generated legal work requires “verification”, “Verification”, “Verification”!

ABA Model Rule 1.1 requires technological competence. Comment 8 specifically mandates that lawyers understand "the benefits and risks associated with relevant technology." Verification sits at the heart of this competence requirement. Attorneys cannot claim ignorance about AI features embedded in Microsoft 365, Zoom, Adobe, or legal research platforms. Each tool processes client data differently. Each requires verification of settings, outputs, and data handling practices. 🛡️

The verification duty also applies to cybersecurity. Zero Trust Architecture operates on the principle "never trust, always verify." This security model requires continuous verification of user identity, device health, and access context. Law firms can no longer trust that users inside their network perimeter are authorized. Remote work and cloud-based systems demand constant verification.
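In code terms, "never trust, always verify" means every request is re-evaluated on identity, device health, and context rather than waved through after a single login. Here is a minimal, hypothetical sketch; the specific checks are illustrative, not any vendor's actual policy engine.

# A minimal, hypothetical sketch of a Zero Trust access decision.
# Nothing is trusted merely for being "inside the network."
from dataclasses import dataclass

@dataclass
class AccessRequest:
    mfa_verified: bool       # identity: fresh MFA, not a cached session
    device_encrypted: bool   # device health: disk encryption enabled
    device_patched: bool     # device health: OS patches current
    location_expected: bool  # context: sign-in location is plausible

def authorize(req: AccessRequest) -> bool:
    # Every factor must pass on every request, not just at login.
    return all([req.mfa_verified, req.device_encrypted,
                req.device_patched, req.location_expected])

print(authorize(AccessRequest(True, True, True, False)))  # False: unusual location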

Hidden AI poses another verification challenge. Software updates automatically activate AI features in familiar tools. These invisible assistants process confidential client data by default. Lawyers must verify which AI systems operate in their technology stack. They must verify data retention policies. They must verify that AI processing does not waive attorney-client privilege. 🤖

ABA Formal Opinion 512 eliminates the "I didn't know" defense. Lawyers bear responsibility for understanding how their tools use AI. Rule 5.3 requires attorneys to supervise software with the same care they supervise human staff members. Verification transforms from a good practice into an ethical mandate.

Verify your AI-generated work like your bar license depends on it!

The year 2025 taught legal professionals that technology competence means verification competence. Attorneys must verify redactions work properly. They must verify AI outputs for accuracy. They must verify security settings protect confidential information. They must verify that hidden AI complies with ethical obligations. ✅

Verification protects clients, preserves attorney licenses, and maintains the integrity of legal practice. As The Tech-Savvy Lawyer demonstrated throughout 2025, every technological advancement creates new verification responsibilities. Attorneys who master verification will thrive in the AI era. Those who skip verification steps risk sanctions, malpractice claims, and disciplinary action.

The legal profession's 2025 Word of the Year is verification. Master it or risk everything. 💼⚖️

ANNOUNCEMENT (BOOK RELEASE): The Lawyer’s Guide to Podcasting: The Simple, Ethics-Aware Playbook to Launch a Professional Podcast (Release: mid-January 2026)

Anticipated release is mid-January 2026.

🎙️📘 Podcasting is still one of the fastest ways to build trust. It works for lawyers, legal professionals, and any expert who needs to explain complex topics in plain language.

On January 12, 2026, I’m releasing The Lawyer’s Guide to Podcasting. This book is designed for busy professionals who want a podcast that sounds credible, protects confidentiality, and fits into a real workflow. No studio required. No tech overwhelm.

✅ Inside the book, you’ll learn:

  • How to pick a podcast format that matches your goals 🎯

  • The “minimum viable setup” that sounds professional 🎤

  • Recording workflows that reduce editing time ⏱️

  • Practical ethics and risk habits for public content 🔐

  • Repurposing steps so one episode becomes a week of marketing ♻️

📩 Get the release link: Email Admin@TheTechSavvyLawyer.Page with the subject line “Podcasting Book Link” and I’ll send the link as soon as the book is released. 📩🎙️

MTC: 2025 Year in Review: The "AI Squeeze," Redaction Disasters, and the Return of Hardware!

As we close the book on 2025, the legal profession finds itself in a dramatically different landscape than the one we predicted back in January. If 2023 was the year of "AI Hype" and 2024 was the year of "AI Experimentation," 2025 has undeniably been the year of the "AI Reality Check."

Here at The Tech-Savvy Lawyer.Page, we have spent the last twelve months documenting the friction between rapid innovation and the stubborn realities of legal practice. From our podcast conversations with industry leaders like Seth Price and Chris Dralla to our deep dives into the ethics of digital practice, one theme has remained constant: Competence is no longer optional; it is survival.

Looking back at our coverage from this past year, three specific highlights stand out as defining moments for legal technology in 2025. These aren't just news items; they are signals of where our profession is heading.

Highlight #1: The "Black Box" Redaction Wake-Up Call

Just days ago, on December 23, 2025, the legal world learned of a catastrophic failure of basic technological competence. As we covered in our recent post, How To: Redact PDF Documents Properly and Recover Data from Failed Redactions: A Guide for Lawyers After the DOJ Epstein Files Release “Leak”, the Department of Justice’s release of the Jeffrey Epstein files became a case study in what not to do.

The failure was simple but devastating: relying on visual "masks" rather than true data sanitization. Tech-savvy readers—and let’s be honest, anyone with a basic knowledge of copy-paste—were able to lift the "redacted" names of associates and victims directly from the PDF.
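Anyone can reproduce this failure mode in a few lines of code. A black rectangle drawn over text leaves the underlying characters in the PDF's content stream, where any extraction tool will read them. The sketch below uses the open-source pypdf library; the file name is a hypothetical placeholder.

# Why a visual "mask" is not a redaction: text under a drawn black box
# remains in the PDF and extracts normally. Assumes the open-source
# pypdf library (pip install pypdf); the file name is hypothetical.
from pypdf import PdfReader

reader = PdfReader("badly_redacted.pdf")
for page in reader.pages:
    print(page.extract_text())  # the "redacted" text appears right here

Proper redaction tools, by contrast, delete the underlying text and flatten the page before the black box is ever drawn.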

Why this matters for you: This event shattered the illusion that "good enough" tech skills are acceptable in high-stakes litigation. In 2025, we learned that the duty of confidentiality (Model Rule 1.6) is inextricably linked to the duty of technical competence (Model Rule 1.1 and its Comment 8). As we move into 2026, firms must move beyond basic PDF tools and invest in purpose-built redaction software that "burns in" changes and scrubs metadata. If the DOJ can fail this publicly, your firm is not immune.

Highlight #2: The "AI Squeeze" on Hardware

Throughout the year, we’ve heard complaints about sluggish laptops and crashing applications. In our December 22nd post, The 2026 Hardware Hike: Why Law Firms Must Budget for the 'AI Squeeze' Now, we identified the culprit. It isn’t just your imagination—it’s the supply chain.

We are currently facing a global shortage of DRAM (Dynamic Random Access Memory), driven by the insatiable appetite of data centers powering the very AI models we use daily. Manufacturers like Dell and Lenovo are pivoting their supply to these high-profit enterprise clients, leaving consumer and business laptops with a supply deficit.

Why this matters for you: The era of the 16GB RAM laptop for lawyers is dead. Running local, privacy-focused AI models (a major trend in 2025) and heavy eDiscovery platforms now requires 32GB or even 64GB of RAM as a baseline (which means you may want more than the “baseline”). The "AI Squeeze" means that in 2026, hardware will be 15-20% more expensive and harder to find. The lesson? Buy now. If your firm has a hardware refresh cycle planned for Q2 2026, accelerate it to Q1. Budgeting for technology is no longer just about software subscriptions; it’s about securing the physical silicon needed to do your job.

Highlight #3: From "Chat" to "Doing" (The Rise of Agentic AI)

Earlier this year, on the Tech-Savvy Lawyer Podcast, we spoke with Chris Dralla of TypeLaw and discussed the evolution of AI tools. 2025 marked the shift from "Chatbot AI" (asking a bot a question) to "Agentic AI" (telling a bot to do a job).

Tools like TypeLaw didn't just "summarize" cases this year; they actively formatted briefs, checked citations against local court rules, and built tables of authorities with minimal human intervention. This is the "boring" automation we have always advocated for—technology that doesn't try to be a robot lawyer, but acts as a tireless paralegal.

Why this matters for you: The novelty of chatting with an LLM has worn off. The firms winning in 2025 were the ones adopting tools that integrated directly into Microsoft Word and Outlook to automate specific, repetitive workflows. The "Generalist AI" is being replaced by the "Specialist Agent."

Moving Forward: What We Can Learn Today for 2026

As we look toward the new year, the profession must internalize a critical lesson: Technology is a supply chain risk.

Whether it is the supply of affordable memory chips or the supply of secure software that properly handles redactions, you are dependent on your tools. The "Tech-Savvy" lawyer of 2026 is not just a user of technology but a manager of technology risk.

What to Expect in 2026:

Is your firm budgeted for the anticipated 2026 hardware price hike?

  1. The Rise of the "Hybrid Builder": I predict that mid-sized firms will stop waiting for vendors to build the perfect tool and start building their own "micro-apps" on top of secure, private AI models.

  2. Mandatory Tech Competence CLEs: Rigorous enforcement of tech competence rules will likely follow the high-profile data breaches and redaction failures of 2025.

  3. The Death of the Billable Hour (Again?): With "Agentic AI" handling the grunt work of drafting and formatting, clients will aggressively push back on bills for "document review" or "formatting." 2026 will force firms to bill for judgment, not just time.

As we sign off for the last time in 2025, remember our motto: Technology should make us better lawyers, not lazier ones. Check your redactions, upgrade your RAM, and we’ll see you in 2026.

Happy Lawyering and Happy New Year!