TSS: Repurpose Your Old Work Tech Into Family Learning Tools This Back-to-School Season 💻📚

Repurposing your tech for your children can be a platform for a talk with your school kids about the safe use of technology.

The new school year approaches, and your children need reliable technology. Before you head to the electronics store, consider the laptops and tablets gathering dust in your office closet or your current devices that you are about to upgrade. With proper preparation, these work devices can become powerful educational tools while teaching your family essential cybersecurity skills.

Why Lawyer Parents Need This Workshop 🎯

As attorneys, we face unique challenges when transitioning work devices to family use. Attorney-client privilege concerns, firm policy compliance, and data breach liability create legal risks most parents never consider. Our August Tech-Savvy Saturday seminar addresses these challenges head-on with practical solutions.

What You'll Master in This Essential Session 🛡️

Device Sanitization for Legal Professionals: Step-by-step Windows, macOS, iOS, and Android procedures that protect privileged information while preparing devices for family use. We cover complete data wiping, software license removal, and documentation requirements.

Family Technology Management Systems: Implementation strategies for password managers, shared calendars, and network security configurations that work for legal families. Special focus on co-parenting considerations and court-approved platforms.

Family Cyber Talks should be routine!

Age-Appropriate Cybersecurity Education: From elementary through college-age guidance on digital citizenship, password security, and online safety. Critical discussions about digital permanence and the serious legal consequences of non-consensual intimate image sharing.

Emergency Response Planning: Practical protocols for handling cyberbullying, predator contact, and other digital crises. Know when to involve law enforcement versus school administration.

Register Now for August Tech-Savvy Saturday 🚀

This workshop combines legal ethics with practical family technology management. You'll leave with actionable checklists, template agreements, and the confidence to transform old work devices into safe learning tools.

Word of the Week: Synthetic Data 🧑‍💻⚖️

What Is Synthetic Data?

Synthetic data is information that is generated by algorithms to mimic the statistical properties of real-world data, but it contains no actual client or case details. For lawyers, this means you can test software, train AI models, or simulate legal scenarios without risking confidential information or breaching privacy regulations. Synthetic data is not “fake” in the sense of being random or useless—it is engineered to be realistic and valuable for analysis.

How Synthetic Data Applies to Lawyers

  • Privacy Protection: Synthetic data allows law firms to comply with strict privacy laws like GDPR and CCPA by removing any real personal identifiers from the datasets used in legal tech projects.

  • AI Training: Legal AI tools need large, high-quality datasets to learn and improve. Synthetic data fills gaps when real data is scarce, sensitive, or restricted by regulation.

  • Software Testing: When developing or testing new legal software, synthetic data lets you simulate real-world scenarios without exposing client secrets or sensitive case details.

  • Cost and Efficiency: It is often faster and less expensive to generate synthetic data than to collect, clean, and anonymize real legal data.
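The idea behind the bullets above can be sketched in a few lines of code. This is a minimal, hypothetical illustration of generating synthetic records: only aggregate statistics (here, a mean and standard deviation assumed to come from real settlement data) are used, so no individual record corresponds to a real client or case. The field names and figures are invented for the example, not drawn from any real dataset.

```python
import random
import statistics

random.seed(42)  # fixed seed so the sketch is reproducible

# Pretend these summary statistics were derived from real, confidential data.
# Only the aggregates leave the secure environment -- never the records.
REAL_MEAN, REAL_STDEV = 50_000.0, 12_000.0  # hypothetical settlement amounts ($)

def synthetic_cases(n):
    """Generate n synthetic case records that mirror the real distribution
    but contain no actual client identifiers or case details."""
    return [
        {
            "case_id": f"SYN-{i:04d}",  # synthetic label, not a real docket number
            "settlement": round(random.gauss(REAL_MEAN, REAL_STDEV), 2),
        }
        for i in range(n)
    ]

cases = synthetic_cases(1_000)
amounts = [c["settlement"] for c in cases]
# The synthetic set preserves the statistical shape of the original,
# which is what makes it useful for testing and AI training.
print(round(statistics.mean(amounts), -2))
```

Real synthetic-data tools model far richer structure (correlations across fields, categorical distributions, rare events), but the principle is the same: the generator learns the statistics, and the output contains no real person's information.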

Lawyers, know your data source; your license could depend on it! 📢

Synthetic Data vs. Hallucinations

  • Synthetic Data: Created on purpose, following strict rules to reflect real-world patterns. Used for training, testing, and developing legal tech tools. It is transparent and traceable; you know how and why it was generated.

  • AI Hallucinations: Occur when an AI system generates information that appears plausible but is factually incorrect or entirely fabricated. In law, this can mean made-up case citations, statutes, or legal arguments. Hallucinations are unpredictable and can lead to serious professional risks if not caught.

Key Difference: Synthetic data is intentionally crafted for safe, ethical, and lawful use. Hallucinations are unintentional errors that can mislead and cause harm.

Why Lawyers Should Care

  • Compliance: Using synthetic data helps you stay on the right side of privacy and data protection laws.

  • Risk Management: It reduces the risk of data breaches and regulatory penalties.

  • Innovation: Enables law firms to innovate and improve processes without risking client trust or confidentiality.

  • Professional Responsibility: Helps lawyers avoid the dangers of relying on unverified AI outputs, which can lead to sanctions or reputational damage.

Lawyers, know your data source; your license could depend on it!

MTC: Lawyers, Generative AI, and the Right to Privacy: Navigating Ethics, Client Confidentiality, and Public Data in the Digital Age

Modern attorneys need to tackle AI ethics and privacy risks.

The legal profession stands at a critical crossroads as generative AI tools like ChatGPT become increasingly integrated into daily practice. While these technologies offer unprecedented efficiency and insight, they also raise urgent questions about client privacy, data security, and professional ethics—questions that every lawyer, regardless of technical proficiency, must confront.

Recent developments have brought these issues into sharp focus. OpenAI, the company behind ChatGPT, was recently compelled to preserve all user chats for legal review, highlighting how data entered into generative AI systems can be stored, accessed, and potentially scrutinized by third parties. For lawyers, this is not a theoretical risk; it is a direct challenge to the core obligations of client confidentiality and the right to privacy.

The ABA Model Rules and Generative AI

The American Bar Association’s Model Rules of Professional Conduct are clear: Rule 1.6 requires lawyers to “act competently to safeguard information relating to the representation of a client against unauthorized access by third parties and against inadvertent or unauthorized disclosure”. This duty extends beyond existing clients to former and prospective clients under Rules 1.9 and 1.18. Crucially, the obligation applies even to information that is publicly accessible or contained in public records, unless disclosure is authorized or consented to by the client.

Attorneys need to explain generative AI privacy concerns to clients.

The ABA’s recent Formal Opinion 512 underscores these concerns in the context of generative AI. Lawyers must fully consider their ethical obligations, including competence, confidentiality, informed consent, and reasonable fees, when using AI tools. Notably, the opinion warns that boilerplate consent in engagement letters is not sufficient; clients must be properly informed about how their data may be used and stored by AI systems.

Risks of Generative AI: PII, Case Details, and Public Data

Generative AI tools, especially those that are self-learning, can retain and reuse input data, including Personally Identifiable Information (PII) and case-specific details. This creates a risk that confidential information could be inadvertently disclosed or cross-used in other cases, even within a closed firm system. In March 2023, a ChatGPT data leak allowed users to view chat histories of others, illustrating the real-world dangers of data exposure.

Moreover, lawyers may be tempted to use publicly available client data—such as court filings or news reports—in AI-powered research or drafting. However, ABA guidance and multiple ethics opinions make it clear: confidentiality obligations apply even to information that is “generally known” or publicly accessible, unless the client has given informed consent or an exception applies. The act of further publicizing such data, especially through AI tools that may store and process it, can itself breach confidentiality.

Practical Guidance for the Tech-Savvy (and Not-So-Savvy) Lawyer

Lawyers can face disciplinary hearings over unethical use of generative AI.

The Tech-Savvy Lawyer.Page Podcast Episode 99, “Navigating the Intersection of Law, Ethics, and Technology with Jayne Reardon,” and other The Tech-Savvy Lawyer.Page postings offer practical insights for lawyers with limited to moderate tech skills. The message is clear: lawyers must be strategic, not just enthusiastic, about legal tech adoption. This means:

  • Vetting AI Tools: Choose AI platforms with robust privacy protections, clear data handling policies, and transparent security measures.

  • Obtaining Informed Consent: Clearly explain to clients how their information may be used, stored, or processed by AI systems—especially if public data or PII is involved.

  • Limiting Data Input: Avoid entering sensitive client details, PII, or case specifics into generative AI tools unless absolutely necessary and with explicit client consent.

  • Monitoring for Updates: Stay informed about evolving ABA guidance, state bar opinions, and the technical capabilities of AI tools.

  • Training and Policies: Invest in ongoing education and firm-wide policies to ensure all staff understand the risks and responsibilities associated with AI use.
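The “Limiting Data Input” point above can be sketched as a simple pre-submission scrubber. This is a minimal, hypothetical illustration, not a complete redaction solution: the patterns cover only a few obvious PII formats, and it does nothing about names or case details, which require human review or dedicated redaction tools.

```python
import re

# Illustrative PII patterns only -- a real redaction workflow needs far
# broader coverage (names, addresses, account numbers) and human review.
PATTERNS = {
    "SSN": re.compile(r"\b\d{3}-\d{2}-\d{4}\b"),
    "EMAIL": re.compile(r"\b[\w.+-]+@[\w-]+\.[\w.]+\b"),
    "PHONE": re.compile(r"\b\d{3}[-.]\d{3}[-.]\d{4}\b"),
}

def scrub(text):
    """Replace matching PII with bracketed placeholders before any text
    is pasted into a generative AI tool."""
    for label, pattern in PATTERNS.items():
        text = pattern.sub(f"[REDACTED {label}]", text)
    return text

# Note: the client's name passes through untouched -- pattern matching
# alone cannot catch it, which is why scrubbing supplements, and never
# replaces, the judgment call about what to enter at all.
prompt = "Client John Doe, SSN 123-45-6789, reachable at jdoe@example.com."
print(scrub(prompt))
```

Even a rough gate like this reinforces the habit the bullet describes: sensitive identifiers should be stripped, or the material kept out of the tool entirely, before anything reaches a third-party AI system.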

Conclusion

The promise of generative AI in law is real, but so are the risks. As OpenAI’s recent legal challenges and the ABA’s evolving guidance make clear, lawyers must prioritize privacy, confidentiality, and ethics at every step. By embracing technology with caution, transparency, and respect for client rights, legal professionals can harness AI’s benefits without compromising the foundational trust at the heart of the attorney-client relationship.

MTC

BOLO: LexisNexis Data Breach: What Legal Professionals Need to Know Now—and Why All Lexis Products Deserve Scrutiny!

LAWYERS NEED TO BE BOTH TECH-SAVVY AND CYBER-SAVVY!

On December 25, 2024, LexisNexis Risk Solutions (LNRS)—a major data broker and subsidiary of LexisNexis—suffered a significant data breach that exposed the personal information of over 364,000 individuals. This incident, which went undetected until April 2025, highlights urgent concerns for legal professionals who rely on LexisNexis and its related products for research, analytics, and client management.

What Happened in the LexisNexis Breach?

Attackers accessed sensitive data through a third-party software development platform (GitHub), not LexisNexis’s internal systems. The compromised information includes names, contact details, Social Security numbers, driver’s license numbers, and dates of birth. Although LexisNexis asserts that no financial or credit card data was involved and that its main systems remain secure, the breach raises red flags about the security of data handled across all Lexis-branded platforms.

Why Should You Worry About Other Lexis Products?

LexisNexis Risk Solutions is just one division under the LexisNexis and RELX umbrella, which offers a suite of legal, analytics, and data products widely used by law firms, courts, and corporate legal departments. The breach demonstrates that vulnerabilities may not be limited to one product or platform; third-party integrations, development tools, and shared infrastructure can all present risks. If you use LexisNexis for legal research, client intake, or case management, your clients’ confidential data could be at risk—even if the breach did not directly affect your specific product.

Ethical Implications: ABA Model Rules of Professional Conduct

ALL LAWYERS NEED TO BE PREPARED TO FIGHT DATA LEAKS!

The American Bar Association’s Model Rules of Professional Conduct require lawyers to safeguard client information and maintain competence in technology. Rule 1.6(c) mandates that attorneys “make reasonable efforts to prevent the inadvertent or unauthorized disclosure of, or unauthorized access to, information relating to the representation of a client.” Rule 1.1 further obligates lawyers to keep abreast of changes in law and its practice, including the benefits and risks associated with relevant technology.

In light of the LexisNexis breach, lawyers must:

  • Assess the security of all third-party vendors, including legal research and data analytics providers.

  • Promptly notify clients if their data may have been compromised, as required by ethical and sometimes statutory obligations.

  • Implement additional safeguards, such as multi-factor authentication and regular vendor risk assessments.

  • Stay informed about ongoing investigations and legal actions stemming from the breach.

What Should Legal Professionals Do Next?

  • Review your firm’s use of LexisNexis and related products.

  • Ask vendors for updated security protocols and breach response plans.

  • Consider offering affected clients identity protection services.

  • Update internal policies to reflect heightened risks associated with third-party platforms.

The Bottom Line

The LexisNexis breach is a wake-up call for the legal profession. Even if your primary Lexis product was not directly affected, the interconnected nature of modern legal technology means your clients’ data could still be at risk. Proactive risk management and ethical vigilance are now more critical than ever.