MTC: Why 2026’s PC Price Hikes Put Law Firms at Risk 💻⚖️ (and Why Many Lawyers Are Quietly Switching to Macs)

2026 PC price hikes threaten law firm budgets, performance, and ethical compliance!

Lawyers and Legal Professionals, the warning signs have been flashing for more than a year: 2026 was never going to be a normal hardware refresh cycle for law firms. 💸 Economists tracking the global memory crunch and AI‑driven demand have been clear that PCs and laptops would see double‑digit price hikes as Dynamic Random-Access Memory (DRAM) and other components were redirected to lucrative data‑center workloads. For lawyers who depend on reliable, reasonably priced computers to run practice‑critical applications, this is not an abstract macroeconomic story; it is a direct hit to margins, access to justice, and even ethical compliance.

Recent moves by Microsoft have made the problem impossible to ignore. In mid‑April, Microsoft sharply raised prices across its Surface lineup, including the Surface Pro and Surface Laptop families that many lawyers and law firms rely on for their Windows‑based workflows. Entry‑level machines that once started under $1,000 now begin well above that mark, with some configurations jumping several hundred dollars over their launch prices. In some cases, high‑end Surface laptops now cost more than roughly comparable MacBook Pro configurations, erasing the longstanding assumption that Windows hardware is always the cheaper option.

Here at the Tech‑Savvy Lawyer blog, I have been chronicling these developments for months, noting that major PC manufacturers signaled 15–20 percent price increases thanks to the AI‑driven memory squeeze and ongoing geopolitical tariff pressures. Those predictions are now a reality. For solo practitioners, small firms, and even midsize practices with thin IT budgets, the message is simple: if you are buying new Windows hardware in 2026, expect to pay more for the same level of performance, or accept underpowered machines that will age badly under AI‑enhanced workflows. 🧾

Apple, by contrast, has maneuvered itself into a relatively stronger position, even though it is not completely immune to component inflation. By tightly integrating Apple Silicon, storage, and other components under its own supply chain, Apple has been able to hold the line on some key configurations in a way that many PC Original Equipment Manufacturers (OEMs) cannot. Commentators focusing on the legal market have already highlighted products like the MacBook Neo as examples of Apple using its vertical control to keep pricing relatively stable while competitors raise prices or quietly cut specifications. At the same time, Apple’s M‑series and M5‑generation chips continue to deliver strong performance per watt, especially for on‑device AI tasks and productivity applications, which matters when you are running multiple research tools, document management systems, videoconferencing platforms, and AI assistants on a single machine.

This does not mean Apple has avoided all price movement. Newer MacBook Air and MacBook Pro models with M5 chips have seen list price increases of around $100–$400, depending on configuration. However, when Microsoft’s updated Surface pricing pushes many midrange Windows machines into the same or higher price tiers than comparable Macs, the calculus for lawyers becomes more nuanced. A Windows laptop that used to be the “budget” choice can now be as expensive as, or more expensive than, a MacBook that delivers similar or better performance and longer support life.

MacBooks outperform rising-cost Windows laptops for lawyers seeking value and security!

For the legal sector, this convergence of price and performance has three important implications.

First, hardware purchasing is no longer a purely IT or “back office” concern. It is an integral part of risk management and client‑service strategy. The ABA Model Rules, particularly Model Rule 1.1 on competence and Comment 8 to that rule, make clear that lawyers have a duty to maintain competence in relevant technology. Using outdated, underpowered hardware can impair your ability to use secure videoconferencing, e‑discovery tools, AI‑driven research platforms, and document automation systems. That, in turn, can compromise both efficiency and the quality of representation. ⚖️ When price hikes push firms toward “cheap but weak” machines, they risk falling behind on this duty of technological competence.

Second, Model Rule 1.6 on confidentiality and related ethics opinions underscore the importance of protecting client information in digital environments. In an era when AI tools increasingly run on‑device, machines that can perform more work locally reduce reliance on cloud processing and third‑party data transfers. Apple’s integrated hardware and on‑device AI capabilities, combined with its strong security posture, can make Macs appealing from a confidentiality standpoint, especially for sensitive practices such as criminal defense, family law, and complex commercial litigation. That does not mean Windows machines are inherently less secure, but when high‑end, well‑secured Windows hardware costs significantly more than it used to, some firms may find that Apple’s offerings now deliver a stronger security‑to‑cost ratio.

Third, long‑term budgeting must adapt to the new reality that technology lifecycles will cost more. Economists and industry groups have projected that tariffs and component shortages could add hundreds of dollars to the average laptop by the time those costs are fully passed through. For law firms, this means that hardware refresh cycles should be planned more deliberately, with strategic staggering of purchases, careful evaluation of total cost of ownership, and perhaps a willingness to stretch the lifecycle of existing machines that still meet performance and security requirements. 🗓️
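To make "total cost of ownership" concrete, the comparison above can be sketched as a few lines of arithmetic. This is a minimal, illustrative calculation only; the prices, support lifespans, and maintenance figures below are hypothetical placeholders, not quotes for any real machine.

```python
# Minimal total-cost-of-ownership (TCO) sketch for comparing laptops.
# All figures below are hypothetical -- substitute your firm's actual
# purchase quotes, expected support lifespans, and maintenance costs.

def tco_per_year(purchase_price: float, support_years: int,
                 annual_maintenance: float = 0.0) -> float:
    """Spread the purchase price over the machine's usable life,
    then add recurring annual costs (support, repairs, software)."""
    return purchase_price / support_years + annual_maintenance

# Hypothetical example: a pricier machine with a longer support life
# can still cost less per year of service.
windows_laptop = tco_per_year(purchase_price=1400, support_years=4,
                              annual_maintenance=120)
macbook = tco_per_year(purchase_price=1600, support_years=6,
                       annual_maintenance=90)

print(f"Windows laptop TCO: ${windows_laptop:.2f}/year")
print(f"MacBook TCO:        ${macbook:.2f}/year")
```

The point of the sketch is the framing, not the numbers: a higher sticker price amortized over a longer, well-supported lifecycle may undercut a cheaper machine that must be replaced sooner.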

So where does this leave the practicing lawyer or small firm managing technology with limited internal IT support? 🤔

One practical approach is to stop treating the Windows versus Mac decision as a matter of habit and start treating it as a structured, documented evaluation. Build a simple matrix that compares specific models—such as a midrange Surface Laptop and a MacBook Air or MacBook Neo—on price, performance, storage, memory, security features, support life, and compatibility with your core practice software. Involving firm leadership in these decisions and tying them explicitly to ABA Model Rule 1.1 and 1.6 considerations will help demonstrate that you are exercising reasonable diligence in technology selection.
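The "structured, documented evaluation" described above can be as simple as a weighted scoring matrix. The sketch below is one possible way to record it; the models, criterion weights, and 1–5 scores are all hypothetical placeholders that your firm would replace with its own assessments.

```python
# A minimal weighted decision matrix for the Windows-vs-Mac evaluation.
# Weights and scores (1 = poor, 5 = excellent) are hypothetical examples.

criteria_weights = {
    "price": 0.25,
    "performance": 0.20,
    "security": 0.20,
    "support_life": 0.15,
    "software_compatibility": 0.20,
}

# Hypothetical scores for two candidate machines.
candidates = {
    "Midrange Surface Laptop": {"price": 2, "performance": 3, "security": 4,
                                "support_life": 3, "software_compatibility": 5},
    "MacBook Air":             {"price": 4, "performance": 4, "security": 5,
                                "support_life": 5, "software_compatibility": 3},
}

def weighted_score(scores: dict, weights: dict) -> float:
    """Combine per-criterion scores into one comparable number."""
    return sum(scores[criterion] * weight
               for criterion, weight in weights.items())

for model, scores in candidates.items():
    print(f"{model}: {weighted_score(scores, criteria_weights):.2f}")
```

Keeping the matrix in writing (a spreadsheet works just as well as code) creates the documentation trail that ties the purchase decision back to your Rule 1.1 and 1.6 analysis.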

At the same time, lawyers should not assume that Apple is the default winner. Many legal‑industry tools, case management systems, and document workflows remain optimized for Windows, especially in litigation and specialized practice areas. If your practice depends heavily on Windows‑only software, the cost of moving to Macs (including virtualization or remote desktop solutions) may outweigh hardware price advantages. However, even in a Windows‑centric environment, the new pricing landscape may push firms to consider non‑Surface OEMs or to buy fewer, higher‑quality machines and share them across teams rather than treating laptops as disposable commodities.

Strategic legal tech planning improves performance, security, and long-term cost control for lawyers!

Ultimately, the predicted—and now visible—price hikes on PCs are not just a story about higher invoices from vendors. They are a stress test of how seriously law firms take technological competence, security, and long‑term planning. The firms that respond by proactively reassessing their hardware standards, considering platforms like Apple that have weathered the pricing storm more gracefully, and explicitly aligning purchasing decisions with ABA Model Rules will not only control costs; they will position themselves as trustworthy, efficient, and forward‑looking in a market where clients increasingly notice the difference. 🚀

MTC

📖 Word of the Week: “Cross‑Tenant” Learning in Legal Practice

Cross-tenant learning helps law firms improve AI tools without exposing data

If your firm uses cloud‑based tools, you are already living in a multi‑tenant world. In that world, cross‑tenant learning is quickly becoming a key concept that every lawyer and legal operations professional should understand. 🧠⚖️

In simple terms, a “tenant” is your firm’s logically separate space inside a cloud platform: your own users, matters, documents, and settings, isolated from everyone else’s. Cross‑tenant learning refers to techniques in which a vendor’s system learns from patterns across multiple tenants (for example, many law firms) to improve its features—such as search, drafting suggestions, or document classification—without exposing any other firm’s confidential data to you or yours to them.

Why cross‑tenant learning matters for law firms

Cross‑tenant learning is especially relevant as generative AI and machine‑learning tools become embedded in e‑discovery platforms, contract review tools, legal research systems, and practice‑management software. Vendors may use aggregated and anonymized usage data to:

  • Improve relevance of search results and recommendations.

  • Enhance clause and issue spotting in contracts and briefs.

  • Reduce false positives in e‑discovery or compliance alerts.

  • Optimize workflows based on how similar firms use the product.

For lawyers, the value proposition is straightforward: your tools can become “smarter” faster, based on lessons learned across many organizations, not just your own firm’s experience. Done properly, cross‑tenant learning can raise the baseline quality and efficiency of technology available to your practice. ⚙️📈

ABA Model Rules: Confidentiality and Competence

Any discussion of cross‑tenant learning for law firms must start with confidentiality and competence.

  • Model Rule 1.6 (Confidentiality of Information) requires lawyers to safeguard information relating to the representation of a client. That obligation extends to how your vendors collect, store, and use your data. You must understand whether and how client data may be used for cross‑tenant learning and ensure that any such use preserves confidentiality through anonymization, aggregation, and strong technical and contractual controls. 🔐

  • Model Rule 1.1 (Competence), including Comment 8, emphasizes that lawyers should keep abreast of the benefits and risks associated with relevant technology. Understanding cross‑tenant learning is now part of that duty. You do not need to become a data scientist, but you should be comfortable asking vendors precise questions and recognizing red flags.

  • Model Rule 5.3 (Responsibilities Regarding Nonlawyer Assistance) applies when you rely on vendors as nonlawyer assistants. You must make reasonable efforts to ensure that their conduct is compatible with your professional obligations, including how they use your data for cross‑tenant learning. 🧾

Key questions to ask your vendors

ABA Model Rules guide ethical use of cross-tenant learning technologies

When evaluating a product that relies on cross‑tenant learning, consider asking:

  1. What data is used?

    • Is it only metadata or usage logs, or are actual document contents included?

    • Is the data aggregated and anonymized before it is used to train shared models?

  2. How is confidentiality protected?

    • Can other tenants ever see prompts, documents, or client‑identifying information from our firm?

    • What technical measures (encryption, access controls, tenant isolation) are in place?

  3. Can cross‑tenant learning be limited or disabled?

    • Do we have opt‑out or configuration controls?

    • Is there a dedicated model or environment for our firm if needed?

  4. What do the contract and policies say?

    • Does the MSA or DPA clearly limit use of client data to defined purposes?

    • How long is data retained, and how is it deleted if we leave?

These questions are not merely IT concerns; they go directly to your obligations under the ABA Model Rules and your firm’s risk profile.

Practical examples in law practice

Consider a cloud‑based contract‑analysis platform used by hundreds of firms. Over time, the provider can see which clauses lawyers routinely flag as risky, which edits are typically made, and what becomes the “preferred” language for certain issues. Through cross‑tenant learning, the system can use that aggregated knowledge to highlight problematic clauses and suggest alternatives more accurately for everyone.

Another example is an e‑discovery platform that uses cross‑tenant learning to distinguish between truly relevant documents and common “noise” such as automatically generated emails. The more matters the system processes across different tenants, the better it gets at ranking documents and reducing review burdens. This can be a material efficiency gain for litigation teams. ⚖️💼

In both scenarios, your ethical comfort depends on whether underlying data is appropriately anonymized, compartmentalized, and contractually protected.

Governance steps for your firm

To align cross‑tenant learning with professional obligations, firms can:

  • Update vendor‑due‑diligence checklists to include explicit questions about cross‑tenant learning, training data use, and model isolation.

  • Involve a cross‑functional team—lawyers, IT, information security, and risk management—in vendor selection and review.

  • Document your analysis of vendor practices and how they satisfy confidentiality, competence, and supervision obligations under the ABA Model Rules.

  • Educate lawyers and staff about how AI‑enabled tools work, what kinds of data they send into the system, and how to avoid unnecessary exposure of client‑identifying details.

Takeaway for busy practitioners

Smart vendor questions reduce risk in cross-tenant legal technology adoption

You do not need to reject cross‑tenant learning to protect your clients. Instead, you should approach it as a powerful capability that demands informed oversight. When well‑implemented, cross‑tenant learning can help your firm deliver faster, more consistent, and more cost‑effective legal services, while still honoring confidentiality and ethical duties. When poorly explained or loosely governed, it becomes an unnecessary and avoidable risk.

Understanding how your tools learn—and from whom—is now part of competent, modern legal practice. ⚖️💡

TSL.P Podcast Special! Podcasting for Lawyers: The Truth Behind the Mic – ABA TECHSHOW 2026 (Special Audio‑Only Episode) 🎙️⚖️

This special episode features the audio‑only release of an ABA TECHSHOW 2026 panel I was excited to be part of: “Podcasting for Lawyers: The Truth Behind the Mic,” with moderator Ruby Powers and fellow panelists Gyi Tsakalakis and Stephanie Everett. 🎧 Instead of our usual one‑on‑one format, you will hear a live, conference‑style conversation about how lawyers can use podcasting, video, and modern legal technology to build authority, strengthen client and referral relationships, and stay aligned with legal‑ethics and professionalism rules.

Join Ruby, Gyi, Stephanie, and me as we discuss the following three questions and more!

  1. How can lawyers design and sustain a podcast that supports their practice goals and speaks to a clearly defined audience?

  2. What practical tech stacks—microphones, recording platforms, hosting services, and workflow tools—are realistic for busy attorneys and legal professionals?

  3. How do podcasting, video, and short‑form content contribute to SEO, GEO, and long‑term business development for law firms?

In our conversation, we cover the following:

  • 00:00 – Welcome to ABA TECHSHOW 2026 and introduction of the panel: Ruby Powers (moderator), Gyi Tsakalakis, Stephanie Everett, and Michael D.J. Eisenberg. 🎙️

  • 02:00 – Each panelist explains their podcast, ideal listener, and why they chose podcasting as a medium.

  • 06:00 – Publishing cadence: weekly, bi‑weekly, and how consistency drives listener trust and download growth.

  • 10:00 – Adding video and YouTube to audio‑only shows and how video clips improve discovery on social media.

  • 14:00 – DIY production vs. using producers, internal teams, or podcast networks, including time and cost trade‑offs.

  • 18:00 – Core tech stacks in practice: microphones, Zoom, Riverside, StreamYard, Descript, Libsyn, Calendly, Buffer, and other essentials. 💻

  • 24:00 – Guest selection, outreach, and sound checks; when to decline an appearance or reschedule due to poor audio or bad fit.

  • 30:00 – Using podcast hosting analytics and social‑platform insights to understand who is listening and what resonates.

  • 35:00 – Podcasting as networking and “virtual coffee”: building relationships with lawyers, experts, and vendors. ☕

  • 40:00 – SEO and GEO benefits: how episodes create long‑tail visibility in search, and why attribution still matters.

  • 45:00 – Ethics and professionalism: confidentiality, bar‑advertising rules, disclaimers, and avoiding client‑identifying facts. ⚖️

  • 52:00 – Final advice for lawyers on the fence about starting a podcast and how to improve with each episode instead of waiting for perfection.

RESOURCES

Connect with the panel

Mentioned in the episode (non‑hardware / non‑software)

Hardware mentioned in the conversation

Software & Cloud Services mentioned in the conversation

When AI Falls Short: What Legal Professionals Must Know Before Relying on Microsoft Copilot and Similar Embedded AIs

AI Errors in Legal Practice Demand Vigilant Attorney Oversight!

Any reader of my blog should realize by now that artificial intelligence is no longer a novelty in law practice; it is embedded in research platforms, document automation, e‑discovery, and now in tools like Microsoft Copilot that appear inside the same Microsoft 365 ecosystem lawyers already live in. Yet Copilot’s own terms of use long described it as being “for entertainment purposes only,” while Microsoft has simultaneously marketed it as an enterprise‑grade productivity assistant and is now backing away from prominent Copilot buttons in several Windows 11 apps. For lawyers who must live under the ABA Model Rules of Professional Conduct, this tension is not an amusing footnote; it is an ethics problem waiting to happen. 

Microsoft’s Copilot terms have advised that the service “can make mistakes,” “may not work as intended,” and should not be relied on for important advice. At the same time, Microsoft has begun removing or rebranding Copilot buttons from Notepad, Snipping Tool, Photos, and Widgets in Windows 11, framing this move as an effort to reduce “unnecessary Copilot entry points” and be “more intentional” about where AI shows up. The features, or at least the underlying AI, are not disappearing entirely; they are simply becoming less conspicuous. For the practicing lawyer, the message is clear: powerful AI is being woven into everyday tools, but its creators still do not want you to rely on it the way you rely on a human associate. 🤖

When AI falls short, it is the lawyer—not the software vendor—who will have to answer to clients, courts, and regulators. ⚠️

That is precisely where the ABA Model Rules step in. Model Rule 1.1 requires competent representation and, through Comment 8, includes a duty to keep abreast of the benefits and risks of relevant technology. Using AI in law practice is increasingly seen as part of that competence obligation, but competence does not mean blind trust in unvetted outputs from a system whose own terms warn you not to rely on it. A lawyer who treats Copilot’s draft as a finished research memo, brief, or contract without independent verification risks violating the duty of competence every bit as much as a lawyer who never learned to use electronic research tools in the first place.

Model Rule 1.6 on confidentiality presents a second, and in many ways more pressing, concern. Generative AI systems may store, log, or otherwise use prompt content for analysis and improvement, which means uncritical copying and pasting of confidential client information into Copilot can create a non‑trivial risk of exposure. The ABA and commentators have emphasized that before entering client data into a generative AI tool, lawyers must assess whether that data could be disclosed or accessed by others, including through unintended re‑use in future outputs to different users. That risk analysis is not optional; it is part of your obligation to make reasonable efforts to prevent unauthorized access or disclosure.

Fake Citations from AI Tools Can Threaten Accuracy and Legal Ethics!

Model Rules 5.1 and 5.3, which govern the responsibilities of partners, managers, supervisory lawyers, and non‑lawyer assistants, also apply to AI use. When you deploy Copilot in your firm, you are functionally introducing a new category of “assistant” whose work product must be supervised like that of a junior lawyer or paralegal. Policies, training, and review procedures are needed so that AI‑drafted content is consistently checked for accuracy, bias, hallucinations, and improper legal conclusions before it ever reaches a client, court, or counterparty. Ignoring Copilot’s disclaimers and Microsoft’s own hedging around reliability is, in effect, ignoring red flags that any reasonable supervising attorney would address.

Model Rule 1.4 on communication adds yet another dimension: transparency with clients about how you are using AI in their matters. Authorities interpreting the Model Rules have stressed that lawyers should keep clients reasonably informed, which includes explaining when and how AI tools are utilized to assist in their cases. This is particularly important where AI may affect cost, turnaround time, or the nature of the work performed, such as using Copilot to generate a first draft instead of assigning that task to an associate. Engagement letters and fee agreements are increasingly incorporating language about AI use, both to set expectations and to align with evolving ethical guidance.

The “for entertainment purposes only” language is more than a curiosity; it is a signal about allocation of risk. Microsoft’s disclaimer mirrors language historically used by psychic hotlines and other services seeking to avoid responsibility for inaccurate advice. When such a disclaimer is attached to a tool you might be tempted to use for legal analysis, the tool is telling you that you assume the risks of errors. Under the Model Rules, those risks ultimately translate into potential malpractice, sanctions, or disciplinary action if AI‑generated errors make their way into filed documents or client counseling.

Recent real‑world incidents involving lawyers who submitted briefs containing AI‑fabricated citations demonstrate how quickly misuse of generative AI can cross ethical lines. In those cases, the core problem was not that AI was used; it was that the lawyers failed to verify the content and then misrepresented fictitious cases as genuine authority to the court. That behavior implicates Model Rules 3.3 (candor toward the tribunal) and 8.4 (misconduct) along with competence. Copilot’s warnings about possible mistakes do not excuse a lawyer from the duty to check every citation, quote, and legal conclusion that AI produces before relying on it.

Lawyers must assess whether that data could be disclosed or accessed by others. ⚠️

For practitioners with limited to moderate technology skills, the answer is not to abandon AI entirely, but to approach it with structured safeguards. A practical workflow might involve using Copilot to outline a research plan or draft a first pass at a contract clause, followed by standard legal research in trusted databases and rigorous review by a human lawyer before anything is finalized. Firms should configure Copilot and other AI tools in ways that minimize data exposure, such as disabling cross‑tenant learning (a feature that lets the system learn from patterns across multiple organizations’ environments) where possible, and restricting which matters and users can access certain features. Training sessions can focus less on technical jargon and more on concrete do’s and don’ts tied directly to the Model Rules, which is the language most lawyers already speak. 🧠

Always Protect Client Confidentiality When Using AI in Modern Law Practice!

Governance is also essential. Written AI policies should address acceptable use cases, prohibited content for prompts, mandatory review standards, logging and auditing of AI‑assisted work, and incident response if an AI‑related error is discovered. These policies should be backed by regular training and by leadership that models appropriate use, rather than quietly delegating AI experimentation to the most tech‑savvy associates. Vendors’ evolving terms of use—including Microsoft’s move to revise its “entertainment purposes” language and adjust Copilot integration in Windows—should be monitored and incorporated into risk assessments over time.

In short, when AI falls short, it is the lawyer—not the software vendor—who will have to answer to clients, courts, and regulators. Copilot and similar tools can be valuable allies in a modern legal practice, but only if they are treated as fallible assistants whose work must be checked, not as oracles. The ABA Model Rules already provide the framework: competence, confidentiality, supervision, and honest communication. The task for today’s legal professionals is to apply that framework thoughtfully to AI, recognizing both its promise and its very real limitations before letting it anywhere near client work or court filings. ⚖️🤖

Podcasting for Lawyers: The Truth Behind the Mic at ABA TECHSHOW 2026 🎙️⚖️

🎧 Watch the ABA TECHSHOW 2026 panel: “Podcasting for Lawyers: The Truth Behind the Mic”

Podcasting has become one of the most powerful ways for lawyers to build authority, strengthen client relationships, and stand out in a crowded online marketplace—if it is done strategically and ethically. I recently had the privilege of serving on the March 26, 2026, ABA TECHSHOW panel, “Podcasting for Lawyers: The Truth Behind the Mic,” alongside moderator Ruby Powers and fellow panelists Gyi Tsakalakis and Stephanie Everett. Together, we walked through how attorneys can use podcasting, video, and legal technology to create consistent, professional content that supports real‑world business development while staying compliant with confidentiality and bar‑advertising rules. 🎧

In this post, you’ll find the recording of our ABA TECHSHOW 2026 session, a brief overview of the topics we covered, and links to tools and resources that can help you start—or sharpen—your own law‑firm podcast.

Brief Outline

1. Why podcasting makes sense for lawyers in 2026

  • How podcasting fits into modern law‑firm marketing and thought leadership.

  • The role of podcasts in SEO, GEO, and building long‑term visibility in your practice area.

  • Why authenticity, consistency, and a clear audience matter more than fancy production tricks.

2. Choosing your podcast’s audience and goals

  • Deciding whether you’re speaking to potential clients, referral sources, or other lawyers.

  • Aligning topics, interview guests, and episode formats with your business and reputational goals.

  • Avoiding the “variety show” trap and staying focused on the problems your audience actually cares about.

3. Building a realistic podcast tech stack for busy attorneys

  • Microphones and basic audio gear that deliver professional sound without breaking the bank.

  • Recording tools such as Zoom, Riverside, and StreamYard to capture both audio and video.

  • Hosting and workflow tools like Libsyn, Descript, Calendly, and Buffer that help you publish consistently and repurpose content efficiently.

4. Ethics, professionalism, and “the truth behind the mic”

  • Key confidentiality and advertising issues to consider when discussing client work or legal topics.

  • How to think about disclaimers, legal information vs. legal advice, and jurisdictional concerns.

  • Why podcasting is not just marketing content but also a professional reflection of how you communicate and practice law.

5. Making podcasting sustainable (and enjoyable) over time

  • Scheduling systems that keep you ahead on episodes without overwhelming your calendar.

  • Guest strategies that expand your network and add value for your audience.

  • How to measure success: client feedback, referrals, and qualitative signals—not just download counts.

Resources

  • 🌐 Session description on ABA TECHSHOW
    https://www.techshow.com/sessions/podcasting-for-lawyers-the-truth-behind-the-mic/

  • 💻 The Tech‑Savvy Lawyer.Page – blog and podcast
    https://www.TheTechSavvyLawyer.page

  • 🎙️ Tools and services mentioned

    • Buffer – https://buffer.com

    • Calendly – https://calendly.com

    • Descript – https://www.descript.com

    • Libsyn – https://libsyn.com

    • Riverside – https://riverside.fm

    • StreamYard – https://streamyard.com

    • Zoom – https://zoom.us


If you’re a lawyer or legal professional considering a podcast—or looking to refine the one you already have—I invite you to watch the full ABA TECHSHOW 2026 session and explore the resources above. Then connect with me at MichaelDJ@TheTechSavvyLawyer.Page to share what you’re building, ask questions about podcasting workflows and ethics, or suggest future topics you’d like to hear covered. 🎙️⚖️

📢 Special Shout-Out and Thank You to Ruby Powers for the invitation and Gyi and Stephanie for being great co-panelists!

📢 Your Tech-Savvy Lawyer Blogger and Podcaster, Michael D.J. Eisenberg, Announces His Upcoming Talk on Ethical AI Use in Legal Practice at the 2026 AI Legal Practice Summit!

Saturday, April 18, 2026 | Capital University Law School

As technology continues to transform legal practice, I’m honored to announce that I’ll be speaking at the 2026 AI Legal Practice Summit, hosted by my alma mater, Capital University Law School, in Columbus, Ohio. This event brings together attorneys, educators, and technologists to explore how artificial intelligence is reshaping the legal field — not just operationally, but ethically and professionally as well.

My presentation, “Smart Practice, Smarter Ethics: Navigating AI Tools Under the ABA Model Rules,” focuses on a topic that’s both timely and critically important: how lawyers can use emerging AI technologies responsibly while meeting their professional obligations under the ABA Model Rules of Professional Conduct.

👉 Learn more and view the full schedule at law-capital.libguides.com/2026_AI_Legal_Practice_Summit.
🎟️ Register today through Eventbrite: eventbrite.com/e/ai-legal-practice-summit-tickets-1986544900273.

Through my work on The Tech-Savvy Lawyer.Page blog and podcast, I’ve had countless conversations with practitioners who want to use AI to streamline tasks such as research, document drafting, and client management — yet remain uncertain about compliance, bias, and confidentiality. Law practice is evolving rapidly, but our ethical foundations must remain strong.

In my session, I’ll walk through key aspects of how the ABA Model Rules, including Rules 1.1 (Competence), 1.6 (Confidentiality of Information), and 5.3 (Responsibilities Regarding Nonlawyer Assistance), apply in an age of intelligent automation. These rules guide us in assessing not just what technology can do, but how and when it should be used.

Your faculty!

We’ll discuss:

  • Reviewing the tech stack you already own;

  • How to vet and implement AI-powered tools while maintaining confidentiality;

  • Questions to ask vendors about data handling and bias;

  • How to document best practices for firm-wide ethical compliance;

  • Ways to blend human legal judgment with algorithmic assistance; and

  • Managing client expectations about AI-enabled legal work.

My goal is to help attorneys approach technology with confidence — to experiment, adopt, and adapt responsibly. Being a “tech‑savvy lawyer” isn’t about mastering every gadget or platform; it’s about understanding how technology fits within the ethical framework of our profession.

The conversation around technological competence has matured since Comment 8 to Rule 1.1 was introduced. It’s no longer optional. Attorneys must understand the benefits, risks, and limitations of relevant technology to provide competent representation. Artificial intelligence highlights that reality better than any emerging tool before it.

Whether you’re a solo practitioner looking to automate administrative tasks, working for a government agency, or part of a large firm implementing AI-assisted legal research or document review, I’ll share specific practices you can adopt immediately.

If you’re attending and seeking Ohio CLE credit, please contact Jenny Wondracek at jwondracek@law.capital.edu for details.

Program description of my presentation.

The 2026 AI Legal Practice Summit will feature leading scholars, ethics experts, and seasoned practitioners. I’m looking forward to exchanging ideas, testing assumptions, and continuing a dialogue that helps ensure AI becomes a responsible partner—never a replacement—in the practice of law.

Let’s move forward together, with competence, curiosity, and care.

Learn more about the Summit at law-capital.libguides.com/2026_AI_Legal_Practice_Summit.
Register today: eventbrite.com/e/ai-legal-practice-summit-tickets-1986544900273.

I look forward to seeing you there! ⚖️

MTC: Hidden AI, GEO, and the ABA Model Rules: What Every Lawyer Needs to Know Before Their Next Client Finds Them Online ⚖️🤖

Generative AI is already talking about you, your law firm, and your practice area—even if you have never opened ChatGPT. 😳 Clients ask AI tools legal questions in natural language, and those systems answer by pulling from whatever content they trust online. For lawyers, that raises two intertwined issues: “hidden AI” inside everyday tools and the rise of Generative Engine Optimization (GEO). Together, they sit squarely in the path of your duties under the ABA Model Rules.

Legal Ethics Meets GEO and Hidden AI!

Hidden AI is everywhere in modern law practice tools. Microsoft 365 suggests text, summarizes long email threads, and drafts documents. Zoom transcribes and sometimes “enhances” meetings. Practice‑management platforms now market AI assistants that review documents, summarize matters, and even suggest next steps. Much of this AI runs quietly in the background, so it is easy to forget it exists—or to assume it is “just another feature.” Yet under ABA Model Rule 1.1, technological competence now includes understanding the benefits and risks of the technology you choose for your clients’ work. You cannot competently supervise what you do not even realize is there.

At the same time, AI tools sit on the front end of client development. When a potential client types, “How does a New Jersey divorce work and when should I hire a lawyer?” into an AI chatbot, that system gives an answer based on content it considers reliable. GEO—Generative Engine Optimization—is about making your content understandable, quotable, and safe for those systems to lift into the response. Where SEO asks, “How do I rank in Google’s blue links?”, GEO asks, “How do I become the answer AI gives when someone in my jurisdiction asks a real client question?” 🧠

Where the ABA Model Rules Fit

GEO and hidden AI are not just marketing trends; they are ethics issues.

  • Model Rule 1.1 (Competence). Comment 8 extends competence to relevant technology. ABA guidance on AI (including Formal Opinion 512) explains that lawyers must understand how AI tools work in broad strokes, their limitations, and their failure modes. If you expect clients to find you through AI‑generated answers, you should know what those systems are likely to say about your area of law and how your own content feeds into that ecosystem. ⚖️

  • Model Rule 1.6 (Confidentiality). You do not need to paste client facts into AI tools to do GEO. Good GEO content relies on hypotheticals and public law, not on confidential stories. But when you use AI inside Word, your practice platform, or a browser‑based assistant, you must know where the data goes, whether it is used for training, and whether additional client consent or stronger safeguards are required. 🔐

  • Model Rule 1.4 (Communication). When AI tools materially affect how you handle a matter—such as drafting, research, or review—you may need to explain that to clients in clear, non‑technical terms. In marketing, that same communication duty supports honest disclaimers: your GEO‑optimized articles must state that they are general information, not legal advice, and that AI summaries of your content are no substitute for a direct attorney‑client consultation.

  • Model Rules 7.1–7.3 (Advertising and Solicitation). GEO content must still be truthful and non‑misleading. You cannot let AI‑targeted content slide into promises of “guaranteed results” or vague claims of being “the best.” The fact that you are writing for AI as well as humans does not relax your duties under the advertising rules—it amplifies them, because misstatements can get replicated and amplified by AI tools. 📢

Handled thoughtfully, GEO can actually help you satisfy these rules. It encourages you to publish accurate, current, and jurisdiction‑specific explanations that educate the public and reduce confusion. Done poorly, it can push you into ethically dangerous territory where AI retells your overbroad claims to countless readers you never see.

What Is “Hidden AI” in Law Practice?

How AI Shapes Legal Ethics and Client Discovery

For many lawyers with limited or moderate tech skills, the biggest risk is not exotic AI research—it is quiet defaults.

Examples:

  • Word processors that turn on AI‑assisted drafting by default.

  • Email services that summarize conversations using third‑party models.

  • Cloud document management systems (DMS) or practice platforms that offer “smart” suggestions based on client documents.

These tools can be legitimate productivity boosts, but under Rules 1.1 and 1.6, you must understand enough about them to decide when and how to use them. That includes asking:

  • Does this feature send client content to an external provider?

  • Is that provider training on my data?

  • Can I turn that training off?

  • Is there a business or enterprise version with better confidentiality terms?

You do not need to become a software engineer. You do need to know the basic data‑flow story well enough to make an informed risk judgment and to explain that judgment if a client or disciplinary authority asks. 🙋‍♀️

Moving from SEO to GEO—Ethically

Traditional SEO still matters. You still want clear titles, descriptive meta tags, fast and mobile‑friendly pages, and basic schema markup so search engines can understand your site. GEO builds on that foundation and asks you to go one step further: write in a way that large language models can safely quote.

GEO‑friendly legal content usually has:

✅   An answer‑first summary at the top: a short, plain‑English overview of the main question.

✅   Strong jurisdiction signals: repeated references to the state, province, or country, relevant courts, and applicable statutes.

✅   Specific client questions: headings written in the same conversational style clients use (“How long do I have to sue after a car accident in Ohio?”).

✅   Trust signals: bylines, credentials, bar memberships, links to statutes and court sites, and recent update dates.

For example, if you serve veterans in disability benefits work, your GEO page might be titled “How VA Disability Claims Work for [Your State] Veterans” and open with a five‑sentence, answer‑first summary in plain English. You would clearly note that you practice in specific jurisdictions, link to the VA and governing statutes, and spell out when someone should seek legal counsel. An AI system looking for a safe, jurisdiction‑clear answer is more likely to treat that content as a reliable source.
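To make the “basic schema markup” mentioned earlier concrete, here is a minimal, purely illustrative sketch of a schema.org FAQPage block in JSON-LD, the format search engines and many AI crawlers read. The question mirrors the example heading above; the answer text is a placeholder disclaimer, not legal information:

```html
<!-- Illustrative only: a minimal schema.org FAQPage block for a GEO-friendly page. -->
<!-- The answer text is a placeholder; substitute your own disclaimered, jurisdiction-specific content. -->
<script type="application/ld+json">
{
  "@context": "https://schema.org",
  "@type": "FAQPage",
  "mainEntity": [
    {
      "@type": "Question",
      "name": "How long do I have to sue after a car accident in Ohio?",
      "acceptedAnswer": {
        "@type": "Answer",
        "text": "General information only, not legal advice: deadlines vary by claim type, so consult an Ohio attorney about your specific situation."
      }
    }
  ]
}
</script>
```

You do not need to write this yourself; most website platforms (or your web designer) can add a block like this for you. The point is simply that your client questions and disclaimered answers end up in a structure machines can reliably parse and quote.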

From an ethics standpoint, this structure helps you:

  • Stay in your lane (Rule 1.1) by emphasizing your actual jurisdiction and practice scope.

  • Provide accurate, non‑misleading information (Rules 7.1–7.3).

  • Communicate clearly about what your content is—and is not (Rule 1.4).

Practical First Steps for Non‑Techy Lawyers

You do not need to rebuild your entire site this week. A focused, incremental approach works well, especially if you are still building your tech confidence. Here is a practical sequence that maintains compliance with the Model Rules:


  1. Audit your “hidden AI.” With your IT provider or vendor reps, identify where AI is already in use in your stack: Microsoft 365, Google Workspace, Zoom, your case‑management system, research tools, and any browser extensions. Turn off any features you cannot yet explain to yourself in basic terms. 🛠️

  2. Pick one practice area to GEO‑optimize. Choose the area that drives most of your matters. List the 10 most common client questions you actually hear. Those are the headings for your first GEO page.

  3. Write answer‑first, jurisdiction‑specific content. Use short paragraphs and plain language, and embed jurisdiction cues and citations to official sources. Include clear disclaimers about general information, no legal advice, and the need for a consultation.

  4. Refresh and expand over time. Revisit that page whenever law or practice changes, add FAQs, and link related posts. This keeps content current for both search engines and AI tools.

  5. Document your choices. If you decide to use specific AI tools in drafting content or in client work, note your reasoning: confidentiality safeguards, vendor terms, and how you supervise outputs. This helps show that you approached AI use thoughtfully under Rules 1.1, 1.4, 1.6, 5.1, and 5.3. 📚

The core message is simple: you do not have to master every technical detail to be a tech‑savvy lawyer, but you do have to stop pretending that AI is optional. Your clients are already using it; your vendors are already embedding it; and AI systems are already shaping how clients find you. Taking a deliberate, ethics‑aware approach to hidden AI and GEO is no longer extra credit—it is part of protecting your clients, your reputation, and your license. 🚀⚖️

MTC

Word(s) of the Week: Understanding the Evolution of Artificial Intelligence: From AI to Generative AI to AI LLMs — and Why It Matters for Today’s Legal Professionals ⚖️🤖

Lawyers need to understand what AI LLMs can and can’t do!

Artificial Intelligence (AI) is transforming the legal industry, yet confusion still exists about what different terms mean — and why they matter. Terms like AI, Generative AI, and AI LLM (Large Language Model) are often used interchangeably, but they describe very different levels of capability. Understanding these distinctions is essential for attorneys navigating new professional responsibilities and compliance expectations under the ABA Model Rules. Let’s break down what each term means, why the progression matters, and what the next step—AI LLMs—means for legal practice.

AI: The Foundation of Machine Intelligence

Traditional AI refers to systems designed to perform tasks that require human-like intelligence. These tasks include pattern recognition, data sorting, predictive analytics, and document classification. For example, early e-discovery tools that identify relevant documents in large datasets use AI algorithms to flag patterns.

In legal practice, this type of AI boosted efficiency but remained narrow in function. Lawyers controlled the inputs and closely supervised the outcomes. Under ABA Model Rule 1.1 (Competence), using such tools responsibly required understanding their purpose and reliability, not their coding. Attorneys had to ensure that outputs were accurate and ethically sound.

Generative AI: Creating, Not Just Sorting

As technology evolved, so did AI’s capabilities. Generative AI differs from basic AI because it creates content instead of just classifying it. These models generate text, images, code, and even legal-style drafts based on training data. Tools like ChatGPT, which fall under this category, can draft letters, summarize cases, or brainstorm argument strategies.

Generative AI introduces profound efficiency benefits. A solo practitioner, for example, can use AI to prepare first drafts of client letters or marketing content quickly. The risk, however, is accuracy. Because these models generate content probabilistically, they can “hallucinate” — producing incorrect or fabricated information that sounds authoritative.

Generative AI is great at creating content - just watch out for hallucinations!

Under ABA Model Rule 5.3 (Responsibilities Regarding Nonlawyer Assistance), lawyers must exercise oversight over tools like these, since they function much like nonlawyer assistants. Lawyers must verify all AI-generated output before use, maintaining professional independence and ethical standards.

AI LLMs: The Next Step in Practice Intelligence

AI LLMs — large language models — represent the next and most transformative step. Unlike earlier forms of AI, LLMs process massive datasets and can understand nuance, intent, and context in human language. This allows them to perform legal research, summarize filings, analyze contracts, and even simulate case strategies.

The key difference is scale and sophistication. LLMs learn not only from pre-set instructions but also by understanding the relationships between words and concepts. This contextual learning enables attorneys to interact with these systems conversationally. For example, an LLM-based research assistant can respond to a query such as, “Find Illinois cases interpreting non-compete clauses after 2023,” and then produce accurate summaries or citations.

Yet with great capability comes heightened responsibility. ABA Model Rule 1.6 (Confidentiality) applies when attorneys input client data into online tools. If the platform is public or cloud-based, lawyers must assess data handling, encryption, and privacy policies. Additionally, per Model Rule 1.1, competence now includes understanding how LLMs generate and manage information.

Why the Distinction Matters

The distinction between AI, Generative AI, and AI LLMs matters because it affects how attorneys use the technology within ethical, secure boundaries. A misstep in understanding can result in breached confidentiality, inaccurate filings, or ethical violations.

✅ AI assists.
✅ Generative AI creates.
✅ AI LLMs reason and interact.

In practical terms, lawyers need to update policies, train staff, and disclose use of these tools when appropriate. Law firms that adopt LLM-based platforms responsibly will gain a competitive advantage through increased efficiency and improved client service — without compromising professional duties.

Looking Ahead

Lawyers who use AI LLMs can save hours of menial work - always check your work!

AI LLMs are not replacing lawyers; they are amplifying their insight and reach. Attorneys who stay informed and practice technological competence will thrive in this next phase of digital legal service. The evolution from AI to Generative AI to LLMs represents not just a technological shift, but a professional one — requiring careful balance between innovation, ethics, and human judgment. ⚖️

🎙️ Ep. #134 — AI-Powered Legal Writing: How BriefCatch Helps Lawyers Write Smarter, Not Harder with Ross Guberman.

My next guest is Ross Guberman — founder of BriefCatch, nationally recognized legal writing trainer, and author of several acclaimed books on persuasive legal writing. Ross has trained thousands of lawyers and judges across the country. After years of teaching the craft of legal writing, he channeled that expertise into building BriefCatch — a purpose-built AI writing tool that lives right inside Microsoft Word and Outlook, scanning your legal documents using roughly 17,000 rules to help you write cleaner, sharper, and more persuasive work product. Whether you're a solo practitioner or part of a large firm, Ross brings insights that are immediately practical — no matter your tech comfort level. 🚀

Join Ross Guberman and me as we discuss the following three questions and more!

  1. 🏆 From your vantage point — having trained thousands of lawyers and judges and now running BriefCatch — what are the top three ways lawyers can leverage AI-driven writing tools like BriefCatch inside Word and Outlook to measurably improve the quality and persuasiveness of their briefs without sacrificing their own voice or judgment?

  2. ⚖️ For a tech-curious but time-strapped practitioner, what are the top three everyday workflows beyond traditional brief writing where lawyers are leaving the most value on the table by not using tools like BriefCatch and other legal tech?

  3. 🔮 Looking ahead five years, what are the top three technology competencies every lawyer must develop — not just "nice to have" skills — to collaborate effectively with AI, stay ethically compliant, and turn technology into a genuine competitive advantage rather than a source of risk?

In our conversation, we cover the following:

  • [00:30] 💻 Ross's current tech setup — MacBook Pro M4 Max, macOS, and iPhone 16

  • [01:30] 🔄 Why keeping your OS updated matters — security and performance

  • [03:00] 🖥️ External monitors, portable screens, and traveling with tech

  • [07:00] 📱 Using your iPad as an external monitor via Apple Sidecar

  • [08:30] 🎪 Bonus Question #1 — Ross’s experience in the ABA TECHSHOW Startup Alley

  • [11:00] ✍️ Question #1 — Top 3 ways to use AI writing tools to improve briefs without losing your voice

  • [12:00] 🧑‍⚖️ Using AI to role-play as a skeptical judge or opposing counsel to pressure-test your brief

  • [13:00] 📊 Transforming fact sections into timelines and case law into comparison charts

  • [14:00] 📝 Using AI as a self-check for hyperbole, redundancy, and tone

  • [15:30] 📲 How judges now read briefs on iPads — and what that means for your writing style

  • [17:00] 📂 Using TextExpander to store and deploy your best prompts

  • [18:30] 🎙️ Google NotebookLM as a learning and podcast creation tool

  • [20:00] 🧩 Bonus Question #2 — What is BriefCatch and why use purpose-built legal AI over general tools?

  • [21:00] 🚀 The origin story of BriefCatch — from side hustle in 2018 to funded legal tech startup

  • [22:30] ⚙️ Workflow, ethics rules, and attorney-specific conventions — why legal-specific AI wins

  • [24:30] 📋 Question #2 — Top 3 underused everyday workflows for lawyers using AI

  • [25:00] 📧 Using AI with your email to surface unanswered messages and unresolved threads

  • [25:45] 📁 Mining your past work product for patterns, style, and reusable language

  • [26:30] 📅 Having AI review your calendar and correspondence for efficiency insights

  • [27:00] 🔒 Data privacy, security settings, and the risks of default AI configurations

  • [28:30] 🏛️ New York State's data protection approach and what more states should do

  • [29:30] 🤖 Question #3 — Top 3 technology competencies every lawyer must master in the next five years

  • [30:00] 🧠 Understanding how LLMs actually "think" — reading the AI's reasoning chain

  • [30:45] 🖊️ Making AI output sound like you — the human voice in an AI-generated world

  • [31:30] 🔧 Integrating AI into your daily workflow while preserving human judgment

  • [32:00] 👏 Closing thoughts and where to find Ross and BriefCatch

RESOURCES

🔗 Connect with Ross Guberman

  • 📧 Email: ross@briefcatch.com

  • 🌐 Website: https://www.briefcatch.com

  • 💼 LinkedIn: Search "Ross Guberman" on LinkedIn at https://www.linkedin.com

📌 Mentioned in the Episode

🖥️ Hardware Mentioned in the Conversation

☁️ Software & Cloud Services Mentioned in the Conversation

BOLO: Gone (Almost) Phishin’: What a Sophisticated Apple Scam Teaches Lawyers About Cybersecurity, Client Confidentiality, and ABA Ethical Duties 🚨📱

Lawyers face cybersecurity risks from a sophisticated Apple phishing scam!

A recent real‑world phishing attempt against a well‑known technology CEO offers an important warning for lawyers and law firms about how modern scams now convincingly mimic “legitimate” security workflows. This attack did not rely on laughable grammar, obvious fake domains, or clumsy social engineering; instead, it weaponized Apple’s genuine password‑reset system, real support case IDs, and realistic phone support to try to compromise the victim’s Apple ID. For lawyers who increasingly rely on mobile devices, cloud services, and multi‑factor authentication for client communications, this kind of scam is not hypothetical—it's a direct threat to client confidentiality and professional responsibility.

In the incident, the victim’s Apple Watch, iPhone, and Mac all began displaying unexpected prompts to reset the Apple ID password, despite the user running Apple’s Lockdown Mode on all devices. The prompts were not generated by malware on the devices, but by an attacker repeatedly triggering Apple’s legitimate password reset flow, thereby flooding the user with authentic-looking notifications. From the perspective of a busy lawyer, such prompts might be dismissed as an annoyance or, worse, acted upon in haste. Either reaction, without careful verification, can create risk. 📲

The scam escalated when the attacker called, posing as “Alexander from Apple Support,” referencing a real Apple support case that they had opened themselves by impersonating the victim. Because Apple’s own systems generated a valid case ID and corresponding emails, the communications appeared fully authentic; no spam filter or “phishing awareness” toolbar would have flagged them as suspicious. The caller began with correct, even prudent, security advice—check your account, verify nothing has changed, consider updating your password—which is precisely the kind of guidance many lawyers expect from legitimate support channels. This blend of real security language with a fraudulent goal is what makes the scam so dangerous. 🧠

Phishing Lessons for Lawyers Using Apple Devices and Cloud Tools!

The critical moment came when “Alexander” sent a text with a link to “audit-apple.com,” a pixel‑perfect imitation of Apple’s site that displayed the real case ID and even a fake transcript of the attackers’ prior “chat” with Apple. At the bottom of the page sat a “Sign in with Apple” button, intended to harvest the victim’s credentials under the guise of closing a fraudulent request. Only after poking at the site and noticing that any case ID produced the same result did the victim confirm it was a scam and confront the attacker. Many lawyers, particularly those with only moderate comfort with technology, might not test the site this way and could be persuaded by the case ID and realistic presentation. 🕵️‍♂️

For legal professionals, the ethical implications are significant. ABA Model Rule 1.1 on competence requires lawyers to understand the benefits and risks associated with relevant technology, including the ability to recognize and respond to sophisticated phishing. The duty of confidentiality under Rule 1.6 requires taking reasonable steps to prevent unauthorized access to client information, which includes protecting accounts and devices that store or access client files, email, and messaging. If a lawyer’s Apple ID or similar account is compromised, attackers may gain access to privileged communications, document repositories, calendar entries, and even secure messaging apps that sync via the device.

Model Rule 5.3 extends these obligations to nonlawyer assistants, including staff and outside vendors who may handle client data or access firm systems. If partners and associates are vulnerable to such scams, staff and contractors are as well; firm leadership must implement policies, training, and incident‑response procedures that recognize the new generation of phishing where everything “looks right” until you inspect the URL or underlying flow. This aligns with recognized best practices: anti‑phishing training, simulated phishing exercises, and clear escalation paths for suspicious security communications.

Key practical lessons for lawyers from this incident include:

  • Do not approve unexpected password‑reset prompts; instead, go directly to your device or account settings via a known‑good path (e.g., Settings → Apple ID on your device).

  • Treat unsolicited “support” calls with extreme skepticism, even when they reference real case IDs or recent activity; major vendors like Apple will not call you out of the blue to fix a security issue.

  • Always verify the URL before entering credentials; for Apple, support should live on apple.com or getsupport.apple.com, not look‑alike domains.

  • Establish a firm‑wide rule: no one—IT, vendors, or support—will ever ask for passwords, one‑time codes, or sign‑in via a link sent in an unsolicited message; any such request must be verified through a separate, trusted channel.

Apple Scam Warning for Lawyers Protecting Client Confidentiality

From an ethical‑risk perspective, a successful attack of this kind could trigger duties to notify clients, insurers, and regulators, depending on your jurisdiction’s breach‑notification regime and professional‑conduct rules. Even an “almost‑breach,” like the one described in this article, is a valuable opportunity for firms to revisit incident‑response plans, document what would happen if a lawyer’s Apple ID or smartphone were compromised, and rehearse the steps for containing damage. Doing so not only supports compliance with Model Rules 1.1 and 1.6 but also demonstrates to clients and courts that the firm takes cybersecurity governance seriously. ✅

The story also underscores that even highly technical users can be momentarily convinced by a well‑crafted scam, which should encourage humility rather than embarrassment among lawyers who worry they are “not technical enough.” The correct response is not shame, but systems: layered security controls, clear verification procedures, and regular training that turn individual vigilance into institutional resilience. Ultimately, as phishing attacks become more sophisticated and exploit real security workflows, lawyers must elevate their cybersecurity awareness to meet their ethical obligations and preserve the trust at the core of the attorney‑client relationship. 💼