MTC: Are Lawyers Really Ready for a Wallet‑Free Future? Digital Wallets, ABA Ethics, and the Reality of Going Fully Cashless 💳⚖️

Tech-savvy lawyers should not leave their physical wallets at home, but they can probably pare them down.

When previous podcast guest David Sparks over at MacSparky shared his recent post about accidentally going out without his physical wallet—and still making it through the day just fine on his iPhone and Apple Wallet—it captured a quiet shift many of us in the legal profession are grappling with. He walked into his appointment armed only with a digital ID, digital insurance card, and Apple Pay, and everything worked. For a growing number of professionals, that is the new normal. The question for lawyers is more specific: not can we go wallet‑free, but should we—ethically, practically, and professionally—given our obligations under the ABA Model Rules?

Digital wallets are no longer niche tools reserved for tech enthusiasts. Apple Wallet and similar platforms have matured into robust ecosystems that can store payment cards, IDs, insurance cards, transit passes, and even car keys. They sit at the intersection of convenience, security, and risk. As attorneys, we have to examine that intersection with greater rigor than the average consumer, because our technology choices are framed by duties of competence, confidentiality, and client service.

The promise of a wallet‑free practice

On paper, the case for a full digital wallet is compelling. Digital payments can reduce friction at the courthouse café, client lunches, and bar events. Digital IDs eliminate worries about misplacing a physical card. Many platforms add layers of biometric security that traditional wallets can’t match. David notes that Apple Wallet has “been quietly getting better for years,” allowing storage of physical card numbers behind Face ID and making peer‑to‑peer payments a tap away. For a solo or small‑firm lawyer, that friction reduction compounds over time into real efficiency.

From a malpractice‑avoidance standpoint, a digital wallet can be safer than a billfold. Losing a traditional wallet means scrambling to cancel credit cards, monitoring for identity theft, and possibly dealing with unauthorized use of your bar ID or access cards. A lost phone, by contrast, can be located, remotely wiped, or locked with strong authentication. Properly configured, it can reduce risk rather than increase it.

This is where ABA Model Rule 1.1 on competence, particularly Comment 8, becomes relevant. The Comment notes that competent representation includes understanding “the benefits and risks associated with relevant technology.” A digital wallet is very much “relevant technology” for a modern practitioner. Choosing not to understand or use it, especially when it offers better security and traceability than analog methods, may itself become a competence question as the bar’s expectations evolve.

The gaps: cash, IDs, and access to justice

There are plenty of reasons not to go “cashless” when leaving home or the office.

Still, David’s hesitation—“there’s a part of me that still feels compelled to carry a small wallet with my driver’s license in it”—should resonate with lawyers. There are pockets of our professional lives where the ecosystem is not ready, and those pockets matter.

First, cash. Many lawyers still tip courthouse staff, parking attendants, baristas near the courthouse, and others in cash—including, in my case, with $2 bills (yes, they are still produced, still accepted, and available at many banks across the U.S., at least as of this posting; I almost always get an excited smile when I tip my barista with one). Cash remains the lowest‑friction, most universally accepted “protocol” for small‑scale human interactions. Refusing to carry any cash at all can put you in awkward social and professional situations, especially in older courthouses or local establishments that either do not take cards or resent micro‑transactions by card. For those committed to cash tipping as a personal or professional habit, a purely digital wallet is not yet a substitute.

Second, physical IDs. While TSA and some states are piloting and accepting digital IDs, acceptance is not universal, and the rules are in flux. David notes he has a state digital ID that “shows up nicely” in Apple Wallet. That is great—until you encounter an agency, judge, clerk, or officer who simply will not accept it. Not all jurisdictions recognize mobile driver’s licenses or digital IDs, and some procedures (e.g., certain filings or in‑person notarizations) still presume a physical, inspectable card. The risk is not hypothetical: show up with the wrong form of ID for a flight or a court security checkpoint, and you may face delay, additional fees, or outright denial of entry.

FROM TSA WEBSITE - “If you are unable to provide the required acceptable ID, such as a passport or REAL ID, you can pay a $45 fee to use TSA ConfirmID. TSA will then attempt to verify your identity so you can go through security; however, there is no guarantee TSA can do so.”

✈️ 🌎 ‼️

For lawyers, this is not just an inconvenience—it is a competence and diligence issue under Model Rules 1.1 and 1.3. If your failure to carry an accepted ID means you miss a hearing, delay a filing, or cannot visit a client, you have a professional problem, not just a tech annoyance. Likewise, local court rules and security policies may require a specific bar card or government‑issued ID to enter restricted areas. A digital ID on your phone will not help if the sheriff’s deputy at the door has not been trained or authorized to accept it.

Third, connectivity. A digital wallet that is fully dependent on live internet access is a fragile tool in old courthouses with thick stone walls, in rural jurisdictions, or during emergencies. Many modern digital wallets do allow offline transactions at NFC terminals using stored tokens, but not all. If your payment method, ID, or membership pass depends on a cloud verification step and you are in a dead zone—or your battery dies—you effectively have no wallet. Lawyers who rely on public transit, rideshares, or mobile office setups need to consider this in contingency planning, particularly when punctuality is essential.

Digital wallets and legal ethics

From an ethics perspective, digital wallets intersect with several core duties.

Under Model Rule 1.6, protecting client confidentiality extends to how you pay for and manage client‑related expenses. If you are using peer‑to‑peer payment apps or storing client‑related account details in a digital wallet, you must understand their privacy and data‑sharing practices. Some services expose transaction histories, social feeds, or metadata that could inadvertently reveal client relationships or matter details. Configuring strict privacy settings and separating personal from firm accounts is not optional; it is part of your duty of confidentiality.

Model Rule 1.15 on safekeeping property also comes into play if you ever use digital tools to handle client funds, reimbursements, or settlement distributions. While most bars still require traditional trust accounts and closely regulate payment processors, the trend toward digital payments will continue. Using any digital payment or wallet solution around client funds requires careful vetting, written policies, and—ideally—consultation with your malpractice carrier and bar ethics guidance.

Finally, Model Rule 5.3 on responsibilities regarding nonlawyer assistance extends to IT providers and wallet platforms. If your firm relies on third‑party providers to manage mobile device management (MDM), security, or payment integrations, you must make reasonable efforts to ensure their conduct aligns with your professional obligations. Managing digital wallets on firm‑owned or BYOD devices should be governed by a clear policy that addresses encryption, remote wipe, lock‑screen settings, and acceptable use.

Practical guidance: a hybrid, not a cliff

As advanced as our digital wallets are, legal professionals should carry a combination of digital and physical identification, payment methods, and cash!

Given these realities, are we “truly there” yet for lawyers to go fully wallet‑free? Not quite. For most practitioners, the prudent path is a hybrid approach:

  • Carry a slim physical wallet with a government‑issued ID, bar card (if used locally), a minimal backup payment card, and a small amount of cash for tipping and edge cases.

  • Use a digital wallet as your primary payment and convenience layer, especially in environments where it is well‑supported and secure.

  • Confirm, in advance, what IDs your courthouses, correctional facilities, and agencies accept, and do not assume your digital ID will suffice.

  • Harden your digital wallet: enable strong biometrics, ensure a reputable MDM or security solution manages any firm devices, and separate personal from professional payment flows where possible.

This hybrid approach aligns with Model Rule 1.1’s requirement to understand and responsibly adopt relevant technology while honoring the practical demands of courtroom work and client service. It allows you to benefit from the security and efficiency of digital wallets without betting your professional obligations on the most fragile parts of the ecosystem: universal acceptance and ubiquitous connectivity.

David ends his reflection by asking whether he will ever “truly go out knowingly wallet‑free” and whether he is alone in his hesitation. Lawyers should feel no pressure to be first in line to abandon physical wallets entirely. Our job is to advocate, counsel, and appear—on time, properly identified, and fully prepared. That may mean, for the foreseeable future, living comfortably in both worlds: with a well‑tuned digital wallet in your hand and a minimal, carefully curated physical wallet in your pocket.

MTC

⭐ First Five-Star Amazon Review for “The Lawyer’s Guide to Podcasting” – Why Tech-Savvy Lawyers Should Care About ABA Ethics, Client Trust, and Smart Marketing 🎙️⚖️

“The Lawyer’s Guide to Podcasting” by your favorite blogger/podcaster just earned its first five-star Amazon review, and it’s a milestone worth your attention. 🎉📘 The reviewer highlights what many of us in legal tech have been saying: podcasting is no longer a fringe hobby; it is a strategic, ethics-aware marketing channel for modern law practice. 🎙️

For lawyers with limited to moderate tech skills, this book demystifies microphones, workflows, and publishing tools without assuming you want to become an engineer. Instead, it walks you through practical steps to share your expertise in a format today’s clients already trust—long-form, authentic audio. 🔊

From a professional responsibility perspective, the guidance aligns with ABA Model Rule 1.1 on technology competence and Model Rule 1.6 on confidentiality by emphasizing the use of secure platforms, thoughtful content planning, and careful handling of client-identifying details. The book reinforces that podcasting can showcase your substantive knowledge while staying within the guardrails of Model Rule 7.1, avoiding misleading claims about your services. ⚖️

QR Code for Amazon book link

The first five-star review underlines two themes: listeners want real conversations, and they quickly recognize when a lawyer respects both the audience’s time and the profession’s ethical duties. That is exactly the posture this book encourages—credible, compliant, and client-centered. 🌟

If you are ready to build authority, differentiate your practice, and satisfy your tech-competence obligations without drowning in jargon, now is the perfect time to get your copy of “The Lawyer’s Guide to Podcasting” on Amazon and start planning your first ethically sound episode. 🚀

📌 Too Busy to Read This Week’s Editorial: “Lawyers and AI Oversight: What the VA’s Patient Safety Warning Teaches About Ethical Law Firm Technology Use!” ⚖️🤖

Join us for an AI-powered deep dive into the ethical challenges facing legal professionals in the age of generative AI. 🤖 In this episode, we discuss our February 16, 2026, editorial, “Lawyers and AI Oversight: What the VA’s Patient Safety Warning Teaches About Ethical Law Firm Technology Use! ⚖️🤖” and explore why treating AI-generated drafts as hypotheses—not answers—is quickly becoming a survival skill for law firms of every size. We connect a real-world AI failure risk at the Department of Veterans Affairs to the everyday ways lawyers are using tools like chatbots, and we translate ABA Model Rules into practical oversight steps any practitioner can implement without becoming a programmer.

In our conversation, we cover the following:

  • 00:00:00 – Why conversations about the future of law default to Silicon Valley, and why that’s a problem ⚖️

  • 00:01:00 – How a crisis at the U.S. Department of Veterans Affairs became a “mirror” for the legal profession 🩺➡️⚖️

  • 00:03:00 – “Speed without governance”: what the VA Inspector General actually warned about, and why it matters to your practice

  • 00:04:00 – From patient safety risk to client safety and justice risk: the shared AI failure pattern in healthcare and law

  • 00:06:00 – Shadow AI in law firms: staff “just trying out” public chatbots on live matters and the unseen risk this creates

  • 00:07:00 – Why not tracking hallucinations, data leakage, or bias turns risk management into wishful thinking

  • 00:08:00 – Applying existing ABA Model Rules (1.1, 1.6, 5.1, 5.2, and 5.3) directly to AI use in legal practice

  • 00:09:00 – Competence in the age of AI: why “I’m not a tech person” is no longer a safe answer 🧠

  • 00:09:30 – Confidentiality and public chatbots: how you can silently lose privilege by pasting client data into a text box

  • 00:10:30 – Supervision duties: why partners cannot safely claim ignorance of how their teams use AI

  • 00:11:00 – Candor to tribunals: the real ethics problem behind AI-generated fake cases and citations

  • 00:12:00 – From slogan to system: why “meaningful human engagement” must be operationalized, not just admired 

  • 00:12:30 – The key mindset shift: treating AI-assisted drafts as hypotheses, not answers 🧪

  • 00:13:00 – What reasonable human oversight looks like in practice: citations, quotes, and legal conclusions under stress test

  • 00:14:00 – You don’t need to be a computer scientist: the essential due diligence questions every lawyer can ask about AI 

  • 00:15:00 – Risk mapping: distinguishing administrative AI use from “safety-critical” lawyering tasks

  • 00:16:00 – High-stakes matters (freedom, immigration, finances, benefits, licenses) and heightened AI safeguards

  • 00:16:45 – Practical guardrails: access controls, narrow scoping, and periodic quality audits for AI use

  • 00:17:00 – Why governance is not “just for BigLaw” and how solos can implement checklists and simple documentation 📋

  • 00:17:45 – Updating engagement letters and talking to clients about AI use in their matters

  • 00:18:00 – Redefining the “human touch” as the safety mechanism that makes AI ethically usable at all 🤝

  • 00:19:00 – AI as power tool: why lawyers must remain the “captain of the ship” even when AI drafts at lightning speed 🚢

  • 00:20:00 – Rethinking value: if AI creates the first draft, what exactly are clients paying lawyers for?

  • 00:20:30 – Are we ready to bill for judgment, oversight, and safety instead of pure production time?

  • 00:21:00 – Final takeaways: building a practice where human judgment still has the final word over AI

🎙️ My Law School Library Adds The Lawyer’s Guide to Podcasting to Empower Ethical, Tech-Savvy Attorneys ⚖️

https://law-capital.libguides.com/SpecialCollections/NewBooks

I’m thrilled to share that my alma mater, Capital University Law School, has added my book, The Lawyer’s Guide to Podcasting, to its Law Library Special Collections. 🎉📚 Seeing this guide on the same shelves where I learned to think like a lawyer underscores how central ethical technology use has become to modern advocacy. 🎙️ Written for attorneys with limited to moderate tech skills, it walks readers through planning, recording, and promoting a law‑firm podcast while honoring ABA Model Rules on technology competence, confidentiality, and attorney advertising, helping you communicate confidently, credibly, and compliantly. ⚖️🚀

You can pick up your copy on Amazon today!

MTC: Lawyers and AI Oversight: What the VA’s Patient Safety Warning Teaches About Ethical Law Firm Technology Use! ⚖️🤖

Human-in-the-loop is the point: Effective oversight happens where AI meets care—aligning clinical judgment, privacy, and compliance with real-world workflows.

The Department of Veterans Affairs’ experience with generative AI is not a distant government problem; it is a mirror held up to every law firm experimenting with AI tools for drafting, research, and client communication. I recently listened to Terry Gerton of the Federal News Network interview Charyl Mason, Inspector General of the Department of Veterans Affairs, “VA rolled out new AI tools quickly, but without a system to catch mistakes, patient safety is on the line,” and gained some insight into how lawyers can learn from this perhaps hastily implemented AI program. VA clinicians are using AI chatbots to document visits and support clinical decisions, yet a federal watchdog has warned that there is no formal mechanism to identify, track, or resolve AI‑related risks—a “potential patient safety risk” created by speed without governance. In law, that same pattern translates into “potential client safety and justice risk,” because the core failure is identical: deploying powerful systems without a structured way to catch and correct their mistakes.

The oversight gap at the VA is striking. There is no standardized process for reporting AI‑related concerns, no feedback loop to detect patterns, and no clearly assigned responsibility for coordinating safety responses across the organization. Clinicians may have helpful tools, but the institution lacks the governance architecture that turns “helpful” into “reliably safe.” When law firms license AI research platforms, enable generative tools in email and document systems, or encourage staff to “try out” chatbots on live matters without written policies, risk registers, or escalation paths, they recreate that same governance vacuum. If no one measures hallucinations, data leakage, or embedded bias in outputs, risk management has given way to wishful thinking.

Existing ethics rules already tell us why that is unacceptable. Under ABA Model Rule 1.1, competence now includes understanding the capabilities and limitations of AI tools used in practice, or associating with someone who does. Model Rule 1.6 requires lawyers to critically evaluate what client information is fed into self‑learning systems and whether informed consent is required, particularly when providers reuse inputs for training. Model Rules 5.1, 5.2, and 5.3 extend these obligations across partners, supervising lawyers, and non‑lawyer staff: if a supervised lawyer or paraprofessional relies on AI in a way that undermines client protection, firm leadership cannot plausibly claim ignorance. And rules on candor to tribunals make clear that “the AI drafted it” is never a defense to filing inaccurate or fictitious authority.

Explaining the algorithm to decision-makers: Oversight means making AI risks understandable to judges, boards, and the public—clearly and credibly.

What the VA story adds is a vivid reminder that effective AI oversight is a system, not a slogan. The inspector general emphasized that AI can be “a helpful tool” only if it is paired with meaningful human engagement: defined review processes, clear routes for reporting concerns, and institutional learning from near misses. For law practice, that points directly toward structured workflows. AI‑assisted drafts should be treated as hypotheses, not answers. Reasonable human oversight includes verifying citations, checking quotations against original sources, stress‑testing legal conclusions, and documenting that review—especially in high‑stakes matters involving liberty, benefits, regulatory exposure, or professional discipline.
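To make “AI drafts as hypotheses” concrete, here is a minimal sketch of one checklist step: flagging citations in a draft that have not yet been independently verified. The citation pattern and the `unverified_citations` helper are illustrative assumptions of mine, not a real Bluebook parser or any tool mentioned in the episode.

```python
import re

# Assumption: a deliberately simplified reporter-citation pattern for
# illustration only; real citation formats are far more varied.
CITE = re.compile(r"\d+\s+(?:U\.S\.|F\.\dd|F\. Supp\.)\s+\d+")

def unverified_citations(draft: str, verified: set) -> list:
    """Return citations found in the draft that are not in the verified set."""
    return [c for c in CITE.findall(draft) if c not in verified]

draft = "See Smith v. Jones, 123 U.S. 456, and Doe v. Roe, 789 F.3d 101."
checked = {"123 U.S. 456"}  # citations already pulled and read by a human
print(unverified_citations(draft, checked))  # flags the unchecked citation
```

The point is not the script itself but the workflow it represents: every flagged citation gets pulled in a trusted research service and checked off before sign‑off, so verification is explicit and documented rather than left to memory.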

For lawyers with limited to moderate tech skills, this should not be discouraging; done correctly, AI governance actually makes technology more approachable. You do not need to understand model weights or training architectures to ask practical questions: What data does this tool see? When has it been wrong in the past? Who is responsible for catching those errors before they reach a client, a court, or an opposing party? Thoughtful prompts, standardized checklists for reviewing AI output, and clear sign‑off requirements are all well within reach of every practitioner.

The VA’s experience also highlights the importance of mapping AI uses and classifying their risk. In health care, certain AI use cases are obviously safety‑critical; in law, the parallel category includes anything that could affect a person’s freedom, immigration status, financial security, public benefits, or professional license. Those use cases merit heightened safeguards: tighter access control, narrower scoping of AI tasks, periodic sampling of outputs for quality, and specific training for the lawyers who use them. Importantly, this is not a “big‑law only” discipline. Solo and small‑firm lawyers can implement proportionate governance with simple written policies, matter‑level notes showing how AI was used, and explicit conversations with clients where appropriate.

Critically, AI does not dilute core professional responsibility. If a generative system inserts fictitious cases into a brief or subtly mischaracterizes a statute, the duty of candor and competence still rests squarely on the attorney who signs the work product. The VA continues to hold clinicians responsible for patient care decisions, even when AI is used as a support tool; the law should be no different. That reality should inform how lawyers describe AI use in engagement letters, how they supervise junior lawyers and staff, and how they respond when AI‑related concerns arise. In some situations, meeting ethical duties may require forthright client communication, corrective filings, and revisions to internal policies.

AI oversight starts at the desk: Lawyers must be able to interrogate model outputs, data quality, and risk signals—before technology impacts client matters.

The practical lesson from the VA’s AI warning is straightforward. The “human touch” in legal technology is not a nostalgic ideal; it is the safety mechanism that makes AI ethically usable at all. Lawyers who embrace AI while investing in governance—policies, training, and oversight calibrated to risk—will be best positioned to align with the ABA’s evolving guidance, satisfy courts and regulators, and preserve hard‑earned client trust. Those who treat AI as a magic upgrade and skip the hard work of oversight are, knowingly or not, accepting that their clients may become the test cases that reveal where the system fails. In a profession grounded in judgment, the real innovation is not adopting AI; it is designing a practice where human judgment still has the final word.

MTC

Word of the Week: Deepfakes: How Lawyers Can Spot Fake Digital Evidence and Avoid ABA Model Rule Violations ⚖️

A tech-savvy lawyer needs to be able to spot deepfakes before they lead to courtroom ethics violations!

“Deepfakes” are AI‑generated or heavily manipulated audio, video, or images that convincingly depict people saying or doing things that never happened.🧠 They are moving from internet novelty to everyday litigation risk, especially as parties try to slip fabricated “evidence” into the record.📹

Recent cases and commentary show courts will not treat deepfakes as harmless tech problems. Judges have dismissed actions outright and imposed severe sanctions when parties submit AI‑generated or altered media, because such evidence attacks the integrity of the judicial process itself.⚖️ At the same time, courts are wary of lawyers who cry “deepfake” without real support, since baseless challenges can look like gamesmanship rather than genuine concern about authenticity.

For practicing lawyers, deepfakes are first and foremost a professional responsibility issue. ABA Model Rule 1.1 (Competence) now clearly includes a duty to understand the benefits and risks of relevant technology, which includes generative AI tools that create or detect deepfakes. You do not need to be an engineer, but you should recognize common red flags, know when to request native files or metadata, and understand when to bring in a qualified forensic expert.

Deepfakes in Litigation: Detect Fake Evidence, Protect Your License!

Deepfakes also implicate Model Rule 3.3 (Candor to the tribunal) and Model Rule 3.4 (Fairness to opposing party and counsel). If you knowingly offer manipulated media, or ignore obvious signs of fabrication in your client’s “evidence,” you risk presenting false material to the court and obstructing access to truthful proof. Courts have made clear that submitting fake digital evidence can justify terminating sanctions, fee shifting, and referrals for disciplinary action.

Model Rule 8.4(c), which prohibits conduct involving dishonesty, fraud, deceit, or misrepresentation, sits in the background of every deepfake decision. A lawyer who helps create, weaponize, or strategically “look away” from deepfake evidence is not just making a discovery mistake; they may be engaging in professional misconduct. Likewise, a lawyer who recklessly accuses an opponent of using deepfakes without factual grounding risks violating duties of candor and professionalism.

Practically, you can start protecting your clients with a few repeatable steps. Ask early in the case what digital media exists, how it was created, and who controlled the devices or accounts.🔍 Build authentication into your discovery plan, including requests for original files, device logs, and platform records that can help confirm provenance. When the stakes justify it, consult a forensic expert rather than relying on “gut feel” about whether a recording “looks real.”
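One concrete, low‑tech step toward documenting provenance is recording a cryptographic fingerprint (hash) of each media file the moment it is received; any later alteration, however subtle, produces a different digest. The sketch below uses only Python’s standard library; the file name and the idea of logging the digest in the matter file are my illustrative assumptions, not a prescribed forensic protocol.

```python
import hashlib

def fingerprint(path: str) -> str:
    """Return the SHA-256 hex digest of a file, read in 64 KB chunks."""
    h = hashlib.sha256()
    with open(path, "rb") as f:
        for chunk in iter(lambda: f.read(65536), b""):
            h.update(chunk)
    return h.hexdigest()

# Illustrative use: here we create a stand-in file; in practice you would
# fingerprint the original media exactly as produced in discovery.
with open("received_clip.mp4", "wb") as f:
    f.write(b"original recording bytes")
print("received_clip.mp4", fingerprint("received_clip.mp4"))
```

Record the digest, who computed it, and when, alongside the file; if authenticity is later challenged, re‑hashing shows whether the file is byte‑for‑byte the object originally received. This complements, and does not replace, examination by a qualified forensic expert.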

Lawyers need to know about deepfakes, metadata, and ABA ethics rules!

Finally, talk to clients about deepfakes before they become a problem. Explain that altering media or using AI to “clean up” evidence is dangerous, even if they believe they are only fixing quality.📲 Remind them that courts are increasingly sophisticated about AI and that discovery misconduct in this area can destroy otherwise strong cases. Treat deepfakes as another routine topic in your litigation checklist, alongside spoliation and privilege, and you will be better prepared for the next “too good to be true” video that lands in your inbox.

ANNOUNCEMENT: My Book, “The Lawyer’s Guide to Podcasting,” is Amazon #1 New Release (Law Office Technology)

I’m excited to report that The Lawyer’s Guide to Podcasting ranked #1 as a New Release in Amazon’s Law Office Technology category for the week of February 07, 2026, and sales have already doubled since last month. 🎙️📈

For lawyers with limited-to-moderate tech skills, the book focuses on practical, repeatable workflows for launching and sustaining a compliant podcast presence. ⚖️💡

As you plan content, remember ABA Model Rule 1.1 (technology competence) and the related duties of confidentiality (Rule 1.6) and communications about services (Rule 7.1): use secure tools, avoid accidental client disclosures, and ensure marketing statements are accurate. 🔐✅

Get your copy today! 📘🚀

HOW TO: How Lawyers Can Protect Themselves on LinkedIn from New Phishing 🎣 Scams!

Fake LinkedIn warnings target lawyers!

LinkedIn has become an essential networking tool for lawyers, making it a high‑value target for sophisticated phishing campaigns.⚖️ Recent scams use fake “policy violation” comments that mimic LinkedIn’s branding and even leverage the official lnkd.in URL shortener to trick users into clicking on malicious links. For legal professionals handling confidential client information, falling victim to one of these attacks can create both security and ethical problems.

First, understand how this specific scam works.💻 Attackers create LinkedIn‑themed profiles and company pages (for example, “Linked Very”) that use the LinkedIn logo and post “reply” comments on your content, claiming your account is “temporarily restricted” for non‑compliance with platform rules. The comment urges you to click a link to “verify your identity,” which leads to a phishing site that harvests your LinkedIn credentials. Some links use non‑LinkedIn domains, such as .app, or redirect through lnkd.in, making visual inspection harder.

To protect yourself, treat all public “policy violation” comments as inherently suspect.🔍 LinkedIn has confirmed it does not communicate policy violations through public comments, so any such message should be considered a red flag. Instead of clicking, navigate directly to LinkedIn in your browser or app, check your notifications and security settings, and only interact with alerts that appear within your authenticated session. If the comment uses a shortened link, hover over it (on desktop) to preview the destination, or simply refuse to click and report it.
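For readers comfortable with a little scripting, the “is this really LinkedIn?” check can be made mechanical. The sketch below is my illustration, not a LinkedIn tool, and the allowlist is an assumption; note that it deliberately treats lnkd.in links as untrusted, because the shortener can forward anywhere.

```python
from urllib.parse import urlparse

# Illustrative allowlist (assumption): only the real LinkedIn domain.
# lnkd.in is deliberately excluded because shortened links can redirect anywhere.
TRUSTED = {"linkedin.com"}

def looks_like_linkedin(url: str) -> bool:
    """True only if the URL's host is linkedin.com or a subdomain of it."""
    host = (urlparse(url).hostname or "").lower()
    return any(host == d or host.endswith("." + d) for d in TRUSTED)

print(looks_like_linkedin("https://www.linkedin.com/feed/"))     # True
print(looks_like_linkedin("https://linkedin-verify.app/login"))  # False: lookalike
print(looks_like_linkedin("https://lnkd.in/abc123"))             # False: opaque shortener
```

A lookalike such as linkedin-verify.app fails the check because the comparison is made on the actual host name, not on whether the word “linkedin” appears somewhere in the URL.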

From an ethics standpoint, these scams directly implicate your duties under ABA Model Rules 1.1 and 1.6.⚖️ Comment 8 to Rule 1.1 stresses that competent representation includes understanding the benefits and risks associated with relevant technology. Failing to use basic safeguards on a platform where you communicate with clients and colleagues can fall short of that standard. Likewise, Rule 1.6 requires reasonable efforts to prevent unauthorized access to client information, which includes preventing account takeover that could expose your messages, contacts, or confidential discussions.

Public “policy violations” are a red flag!

Practically, you should enable multi‑factor authentication (MFA) on LinkedIn, use a unique, strong password stored in a reputable password manager, and review active sessions regularly for unfamiliar devices or locations.🔐 If you suspect you clicked a malicious link, immediately change your LinkedIn password, revoke active sessions, enable or confirm MFA, and run updated anti‑malware on your device. Then notify your firm’s IT or security contact and consider whether any client‑related disclosures are required under your jurisdiction’s ethics rules and breach‑notification laws.

Finally, build a culture of security awareness in your practice.👥 Brief colleagues and staff about this specific comment‑reply scam, show screenshots, and explain that LinkedIn does not resolve “policy violations” via comment threads. Encourage a “pause before you click” mindset and make reporting easy—internally to your IT team and externally to LinkedIn’s abuse channels. Taking these steps not only protects your professional identity but also demonstrates the technological competence and confidentiality safeguards the ABA Model Rules expect from modern legal practitioners.

Train your team to pause and report!

Practically, you should enable multi‑factor authentication (MFA) on LinkedIn, use a unique, strong password stored in a reputable password manager, and review active sessions regularly for unfamiliar devices or locations.🔐 If you suspect you clicked a malicious link, immediately change your LinkedIn password, revoke active sessions, enable or confirm MFA, and run updated anti‑malware on your device. Then notify your firm’s IT or security contact and consider whether any client‑related disclosures are required under your jurisdiction’s ethics rules and breach‑notification laws.

Finally, build a culture of security awareness in your practice.👥 Brief colleagues and staff about this specific comment‑reply scam, show screenshots, and explain that LinkedIn does not resolve “policy violations” via comment threads. Encourage a “pause before you click” mindset and make reporting easy—internally to your IT team and externally to LinkedIn’s abuse channels. Taking these steps not only protects your professional identity but also demonstrates the technological competence and confidentiality safeguards the ABA Model Rules expect from modern legal practitioners.

🚨 BOLO: Samsung Budget Phones Contain Pre-Installed Data-Harvesting Software: Critical Action Steps for Legal Professionals

‼️ ALERT: Hidden Spyware in Samsung Phones!

Samsung Galaxy A, M, and F series smartphones contain pre-installed software called AppCloud, developed by ironSource (now owned by Unity Technologies), that harvests user data, including location information, app usage patterns, IP addresses, and potentially biometric data. This software cannot be fully uninstalled without voiding your device warranty, and it operates without accessible privacy policies or explicit consent mechanisms. Legal professionals using these devices face significant risks to attorney-client privilege and confidential client information.

The Threat Landscape

AppCloud runs quietly in the background with permissions to access network connections, download files without notification, and prevent phones from sleeping. The application is deeply integrated into Samsung's One UI operating system, making it impossible to fully remove through standard methods. Users across West Asia, North Africa, Europe, and South Asia report that even after disabling the application, it reappears following system updates.

The digital rights organization SMEX documented that AppCloud's privacy policy is not accessible online, and the application does not present users with consent screens or terms of service disclosures. This lack of transparency raises serious ethical and legal compliance concerns, particularly for attorneys bound by professional responsibility rules regarding client confidentiality.

Legal and Ethical Implications for Attorneys

Under ABA Model Rule 1.6, attorneys must make "reasonable efforts to prevent the inadvertent or unauthorized disclosure of, or unauthorized access to, information relating to the representation of a client". The duty of technological competence under Rule 1.1, Comment 8, requires attorneys to "keep abreast of changes in the law and its practice, including the benefits and risks associated with relevant technology".

The New York Bar's 2022 ethics opinion specifically addresses smartphone security, prohibiting attorneys from sharing contact information with smartphone applications unless they can confirm that no person will view confidential client information and that data will not be transferred to third parties without client consent. AppCloud's data harvesting practices appear to violate both conditions.

Immediate Action Steps

‼️ Act now if you’ve purchased certain Samsung phones - your bar license could be in jeopardy!

Step 1: Identify Affected Devices
Check whether you use a Samsung Galaxy A series (A05 through A56), M series (M01 through M56), or F series device. These budget and mid-range models are primary targets for AppCloud installation.

Step 2: Disable AppCloud
Navigate to Settings > Apps > Show System Apps > AppCloud > Disable. Additionally, revoke notification permissions, restrict background data usage, and disable the "Install unknown apps" permission.

Step 3: Monitor for Reactivation
After system updates, return to AppCloud settings and re-disable the application.
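For the technically inclined, the monitoring step can be semi-automated with Android's developer tools. The sketch below assumes you have `adb` installed and USB debugging enabled, and it uses `com.aura.oobe.samsung`, the package name users have reported for AppCloud — treat that name as an assumption and confirm it on your own device. The helper simply parses the output of `adb shell pm list packages -d` (disabled packages only):

```python
def is_package_disabled(pm_output: str, package: str) -> bool:
    """Parse output of `adb shell pm list packages -d` (disabled packages)
    and report whether the given package name appears in it."""
    disabled = {
        line.strip().removeprefix("package:")
        for line in pm_output.splitlines()
        if line.strip().startswith("package:")
    }
    return package in disabled

# Hypothetical sample output; in practice, capture it from your own device with
#   adb shell pm list packages -d
sample = "package:com.example.bloat\npackage:com.aura.oobe.samsung"
print(is_package_disabled(sample, "com.aura.oobe.samsung"))  # True
```

Running this check after each system update tells you in seconds whether the application has silently re-enabled itself, and the output can be logged as part of the security documentation suggested in Step 4.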

Step 4: Consider Device Migration
For attorneys handling highly sensitive matters, consider transitioning to devices without pre-installed data collection software. Document your decision-making process as evidence of reasonable security measures.

Step 5: Client Notification Assessment
Evaluate whether client notification is required under your jurisdiction's professional responsibility rules. California's Formal Opinion 2020-203 addresses obligations following an electronic data compromise.

The Bottom Line

Budget smartphone economics should not compromise attorney-client privilege. Samsung's partnership with ironSource places aggressive advertising technology on devices used by legal professionals worldwide. Until Samsung provides transparent opt-out mechanisms or removes AppCloud entirely, attorneys using affected devices should implement immediate mitigation measures and document their security protocols.