TSL.P Labs: Everyday Tech, Extraordinary Sanctions: How Courts Are Cracking Down on Fake AI Evidence in 2026
Everyday devices can capture extraordinary evidence, but the same tools can also manufacture convincing fakes. In this episode, we unpack our February 9, 2026, editorial on how courts are punishing fake digital and AI-generated evidence, then translate the risk into practical guidance for lawyers and legal teams.
You'll hear why judges are treating authenticity as a frontline issue, what ethical duties are triggered when AI touches evidence or briefing, and how a simple "authenticity playbook" can help you avoid career-ending mistakes.
In our conversation, we cover the following:
00:00:00 – Preview: From digital discovery to digital deception, and the question of what happens when your "star witness" is actually a hallucination or deepfake
00:00:20 – Introducing the editorial "Everyday Tech, Extraordinary Evidence Again: How Courts Are Punishing Fake Digital and AI Data."
00:00:40 – Welcome to the Tech-Savvy Lawyer.Page Labs Initiative and this AI Deep Dive Roundtable
00:01:00 – Framing the episode: flipping last month's optimism about smartphones, dash cams, and wearables as case-winning "silent witnesses" to their dark mirror: AI-fabricated evidence
00:01:30 – How everyday devices and AI tools can both supercharge litigation strategy and become ethical landmines under the ABA Model Rules
00:02:00 – Panel discussion opens: revisiting last month's "Everyday Tech, Extraordinary Evidence" AI bonus and the optimism around smartphone, smartwatch, and dash cam data as unbiased proof
00:02:30 – Remembering cases like the Minnesota shooting and why these devices were framed as "ultimate witnesses" if the data is preserved quickly enough
00:03:00 – The pivot: same tools, new threats, moving from digital discovery to digital deception as deepfakes and hallucinations enter the evidentiary record
00:03:30 – Setting the "mission" for the episode: examining how courts are reacting to AI-generated "slop" and deepfakes, with an increasingly aggressive posture toward sanctions
00:04:00 – Why courts are on high alert: the "democratization of deception," falling costs of convincing video fakes, and the collapse of the old presumption that "pictures don't lie"
00:04:30 – Everyday scrutiny: judges now start with "Where did this come from?" and demand details on who created the file, how it was handled, and what the metadata shows
00:05:00 – Metadata explained as the "data about the data" (timestamps, software history, edit traces) and how it reveals possible AI manipulation
00:06:00 – Entering the "sanction phase": why we are beyond warnings and into real penalties for mishandling or fabricating digital and AI evidence
00:06:30 – Horror Story #1 (Mendones v. Cushman & Wakefield, Cal. Super. Ct. 2025): plaintiffs submit videos, photos, and screenshots later determined to be deepfakes created or altered with generative AI
00:07:00 – Judge Victoria Kolakowski's response: finding that the deepfakes undermined the integrity of judicial proceedings and imposing terminating sanctions, the "death penalty" for the lawsuit
00:07:30 – How a single deepfake "poisons the well," destroying the court's trust in all of a party's submissions and forfeiting their right to the court's time
00:08:00 – Horror Story #2 (S.D.N.Y. 2023): the New York "hallucinating lawyer" case, where six imaginary cases generated by ChatGPT were filed without verification
00:08:30 – Rule 11 sanctions and humiliation: Judge Castel's order, a monetary penalty, and the requirement to send apology letters to real judges whose names were misused
00:09:00 – California follow-on: appellate lawyer Amir Mostafavi files an appeal brief with 21 fake citations, triggering a $10,000 sanction and a finding that he did not read or verify his own filing
00:09:30 – Courts' reasoning: outsourcing your job to an AI tool is not just being wrong; it is wasting judicial resources and taxpayer money
00:10:00 – Do we need new laws? Why Michael argues that existing ABA Model Rules already provide the safety rails; the task is to apply them to AI and digital evidence, not to reinvent them
00:10:20 – Rule 1.1 (competence): why "I'm not a tech person" is no longer a viable excuse if you use AI to enhance video or draft briefs without understanding or verifying the output
00:11:00 – Rule 1.6 (confidentiality): the ethical minefield of uploading client dash cam video or wearable medical data to consumer-grade AI tools and risking privilege leakage
00:11:30 – Training risk: how client data can end up in model training sets and why "quick AI summaries" can inadvertently expose secrets
00:12:00 – Rules 3.3 and 4.1 (candor and truthfulness): presenting AI-altered media as original or failing to verify AI output can now be treated as misrepresentation
00:12:30 – Rules 5.1–5.3 (supervision): why partners and supervising lawyers remain on the hook for juniors, staff, and vendors who misuse AI, even if they didn't personally type the prompts
00:13:00 – Authenticity Playbook, Step 1: mindset shift. Never treat AI as a "silent co-counsel"; instead, treat it like a very eager, very inexperienced, slightly drunk intern who always needs checking
00:13:30 – Authenticity Playbook, Step 2: preserve the original and disclose any AI enhancement; build a clean chain of custody while staying transparent about edits
00:14:00 – Authenticity Playbook, Step 3: using forensic vendors as authenticity firewalls, experts who can certify that metadata and visual cues show no AI manipulation
00:14:30 – Authenticity Playbook, Step 4: "train with fear" by showing your team real orders, sanctions, and public shaming rather than relying on abstract ethics lectures
00:15:00 – Authenticity Playbook, Step 5: documenting verification steps, logging files, tools, and checks so you can demonstrate good faith if a judge questions your evidence
00:16:00 – Bigger picture: the era of easy, unchallenged digital evidence is over; mishandled tech can now produce "extraordinary sanctions" instead of extraordinary evidence
00:16:30 – Authenticity as "the moral center of digital advocacy": if you cannot vouch for your digital evidence, you are failing in your role as an advocate
00:17:00 – Future risk: as deepfakes become perfect and nearly impossible to detect with the naked eye, forensic expertise may become a prerequisite for trusting any digital evidence
00:17:30 – "Does truth get a price tag?" Whether justice becomes a luxury product if only wealthy parties can afford authenticity firewalls and expert validation
00:18:00 – Closing reflections: fake evidence, real consequences, and the call to verify sources and check metadata before you file
00:18:30 – Closing: an invitation to visit Tech-Savvy Lawyer.Page for the full editorial, resources, and to like, subscribe, and share with colleagues who need to stay ahead of legal tech innovation
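The first question judges now ask ("Where did this file come from, and what does the metadata show?") can be answered at a first pass with standard tooling. Below is a minimal Python sketch, assuming only filesystem-level attributes are needed; the function name `describe_file` is our own illustration, and embedded metadata such as EXIF fields or editing history would require format-specific forensic tools, not this snippet.

```python
import os
from datetime import datetime, timezone

def describe_file(path):
    """Collect basic filesystem metadata for an evidence file.

    Reads only OS-level attributes (size, last-modified time).
    Embedded metadata (EXIF, editing history, codec traces) needs
    format-specific forensic tools and is not covered here.
    """
    st = os.stat(path)
    return {
        "path": os.path.abspath(path),
        "size_bytes": st.st_size,
        "modified_utc": datetime.fromtimestamp(
            st.st_mtime, tz=timezone.utc
        ).isoformat(),
    }
```

A report like this is only a starting point for the kind of provenance questions discussed in the episode; a qualified forensic examiner should still validate anything that will be filed.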
Resources
Cases
In Mendones v. Cushman & Wakefield, Inc. (Cal. Super. Ct. Alameda County, 2025), plaintiffs submitted multiple videos, photos, and screenshots that the court determined were deepfakes or altered with generative AI. Judge Victoria Kolakowski found intentional submission of false testimony and imposed terminating sanctions, dismissing the case outright and emphasizing that deepfake evidence "fundamentally undermines the integrity of judicial proceedings."
In New York, two lawyers became infamous in 2023 after filing a brief containing six imaginary cases generated by ChatGPT; Judge P. Kevin Castel sanctioned them under Rule 11 for abandoning their responsibilities and failing to verify the authorities they cited. They were ordered to pay a monetary penalty and to notify the real judges whose names had been falsely invoked, a reputational hit that far exceeded the dollar amount.
A California appellate lawyer, Amir Mostafavi, was later fined $10,000 for filing an appeal with twenty-one fake case citations generated by ChatGPT. The court stressed that he had not read or verified the AI-generated text, and treated that omission as a violation of court rules and a waste of judicial resources and taxpayer money.
ABA Model Rules
Rule 1.1 (Competence): You must understand the benefits and risks of relevant technology, which now clearly includes generative AI and deepfake tools. Using AI to draft or "enhance" materials without checking the output is not a harmless shortcut; it is a competence problem. Comment 8 already imposes a duty of technological competence; the new sanctions landscape simply clarifies the stakes.
Rule 1.6 (Confidentiality): Uploading client videos, wearable logs, or sensitive communications to consumer-grade AI sites can expose them to unknown retention and training practices, risking confidentiality violations.
Rule 3.3 (Candor to the Tribunal) and Rule 4.1 (Truthfulness): Presenting AI-altered video or fake citations as if they were genuine is the very definition of misrepresentation, as the New York and California sanction orders make clear. Even a negligent failure to verify can be treated harshly once the court's patience for AI excuses runs out.
Rules 5.1–5.3 (Supervision): Supervising lawyers must ensure that associates, law clerks, and vendors understand that AI outputs are starting points, not trustworthy final products, and that fake or manipulated digital evidence will not be tolerated.
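The playbook's Step 2 (preserve the original) and Step 5 (document your verification) both come down to keeping a record a judge can inspect. Here is a minimal sketch of one way to do that, assuming an append-only JSON Lines log; the names `sha256_of` and `log_verification` and the log format are our own illustration, not anything prescribed in the editorial.

```python
import hashlib
import json
from datetime import datetime, timezone

def sha256_of(path, chunk_size=65536):
    """Fingerprint a file with SHA-256 so later alteration is detectable."""
    h = hashlib.sha256()
    with open(path, "rb") as f:
        for chunk in iter(lambda: f.read(chunk_size), b""):
            h.update(chunk)
    return h.hexdigest()

def log_verification(log_path, evidence_path, tool, check_performed):
    """Append one verification entry: file hash, tool used, check done, when."""
    entry = {
        "file": evidence_path,
        "sha256": sha256_of(evidence_path),
        "tool": tool,
        "check": check_performed,
        "logged_utc": datetime.now(timezone.utc).isoformat(),
    }
    with open(log_path, "a", encoding="utf-8") as f:
        f.write(json.dumps(entry) + "\n")
    return entry
```

Re-hashing the preserved original at any later date and comparing against the logged SHA-256 is a simple way to demonstrate the good faith the episode describes: the file you filed is the file you verified.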

