Word of the Week: Deepfakes: How Lawyers Can Spot Fake Digital Evidence and Avoid ABA Model Rule Violations ⚖️

A tech-savvy lawyer needs to be able to spot deepfakes before they become courtroom ethics violations!

“Deepfakes” are AI‑generated or heavily manipulated audio, video, or images that convincingly depict people saying or doing things that never happened.🧠 They are moving from internet novelty to everyday litigation risk, especially as parties try to slip fabricated “evidence” into the record.📹

Recent cases and commentary show courts will not treat deepfakes as harmless tech problems. Judges have dismissed actions outright and imposed severe sanctions when parties submit AI‑generated or altered media, because such evidence attacks the integrity of the judicial process itself.⚖️ At the same time, courts are wary of lawyers who cry “deepfake” without real support, since baseless challenges can look like gamesmanship rather than genuine concern about authenticity.

For practicing lawyers, deepfakes are first and foremost a professional responsibility issue. ABA Model Rule 1.1 (Competence) now clearly includes a duty to understand the benefits and risks of relevant technology, which includes generative AI tools that create or detect deepfakes. You do not need to be an engineer, but you should recognize common red flags, know when to request native files or metadata, and understand when to bring in a qualified forensic expert.

Deepfakes in Litigation: Detect Fake Evidence, Protect Your License!

Deepfakes also implicate Model Rule 3.3 (Candor to the tribunal) and Model Rule 3.4 (Fairness to opposing party and counsel). If you knowingly offer manipulated media, or ignore obvious signs of fabrication in your client’s “evidence,” you risk presenting false material to the court and obstructing access to truthful proof. Courts have made clear that submitting fake digital evidence can justify terminating sanctions, fee shifting, and referrals for disciplinary action.

Model Rule 8.4(c), which prohibits conduct involving dishonesty, fraud, deceit, or misrepresentation, sits in the background of every deepfake decision. A lawyer who helps create, weaponize, or strategically “look away” from deepfake evidence is not just making a discovery mistake; they may be engaging in professional misconduct. Likewise, a lawyer who recklessly accuses an opponent of using deepfakes without factual grounding risks violating duties of candor and professionalism.

Practically, you can start protecting your clients with a few repeatable steps. Ask early in the case what digital media exists, how it was created, and who controlled the devices or accounts.🔍 Build authentication into your discovery plan, including requests for original files, device logs, and platform records that can help confirm provenance. When the stakes justify it, consult a forensic expert rather than relying on “gut feel” about whether a recording “looks real.”
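
One concrete, low-tech way to anchor that authentication work is to record a cryptographic hash of each original file the moment you receive it; any later alteration, however subtle, changes the hash. A minimal sketch in Python (the file name is hypothetical, standing in for a real native file):

```python
import hashlib
from pathlib import Path

def sha256_of_file(path) -> str:
    """Return the SHA-256 hex digest of a file, read in chunks
    so large video files do not need to fit in memory."""
    digest = hashlib.sha256()
    with open(path, "rb") as f:
        for chunk in iter(lambda: f.read(8192), b""):
            digest.update(chunk)
    return digest.hexdigest()

# Hypothetical evidence file: in practice, hash the original native
# file at intake, before any copies or "enhancements" are made.
evidence = Path("bodycam_clip.mp4")
evidence.write_bytes(b"demo bytes standing in for real video data")
print(sha256_of_file(evidence))  # record this value in your case file
```

Recording the digest in your case file at intake gives you a simple, defensible answer to "is this the same file you received?" without any forensic software.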

Lawyers need to know about deepfakes, metadata, and ABA ethics rules!

Finally, talk to clients about deepfakes before they become a problem. Explain that altering media or using AI to “clean up” evidence is dangerous, even if they believe they are only fixing quality.📲 Remind them that courts are increasingly sophisticated about AI and that discovery misconduct in this area can destroy otherwise strong cases. Treat deepfakes as another routine topic in your litigation checklist, alongside spoliation and privilege, and you will be better prepared for the next “too good to be true” video that lands in your inbox.

ANNOUNCEMENT: My Book, “The Lawyer’s Guide to Podcasting,” is Amazon #1 New Release (Law Office Technology)

I’m excited to report that The Lawyer’s Guide to Podcasting ranked #1 as a New Release in Amazon’s Law Office Technology category for the week of February 07, 2026, and sales have already doubled since last month. 🎙️📈

For lawyers with limited-to-moderate tech skills, the book focuses on practical, repeatable workflows for launching and sustaining a compliant podcast presence. ⚖️💡

As you plan content, remember ABA Model Rule 1.1 (technology competence) and the related duties of confidentiality (Rule 1.6) and communications about services (Rule 7.1): use secure tools, avoid accidental client disclosures, and ensure marketing statements are accurate. 🔐✅

Get your copy today! 📘🚀


MTC: Everyday Tech, Extraordinary Evidence—Again: How Courts Are Punishing Fake Digital and AI Data ⚖️📱

Check your AI work: AI fraud can meet courtroom consequences.

In last month’s editorial, “Everyday Tech, Extraordinary Evidence,” we walked through how smartphones, dash cams, and wearables turned the Minnesota ICE shooting into a case study in modern evidence practice, from rapid preservation orders to multi‑angle video timelines.📱⚖️ We focused on the positive side: how deliberate intake, early preservation, and basic synchronization tools can turn ordinary devices into case‑winning proof.📹 This follow‑up tackles the other half of the equation—what happens when “evidence” itself is fake, AI‑generated, or simply unverified slop, and how courts are starting to respond with serious sanctions.⚠️

From Everyday Tech to Everyday Scrutiny

The original article urged you to treat phones and wearables as critical evidentiary tools, not afterthoughts: ask about devices at intake, cross‑reference GPS trails, and treat cars as rolling 360‑degree cameras.🚗⌚ We also highlighted the Minnesota Pretti shooting as an example of how rapid, court‑ordered preservation of video and other digital artifacts can stop crucial evidence from “disappearing” before the facts are fully understood.📹 Those core recommendations still stand—if anything, they are more urgent now that generative AI makes it easier to fabricate convincing “evidence” that never happened.🤖

The same tools that helped you build robust, data‑driven reconstructions—synchronized bystander clips, GPS logs, wearables showing movement or inactivity—are now under heightened scrutiny for authenticity.📊 Judges and opposing counsel are no longer satisfied with “the video speaks for itself”; they want to know who created it, how it was stored, whether metadata shows AI editing, and what steps counsel took to verify that the file is what it purports to be.📁

When “Evidence” Is Fake: Sanctions Arrive

We have moved past the hypothetical stage. Courts are now issuing sanctions—sometimes terminating sanctions—when parties present fake or AI‑generated “evidence” or unverified AI research.💥

These are not “techie” footnotes; they are vivid warnings that falsified or unverified digital and AI data can end careers and destroy cases.🚨

ABA Model Rules: The Safety Rails You Ignore at Your Peril

Train to verify—defend truth in the age of AI.

Your original everyday‑tech playbook already fits neatly within ABA Model Rule 1.1 and Comment 8’s duty of technological competence; the new sanctions landscape simply clarifies the stakes.📚

  • Rule 1.1 (Competence): You must understand the benefits and risks of relevant technology, which now clearly includes generative AI and deepfake tools.⚖️ Using AI to draft or “enhance” without checking the output is not a harmless shortcut—it is a competence problem.

  • Rule 1.6 (Confidentiality): Uploading client videos, wearable logs, or sensitive communications to consumer‑grade AI sites can expose them to unknown retention and training practices, risking confidentiality violations.🔐

  • Rule 3.3 (Candor to the Tribunal) and Rule 4.1 (Truthfulness): Presenting AI‑altered video or fake citations as if they were genuine is the very definition of misrepresentation, as the New York and California sanction orders make clear.⚠️ Even negligent failure to verify can be treated harshly once the court’s patience for AI excuses runs out.

  • Rules 5.1–5.3 (Supervision): Supervising lawyers must ensure that associates, law clerks, and vendors understand that AI outputs are starting points, not trustworthy final products, and that fake or manipulated digital evidence will not be tolerated.👥

Bridging Last Month’s Playbook With Today’s AI‑Risk Reality

In last month’s editorial, we urged three practical habits: ask about devices, move fast on preservation, and build a vendor bench for extraction and authentication.📱⌚🚗 This month, the job is to wrap those habits in explicit AI‑risk controls that lawyers with modest tech skills can realistically follow.🧠

  1. Never treat AI as a silent co‑counsel. If you use AI to draft research, generate timelines, or “enhance” video, you must independently verify every factual assertion and citation, just as you would double‑check a new associate’s memo.📑 “The AI did it” is not a defense; courts have already said so.

  2. Preserve the original, disclose the enhancement. Our earlier advice to keep raw smartphone files and dash‑cam footage now needs one more step: if you use any enhancement (AI or otherwise), label it clearly and be prepared to explain what was done, why, and how you ensured that the content did not change.📹

  3. Use vendors and examiners as authenticity firewalls. Just as we suggested bringing in digital forensics vendors to extract phone and wearable data, you should now consider them for authenticity challenges as well—especially where the opposing side may have incentives or tools to create deepfakes.🔍 A simple expert declaration that a file shows signs of AI manipulation can be the difference between a credibility battle and a terminating sanction.

  4. Train your team using real sanction orders. Nothing clarifies the risk like reading Judge Castel’s order in the ChatGPT‑citation case or Judge Kolakowski’s deepfake ruling in Mendones.⚖️ Incorporate those cases into short internal trainings and CLEs; they translate abstract “AI ethics” into concrete, courtroom‑tested consequences.

  5. Document your verification steps. For everyday tech evidence, a simple log—what files you received, how you checked metadata, whether you compared against other sources, which AI tools (if any) you used, and what you did to confirm their outputs—can demonstrate good faith if a judge later questions your process.📋
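
That kind of log does not require special software; even a spreadsheet-compatible CSV file maintained by a short script will do. A hedged sketch (the field names and entry values are illustrative, not a standard):

```python
import csv
import os
from datetime import date

# Illustrative field names; adapt the columns to your own checklist.
LOG_FIELDS = ["received", "file_name", "sha256", "source",
              "metadata_checked", "cross_checked_against", "ai_tools_used"]

def log_verification(log_path, entry):
    """Append one verification entry to a CSV log, writing the
    header row only when the file is first created."""
    write_header = not os.path.exists(log_path)
    with open(log_path, "a", newline="") as f:
        writer = csv.DictWriter(f, fieldnames=LOG_FIELDS)
        if write_header:
            writer.writeheader()
        writer.writerow(entry)

# Hypothetical entry for a file received from a client.
log_verification("verification_log.csv", {
    "received": date.today().isoformat(),
    "file_name": "bodycam_clip.mp4",
    "sha256": "(hash recorded at intake)",
    "source": "client phone, collected at intake interview",
    "metadata_checked": "yes; creation date matches timeline",
    "cross_checked_against": "dash-cam footage, GPS log",
    "ai_tools_used": "none",
})
```

The point is not the tooling but the habit: a dated, append-only record of what you checked and when, ready to show a judge who asks about your process.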

Final Thoughts: Authenticity as a First‑Class Question

Be the rock star! Know how to use AI responsibly in your work!

In the first editorial, the core message was that everyday devices are quietly turning into your best witnesses.📱⌚ The new baseline is that every such “witness” will be examined for signs of AI contamination, and you will be expected to have an answer when the court asks, “What did you do to make sure this is real?”🔎

Lawyers with limited to moderate tech skills do not need to reverse‑engineer neural networks or master forensic software. Instead, they must combine the practical habits from January’s piece—asking, preserving, synchronizing—with a disciplined refusal to outsource judgment to AI.⚖️ In an era of deepfakes and hallucinated case law, authenticity is no longer a niche evidentiary issue; it is the moral center of digital advocacy.✨

Handled wisely, your everyday tech strategy can still deliver “extraordinary evidence.” Handled carelessly, it can just as quickly produce extraordinary sanctions.🚨

MTC

TSL.P Labs 🧪: Legal Tech Wars, Client Data, and Your Law License: An AI-Powered Ethics Deep Dive ⚖️🤖

📌 Too Busy to Read This Week’s Editorial?

Join us for an AI-powered deep dive into the ethical challenges facing legal professionals in the age of generative AI. 🤖 In this Tech-Savvy Lawyer Page Labs Initiative episode, AI co-hosts walk through how high‑profile “legal tech wars” between practice‑management vendors and AI research startups can push your client data into the litigation spotlight and create real ethics exposure under ABA Model Rules 1.1, 1.6, and 5.3.

We’ll explore what happens when core platforms face federal lawsuits, why discovery and forensic audits can put confidential matters in front of third parties, and how API lockdowns, stalled product roadmaps, and forced sales can grind your practice operations to a halt. More importantly, you’ll get a clear five‑step action plan—inventorying your tech stack, confirming data‑export rights, mapping backup providers, documenting diligence, and communicating with clients—that works even if you consider yourself “moderately tech‑savvy” at best.

Whether you’re a solo, a small‑firm practitioner, in‑house, or simply AI‑curious, this conversation will help you evaluate whether you are the supervisor of your legal tech—or its hostage. 🔐

👉 Listen now and decide: are you supervising your legal tech—or are you its hostage?

In our conversation, we cover the following

  • 00:00:00 – Setting the stage: Legal tech wars, “Godzilla vs. Kong,” and why vendor lawsuits are not just Silicon Valley drama for spectators.

  • 00:01:00 – Introducing the Tech-Savvy Lawyer Page Labs Initiative and the use of AI-generated discussions to stress-test legal tech ethics in real-world scenarios.

  • 00:02:00 – Who’s fighting and why it matters: Clio as the “nervous system” of many firms versus Alexi as the “brainy intern” of AI legal research.

  • 00:03:00 – The client data crossfire: How disputes over data access and training AI tools turn your routine practice data into high-stakes litigation evidence.

  • 00:04:00 – Allegations in the Clio–Alexi dispute, from improper data access to claims of anti-competitive gatekeeping of legal industry data.

  • 00:05:00 – Visualizing risk: Client files as sandcastles on a shelled beach and why this reframes vendor fights as ethics issues, not IT gossip.

  • 00:06:00 – ABA Model Rule 1.1 (Competence): What “technology competence” really entails and why ignorance of vendor instability is no longer defensible.

  • 00:07:00 – Continuity planning as competence: Injunctions, frozen servers, vendor shutdowns, and how missed deadlines can become malpractice.

  • 00:08:00 – ABA Model Rule 1.6 (Confidentiality): The “danger zone” of treating the cloud like a bank vault and misunderstanding who really holds the key.

  • 00:09:00 – Discovery risk explained: Forensic audits, third‑party access, protective orders that fail, and the cascading impact on client secrets.

  • 00:10:00 – Data‑export rights as your “escape hatch”: Why “usable formats” (CSV, PDF) matter more than bare contractual promises.

  • 00:11:00 – Practical homework: Testing whether you can actually export your case list today, not during a crisis.

  • 00:12:00 – ABA Model Rule 5.3 (Supervision): Treating software vendors like non‑lawyer assistants you actively supervise rather than passive utilities.

  • 00:13:00 – Asking better questions: Uptime, security posture, and whether your vendor is using your data in its own defense.

  • 00:14:00 – Operational friction: Rising subscription costs, API lockdowns, broken integrations, and the return of manual copy‑pasting.

  • 00:15:00 – Vaporware and stalled product roadmaps: How litigation diverts engineering resources away from features you are counting on.

  • 00:16:00 – Forced sales and 30‑day shutdown notices: Data‑migration nightmares under pressure and why waiting is the riskiest strategy.

  • 00:17:00 – The five‑step moderate‑tech action plan: Inventory dependencies, review contracts, map contingencies, document diligence, and communicate with nuance.

  • 00:18:00 – Turning risk management into a client‑facing strength and part of your value story in pitches and ongoing relationships.

  • 00:19:00 – Reframing legal tech tools as members of your legal team rather than invisible utilities.

  • 00:20:00 – “Supervisor or hostage?”: The closing challenge to check your contracts, your data‑export rights, and your practical ability to “fire” a vendor.

Resources

Mentioned in the episode

Software & Cloud Services mentioned in the conversation

#LegalTech #AIinLaw #LegalEthics #Cybersecurity #LawPracticeManagement

Word of the Week: Vendor Risk Management for Law Firms in 2026: Lessons from the Clio–Alexi CRM Fight ⚖️💻

Clio vs. Alexi: CRM litigation could threaten law firm data

“Vendor risk management” is no longer an IT buzzword; it is now a core law‑practice skill for any attorney who relies on cloud‑based tools, CRMs, or AI‑driven research platforms.⚙️📊 The Tech‑Savvy Lawyer.Page’s February 2, 2026 editorial on the Clio–Alexi CRM litigation showed how a dispute between legal‑tech companies can reach straight into your client list, calendars, and workflows.⚖️🧾

In that piece, Clio and Alexi’s legal fight over data, AI training, and competition was framed not as “tech drama,” but as a live test of how well your firm understands its dependencies on vendors that control client‑related information.🧠📂 When the platform that hosts your CRM, matter data, or AI research tools becomes embroiled in high‑stakes litigation, your risk profile changes even if you never set foot in that courtroom.⚠️🏛️

Under ABA Model Rule 1.1, competence includes a practical understanding of the technology that underpins your practice, and that now clearly includes vendor risk.📚💡 You do not have to reverse‑engineer APIs, yet you should be able to answer basic questions: Which vendors are mission‑critical? What data do they hold? How would you respond if one faced an injunction, outage, or rushed acquisition?🧩🚨 That is vendor risk management at a level that is realistic for lawyers with limited to moderate tech skills.🙂🧑‍💼

Lawyers need to build a vendor risk plan for ethical compliance

Model Rule 1.6 on confidentiality sits at the center of this analysis, because litigation involving a vendor can expose or pressure the systems that hold client information.🔐📁 Our February 2 article emphasized the need to know where your data is hosted, what the contracts say about subpoenas and law‑enforcement requests, and how quickly you can export data if your ethics analysis changes.⏱️📄 Vendor risk management, therefore, includes reviewing terms of service, capturing “current” versions of online agreements, and documenting export rights and notice obligations.📝🧷

Model Rule 5.3 requires reasonable efforts to ensure that non‑lawyer assistance is compatible with your professional duties, and 2026 legal‑tech commentary increasingly treats vendors as supervised extensions of the law office.🧑‍⚖️🤝 CRMs, AI research tools, document‑automation platforms, and e‑billing systems all act as non‑lawyer assistants for ethics purposes, which means you must screen them before adoption, monitor them for material changes, and reassess when events like the Clio–Alexi dispute surface.📡📊

Recent legal‑tech reporting has described 2026 as a reckoning year for vendors, with AI‑driven tools under heavier regulatory and client scrutiny, which makes disciplined vendor risk management a competitive advantage rather than a burden.📈🤖 Practical steps include maintaining a simple vendor inventory, ranking systems by criticality, reviewing cyber and data‑security representations, and identifying a plausible backup provider for each crucial function.📋🛡️

Lawyers need to shield their client data from CRM litigation as much as they need to protect their ethics duties!

Vendor risk management, properly understood, turns your technology stack into part of your professional judgment instead of a black box that “IT” owns alone.🧱🧠 For solo and small‑firm lawyers, that shift can feel incremental rather than overwhelming: start by reading the Clio–Alexi editorial, pull your top three vendor contracts, and ask whether they let you protect competence, confidentiality, and continuity if your vendors suddenly become the ones needing legal help.🧑‍⚖️🧰

🎙️ Ep. #130: Taming Client Data Security – Nick Martin’s Proven Tech Strategies for Law Firms 🚀

My next guest is Nick Martin, CEO of FileScience. He shares expert insights on stabilizing law firm operations with smart backups and automation. Join us to discover practical, easy-to-implement ways to protect your data from outages and errors, so your clients’ information stays safe, secure, and accessible when you need it most. 

Listen in with Nick Martin and me as we discuss the following three questions and more! 💡

  • When a firm is drowning in document chaos, what are the first three specific workflows to digitize or automate to stabilize operations?

  • Beyond just losing documents, what are the three specific silent killers of document hygiene that lawyers ignore?

  • How do lawyers solve the top three friction points of digital collaboration: version conflicts, insecure sharing methods, and the loss of institutional knowledge buried inside files?

In our conversation, we cover the following 📊

  • 00:00 – Guest intro and Nick’s tech setup (MacBook Pro, iPad, iPhone 15, Bang & Olufsen speaker) 🔊

  • 00:30 – Q1: Digitizing workflows – unification of memory, forever undo button, retention 🛡️

  • 04:00 – Backups for iManage, NetDocuments, Clio, FileVine; air-gapped copies 📁

  • 06:00 – Microsoft 365 outage resilience with FileScience ☁️

  • 08:00 – Retention periods (5-7 years by state/practice); NY lawful order policy ⚖️

  • 10:00 – Q2: Silent killers – file degradation, wrong versions, insider threats 🕵️

  • 13:00 – Q3: Solving friction – immutable timelines, encryption (Purview, CBC), institutional knowledge preservation 🔒

  • 15:00 – End-to-end encryption details; where to find Nick

Resources 🔗

Connect with Nick Martin 🤝

Mentioned in the episode 📚

Hardware mentioned in the conversation 💻

Software & Cloud Services mentioned in the conversation ☁️

SHOUT OUT: Your Tech-Savvy Lawyer Blogger and Podcaster was Highlighted in an ABA "Best of 2025" Podcast!

Shout out to Terrell A. Turner and the ABA Law Practice Division for featuring me alongside Amy Wood and Matt Darner in their "Best of 2025" special episode, The Law Firm Finance Lessons Every Lawyer Needs. 🎙️ Our conversations emphasized critical intersections between legal technology systems, financial processes, and ethical compliance that deserve attention from every law firm leader.

Terrell's expertise in making finance accessible to non-finance professionals mirrors a broader shift in legal operations: the recognition that effective law firm management requires both financial literacy and technological competence. Throughout this episode, my fellow guests and I reinforced that technology isn't merely about efficiency—it's fundamentally about creating sustainable financial practices that support your firm's growth and stability.

This “Best of 2025” episode highlighted how proper process design and system implementation directly impact your firm's ability to maintain trust accounting standards, address cash flow challenges, and make confident business decisions. For attorneys building tech stacks or evaluating process improvements, the intersection of ABA Model Rules requirements and practical technology solutions cannot be overlooked. Rule 1.15 obligations around client funds, for instance, demand both procedural discipline and technological infrastructure that supports compliance automatically.

Our conversations reinforced an essential principle: law firms operating with clear financial visibility and integrated technology systems don't just perform better financially—they also reduce ethical risk and enhance client service delivery. Terrell's work in translating complex financial concepts for legal professionals demonstrates real value in bridging the gap between accounting best practices and law firm operations.

Whether your firm is optimizing existing systems or evaluating new solutions, this episode provides actionable direction. Our discussion reinforces that financial health and technological competence work together, not separately. 

Thank you Terrell and the ABA for the recognition and elevating these critical conversations. 🚀

MTC: Clio–Alexi Legal Tech Fight: What CRM Vendor Litigation Means for Your Law Firm, Client Data and ABA Model Rule Compliance ⚖️💻

Competence, Confidentiality, Vendor Oversight!

When the companies behind your CRM and AI research tools start suing each other, the dispute is not just “tech industry drama” — it can reshape the practical and ethical foundations of your practice. At a basic to moderate level, the Clio–Alexi fight is about who controls valuable legal data, how that data can be used to power AI tools, and whether one side is using its market position unfairly.

Clio (a major practice‑management and CRM platform) is tied to legal research tools and large legal databases. Alexi is a newer AI‑driven research company that depends on access to caselaw and related materials to train and deliver its products. In broad strokes, one side claims the other misused or improperly accessed data and technology; the other responds that the litigation is “sham” or anticompetitive, designed to limit a smaller rival and protect a dominant ecosystem. There are allegations around trade secrets, data licensing, and antitrust‑style behavior.

None of that may sound like your problem — until you remember that your client data, workflows, and deadlines live inside tools these companies own, operate, or integrate with.

For lawyers with limited to moderate technology skills, you do not need to decode every technical claim in the complaints and counterclaims. You do, however, need to recognize that vendor instability, lawsuits, and potential regulatory scrutiny can directly touch: your access to client files and calendars, the confidentiality of matter information stored in the cloud, and the long‑term reliability of the systems you use to serve clients and get paid. Once you see the dispute in those terms, it becomes squarely an ethics, risk‑management, and governance issue — not just “IT.”

ABA Model Rule 1.1: Competence Now Includes Tech and Vendor Risk

Model Rule 1.1 requires “competent representation,” which includes the legal knowledge, skill, thoroughness, and preparation reasonably necessary for the representation. In the modern practice environment, that has been interpreted to include technology competence. That does not mean you must be a programmer. It does mean you must understand, in a practical way, the tools on which your work depends and the risks they bring.

If your primary CRM, practice‑management system, or AI research tool is operated by a company in serious litigation about data, licensing, or competition, that is a material fact about your environment. Competence today includes: knowing which mission‑critical workflows rely on that vendor (intake, docketing, conflicts, billing, research, etc.); having at least a baseline sense of how vendor instability could disrupt those workflows; and building and documenting a plan for continuity — how you would move or access data if the worst‑case scenario occurred (for example, a sudden outage, injunction, or acquisition). Failing to consider these issues can undercut the “thoroughness and preparation” the Rule expects. Even if your firm is small or mid‑sized, and even if you feel “non‑technical,” you are still expected to think through these risks at a reasonable level.

ABA Model Rule 1.6: Confidentiality in a Litigation Spotlight

Model Rule 1.6 is often front of mind when lawyers think about cloud tools, and the Clio–Alexi dispute reinforces why. When a technology company is sued, its systems may become part of discovery. That raises questions like: what types of client‑related information (names, contact details, matter descriptions, notes, uploaded files) reside on those systems; under what circumstances that information could be accessed, even in redacted or aggregate form, by litigants, experts, or regulators; and how quickly and completely you can remove or export client data if a risk materializes.

You remain the steward of client confidentiality, even when data is stored with a third‑party provider. A reasonable, non‑technical but diligent approach includes: understanding where your data is hosted (jurisdictions, major sub‑processors, data‑center regions); reviewing your contracts or terms of service for clauses about data access, subpoenas, law‑enforcement or regulatory requests, and notice to you; and ensuring you have clearly defined data‑export rights — not only if you voluntarily leave, but also if the vendor is sold, enjoined, or materially disrupted by litigation. You are not expected to eliminate all risk, but you are expected to show that you considered how vendor disputes intersect with your duty to protect confidential information.

ABA Model Rule 5.3: Treat Vendors as Supervised Non‑Lawyer Assistants

ABA Rules for Modern Legal Technology can be a factor when legal tech companies fight!

Model Rule 5.3 requires lawyers to make reasonable efforts to ensure that non‑lawyer assistants’ conduct is compatible with professional obligations. In 2026, core technology vendors — CRMs, AI research platforms, document‑automation tools — clearly fall into this category.

You are not supervising individual programmers, but you are responsible for: performing documented diligence before adopting a vendor (security posture, uptime, reputation, regulatory or litigation history); monitoring for material changes (lawsuits like the Clio–Alexi matter, mergers, new data‑sharing practices, or major product shifts); and reassessing risk when those changes occur and adjusting your tech stack or contracts accordingly. A litigation event is a signal that “facts have changed.” Reasonable supervision in that moment might mean: having someone (inside counsel, managing partner, or a trusted advisor) read high‑level summaries of the dispute; asking the vendor for an explanation of how the litigation affects uptime, data security, and long‑term support; and considering whether you need contractual amendments, additional audit rights, or a backup plan with another provider. Again, the standard is not perfection, but reasoned, documented effort.

How the Clio–Alexi Battle Can Create Problems for Users

A dispute at this scale can create practical, near‑term friction for everyday users, quite apart from any final judgment. Even if the platforms remain online, lawyers may see more frequent product changes, tightened integrations, shifting data‑sharing terms, or revised pricing structures as companies adjust to litigation costs and strategy. Any of these changes can disrupt familiar workflows, create confusion around where data actually lives, or complicate internal training and procedures.

There is also the possibility of more subtle instability. For example, if a product roadmap slows down or pivots under legal pressure, features that firms were counting on — for automation, AI‑assisted drafting, or analytics — may be delayed or re‑scoped. That can leave firms who invested heavily in a particular tool scrambling to fill functionality gaps with manual workarounds or additional software. None of this automatically violates any rule, but it can introduce operational risk that lawyers must understand and manage.

In edge cases, such as a court order that forces a vendor to disable key features on short notice or a rapid sale of part of the business, intense litigation can even raise questions about long‑term continuity. A company might divest a product line, change licensing models, or settle on terms that affect how data can be stored, accessed, or used for AI. Firms could then face tight timelines to accept new terms, migrate data, or re‑evaluate how integrated AI features operate on client materials. Without offering any legal advice about what an individual firm should do, it is fair to say that paying attention early — before options narrow — is usually more comfortable than reacting after a sudden announcement or deadline.

Practical Steps for Firms at a Basic–Moderate Tech Level

You do not need a CIO to respond intelligently. For most firms, a short, structured exercise will go a long way:

Practical Tech Steps for Today’s Law Firms

  1. Inventory your dependencies. List your core systems (CRM/practice management, document management, time and billing, conflicts, research/AI tools) and note which vendors are in high‑profile disputes or under regulatory or antitrust scrutiny.

  2. Review contracts for safety valves. Look for data‑export provisions, notice obligations if the vendor faces litigation affecting your data, incident‑response timelines, and business‑continuity commitments; capture current online terms.

  3. Map a contingency plan. Decide how you would export and migrate data if compelled by ethics, client demand, or operational need, and identify at least one alternative provider in each critical category.

  4. Document your diligence. Prepare a brief internal memo or checklist summarizing what you reviewed, what you concluded, and what you will monitor, so you can later show your decisions were thoughtful.

  5. Communicate without alarming. Most clients care about continuity and confidentiality, not vendor‑litigation details; you can honestly say you monitor providers, have export and backup options, and have assessed the impact of current disputes.

From “IT Problem” to Core Professional Skill

The Clio–Alexi litigation is a prominent reminder that law practice now runs on contested digital infrastructure. The real message for working lawyers is not to flee from technology but to fold vendor risk into ordinary professional judgment. If you understand, at a basic to moderate level, what the dispute is about — data, AI training, licensing, and competition — and you take concrete steps to evaluate contracts, plan for continuity, and protect confidentiality, you are already practicing technology competence in a way the ABA Model Rules contemplate. You do not have to be an engineer to be a careful, ethics‑focused consumer of legal tech. By treating CRM and AI providers as supervised non‑lawyer assistants, rather than invisible utilities, you position your firm to navigate future lawsuits, acquisitions, and regulatory storms with far less disruption. That is good risk management, sound ethics, and, increasingly, a core element of competent lawyering in the digital era. 💼⚖️

TSL.P Labs Bonus: Google AI Discussion: Everyday Tech, Extraordinary Evidence: Smartphones, Dash Cams, and Wearables as Silent Witnesses in Your Cases ⚖️📱

Join us for an AI-powered deep dive into the ethical challenges facing legal professionals in the age of generative AI. 🤖 In this Tech-Savvy Lawyer.Page Labs episode, our Google AI hosts unpack our January 26, 2026, editorial and discuss how everyday devices—smartphones, dash cams, wearables, and connected cars—are becoming “silent witnesses” that can make or break your next case, while walking carefully through ABA Model Rules on competence, candor, privacy, and preservation of digital evidence.

In our conversation, we cover the following:

  • 00:00 – Welcome to The Tech-Savvy Lawyer.Page Labs Initiative and this week’s “Everyday Tech, Extraordinary Evidence” AI roundtable 🧪

  • 00:30 – Why classic “surprise witness” courtroom drama is giving way to always-on digital witnesses 🎭

  • 01:15 – Introducing the concept of smartphones, dash cams, and wearables as objective “silent witnesses” in litigation 📱

  • 02:00 – Overview of Michael D.J. Eisenberg’s editorial “Everyday Tech, Extraordinary Evidence” and his mission to bridge tech and courtroom practice 📰

  • 03:00 – Case study setup: the Alex Preddy shooting in Minneapolis and the clash between official reports and digital evidence ⚖️

  • 04:00 – How bystander smartphone video reframed the legal narrative in the Preddy matter and dismantled “brandished a weapon” claims 🎥

  • 05:00 – From “pressing play” to full video synchronization: building a unified timeline from multiple cameras to audit police reports 🧩

  • 06:00 – Using frame-by-frame analysis to test loaded terms like “lunging,” “aggressive resistance,” and “brandishing” against what the pixels actually show 🔍

  • 07:00 – Moving beyond what we see: introducing “quiet evidence” such as GPS logs, telemetry, and sensor data as litigation tools 📡

  • 08:00 – GPS data for location, duration, and speed: turning “he was charging” into a measurable movement profile in protest and road-rage cases 🚶‍♂️🚗

  • 09:00 – Layering GPS from phones with vehicle telematics to create a multi-source reconstruction that is hard to impeach in court 📊

  • 10:00 – Dash cams as 360-degree witnesses: solving blind spots of human perception and single-angle video 🛞

  • 11:00 – Why exterior audio from dash cams—shouts, commands, crowd noise—can be crucial to proving state of mind and mens rea 🔊

  • 12:00 – Wearables as a body-wide sensor network: heart rate, sleep, and step count as quantitative proof of pain, fear, and trauma ⌚

  • 13:00 – Using longitudinal wearable data to support claims of emotional distress or sleep disruption in personal injury and civil-rights litigation 😴

  • 14:00 – Heart-rate spikes and movement logs at the moment of an encounter as corroboration of fear or immobility in use-of-force matters

  • 15:00 – Why none of this evidence exists in your case file unless you know to ask for it at intake 🗂️

  • 16:00 – Updating intake: adding questions about smartwatches, location services, doorbell cameras, dash cams, and connected cars to your client questionnaires 📝

  • 17:00 – Data preservation as an emergency task: deletion cycles, cloud overwrites, and using TROs to stop digital spoliation 🚨

  • 18:00 – Turning raw logs into compelling visuals: maps, synced clips, and timelines that juries can understand without sacrificing accuracy 🗺️

  • 19:00 – Ethics spotlight: ABA Model Rule 1.1 competence and Comment 8—why “I’m not a tech person” is now an ethical problem, not an excuse 📚

  • 20:00 – Candor to the tribunal and the line between strong advocacy and fraud when editing or excerpting digital evidence ⚠️

  • 21:00 – Respecting third-party privacy under Rule 4.4: when you must blur faces, redact audio, or limit collateral exposure of bystanders 🧩

  • 22:00 – Advising clients not to delete texts, videos, or logs and explaining spoliation risks under Rule 3.4 ⚖️

  • 23:00 – The uranium analogy: digital tools as powerful but dangerous if used without adequate ethical “containment” ☢️

  • 24:00 – Philosophical closing: will juries someday trust heart-rate logs more than tears on the witness stand, and what does that mean for human testimony? 🤔

  • 25:00 – Closing remarks and invitation to explore the full editorial, show notes, and resources on The Tech-Savvy Lawyer.Page 🌐

If you enjoyed this episode, please like, comment, subscribe, and share!

HOW TO: How Lawyers Can Protect Themselves on LinkedIn from New Phishing 🎣 Scams!

Fake LinkedIn warnings target lawyers!

LinkedIn has become an essential networking tool for lawyers, making it a high‑value target for sophisticated phishing campaigns.⚖️ Recent scams use fake “policy violation” comments that mimic LinkedIn’s branding and even leverage the official lnkd.in URL shortener to trick users into clicking on malicious links. For legal professionals handling confidential client information, falling victim to one of these attacks can create both security and ethical problems.

First, understand how this specific scam works.💻 Attackers create LinkedIn‑themed profiles and company pages (for example, “Linked Very”) that use the LinkedIn logo and post “reply” comments on your content, claiming your account is “temporarily restricted” for non‑compliance with platform rules. The comment urges you to click a link to “verify your identity,” which leads to a phishing site that harvests your LinkedIn credentials. Some links use non‑LinkedIn domains, such as .app, or redirect through lnkd.in, making visual inspection harder.

To protect yourself, treat all public “policy violation” comments as inherently suspect.🔍 LinkedIn has confirmed it does not communicate policy violations through public comments, so any such message should be considered a red flag. Instead of clicking, navigate directly to LinkedIn in your browser or app, check your notifications and security settings, and only interact with alerts that appear within your authenticated session. If the comment uses a shortened link, hover over it (on desktop) to preview the destination, or simply refuse to click and report it.
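The "hover and inspect" advice above can be made mechanical. The sketch below is a simplified heuristic, not a security product, and the sample URLs are invented: it extracts a link's hostname and flags anything that is not an exact, well‑known LinkedIn domain, including lookalikes such as `linkedin.com.verify-account.app` that pass a casual visual check.

```python
from urllib.parse import urlparse

# Exact hostnames treated as genuine; everything else is suspect.
# lnkd.in is LinkedIn's real shortener, but it can redirect anywhere,
# so it is flagged as "expand first" rather than "safe".
TRUSTED = {"linkedin.com", "www.linkedin.com"}
SHORTENER = {"lnkd.in"}

def classify_link(url: str) -> str:
    host = (urlparse(url).hostname or "").lower()
    if host in TRUSTED:
        return "trusted"
    if host in SHORTENER:
        return "shortener: expand before trusting"
    # Lookalike domains embed the real brand name as a subdomain or substring.
    if "linkedin" in host:
        return "suspect lookalike"
    return "not LinkedIn"

for url in [
    "https://www.linkedin.com/feed/",
    "https://lnkd.in/abc123",                        # could redirect anywhere
    "https://linkedin.com.verify-account.app/login", # classic lookalike
]:
    print(url, "->", classify_link(url))
```

In practice the safest move remains the one described above: do not click at all, and check for any genuine alert inside your authenticated LinkedIn session.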

From an ethics standpoint, these scams directly implicate your duties under ABA Model Rules 1.1 and 1.6.⚖️ Comment 8 to Rule 1.1 stresses that competent representation includes understanding the benefits and risks associated with relevant technology. Failing to use basic safeguards on a platform where you communicate with clients and colleagues can fall short of that standard. Likewise, Rule 1.6 requires reasonable efforts to prevent unauthorized access to client information, which includes preventing account takeover that could expose your messages, contacts, or confidential discussions.

Public “policy violations” are a red flag!

Practically, you should enable multi‑factor authentication (MFA) on LinkedIn, use a unique, strong password stored in a reputable password manager, and review active sessions regularly for unfamiliar devices or locations.🔐 If you suspect you clicked a malicious link, immediately change your LinkedIn password, revoke active sessions, enable or confirm MFA, and run updated anti‑malware on your device. Then notify your firm’s IT or security contact and consider whether any client‑related disclosures are required under your jurisdiction’s ethics rules and breach‑notification laws.

Finally, build a culture of security awareness in your practice.👥 Brief colleagues and staff about this specific comment‑reply scam, show screenshots, and explain that LinkedIn does not resolve “policy violations” via comment threads. Encourage a “pause before you click” mindset and make reporting easy—internally to your IT team and externally to LinkedIn’s abuse channels. Taking these steps not only protects your professional identity but also demonstrates the technological competence and confidentiality safeguards the ABA Model Rules expect from modern legal practitioners.
