MTC: When Reputable Databases Fail: What Lawyers Must Do After AI Hallucinations Reach the Court

What should a lawyer do when they inadvertently use a hallucinated cite?

In a sobering December 2025 filing in Integrity Investment Fund, LLC v. Raoul, plaintiff's counsel disclosed what many in the legal profession feared: even reputable legal research platforms can generate hallucinated citations. The Motion to Amend Complaint revealed that "one of the cited cases in the pending Amended Complaint could not be found," along with other miscited cases, despite the legal team using LexisNexis and LEXIS+ Document Analysis tools rather than general-purpose AI like ChatGPT. The attorney expressed being "horrified" by these inexcusable errors, but horror alone does not satisfy ethical obligations.

This case crystallizes a critical truth for the legal profession: artificial intelligence remains a tool requiring rigorous human oversight, not a substitute for attorney judgment. When technology fails—and Stanford research confirms it fails at alarming rates—lawyers must understand their ethical duties and remedial obligations.

The Scope of the Problem: Even Premium Tools Hallucinate

Legal AI vendors marketed their products as hallucination-resistant, leveraging retrieval-augmented generation (RAG) technology to ground responses in authoritative legal databases. Yet as reported in our 📖 WORD OF THE YEAR 🥳 post, Verification: The 2025 Word of the Year for Legal Technology ⚖️💻, independent testing by Stanford's Human-Centered Artificial Intelligence program and RegLab reveals persistent accuracy problems. Lexis+ AI produced incorrect information 17% of the time, while Westlaw's AI-Assisted Research hallucinated at nearly double that rate—34% of queries.

These statistics expose a dangerous misconception: that specialized legal research platforms eliminate fabrication risks. The Integrity Investment Fund case demonstrates that attorneys using established, subscription-based legal databases still face citation failures. Courts nationwide have documented hundreds of cases involving AI-generated hallucinations, with 324 incidents in U.S. federal, state, and tribal courts as of late 2025. Legal professionals can no longer claim ignorance about AI limitations.

The consequences extend beyond individual attorneys. As one federal court warned, hallucinated citations that infiltrate judicial opinions create precedential contamination, potentially "sway[ing] an actual dispute between actual parties"—an outcome the court described as "scary". Each incident erodes public confidence in the justice system and, as one commentator noted, "sets back the adoption of AI in law".

The Ethical Framework: Three Foundational Rules

When attorneys discover AI-generated errors in court filings, three Model Rules of Professional Conduct establish clear obligations.

ABA Model Rule 1.1 mandates technological competence. The 2012 amendment to Comment 8 requires lawyers to "keep abreast of changes in the law and its practice, including the benefits and risks associated with relevant technology". Forty-one jurisdictions have adopted this technology competence requirement. This duty is ongoing and non-delegable. Attorneys cannot outsource their responsibility to understand the tools they deploy, even when those tools carry premium price tags and prestigious brand names.

Technological competence means understanding that current AI legal research tools hallucinate at rates ranging from 17% to 34%. It means recognizing that longer AI-generated responses contain more falsifiable propositions and therefore pose a greater risk of hallucination. It means implementing verification protocols rather than accepting AI output as authoritative.

ABA Model Rule 3.3 requires candor toward the tribunal. This rule prohibits knowingly making false statements of law or fact to a court and imposes an affirmative duty to correct false statements previously made. The duty continues until the conclusion of the proceeding. Critically, courts have held that the standard under Federal Rule of Civil Procedure 11 is objective reasonableness, not subjective good faith. As one court stated, "An attorney who acts with 'an empty head and a pure heart' is nonetheless responsible for the consequences of his actions".

When counsel in Integrity Investment Fund discovered the miscitations, filing a Motion to Amend Complaint fulfilled this corrective duty. The attorney took responsibility and sought to rectify the record before the court relied on fabricated authority. This represents the ethical minimum. Waiting for opposing counsel or the court to discover errors invites sanctions and disciplinary referrals.

The duty of candor applies regardless of how the error originated. In Kaur v. Desso, a Northern District of New York court rejected an attorney's argument that time pressure justified inadequate verification, stating that "the need to check whether the assertions and quotations generated were accurate trumps all". Professional obligations do not yield to convenience or deadline stress.

ABA Model Rules 5.1 and 5.3 establish supervisory responsibilities. Managing attorneys must ensure that subordinate lawyers and non-lawyer staff comply with the Rules of Professional Conduct. When a supervising attorney has knowledge of specific misconduct and ratifies it, the supervisor bears responsibility. This principle extends to AI-assisted work product.

The Integrity Investment Fund matter reportedly involved an experienced attorney assisting with drafting. Regardless of delegation, the signing attorney retains ultimate accountability. Law firms must implement training programs on AI limitations, establish mandatory review protocols for AI-generated research, and create policies governing which tools may be used and under what circumstances. Partners reviewing junior associate work must apply heightened scrutiny to AI-assisted documents, treating them as first drafts requiring comprehensive validation.

Federal Rule of Civil Procedure 11: The Litigation Hammer

Reputable databases can hallucinate too!

Beyond professional responsibility rules, Federal Rule of Civil Procedure 11 authorizes courts to impose sanctions on attorneys who submit documents without a reasonable inquiry into the facts and law. Courts may sanction the attorney, the party, or both. Sanctions range from monetary penalties paid to the court or opposing party to non-monetary directives, including mandatory continuing legal education, public reprimands, and referrals to disciplinary authorities.

Rule 11 contains a 21-day safe harbor provision. Before filing a sanctions motion, the moving party must serve the motion on opposing counsel, who has 21 days to withdraw or correct the challenged filing. If counsel promptly corrects the error during this window, sanctions may be avoided. This procedural protection rewards attorneys who implement monitoring systems to catch mistakes early.

Courts have imposed escalating consequences as AI hallucination cases proliferate. Early cases resulted in warnings or modest fines. Recent sanctions have grown more severe. A Colorado attorney received a 90-day suspension after admitting in text messages that he failed to verify ChatGPT-generated citations. An Arizona federal judge sanctioned an attorney and required her to personally notify three federal judges whose names appeared on fabricated opinions, revoked her pro hac vice admission, and referred her to the Washington State Bar Association. A California appellate court issued a historic fine after discovering 21 of 23 quotes in an opening brief were fake.

Morgan & Morgan—the 42nd largest law firm by headcount—faced a $5,000 sanction when attorneys filed a motion citing eight nonexistent cases generated by an internal AI platform. The court divided the sanction among three attorneys, with the signing attorney bearing the largest portion. The firm's response acknowledged "great embarrassment" and promised reforms, but the reputational damage extends beyond the individual case.

What Attorneys Must Do: A Seven-Step Protocol

Legal professionals who discover AI-generated errors in filed documents must act decisively. The following protocol aligns with ethical requirements and minimizes sanctions risk:

First, immediately cease relying on the affected research. Do not file additional briefs or make oral arguments based on potentially fabricated citations. If a hearing is imminent, notify the court that you are withdrawing specific legal arguments pending verification.

Second, conduct a comprehensive audit. Review every citation in the affected filing. Retrieve and read the full text of each case or statute cited. Verify that quoted language appears in the source and that the legal propositions match the authority's actual holding. Check citation accuracy using Shepard's or KeyCite to confirm cases remain good law. This process cannot be delegated to the AI tool that generated the original errors.

Third, assess the materiality of errors. Determine whether fabricated citations formed the basis for legal arguments or appeared as secondary support. In Integrity Investment Fund, counsel noted that "the main precedents...and the...statutory citations are correct, and none of the Plaintiffs' claims were based on the mis-cited cases". This distinction affects the appropriate remedy but does not eliminate the obligation to correct the record.

Fourth, notify opposing counsel immediately. Candor extends to adversaries. Explain that you have discovered citation errors and are taking corrective action. This transparency may forestall sanctions motions and demonstrates good faith to the court.

Fifth, file a corrective pleading or motion. In Integrity Investment Fund, counsel filed a Motion to Amend Complaint under Federal Rule of Civil Procedure 15(a)(2). Alternative vehicles include motions to correct the record, errata sheets, or supplemental briefs. The filing should acknowledge the errors explicitly, explain how they occurred without shifting blame to technology, take personal responsibility, and specify the corrections being made.

Sixth, notify the court in writing. Even if opposing counsel does not move for sanctions, attorneys have an independent duty to inform the tribunal of material misstatements. The notification should be factual and direct. In cases where fabricated citations attributed opinions to real judges, courts have required attorneys to send personal letters to those judges clarifying that the citations were fictitious.

Seventh, implement systemic reforms. Review firm-wide AI usage policies. Provide training on verification requirements. Establish mandatory review checkpoints for AI-assisted work product. Consider technology solutions such as citation validation software that flags cases not found in authoritative databases. Document these reforms in any correspondence with the court or bar authorities to demonstrate that the incident prompted institutional change.
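As a minimal illustration of the audit in the second step, a script can surface citations that fail to match an independently verified list. This is only a triage sketch under stated assumptions: the reporter pattern and the `VERIFIED` set below are hypothetical placeholders, and flagging is no substitute for retrieving and reading each authority.

```python
# Illustrative triage sketch: flag reporter citations not found in an
# independently verified list. The regex and VERIFIED set are
# hypothetical; real verification means pulling and reading each case.
import re

# Citations a human has already retrieved and read (hypothetical examples).
VERIFIED = {"575 U.S. 320", "139 S. Ct. 1485"}

def flag_unverified(brief_text: str) -> list[str]:
    """Return citations in the text that are absent from the verified set."""
    cites = re.findall(r"\d+ (?:U\.S\.|S\. Ct\.) \d+", brief_text)
    return [c for c in cites if c not in VERIFIED]

draft = "See 575 U.S. 320; compare 999 U.S. 111 (fabricated)."
print(flag_unverified(draft))  # only the unmatched citation is flagged
```

A flagged citation is not proof of fabrication, only a prompt for the human step that cannot be delegated: reading the source itself.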

The Duty to Supervise: Training the Humans and the Machines

The Integrity Investment Fund case involved an experienced attorney assisting with drafting, yet errors reached the court. This pattern appears throughout AI hallucination cases. In the Chicago Housing Authority litigation, the responsible attorney had previously published an article on ethical considerations of AI in legal practice, yet still submitted a brief citing the nonexistent case Mack v. Anderson. Knowledge about AI risks does not automatically translate into effective verification practices.

Law firms must treat AI tools as they would junior associates—competent at discrete tasks but requiring supervision. Partners should review AI-generated research as they would first-year associate work, assuming errors exist and exercising vigilant attention to detail. Unlike human associates who learn from corrections, AI systems may perpetuate errors across multiple matters until their underlying models are retrained.

Training programs should address specific hallucination patterns. AI tools frequently fabricate case citations with realistic-sounding names, accurate-appearing citation formats, and plausible procedural histories. They misrepresent legal holdings, confuse arguments made by litigants with court rulings, and fail to respect the hierarchy of legal authority. They cite proposed legislation as enacted law and rely on overturned precedents as current authority. Attorneys must learn to identify these red flags.

Supervisory duties extend to non-lawyer staff. If a paralegal uses an AI grammar checker on a document containing confidential case strategy, the supervising attorney bears responsibility for any confidentiality breach. When legal assistants use AI research tools, attorneys must verify their work with the same rigor applied to traditional research methods.

Client Communication and Informed Consent

Watch out for AI hallucinations!

Ethical obligations to clients intersect with AI usage in multiple ways. ABA Model Rule 1.4 requires attorneys to keep clients reasonably informed and to explain matters to the extent necessary for clients to make informed decisions. Several state bar opinions suggest that attorneys should obtain informed consent before inputting confidential client information into AI tools, particularly those that use data for model training.

The confidentiality analysis turns on the AI tool's data-handling practices. Many general-purpose AI platforms explicitly state in their terms of service that they use input data for model training and improvement. This creates significant privilege and confidentiality risks. Even legal-specific platforms may share data with third-party vendors or retain information on servers outside the firm's control. Attorneys must review vendor agreements, understand data flow, and ensure adequate safeguards exist before using AI tools on client matters.

When AI-generated errors reach a court filing, clients deserve prompt notification. The errors may affect litigation strategy, settlement calculations, or case outcome predictions. In extreme cases, such as when a court dismisses claims or imposes sanctions, malpractice liability may arise. Transparent communication preserves the attorney-client relationship and demonstrates that the lawyer prioritizes the client's interests over protecting their reputation.

Jurisdictional Variations: Illinois Sets the Standard

While the ABA Model Rules provide a national framework, individual jurisdictions have begun addressing AI-specific issues. Illinois, where the Integrity Investment Fund case was filed, has taken proactive steps.

The Illinois Supreme Court adopted a Policy on Artificial Intelligence effective January 1, 2025. The policy recognizes that AI presents challenges for protecting private information, avoiding bias and misrepresentation, and maintaining judicial integrity. The court emphasized "upholding the highest ethical standards in the administration of justice" as a primary concern.

In September 2025, Judge Sarah D. Smith of Madison County Circuit Court issued a Standing Order on Use of Artificial Intelligence in Civil Cases, later extended to other Madison County courtrooms. The order "embraces the advancement of AI" while mandating that tools "remain consistent with professional responsibilities, ethical standards and procedural rules". Key provisions include requirements for human oversight and legal judgment, verification of all AI-generated citations and legal statements, disclosure of expert reliance on AI to formulate opinions, and potential sanctions for submissions including "case law hallucinations, [inappropriate] statements of law, or ghost citations".

Arizona has been particularly active given the high number of AI hallucination cases in the state—second only to the Southern District of Florida. The State Bar of Arizona issued guidance calling on lawyers to verify all AI-generated research before submitting it to courts or clients. The Arizona Supreme Court's Steering Committee on AI and the Courts issued similar guidance emphasizing that judges and attorneys, not AI tools, are responsible for their work product.

Other states are following suit. California's State Bar issued Formal Opinion No. 2015-193 interpreting technological competence requirements. The District of Columbia Bar issued Ethics Opinion 388 in April 2024, specifically addressing generative artificial intelligence in client matters. These opinions converge on several principles: competence includes understanding AI technology sufficiently to be confident it advances client interests, all AI output requires verification before use, and technology assistance does not diminish attorney accountability.

The Path Forward: Responsible AI Integration

The legal profession stands at a crossroads. AI tools offer genuine efficiency gains—automated document review, pattern recognition in discovery, preliminary legal research, and jurisdictional surveys. Rejecting AI entirely would place practitioners at a competitive disadvantage and potentially violate the duty to provide competent, efficient representation.

Yet uncritical adoption invites the disasters documented in hundreds of cases nationwide. The middle path provided by the Illinois courts requires human oversight and legal judgment at every stage.

Attorneys should adopt a "trust but verify" approach. Use AI for initial research, document drafting, and analytical tasks, but implement mandatory verification protocols before any work product leaves the firm. Treat AI-generated citations as provisional until independently confirmed. Read cases rather than relying on AI summaries. Check the currency of legal authorities. Confirm that quotations appear in the cited sources.

Law firms should establish tiered AI usage policies. Low-risk applications such as document organization or calendar management may require minimal oversight. High-risk applications, including legal research, brief writing, and client advice, demand multiple layers of human review. Some uses—such as inputting highly confidential information into general-purpose AI platforms—should be prohibited entirely.

Billing practices must evolve. If AI reduces the time required for legal research from eight hours to two hours, the efficiency gain should benefit clients through lower fees rather than inflating attorney profits. Clients should not pay both for AI tool subscriptions and for the same number of billable hours as traditional research methods would require. Transparent billing practices build client trust and align with fiduciary obligations.

Lessons from Integrity Investment Fund

The Integrity Investment Fund case offers several instructive elements. First, the attorney used a reputable legal database rather than a general-purpose AI. This demonstrates that brand name and subscription fees do not guarantee accuracy. Second, the attorney discovered the errors and voluntarily sought to amend the complaint rather than waiting for opposing counsel or the court to raise the issue. This proactive approach likely mitigated potential sanctions. Third, the attorney took personal responsibility, describing himself as "horrified" rather than deflecting blame to the technology.

The court's response also merits attention. Rather than immediately imposing sanctions, the court directed defendants to respond to the motion to amend and address the effect on pending motions to dismiss. This measured approach recognizes that not all AI-related errors warrant the most severe consequences, particularly when counsel acts promptly to correct the record. Defendants agreed that "the striking of all miscited and non-existent cases [is] proper", suggesting that cooperation and candor can lead to reasonable resolutions.

The fact that "the main precedents...and the...statutory citations are correct" and "none of the Plaintiffs' claims were based on the mis-cited cases" likely influenced the court's analysis. This underscores the importance of distinguishing between errors in supporting citations versus errors in primary authorities. Both require correction, but the latter carries greater risk of case-dispositive consequences and sanctions.

The Broader Imperative: Preserving Professional Judgment

Lawyers must verify their AI work!

Judge Castel's observation in Mata v. Avianca that "many harms flow from the submission of fake opinions" captures the stakes. Beyond individual case outcomes, AI hallucinations threaten systemic values: judicial efficiency, precedential reliability, adversarial fairness, and public confidence in legal institutions.

Attorneys serve as officers of the court with special obligations to the administration of justice. This role cannot be automated. AI lacks the judgment to balance competing legal principles, to assess the credibility of factual assertions, to understand client objectives in their full context, or to exercise discretion in ways that advance both client interests and systemic values.

The attorney in Integrity Investment Fund learned a costly lesson that the profession must collectively absorb: reputable databases, sophisticated algorithms, and expensive subscriptions do not eliminate the need for human verification. AI remains a tool—powerful, useful, and increasingly indispensable—but still just a tool. The attorney who signs a pleading, who argues before a court, and who advises a client bears professional responsibility that technology cannot assume.

As AI capabilities expand and integration deepens, the temptation to trust automated output will intensify. The profession must resist that temptation. Every citation requires verification. Every legal proposition demands confirmation. Every AI-generated document needs human review. These are not burdensome obstacles to efficiency but essential guardrails protecting clients, courts, and the justice system itself.

When errors occur—and the statistics confirm they will occur with disturbing frequency—attorneys must act immediately to correct the record, accept responsibility, and implement reforms preventing recurrence. Horror at one's mistakes, while understandable, satisfies no ethical obligation. Action does.

MTC

MTC: 2025 Year in Review: The "AI Squeeze," Redaction Disasters, and the Return of Hardware!

As we close the book on 2025, the legal profession finds itself in a dramatically different landscape than the one we predicted back in January. If 2023 was the year of "AI Hype" and 2024 was the year of "AI Experimentation," 2025 has undeniably been the year of the "AI Reality Check."

Here at The Tech-Savvy Lawyer.Page, we have spent the last twelve months documenting the friction between rapid innovation and the stubborn realities of legal practice. From our podcast conversations with industry leaders like Seth Price and Chris Dralla to our deep dives into the ethics of digital practice, one theme has remained constant: Competence is no longer optional; it is survival.

Looking back at our coverage from this past year, three specific highlights stand out as defining moments for legal technology in 2025. These aren't just news items; they are signals of where our profession is heading.

Highlight #1: The "Black Box" Redaction Wake-Up Call

Just days ago, on December 23, 2025, the legal world learned of a catastrophic failure of basic technological competence. As we covered in our recent post, How To: Redact PDF Documents Properly and Recover Data from Failed Redactions: A Guide for Lawyers After the DOJ Epstein Files Release “Leak”, the Department of Justice’s release of the Jeffrey Epstein files became a case study in what not to do.

The failure was simple but devastating: relying on visual "masks" rather than true data sanitization. Tech-savvy readers—and let’s be honest, anyone with a basic knowledge of copy-paste—were able to lift the "redacted" names of associates and victims directly from the PDF.
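The mask-versus-sanitization distinction can be shown with a toy simulation. This is a conceptual sketch, not a real PDF API: the `"text"` field stands in for a PDF content stream, and the functions are hypothetical illustrations of the two approaches.

```python
# Conceptual simulation (not a real PDF library): a page whose "text"
# stands in for the PDF content stream. A visual mask draws a box on
# top of the text; true sanitization rewrites the stream itself.

def visual_mask(page, secret):
    # Draws an opaque rectangle over the secret; the text layer is untouched.
    page["overlays"].append(("black_box_over", secret))
    return page

def sanitize(page, secret):
    # True redaction removes the secret from the content stream.
    page["text"] = page["text"].replace(secret, "[REDACTED]")
    return page

def copy_paste(page):
    # Text extraction reads the content stream and ignores drawn overlays.
    return page["text"]

masked = visual_mask({"text": "Witness: Jane Doe", "overlays": []}, "Jane Doe")
cleaned = sanitize({"text": "Witness: Jane Doe", "overlays": []}, "Jane Doe")

print(copy_paste(masked))   # the "redacted" name is still extractable
print(copy_paste(cleaned))  # the name is actually gone
```

Purpose-built redaction tools do the second operation: they delete the underlying text and scrub metadata, so nothing remains for copy-paste to recover.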

Why this matters for you: This event shattered the illusion that "good enough" tech skills are acceptable in high-stakes litigation. In 2025, we learned that the duty of confidentiality (Model Rule 1.6) is inextricably linked to the duty of technical competence (Model Rule 1.1 and its Comment 8). As we move into 2026, firms must move beyond basic PDF tools and invest in purpose-built redaction software that "burns in" changes and scrubs metadata. If the DOJ can fail this publicly, your firm is not immune.

Highlight #2: The "AI Squeeze" on Hardware

Throughout the year, we’ve heard complaints about sluggish laptops and crashing applications. In our December 22nd post, The 2026 Hardware Hike: Why Law Firms Must Budget for the 'AI Squeeze' Now, we identified the culprit. It isn’t just your imagination—it’s the supply chain.

We are currently facing a global shortage of DRAM (Dynamic Random Access Memory), driven by the insatiable appetite of data centers powering the very AI models we use daily. Manufacturers like Dell and Lenovo are pivoting their supply to these high-profit enterprise clients, leaving consumer and business laptops with a supply deficit.

Why this matters for you: The era of the 16GB RAM laptop for lawyers is dead. Running local, privacy-focused AI models (a major trend in 2025) and heavy eDiscovery platforms now requires 32GB or even 64GB of RAM as a baseline (which means you may want more than the “baseline”). The "AI Squeeze" means that in 2026, hardware will be 15-20% more expensive and harder to find. The lesson? Buy now. If your firm has a hardware refresh cycle planned for Q2 2026, accelerate it to Q1. Budgeting for technology is no longer just about software subscriptions; it’s about securing the physical silicon needed to do your job.

Highlight #3: From "Chat" to "Doing" (The Rise of Agentic AI)

Earlier this year, on the Tech-Savvy Lawyer Podcast, we spoke with Chris Dralla of TypeLaw and discussed the evolution of AI tools. 2025 marked the shift from "Chatbot AI" (asking a bot a question) to "Agentic AI" (telling a bot to do a job).

Tools like TypeLaw didn't just "summarize" cases this year; they actively formatted briefs, checked citations against local court rules, and built tables of authorities with minimal human intervention. This is the "boring" automation we have always advocated for—technology that doesn't try to be a robot lawyer, but acts as a tireless paralegal.

Why this matters for you: The novelty of chatting with an LLM has worn off. The firms winning in 2025 were the ones adopting tools that integrated directly into Microsoft Word and Outlook to automate specific, repetitive workflows. The "Generalist AI" is being replaced by the "Specialist Agent."

Moving Forward: What We Can Learn Today for 2026

As we look toward the new year, the profession must internalize a critical lesson: Technology is a supply chain risk.

Whether it is the supply of affordable memory chips or the supply of secure software that properly handles redactions, you are dependent on your tools. The "Tech-Savvy" lawyer of 2026 is not just a user of technology but a manager of technology risk.

What to Expect in 2026:

Is your firm budgeted for the anticipated 2026 hardware price hike?

  1. The Rise of the "Hybrid Builder": I predict that mid-sized firms will stop waiting for vendors to build the perfect tool and start building their own "micro-apps" on top of secure, private AI models.

  2. Mandatory Tech Competence CLEs: Rigorous enforcement of tech competence rules will likely follow the high-profile data breaches and redaction failures of 2025.

  3. The Death of the Billable Hour (Again?): With "Agentic AI" handling the grunt work of drafting and formatting, clients will aggressively push back on bills for "document review" or "formatting." 2026 will force firms to bill for judgment, not just time.

As we sign off for the last time in 2025, remember our motto: Technology should make us better lawyers, not lazier ones. Check your redactions, upgrade your RAM, and we’ll see you in 2026.

Happy Lawyering and Happy New Year!

MTC: The 2026 Hardware Hike: Why Law Firms Must Budget for the "AI Squeeze" Now!

Lawyers need to be ready for tech prices to go up next year due to increased AI use!

A perfect storm is brewing in the hardware market. It will hit law firm budgets harder than expected in 2026. Reports from December 2025 confirm that major manufacturers like Dell, Lenovo, and HP are preparing to raise PC and laptop prices by 15% to 20% early next year. The catalyst is a global shortage of DRAM (Dynamic Random Access Memory). This shortage is driven by the insatiable appetite of AI servers.

While recent headlines note that giants like Apple and Samsung have the supply chain power to weather this surge, the average law firm does not. This creates a critical strategic challenge for managing partners and legal administrators.

The timing is unfortunate. Legal professionals are adopting AI tools at a record pace. Tools for eDiscovery, contract analysis, and generative drafting require significant computing power to run smoothly. In 2024, a laptop with 16GB of RAM was standard. Today, running local privacy-focused AI models or heavy eDiscovery platforms makes 32GB the new baseline. 64GB is becoming the standard for power users.

💡 PRO TIP: Future-Proof Your Firm's Hardware Now

Don’t just meet today’s AI demands—exceed them. Upgrade to 32GB or 64GB of RAM now, not later. AI adoption in legal practice is accelerating exponentially. The memory you think is “enough” today will be the bottleneck tomorrow. Firms that overspec their hardware now will avoid costly mid-cycle replacements and gain a competitive edge in speed and efficiency.

We face a paradox. We need more memory to remain competitive, but that memory is becoming scarce and expensive. The "AI Squeeze" is real. Chipmakers are prioritizing high-profit memory for data center AI over the standard memory used in law firm laptops. This supply shift drives up the bill of materials for every new workstation you plan to buy—low-end hardware compared with those high-profit data-center builds.

Update your firm’s tech budget for 2026 by prioritizing RAM in your next technology upgrade.

Law firms should act immediately. First, audit your hardware refresh cycles. If you planned to upgrade machines in Q1 or Q2 of 2026, accelerate those purchases to the current quarter. You could save up to 20% per unit by buying before the price hikes take full effect.
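The math behind accelerating the refresh is simple to check. A back-of-the-envelope sketch, with a hypothetical fleet size and unit price and the 15-20% hike range reported above:

```python
# Back-of-the-envelope budgeting sketch. Fleet size and unit price are
# hypothetical; the 15-20% hike range comes from the reports cited above.

def refresh_cost(units, unit_price, hike=0.0):
    """Total cost of a hardware refresh at a given price-hike rate."""
    return units * unit_price * (1 + hike)

FLEET, PRICE = 25, 2_000  # e.g., 25 laptops at $2,000 each (hypothetical)

buy_now = refresh_cost(FLEET, PRICE)
buy_2026_low = refresh_cost(FLEET, PRICE, hike=0.15)
buy_2026_high = refresh_cost(FLEET, PRICE, hike=0.20)

print(buy_now, buy_2026_low, buy_2026_high)
# Deferring the same refresh costs roughly $7,500-$10,000 more.
```

On those assumed numbers, waiting past the hike adds five figures to an otherwise identical purchase, which is the argument for pulling Q1/Q2 2026 spending into the current quarter.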

Second, adjust your 2026 technology budget. A flat budget will buy you less power next year. You cannot afford to downgrade specifications. Buying underpowered laptops will frustrate fee earners and throttle the efficiency gains you expect from your AI investments.

Finally, prioritize RAM over storage. Cloud storage is cheap and abundant. Memory is not. When configuring new machines, allocate your budget to 32GB or 64GB (or more) of RAM rather than a larger hard drive.

The hardware market is shifting. The cost of innovation is rising. Smart firms will plan for this reality today rather than paying the premium tomorrow.

🧪🎧 TSL Labs Bonus Podcast: Open vs. Closed AI — The Hidden Liability Trap in Your Firm ⚖️🤖

Welcome to TSL Labs Podcast Experiment. 🧪🎧 In this special "Deep Dive" bonus episode, we strip away the hype surrounding Generative AI to expose a critical operational risk hiding in plain sight: the dangerous confusion between "Open" and "Closed" AI systems.

Featuring an engaging discussion between our Google Notebook AI hosts, this episode unpacks the "Swiss Army Knife vs. Scalpel" analogy that every managing partner needs to understand. We explore why the "Green Light" tools you pay for are fundamentally different from the "Red Light" public models your staff might be using—and why treating them the same could trigger an immediate breach of ABA Model Rule 5.3. From the "hidden crisis" of AI embedded in Microsoft 365 to the non-negotiable duty to supervise, this is the essential briefing for protecting client confidentiality in the age of algorithms.

In our conversation, we cover the following:

  • [00:00] – Introduction: The hidden danger of AI in law firms.

  • [01:00] – The "AI Gap": Why staff confuse efficiency with confidentiality.

  • [02:00] – The Green Light Zone: Defining secure, "Closed" AI systems (The Scalpel).

  • [03:45] – The Red Light Zone: Understanding "Open" Public LLMs (The Swiss Army Knife).

  • [04:45] – "Feeding the Beast": How public queries actively train the model for everyone else.

  • [05:45] – The Duty to Supervise: ABA Model Rules 5.3 and 1.1[8] implications.

  • [07:00] – The Hidden Crisis: AI embedded in ubiquitous tools (Microsoft 365, Adobe, Zoom).

  • [09:00] – The Training Gap: Why digital natives assume all prompt boxes are safe.

  • [10:00] – Actionable Solutions: Auditing tools and the "Elevator vs. Private Room" analogy.

  • [12:00] – Hallucinations: Vendor liability vs. Professional negligence.

  • [14:00] – Conclusion: The final provocative thought on accidental breaches.

RESOURCES

Mentioned in the episode

Software & Cloud Services mentioned in the conversation

MTC (Bonus): National Court Technology Rules: Finding Balance Between Guidance and Flexibility ⚖️

Standardizing Tech Guidelines in the Legal System

Lawyers and their staff need to know the standard and local rules of AI use in the courtroom - their license could depend on it.

The legal profession stands at a critical juncture where technological capability has far outpaced judicial guidance. Nicole Black's recent commentary on the fragmented approach to technology regulation in our courts identifies a genuine problem—one that demands serious consideration from both proponents of modernization and cautious skeptics alike.

The core tension is understandable. Courts face legitimate concerns about technology misuse. The LinkedIn juror research incident in Judge Orrick's courtroom illustrates real risks: a consultant unknowingly violated a standing order, resulting in a $10,000 sanction despite the attorney's good-faith disclosure and remedial efforts. These aren't theoretical concerns—they reflect actual ethical boundaries that protect litigants and preserve judicial integrity. Yet the response to these concerns has created its own problems.

The current patchwork system places practicing attorneys in an impossible position. A lawyer handling cases across multiple federal districts cannot reasonably track the varying restrictions on artificial intelligence disclosure, social media evidence protocols, and digital research methodologies. When the safe harbor is simply avoiding technology altogether, the profession loses genuine opportunities to enhance accuracy and efficiency. Generative AI's citation hallucinations justify judicial scrutiny, but the ad hoc response by individual judges—ranging from simple guidance to outright bans—creates unpredictability that chills responsible innovation.

Should there be a national standard for AI use in the courtroom?

There are legitimate reasons to resist uniform national rules. Local courts understand their communities and case management needs better than distant regulatory bodies. A one-size-fits-all approach might impose burdensome requirements on rural jurisdictions with fewer tech-savvy practitioners. Furthermore, rapid technological evolution could render national rules obsolete within months, whereas individual judges retain flexibility to respond quickly to emerging problems.

Conversely, the current decentralized approach creates serious friction. The 2006 amendments to Federal Rules of Civil Procedure for electronically stored information succeeded partly because they established predictability across jurisdictions. Lawyers knew what preservation obligations applied regardless of venue. That uniformity enabled the profession to invest in training, software, and processes. Today's lawyers lack that certainty. Practitioners must maintain contact lists tracking individual judge orders, and smaller firms simply cannot sustain this administrative burden.

The answer likely lies between extremes. Rather than comprehensive national legislation, the profession would benefit from model standards developed collaboratively by the Federal Judicial Conference, state supreme courts, and bar associations. These guidelines could allow reasonable judicial discretion while establishing baseline expectations—defining when AI disclosure is mandatory, clarifying which social media research constitutes impermissible contact, and specifying preservation protocols that protect evidence without paralyzing litigation.

Such an approach acknowledges both legitimate judicial concerns and legitimate professional needs. It recognizes that judges require authority to protect courtroom procedures while recognizing that lawyers require predictability to serve clients effectively.

I basically agree with Nicole: The question is not whether courts should govern technology use. They must. The question is whether they govern wisely—with sufficient uniformity to enable compliance, sufficient flexibility to address local concerns, and sufficient clarity to encourage rather than discourage responsible innovation.

MTC: The Hidden Danger in Your Firm: Why We Must Teach the Difference Between “Open” and “Closed” AI!

Does your staff understand the difference between “free” and “paid” AI? Your license could depend on it!

I sit on an advisory board for a school that trains paralegals. We meet to discuss curriculum. We talk about the future of legal support. In a recent meeting, a presentation by a private legal research company caught my attention. It stopped me cold. The topic was Artificial Intelligence. The focus was on use and efficiency. But something critical was missing.

The lesson did not distinguish between public-facing and private tools. It treated AI as a monolith. This is a dangerous oversimplification. It is a liability waiting to happen.

We are in a new era of legal technology. It is exciting. It is also perilous. The peril comes from confusion. Specifically, the confusion between paid, closed-system legal research tools and public-facing generative AI.

Your paralegals, law clerks, and staff use these tools. They use them to draft emails. They use them to summarize depositions. Do they know where that data goes? Do you?

The Two Worlds of AI

There are two distinct worlds of AI in our profession.

First, there is the world of "Closed" AI. These are the tools we pay for, e.g., Lexis+/Protege, Westlaw Precision, Co-Counsel, Harvey, vLex Vincent, etc. These platforms are built for lawyers. They are walled gardens. You pay a premium for them. (Always check the terms and conditions of your providers.) That premium buys you more than just access. It buys you privacy. It buys you security. When you upload a case file to Westlaw, it stays there. The AI analyzes it. It does not learn from it for the public. It does not share your client’s secrets with the world. The data remains yours. The confidentiality is baked in.

Then, there is the world of "Open" or "Public" AI. This is ChatGPT. This is Perplexity. This is Claude. These tools are miraculous. But they are also voracious learners.

When you type a query into the free version of ChatGPT, you are not just asking a question. You are training the model. You are feeding the beast. If a paralegal types, "Draft a motion to dismiss for John Doe, who is accused of embezzlement at [Specific Company]," that information leaves your firm. It enters a public dataset. It is no longer confidential.

This is the distinction that was missing from the lesson plan. It is the distinction that could cost you your license.

The Duty to Supervise

Do you and your staff know when you can and can’t use free AI in your legal work?

You might be thinking, "I don't use ChatGPT for client work, so I'm safe." You are wrong.

You are not the only one doing the work. Your staff is doing the work. Your paralegals are doing the work.

Under the ABA Model Rules of Professional Conduct, you are responsible for them. Look at Rule 5.3. It covers "Responsibilities Regarding Nonlawyer Assistance." It is unambiguous. You must make reasonable efforts to ensure your staff's conduct is compatible with your professional obligations.

If your paralegal breaches confidentiality using AI, it is your breach. If your associate hallucinates a case citation using a public LLM, it is your hallucination.

This connects directly to Rule 1.1, Comment 8. This represents the duty of technology competence. You cannot supervise what you do not understand. You must understand the risks associated with relevant technology. Today, that means understanding how Large Language Models (LLMs) handle data.

The "Hidden AI" Problem

I have discussed this on The Tech-Savvy Lawyer.Page Podcast. We call it the "Hidden AI" crisis. AI is creeping into tools we use every day. It is in Adobe. It is in Zoom. It is in Microsoft 365.

Public-facing AI is useful. I use it. I love it for marketing. I use it for brainstorming generic topics. I use it to clean up non-confidential text. But I never trust it with a client's name. I never trust it with a very specific fact pattern.

A paid legal research tool is different. It is a scalpel. It is precise. It is sterile. A public chatbot is a Swiss Army knife found on the sidewalk. It might work. But you don't know where it's been.

The Training Gap

The advisory board meeting revealed a gap. Schools are teaching students how to use AI. They are teaching prompts. They are teaching speed. They are not emphasizing the where.

The "where" matters. Where does the data go?

We must close this gap in our own firms. You cannot assume your staff knows the difference. To a digital native, a text box is a text box. They see a prompt window in Westlaw. They see a prompt window in ChatGPT. They look the same. They act the same.

They are not the same.

One protects you. The other exposes you.

A Practical Solution

I have written about this in my blog posts regarding AI ethics. The solution is not to ban AI. That is impossible. It is also foolish. AI is a competitive advantage.

* Always check the terms of use in your agreements with private platforms to determine if your client confidential data and PII are protected.

The solution is policies and training.

  1. Audit Your Tools. Know what you have. Do you have an enterprise license for ChatGPT? If so, your data might be private. If not, assume it is public.

  2. Train on the "Why." Don't just say "No." Explain the mechanism. Explain that public AI learns from inputs. Use the analogy of a confidential conversation in a crowded elevator versus a private conference room.

  3. Define "Open" vs. "Closed." Create a visual guide. List your "Green Light" tools (Westlaw, Lexis, etc.). List your "Red Light" tools for client data (Free ChatGPT, personal Gmail, etc.).

  4. Supervise Output. Review the work. AI hallucinates. Even paid tools can make mistakes. Public tools make up cases entirely. We have all seen the headlines. Don't be the next headline.
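The "Green Light / Red Light" guide in step 3 can also be kept as a simple machine-readable list that staff, or an intake script, can check before pasting client data anywhere. A minimal sketch, with a default-deny posture; the specific entries are illustrative assumptions for your own audit, not endorsements:

```python
# Hypothetical firm policy map: which tools may see client-confidential data.
# Categories mirror the "Green Light" / "Red Light" visual guide; the entries
# are illustrative assumptions only, so audit your own agreements.

POLICY = {
    "Westlaw Precision": "green",  # closed, contract-protected platform
    "Lexis+": "green",
    "ChatGPT (free)": "red",       # public model; inputs may train it
    "Personal Gmail": "red",
}

def may_use_for_client_data(tool: str) -> bool:
    """Default-deny: any tool not yet audited is treated as 'red'."""
    return POLICY.get(tool, "red") == "green"

print(may_use_for_client_data("Westlaw Precision"))  # True
print(may_use_for_client_data("Some New Chatbot"))   # False (unaudited)
```

The design choice worth copying is the default: an unknown prompt box is treated as public until someone has read the terms of service and moved it to the green list.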

The Expert Advantage

The line between “free” and “paid” AI could be a matter of keeping your bar license!

On The Tech-Savvy Lawyer.Page, I often say that technology should make us better lawyers, not lazier ones.

Using Lexis+/Protege, Westlaw Precision, Co-Counsel, Harvey, vLex Vincent, etc. is about leveraging a curated, verified database. It is about relying on authority. Using a public LLM for legal research is about rolling the dice.

Your license is hard-earned. Your reputation is priceless. Do not risk them on a free chatbot.

The lesson from the advisory board was clear. The schools are trying to keep up. But the technology moves faster than the curriculum. It is up to us. We are the supervisors. We are the gatekeepers.

Take time this week. Gather your team. Ask them what tools they use. You might be surprised. Then, teach them the difference. Show them the risks.

Be the tech-savvy lawyer your clients deserve. Be the supervisor the Rules require.

The tools are here to stay. Let’s use them effectively. Let’s use them ethically. Let’s use them safely.

MTC

TSL Labs 🧪Bonus: 🎙️ From Cyber Compliance to Cyber Dominance: What VA's AI Revolution Means for Government Cybersecurity, Legal Ethics, and ABA Model Rule Compliance!

In this TSL Labs bonus episode, we examine this week’s editorial on how the Department of Veterans Affairs is leading a historic transformation from traditional compliance frameworks to a dynamic, AI-driven approach called "cyber dominance." This conversation unpacks what this seismic shift means for legal professionals across all practice areas—from procurement and contract law to privacy, FOIA, and litigation. Whether you're advising government agencies, representing contractors, or handling cases where data security matters, this discussion provides essential insights into how continuous monitoring, zero trust architecture, and AI-driven threat detection are redefining professional competence under ABA Model Rule 1.1. 💻⚖️🤖

Join our AI hosts and me as we discuss the following three questions and more!

  1. How has federal cybersecurity evolved from the compliance era to the cyber dominance paradigm? 🔒

  2. What are the three technical pillars—continuous monitoring, zero trust architecture, and AI-driven detection—and how do they interconnect? 🛡️

  3. What professional liability and ethical obligations do lawyers now face under ABA Model Rule 1.1 regarding technology competence? ⚖️

In our conversation, we cover the following:

  • [00:00:00] - Introduction: TSL Labs Bonus Podcast on VA's AI Revolution 🎯

  • [00:01:00] - Introduction to Federal Cybersecurity: The End of the Compliance Era 📋

  • [00:02:00] - Legal Implications and Professional Liability Under ABA Model Rules ⚖️

  • [00:03:00] - From Compliance to Continuous Monitoring: Understanding the Static Security Model 🔄

  • [00:04:00] - The False Comfort of Compliance-Only Approaches 🚨

  • [00:05:00] - The Shift to Cyber Dominance: Three Integrated Technical Pillars 💪

  • [00:06:00] - Zero Trust Architecture (ZTA) Explained: Verify Everything, Trust Nothing 🔐

  • [00:07:00] - AI-Driven Detection and Legal Challenges: Professional Competence Under Model Rule 1.1 🤖

  • [00:08:00] - The New Legal Questions: Real-Time Risk vs. Static Compliance 📊

  • [00:09:00] - Evolving Compliance: From Paper Checks to Dynamic Evidence 📈

  • [00:10:00] - Cybersecurity as Operational Discipline: DevSecOps and Security by Design 🔧

  • [00:11:00] - Litigation Risks: Discovery, Red Teaming, and Continuous Monitoring Data ⚠️

  • [00:12:00] - Cyber Governance with AI: Algorithmic Bias and Explainability 🧠

  • [00:13:00] - Synthesis and Future Outlook: Law Must Lead, Not Chase Technology 🚀

  • [00:14:00] - The Ultimate Question: Is Your Advice Ready for Real-Time Risk Management? 💡

  • [00:15:00] - Conclusion and Resources 📚

Resources

Mentioned in the Episode

Software & Cloud Services Mentioned in the Conversation

  • AI-Driven Detection Systems - Automated threat detection and response platforms

  • Automated Compliance Platforms - Dynamic evidence generation systems

  • Continuous Monitoring Systems - Real-time security assessment platforms

  • DevSecOps Tools - Automated security testing in software development pipelines

  • Firewalls - Network security hardware devices

  • Google Notebook AI - https://notebooklm.google.com/

  • Penetration Testing Software - Security vulnerability assessment tools

  • Zero Trust Architecture (ZTA) Solutions - Identity and access verification systems

MTC: From Cyber Compliance to Cyber Dominance: What VA’s AI Revolution Means for Government Cybersecurity, Legal Ethics, and ABA Model Rule Compliance 💻⚖️🤖

In the age of cyber dominance, “I did not understand the technology” is increasingly unlikely to serve as a safe harbor.


Government technology is in the middle of a historic shift. The Department of Veterans Affairs (VA) stands at the center of this transformation, moving from a check‑the‑box cybersecurity culture to a model of “cyber dominance” that fuses artificial intelligence (AI), zero trust architecture (a security model that assumes no user or device is trusted by default, even inside the network), and continuous risk management. 🔐

For lawyers who touch government work in any way—inside agencies, representing contractors, handling whistleblowers, litigating Freedom of Information Act (FOIA) or privacy issues, or advising regulated entities—this is not just an IT story. It is a law license story. Under the American Bar Association (ABA) Model Rules, failing to grasp core cyber and AI governance concepts can now translate into ethical risk and potential disciplinary exposure. ⚠️

Resources such as The Tech-Savvy Lawyer.Page blog and podcast are no longer “nice to have.” They are becoming essential continuing education for lawyers who want to stay competent in practice, protect their clients, and safeguard their own professional standing. 🧠🎧

Where Government Agency Technology Has Been: The Compliance Era 🗂️

For decades, many federal agencies lived in a world dominated by static compliance frameworks. Security often meant passing audits and meeting minimum requirements, including:

  • Annual or periodic Authority to Operate (ATO, the formal approval for a system to run in a production environment based on security review) exercises

  • A focus on the Federal Information Security Modernization Act (FISMA) and National Institute of Standards and Technology (NIST) security control checklists

  • Point‑in‑time penetration tests

  • Voluminous documentation, thin on real‑time risk

The VA was no exception. Like many agencies, it grappled with large legacy systems, fragmented data, and a culture in which “security” was a paperwork event, not an operational discipline. 🧾

In that world, lawyers often saw cybersecurity as a box to tick in contracts, privacy impact assessments, and procurement documentation. The legal lens focused on:

  • Whether the required clauses were in place

  • Whether a particular system had its ATO

  • Whether mandatory training was completed

The result: the law frequently chased the technology instead of shaping it.

Where Government Technology Is Going: Cyber Dominance at the VA 🚀

The VA is now in the midst of what its leadership calls a “cybersecurity awakening” and a shift toward “cyber dominance”. The message is clear: compliance is not enough, and in many ways, it can be dangerously misleading if it creates a false sense of security.

Key elements of this new direction include:

  • Continuous monitoring instead of purely static certification

  • Zero trust architecture (a security model that assumes no user, device, or system is trusted by default, and that every access request must be verified) as a design requirement, not an afterthought

  • AI‑driven threat detection and anomaly spotting at scale

  • Cybersecurity integrated into mission operations, not a separate silo

  • Real‑time incident response and resilience, rather than after‑the‑fact blame

“Cyber dominance” reframes cybersecurity as a dynamic contest with adversaries. Agencies must assume compromise, hunt threats proactively, and adapt in near real time. That shift depends heavily on data engineering, automation, and AI models that can process signals far beyond human capacity. 🤖

For both government and nongovernment lawyers, this means that the facts on the ground—what systems actually do, how they are monitored, and how decisions are made—are changing fast. Advocacy and counseling that rely on outdated assumptions about “IT systems” will be incomplete at best and unethical at worst.

The Future: Cybersecurity Compliance, Cybersecurity, and Cybergovernance with AI 🔐🌐

The future of government technology involves an intricate blend of compliance, operational security, and AI governance. Each element increasingly intersects with legal obligations and the ABA Model Rules.

1. Cybersecurity Compliance: From Static to Dynamic ⚙️

Traditional compliance is not disappearing. The FISMA, NIST standards, the Federal Risk and Authorization Management Program (FedRAMP), the Health Insurance Portability and Accountability Act (HIPAA), and other frameworks still govern federal systems and contractor environments.

But the definition of compliance is evolving:

  • Continuous compliance: Automated tools generate near real‑time evidence of security posture instead of relying only on annual snapshots.

  • Risk‑based prioritization: Not every control is equal; agencies must show how they prioritize high‑impact cyber risks.

  • Outcome‑focused oversight: Auditors and inspectors general care less about checklists and more about measurable risk reduction and resilience.

Lawyers must understand that “we’re compliant” will no longer end the conversation. Decision‑makers will ask:

  • What does real‑time monitoring show about actual risk?

  • How quickly can the VA or a contractor detect and contain an intrusion?

  • How are AI tools verifying, logging, and explaining security‑related decisions?

2. Cybersecurity as an Operational Discipline 🛡️

The VA’s push toward cyber dominance relies on building security into daily operations, not layering it on top. That includes:

  • Secure‑by‑design procurement and contract terms, which require modern controls and realistic reporting duties

  • DevSecOps (development, security, and operations) pipelines that embed automated security testing and code scanning into everyday software development

  • Data segmentation and least‑privilege access across systems, so users and services only see what they truly need

  • Routine red‑teaming (simulated attacks by ethical hackers to test defenses) and table‑top exercises (structured discussion‑based simulations of incidents to test response plans)

For government and nongovernment lawyers, this raises important questions:

  • Are contracts, regulations, and interagency agreements aligned with zero trust principles (treating every access request as untrusted until verified)?

  • Do incident response plans meet regulatory and contractual notification timelines, including state and federal breach laws?

  • Are representations to courts, oversight bodies, and counterparties accurate in light of actual cyber capabilities and known limitations?

3. Cybergovernance with AI: The New Frontier 🌐🤖

Lawyers can no longer sit idly by as their cyber-ethics responsibilities change!

AI will increasingly shape how agencies, including the VA, manage cyber risk:

  • Machine learning models will flag suspicious behavior or anomalous network traffic faster than humans alone.

  • Generative AI tools will help triage incidents, search legal and policy documents, and assist with internal investigations.

  • Decision‑support systems may influence resource allocation, benefit determinations, or enforcement priorities.

These systems raise clear legal and ethical issues:

  • Transparency and explainability: Can lawyers understand and, if necessary, challenge the logic behind AI‑assisted or AI‑driven decisions?

  • Bias and fairness: Do algorithms create discriminatory impacts on veterans, contractors, or employees, even if unintentional?

  • Data governance: Is sensitive, confidential, or privileged information being exposed to third‑party AI providers or trained into their models?

Resources like The Tech-Savvy Lawyer.Page blog and podcast often highlight practical workflows for lawyers using AI tools safely, along with concrete questions to ask vendors and IT teams. Those insights are particularly valuable as agencies and law practices both experiment with AI for document review, legal research, and compliance tracking. 💡📲

What Lawyers in Government and Nongovernment Need to Know 🏛️⚖️

Lawyers inside agencies such as the VA now sit at the intersection of mission, technology, and ethics. Under ABA Model Rule 1.1 (Competence) and its comment on technological competence, agency counsel must acquire and maintain a basic understanding of relevant technology that affects client representation.

For government lawyers and nongovernment lawyers who advise, contract with, or litigate against agencies such as the VA, technological competence now has a common core. It requires enough understanding of system architecture, cybersecurity practices, and AI‑driven tools to ask the right questions, spot red flags, and give legally sound, ethics‑compliant advice on how those systems affect veterans, agencies, contractors, and the public. ⚖️💻

For government lawyers and nongovernment lawyers who interact with agencies such as the VA, this includes:

  • Understanding the basic architecture and risk profile of key systems (for example, benefits, health data, identity, and claims platforms), so you can evaluate how failures affect legal rights and obligations. 🧠

  • Being able to ask informed questions about zero trust architecture, encryption, system logging, and AI tools used by the agency or contractor.

  • Knowing the relevant incident response plans, data breach notification obligations, and coordination pathways with regulators and law enforcement, whether you are inside the agency or across the table. 🚨

  • Ensuring that policies, regulations, contracts, and public statements about cybersecurity and AI reflect current technical realities, rather than outdated assumptions that could mislead courts, oversight bodies, or the public.

Model Rules 1.6 (Confidentiality of Information) and 1.13 (Organization as Client) are especially important. Government lawyers must:

  • Guard sensitive data, including classified, personal, and privileged information, against unauthorized disclosure or misuse.

  • Advise the “client” (the agency) when cyber or AI practices present significant legal risk, even if those practices are popular or politically convenient.

If a lawyer signs off on policies or representations about cybersecurity that they know—or should know—are materially misleading, that can implicate Rule 3.3 (Candor Toward the Tribunal) and Rule 8.4 (Misconduct). The shift to cyber dominance means that “we passed the audit” will no longer excuse ignoring operational defects that put veterans or the public at risk. 🚨

What Lawyers Outside Government Need to Know 🏢⚖️

Lawyers representing contractors, vendors, whistleblowers, advocacy groups, or regulated entities cannot ignore these changes at the VA and other agencies. Their clients operate in the same new environment of continuous oversight and AI‑informed risk management.

Key responsibilities for nongovernmental lawyers include:

  • Contract counseling: Understanding cybersecurity clauses, incident response requirements, AI‑related representations, and flow‑down obligations in government contracts.

  • Regulatory compliance: Navigating overlapping regimes (for example, federal supply chain rules, state data breach statutes, HIPAA in health contexts, and sector‑specific regulations).

  • Litigation strategy: Incorporating real‑time cyber telemetry and AI logs into discovery, privilege analyses, and evidentiary strategies.

  • Advising on AI tools: Ensuring that client use of generative AI in government‑related work does not compromise confidential information or violate procurement, export control, or data localization rules.

Under Model Rule 1.1 (Competence), outside counsel must be sufficiently tech‑savvy to spot issues and know when to bring in specialized expertise. Ignoring cyber and AI governance concerns can:

  • Lead to inadequate or misleading advice.

  • Misstate risk in negotiations, disclosures, or regulatory filings.

  • Expose clients to enforcement actions, civil liability, or debarment.

  • Expose lawyers to malpractice claims and disciplinary complaints.

ABA Model Rules: How Cyber and AI Now Touch Your License 🧾⚖️

Several American Bar Association (ABA) Model Rules are directly implicated by the VA’s evolution from compliance to cyber dominance and by the broader adoption of artificial intelligence (AI) in government operations:

  • Rule 1.1 – Competence

    • Comment 8 recognizes a duty of technological competence.

    • Lawyers must understand enough about cyber risk and AI systems to represent clients prudently.

  • Rule 1.6 – Confidentiality of Information

    • Lawyers must take reasonable measures to safeguard client information, including in cloud environments and AI‑enabled workflows.

    • Uploading sensitive or privileged content into consumer‑grade AI tools without safeguards can violate this duty.

  • Rule 1.4 – Communication

    • Clients should be informed—in clear, non‑technical terms—about significant cyber and AI risks that may affect their matters.

  • Rules 5.1 and 5.3 – Responsibilities of Partners, Managers, and Supervisory Lawyers; Responsibilities Regarding Nonlawyer Assistance

    • Law firm leaders must ensure that policies, training, vendor selection, and supervision support secure, ethical use of technology and AI by lawyers and staff.

  • Rule 1.13 – Organization as Client

    • Government and corporate counsel must advise leadership when cyber or AI governance failures pose substantial legal or regulatory risk.

  • Rules 3.3, 3.4, and 8.4 – Candor, Fairness, and Misconduct

    • Misrepresenting cyber posture, ignoring known vulnerabilities, or manipulating AI‑generated evidence can rise to ethical violations and professional misconduct.

In the age of cyber dominance, “I did not understand the technology” is increasingly unlikely to serve as a safe harbor. Judges, regulators, and disciplinary authorities expect lawyers to engage these issues competently.

Practical Next Steps for Lawyers: Moving from Passive to Proactive 🧭💼

To meet this moment, lawyers—both in government and outside—should:

  • Learn the language of modern cybersecurity:

    • Zero trust (a model that treats every access request as untrusted until verified)

    • Endpoint detection and response (EDR, tools that continuously monitor and respond to threats on endpoints such as laptops, servers, and mobile devices)

    • Security Information and Event Management (SIEM, systems that collect and analyze security logs from across the network)

    • Security Orchestration, Automation, and Response (SOAR, tools that automate and coordinate security workflows and responses)

    • Encryption at rest and in transit (protecting data when it is stored and when it moves across networks)

    • Multi‑factor authentication (MFA, requiring more than one factor—such as password plus a code—to log in)

  • Understand AI’s role in the client’s environment: what tools are used, where data goes, how outputs are checked, and how decisions are logged.

  • Review incident response plans and breach notification workflows with an eye on legal timelines, cross‑jurisdictional obligations, and contractual requirements.

  • Update engagement letters, privacy notices, and internal policies to reflect real‑world use of cloud services and AI tools.

  • Invest in continuous learning through technology‑forward legal resources, including The Tech-Savvy Lawyer.Page blog and podcast, which translate evolving tech into practical law practice strategies. 💡

Final Thoughts: The VA’s journey from compliance to cyber dominance is more than an agency story. It is a case study in how technology, law, and ethics converge. Lawyers who embrace this reality will better protect their clients, their institutions, and their licenses. Those who do not will risk being left behind—by adversaries, by regulators, and by their own professional standards. 🚀🔐⚖️

Editor’s Note: I used the VA as my “example” because Veterans mean a lot to me. I have been a Veterans Disability Benefits Advocate for nearly two decades. Their health and welfare should not be harmed by faulty tech compliance. 🇺🇸⚖️

MTC

MTC (Holiday Special🎁): Cyber Monday 2025: A Lawyer’s Defense Against Holiday Scams and ‘Bargain’ Tech Traps

The “Billable Hour” Defense: Why That $300 Laptop and “Urgent” Delivery Text Are Liabilities, Not Deals

That “deal” on a “cheaper” computer may not be worth the performance issues that come with it!

As legal professionals, we are trained to spot inconsistencies in testimony, identify hidden clauses in contracts, and anticipate risks before they manifest. Yet, when the holiday shopping season arrives, the same skepticism that protects our clients often evaporates in the face of a 70% off sticker.

During Cyber Monday and the holiday sales rush, lawyers must tread carefully. The digital landscape is not just a marketplace; it is a hunting ground. For a law practice, the risks of holiday shopping go beyond a wasted purchase. A compromised device or a clicked phishing link can breach attorney-client privilege, trigger ethical violations, and lock down firm operations with ransomware.

Before you open your wallet or click that “track package” link, consider this your final briefing on the threats lurking behind the holiday hype.

The "Bargain" Trap: Why Cheap Tech is Expensive for Lawyers

We all love a deal. But in the world of legal technology, there is a profound difference between "inexpensive" and "cheap."

You may see "doorbuster" deals for laptops priced under $300. The marketing copy promises they are perfect for "light productivity" or "students." You might be tempted to pick one up for a paralegal, a home office, or even a law student family member.

Resist this impulse.

Tech experts and consumer watchdogs, including Lifehacker and PCMag, consistently warn about these "derivative" holiday models. Manufacturers often build specific units solely for Black Friday and Cyber Monday (SKUs, or stock-keeping units, that do not exist the rest of the year). They achieve these rock-bottom prices by cutting corners that matter deeply to legal professionals:

  • The Processor Bottleneck: Many of these bargain laptops run on Celeron or Pentium chips (or older generations of Core i3). For a lawyer running practice management software, multiple PDF contracts, and video conferencing simultaneously, these processors are insufficient. The resulting lag isn't just annoying; it costs billable time.

  • The Screen Resolution Hazard: To save costs, these laptops often feature 1366 x 768 screens (barely sharper than 720p). In 2025, this is unacceptable for reviewing documents. The low resolution makes text pixelated and reduces the amount of a contract you can see on screen at once, increasing eye strain and the likelihood of missing a critical detail in a clause.

  • The RAM Deficit: 4GB of RAM is common in these deals. In a modern Windows environment, the operating system alone consumes nearly that much. Once you open a web browser with your firm's research tabs, the system will crawl.

  • Security Longevity: Perhaps most critically for a law firm, these bargain-bin devices often reach their "End of Service" life much faster. They may not support the latest secure operating systems or the encryption standards required by your firm’s compliance obligations and cyber insurance.

The Verdict: A $300 laptop that frustrates your staff and cannot handle encryption is not an asset; it is e-waste in the making. Stick to business-class hardware (Lenovo, HP, Dell, Apple, inter alia) purchased through verified channels, even if it costs more. Your peace of mind is worth the premium.

BONUS: Price Tracking Tools

Successful online shopping during promotional periods requires distinguishing genuine discounts from artificial markups. Price tracking tools provide historical data that reveals authentic savings opportunities.

CamelCamelCamel tracks Amazon price history, creating visual charts showing price fluctuations over weeks, months, and years. This free tool sends email notifications when products drop below specified price thresholds and monitors both Amazon-direct and third-party seller pricing.

Honey extends beyond its widely known coupon functionality to offer robust price tracking across multiple retailers through its "Droplist" feature. The browser extension automatically applies discount codes during checkout and compares prices across competing stores.

Keepa provides similar Amazon-focused price tracking with browser integration that displays historical pricing directly on Amazon product pages. The tool's detailed charts reveal seasonal patterns and help identify optimal purchase timing.

For legal professionals managing firm purchasing, enterprise-grade solutions such as Prisync, Price2Spy, and Competera offer comprehensive competitor monitoring, automated pricing adjustments, and real-time market data. These platforms serve businesses tracking multiple products across various marketplaces, but require subscription fees.
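The core idea behind all of these trackers, comparing a "sale" price against the item's price history, is simple enough to sketch in a few lines of code. The following is a minimal illustration, not any vendor's actual algorithm; the 15% savings threshold and the list-of-prices format are assumptions for the example:

```python
from statistics import median

def is_genuine_discount(price_history, sale_price, min_savings=0.15):
    """Flag a 'deal' as genuine only if the sale price beats the
    typical (median) historical price by a meaningful margin.

    price_history: list of past prices observed for the item.
    sale_price: the advertised promotional price.
    min_savings: required fraction below the median (15% by default).
    """
    if not price_history:
        return False  # no history means the discount can't be verified
    typical = median(price_history)
    return sale_price <= typical * (1 - min_savings)

# An item that hovered around $99 all year: $69 is a real deal...
print(is_genuine_discount([99, 95, 99, 102, 99], 69))     # True
# ...but "50% off" an inflated $199 list price is not.
print(is_genuine_discount([99, 95, 99, 102, 99], 99.50))  # False
```

This is exactly why the historical charts from CamelCamelCamel or Keepa matter: without the price history, the advertised "discount" cannot be verified at all.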

The Scam Landscape 2025: You Are a High-Value Target

Be wary when purchasing items online: always use a VPN when on public Wi-Fi!

According to Malwarebytes’ 2025 Holiday Scam report, shoppers are increasingly mobile, fast, and distracted. For lawyers, who are often managing high-stress caseloads alongside holiday obligations, this distraction is dangerous.

Scammers know that law firms move money. They know we manage sensitive data. And they know that during the holidays, our guards are down. Here are the three specific vectors attacking legal professionals this season.

1. The "Urgent Delivery" Smishing Attack
We all have packages in transit. You likely receive legitimate texts from Amazon, FedEx, or UPS daily. Scammers exploit this by sending "Smishing" (SMS phishing) messages claiming a package is "delayed" or "requires a delivery fee."

For a lawyer waiting on a court transcript or a client file, the instinct to "fix" the delivery issue is strong. But clicking that link often downloads malware or leads to a credential-harvesting site that looks identical to the courier’s login page.

  • The Defense: Never click a tracking link in a text message. Copy the tracking number and paste it directly into the courier’s official app or website. If the text doesn’t have a tracking number, it’s a scam.

2. The "Malvertising" Minefield
You are searching for a specific piece of hardware—perhaps a new scanner or ergonomic chair for the office. You see an ad on Google or social media for the exact item at an unbeatable price.

Malwarebytes warns that "Malvertising" (malicious advertising) is surging. Scammers buy ad space on legitimate platforms. When you click the ad, you aren't taken to the retailer; you are redirected to a cloned site designed to steal your credit card info, or worse, your firm’s login credentials.

  • The Defense: Treat ads as tips, not links. If you see a deal for a Dell monitor, close the ad and navigate manually to Dell.com or BestBuy.com to find it.

3. The "Gift Card" Emergency
This is a classic that has evolved. In the past, it was a fake email from the "Managing Partner" asking an associate to buy gift cards for a client. Now, it’s more sophisticated. Scammers may pose as court clerks or government officials, claiming a "fine" or "filing fee" must be paid immediately to avoid a bench warrant, and—due to a "system error"—they can only accept payment via gift cards or crypto.

  • The Defense: Courts do not accept gift cards. Period. If you receive an urgent financial demand via text or email, verify it by calling the person or entity on a known, public number.

The "Social" Threat: Marketplace Scams

Social media marketplaces (Facebook Marketplace, OfferUp) are now major hubs for holiday shopping. They are also unregulated.

A common scam involves a "seller" offering a high-demand item (like the latest iPad or game console) at a reasonable, but slightly low, price. They claim to be a local seller but then invent a reason why they can't meet up (e.g., "I'm deployed overseas," "I moved for work"). They ask for payment via Zelle or Venmo, promising to ship the item.

Once the money is sent, the seller vanishes. For a lawyer, the embarrassment of being defrauded is compounded by the potential exposure if you used a device or account linked to your firm.

Safeguarding the Firm: A Cyber Monday Protocol

The savings from buying “cheaper” tech online may cost you far more, like client confidentiality and your license!

As you navigate the sales this week, apply the same rigor to your shopping as you do to your practice.

  1. Segregate Your Tech: Do not use your firm-issued laptop for personal holiday shopping. The risk of drive-by downloads from shady "deal" sites is too high.

  2. Credit, Not Debit: Always use a credit card, not a debit card. Credit cards offer robust fraud protection and do not expose your actual bank account funds.

  3. Two-Factor Everything: Ensure 2FA is enabled on your shopping accounts (Amazon, Walmart, etc.). If a scammer gets your password, 2FA is your last line of defense.

  4. The "Too Good to Be True" Rule: If a site you’ve never heard of is selling a MacBook for $500, it is a scam. Domain age checkers (like Whois) can reveal if a website was created yesterday—a strong indicator of fraud.
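For the technically inclined, that domain-age check can even be automated. The sketch below parses the "Creation Date" field from raw WHOIS output (which you would obtain from a lookup tool); the sample text and field name follow the common registry format, but note that formats vary by registrar, so treat this as an illustration rather than a universal parser:

```python
import re
from datetime import datetime, timezone

def domain_age_days(whois_text, now=None):
    """Extract the creation date from raw WHOIS output and return
    the domain's age in days, or None if no date is found."""
    match = re.search(r"Creation Date:\s*(\d{4}-\d{2}-\d{2})", whois_text)
    if not match:
        return None
    created = datetime.strptime(match.group(1), "%Y-%m-%d").replace(
        tzinfo=timezone.utc
    )
    now = now or datetime.now(timezone.utc)
    return (now - created).days

# A storefront registered days before Cyber Monday is a red flag.
sample = (
    "Domain Name: TOTALLY-REAL-DEALS.COM\n"
    "Creation Date: 2025-11-25T04:00:00Z"
)
age = domain_age_days(sample, now=datetime(2025, 12, 1, tzinfo=timezone.utc))
print(age)  # 6
```

A six-day-old "retailer" running a storewide MacBook sale is almost certainly a fraud; an established merchant's domain will typically be years old.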

Final Thoughts
Your data is your most valuable currency. No discount on a laptop or gadget is worth jeopardizing your firm’s integrity or your client’s trust. This Cyber Monday, shop smart, stay skeptical, and remember: if you wouldn't sign a contract without reading it, don't click a link without checking it.

🎁 The Ultimate 2025 Tech Gift Guide for Attorneys: Expert-Curated Gadgets and Tools Every Lawyer Needs

Are you ready to give the lawyers in your life a great holiday tech gift?

As we approach the holiday season, finding the perfect gift for that tech-savvy attorney in your life can feel like preparing for a complex motion hearing. Drawing from this year's episodes of The Tech-Savvy Lawyer Page Podcast and the cutting-edge discussions featured throughout 2025 on The Tech-Savvy Lawyer.Page blog, I've curated a comprehensive gift guide that spans every budget range and technology ecosystem.

The legal profession has undergone an unprecedented technological transformation this year. Artificial intelligence has moved from experimental novelty to courtroom necessity, cloud-based practice management has become the standard rather than the exception, and the ethical duties surrounding technological competence have never been more critical. This gift guide reflects these seismic shifts while maintaining focus on practical tools that enhance daily practice rather than collecting digital dust.

Whether you're shopping for a solo practitioner juggling client intake while traveling between courthouses, a BigLaw associate drowning in document review, or a tech-curious partner finally ready to embrace the digital age, this guide delivers thoughtfully selected recommendations organized by price point and technology platform. Each suggestion comes with direct purchase links and represents tools that real attorneys use to build more efficient, profitable, and balanced practices.

Important Note: All prices listed are subject to change and represent current manufacturer suggested retail pricing. The holiday shopping season typically brings significant discounts and special offers, so readers will likely find even better deals than those reflected here.

Gifts Under $25: Small Investments, Major Impact 💻⚖️

Apple & Third-Party Related

  • OWC Thunderbolt 4 USB-C Cable 0.7m ($19.99) https://eshop.macsales.com/item/OWC/CBLTB4C0.7M/
    Every iPhone and MacBook-carrying attorney needs quality connectivity cables. The OWC Thunderbolt 4 Cable delivers up to 40Gb/s data transfer speeds, supports up to 100W power delivery, and works flawlessly with all Thunderbolt 3, Thunderbolt 4, USB-C, and USB4 devices. This universal cable eliminates guesswork about compatibility.

  • AirTag Single Pack (Apple, $24) https://www.apple.com/shop/buy-airtag/airtag
    Attach this to briefcases, laptop bags, or case files to track important items. The peace of mind alone makes this essential for traveling attorneys.

  • Apple Lightning to USB Cable 1m ($19) https://www.apple.com/shop/product/MXLY2AM/A/lightning-to-usb-cable-1-m
    For attorneys still using older iPhones and iPads with Lightning ports, having reliable charging and sync cables remains essential for daily practice.

Windows & Third-Party Related

  • Logitech Pebble M350 Wireless Mouse ($19.99) https://www.logitech.com/en-us/shop/p/pebble-2-m350s-wireless-mouse.910-007022?sp=1&searchclick=Logitech
    This silent, compact mouse works seamlessly with Windows laptops and tablets. Perfect for attorneys working in quiet courtrooms or shared office spaces where traditional mouse clicks would prove disruptive.

  • Anker 341 USB-C Hub 7-in-1 Multi-Port Adapter ($19.99) https://www.anker.com/products/a8346
    Surface Pro and modern Windows laptop users need expanded connectivity. This Anker 7-in-1 hub adds HDMI 4K output, USB-A data ports, USB-C Power Delivery charging, microSD and SD card slots—all in one compact adapter perfect for courtroom presentations and document transfers.

Google/Android & Third-Party Related

  • Anker PowerCore Slim 10000 PD ($24.99) https://www.anker.com/products/a1229
    Android-using attorneys need portable power. This slim battery pack provides fast charging for Pixel phones and Galaxy devices during long court days.

  • Google Chromecast with Google TV ($20 on sale) https://store.google.com/product/chromecast_google_tv
    Transform any hotel TV into a presentation screen or entertainment center. Ideal for attorneys who travel for depositions, mediations, and conferences.

  • USB-C to HDMI Cable ($12.79) https://www.amazon.com/dp/B075V5JK36
    Essential for Android device users who need to connect phones or tablets to external displays for client presentations or courtroom exhibits.

AI-Related Tools

  • ChatGPT Plus One-Month Gift Subscription ($20) https://openai.com/chatgpt/pricing
    While not a physical gift, a month of ChatGPT Plus provides access to OpenAI’s premium models for legal research assistance, document drafting support, and productivity enhancement. Many attorneys use this for initial case assessment and client communication templates.

Accessories & Productivity Enhancers

Gifts $100 or Less: Professional-Grade Tools 💼📱

Apple & Third-Party Related

There are some great tech gifts under $25 that you can get anyone, whether they are in the legal field or not!

Windows & Third-Party Related

Google/Android & Third-Party Related

  • Samsung Galaxy Buds FE ($99.99) https://www.samsung.com/us/mobile/audio/galaxy-buds-fe
    Android attorneys deserve quality wireless earbuds. These provide active noise cancellation, long battery life, and seamless integration with Galaxy devices.

  • Anker MagGo Wireless Charging Station (Foldable 3-in-1) (on sale for $72.99) https://www.anker.com/products/b2568
    Qi-compatible charging pads work across Android devices, AirPods, and smartwatches. This eliminates cable clutter on attorney desks while providing convenient simultaneous device charging.

AI-Related Tools

  • Grammarly Premium Annual Subscription ($96 when on sale) https://www.grammarly.com/upgrade
    AI-powered writing assistance helps attorneys improve brief quality, catch errors before filing, and maintain consistent tone across client communications. The plagiarism checker provides additional value.

Accessories & Productivity Enhancers

Find something that will enhance the lawyer-in-your life’s holiday!

Important Reminder: Prices listed are subject to change. The holiday shopping season brings exceptional deals, particularly on tech accessories and productivity tools. The AirTag 4-pack, for example, frequently drops to $64-69 during sales events—watch for these bargains.

Gifts Over $100: Premium Technology for Serious Practitioners 🚀⚖️

Apple & Third-Party Related

  • AirPods Pro 3 ($249) https://www.apple.com/airpods-pro
    The latest AirPods Pro feature unprecedented active noise cancellation, heart rate sensing during workouts, and extended eight-hour battery life. Perfect for attorneys taking depositions, conducting virtual hearings, and maintaining focus during complex document review.

  • iPad Air (M3, $599) https://www.apple.com/ipad-air
    This represents the sweet spot for attorney tablets. Powerful enough for document review, video conferencing, and note-taking, yet more affordable than the iPad Pro. The M3 chip handles demanding legal applications effortlessly.

  • Apple Magic Keyboard for iPad Pro ($349) https://www.apple.com/shop/product/MJQJ3LL/A/magic-keyboard-for-ipad-pro-11-inch-m4-us-english-black
    Transforms iPads into laptop replacements. The floating cantilever design, backlit keys, and integrated trackpad create professional typing experiences during brief writing and client communications.

  • Apple Watch Series 11 ($399) https://www.apple.com/apple-watch-series-10
    Health monitoring, notification management, and quick communication access help attorneys maintain work-life balance. The larger display improves message readability during client emergencies.

  • MacBook Air M4 ($999) https://www.apple.com/shop/buy-mac/macbook-air
    The perfect attorney laptop balances portability, performance, and battery life. Handles document drafting, legal research, video conferencing, and case management software with ease.

CONSIDER SUPPORTING YOUR FAVORITE BLOG WITH A TSL.PP MUG: https://www.thetechsavvylawyer.page/shop/mug


Windows & Third-Party Related

Google/Android & Third-Party Related

Accessories & Productivity Enhancers

  • Herman Miller Aeron Chair ($1,351.00) https://www.hermanmiller.com/products/seating/office-chairs/aeron-chairs
    Quality seating prevents back pain during long days of document review and client meetings. Adjustable lumbar support and armrests accommodate different attorney body types with industry-leading ergonomics.

  • LG 34" Ultrawide Monitor 5K2K ($1,315.35) https://www.amazon.com/LG-34WK95U-W-34-Class-UltraWide/dp/B07FT8ZBMR
    Expanded screen real estate transforms document comparison, legal research, and multi-tasking productivity. Replaces dual monitor setups with cleaner desk aesthetics and seamless workflow.

  • reMarkable 2 Digital Notebook ($399) https://remarkable.com/store/remarkable-2
    Paper-like digital writing experience for attorneys who prefer handwritten notes. Converts handwriting to text and syncs across devices without distracting notifications.

  • Logitech C922 Pro Stream Webcam ($74.99) https://www.logitech.com/en-us/products/webcams/c922-pro-stream-webcam.960-001087.html
    Superior 1080p/30fps video quality for depositions, client consultations, and court appearances. Auto-focus and light correction ensure professional presentation during virtual proceedings.

  • Logitech Brio 4K Ultra HD Webcam ($159.99) https://www.logitech.com/en-us/products/webcams/brio-4k-hdr-webcam.html
    The premium upgrade for attorneys who demand the best video quality. The Brio delivers true 4K resolution at 30fps or 1080p at 60fps with HDR, RightLight 3 technology for challenging lighting conditions, and Windows Hello facial recognition support. Features adjustable field of view (65°/78°/90°), 5x digital zoom, and dual omnidirectional microphones with noise cancellation. Essential for attorneys conducting high-stakes virtual hearings, depositions with court reporters, and client presentations where image quality matters.

  • Samsung T7 Portable SSD 1TB ($109.99) https://www.amazon.com/dp/B0874XN4D8
    The Samsung T7 provides fast, portable storage for case files, discovery materials, and backup documents with transfer speeds up to 1,050 MB/s. Essential for attorneys handling large litigation matters and encrypted data protection.

Making the Right Choice: Strategic Gift Selection 🎯

Still can’t think of the right gift for that lawyer in your life? Why not The Tech-Savvy Lawyer.Page Podcast Mug?!

Selecting the perfect technology gift requires understanding the recipient's practice area, existing technology ecosystem, and daily workflow challenges. Solo practitioners benefit most from all-in-one solutions that maximize portability and minimize complexity. BigLaw associates thrive with premium productivity tools that streamline document-intensive work. Government attorneys and public defenders appreciate cost-effective solutions that deliver professional results within budget constraints.

Consider the recipient's technology platform before purchasing. Apple users invest in ecosystem integration—AirPods work seamlessly with iPhones, iPads sync notes with MacBooks, and AirTags leverage the Find My network. Windows attorneys rely on Microsoft 365 integration across Surface devices and traditional laptops. Android users appreciate Google Workspace connectivity and cross-device synchronization.

Accessories matter more than attorneys initially realize. Quality headphones transform noisy environments into focused workspaces. Ergonomic peripherals prevent repetitive stress injuries that sideline productive careers. External storage protects critical case files and discovery materials from device failures. Cable management and charging solutions reduce desktop chaos while ensuring devices remain powered during crucial client communications.

*Pricing Reminder: All prices listed throughout this guide are subject to change and represent current manufacturer suggested retail pricing or recent observed pricing. The holiday shopping season consistently delivers exceptional discounts and promotional offers across virtually every product category featured here. Savvy shoppers will find deals significantly below the prices mentioned—particularly during Black Friday, Cyber Monday, and throughout December as retailers compete for holiday sales. The AirTag 4-pack, for example, regularly drops from $99 to $64-69 during sales events, representing tremendous value. Watch for similar discounts on webcams, headphones, keyboards, mice, storage devices, and accessories that can stretch your gift-giving budget considerably further.

This holiday season, give gifts that demonstrate understanding of legal practice realities while supporting technological competence—an ethical obligation every attorney carries. Whether spending $25 on quality OWC Thunderbolt cables or $1,000 on practice-transforming AI subscriptions, thoughtful technology gifts invest in the recipient's professional success, client service excellence, and work-life balance. The attorneys in your life deserve tools that work as hard as they do while making difficult work more manageable and rewarding.

❄️❅☃️❆❄️ Have a Happy Holiday Season!❄️❅☃️❆❄️

MTC