🎙️📘 Quick reminder: The Lawyer’s Guide to Podcasting releases NEXT WEEK!

Inside title page of The Lawyer’s Guide to Podcasting, releasing January 12, 2026.

If you want a podcast that sounds professional without turning your week into a production project, this book is built for you. It’s practical. It’s workflow-first. It keeps ethics and confidentiality in view. 🔐⚖️

✅ Inside you’ll learn:

  • How to choose a podcast format that fits your goals 🎯

  • A simple, reliable setup that sounds credible 🎤

  • Recording habits that reduce editing time ⏱️

  • Repurposing steps so one episode powers your content plan ♻️

📩 Want the release link the moment it’s live? Email Admin@TheTechSavvyLawyer.Page with subject “Book Link.” I’ll send it on launch day. 🚀

MTC🪙🪙:  When Reputable Databases Fail: What Lawyers Must Do After AI Hallucinations Reach the Court

What should a lawyer do when they inadvertently use a hallucinated cite?

In a sobering December 2025 filing in Integrity Investment Fund, LLC v. Raoul, plaintiff's counsel disclosed what many in the legal profession feared: even reputable legal research platforms can generate hallucinated citations. The Motion to Amend Complaint revealed that "one of the cited cases in the pending Amended Complaint could not be found," along with other miscited cases, despite the legal team using LexisNexis and LEXIS+ Document Analysis tools rather than general-purpose AI like ChatGPT. The attorney expressed being "horrified" by these inexcusable errors, but horror alone does not satisfy ethical obligations.

This case crystallizes a critical truth for the legal profession: artificial intelligence remains a tool requiring rigorous human oversight, not a substitute for attorney judgment. When technology fails—and Stanford research confirms it fails at alarming rates—lawyers must understand their ethical duties and remedial obligations.

The Scope of the Problem: Even Premium Tools Hallucinate

Legal AI vendors marketed their products as hallucination-resistant, leveraging retrieval-augmented generation (RAG) technology to ground responses in authoritative legal databases. Yet as reported in our 📖 WORD OF THE WEEK YEAR🥳:  Verification: The 2025 Word of the Year for Legal Technology ⚖️💻, independent testing by Stanford's Human-Centered Artificial Intelligence program and RegLab reveals persistent accuracy problems. Lexis+ AI produced incorrect information 17% of the time, while Westlaw's AI-Assisted Research hallucinated at nearly double that rate—34% of queries.

These statistics expose a dangerous misconception: that specialized legal research platforms eliminate fabrication risks. The Integrity Investment Fund case demonstrates that attorneys using established, subscription-based legal databases still face citation failures. Courts nationwide have documented hundreds of cases involving AI-generated hallucinations, with 324 incidents in U.S. federal, state, and tribal courts as of late 2025. Legal professionals can no longer claim ignorance about AI limitations.

The consequences extend beyond individual attorneys. As one federal court warned, hallucinated citations that infiltrate judicial opinions create precedential contamination, potentially "sway[ing] an actual dispute between actual parties"—an outcome the court described as "scary". Each incident erodes public confidence in the justice system and, as one commentator noted, "sets back the adoption of AI in law".

The Ethical Framework: Three Foundational Rules

When attorneys discover AI-generated errors in court filings, three Model Rules of Professional Conduct establish clear obligations.

ABA Model Rule 1.1 mandates technological competence. The 2012 amendment to Comment 8 requires lawyers to "keep abreast of changes in the law and its practice, including the benefits and risks associated with relevant technology". Forty-one jurisdictions have adopted this technology competence requirement. This duty is ongoing and non-delegable. Attorneys cannot outsource their responsibility to understand the tools they deploy, even when those tools carry premium price tags and prestigious brand names.

Technological competence means understanding that current AI legal research tools hallucinate at rates ranging from 17% to 34%. It means recognizing that longer AI-generated responses contain more falsifiable propositions and therefore pose a greater risk of hallucination. It means implementing verification protocols rather than accepting AI output as authoritative.

ABA Model Rule 3.3 requires candor toward the tribunal. This rule prohibits knowingly making false statements of law or fact to a court and imposes an affirmative duty to correct false statements previously made. The duty continues until the conclusion of the proceeding. Critically, courts have held that the standard under Federal Rule of Civil Procedure 11 is objective reasonableness, not subjective good faith. As one court stated, "An attorney who acts with 'an empty head and a pure heart' is nonetheless responsible for the consequences of his actions".

When counsel in Integrity Investment Fund discovered the miscitations, filing a Motion to Amend Complaint fulfilled this corrective duty. The attorney took responsibility and sought to rectify the record before the court relied on fabricated authority. This represents the ethical minimum. Waiting for opposing counsel or the court to discover errors invites sanctions and disciplinary referrals.

The duty of candor applies regardless of how the error originated. In Kaur v. Desso, a Northern District of New York court rejected an attorney's argument that time pressure justified inadequate verification, stating that "the need to check whether the assertions and quotations generated were accurate trumps all". Professional obligations do not yield to convenience or deadline stress.

ABA Model Rules 5.1 and 5.3 establish supervisory responsibilities. Managing attorneys must ensure that subordinate lawyers and non-lawyer staff comply with the Rules of Professional Conduct. When a supervising attorney has knowledge of specific misconduct and ratifies it, the supervisor bears responsibility. This principle extends to AI-assisted work product.

The Integrity Investment Fund matter reportedly involved an experienced attorney assisting with drafting. Regardless of delegation, the signing attorney retains ultimate accountability. Law firms must implement training programs on AI limitations, establish mandatory review protocols for AI-generated research, and create policies governing which tools may be used and under what circumstances. Partners reviewing junior associate work must apply heightened scrutiny to AI-assisted documents, treating them as first drafts requiring comprehensive validation.

Federal Rule of Civil Procedure 11: The Litigation Hammer

Reputable databases can hallucinate too!

Beyond professional responsibility rules, Federal Rule of Civil Procedure 11 authorizes courts to impose sanctions on attorneys who submit documents without a reasonable inquiry into the facts and law. Courts may sanction the attorney, the party, or both. Sanctions range from monetary penalties paid to the court or opposing party to non-monetary directives, including mandatory continuing legal education, public reprimands, and referrals to disciplinary authorities.

Rule 11 contains a 21-day safe harbor provision. Before filing a sanctions motion, the moving party must serve the motion on opposing counsel, who has 21 days to withdraw or correct the challenged filing. If counsel promptly corrects the error during this window, sanctions may be avoided. This procedural protection rewards attorneys who implement monitoring systems to catch mistakes early.
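The safe-harbor window itself is simple date arithmetic. Here is a minimal sketch in Python (the service date is hypothetical, and this simplified count ignores Rule 6(a) adjustments for weekends and holidays, so always confirm against the rules and local practice):

```python
from datetime import date, timedelta

# Hypothetical date the Rule 11(c)(2) sanctions motion was served.
served = date(2026, 1, 5)

# The motion may not be filed until 21 days after service; the challenged
# filer has that window to withdraw or correct the filing.
earliest_filing = served + timedelta(days=21)
print(earliest_filing)  # 2026-01-26
```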

Courts have imposed escalating consequences as AI hallucination cases proliferate. Early cases resulted in warnings or modest fines. Recent sanctions have grown more severe. A Colorado attorney received a 90-day suspension after admitting in text messages that he failed to verify ChatGPT-generated citations. An Arizona federal judge sanctioned an attorney and required her to personally notify three federal judges whose names appeared on fabricated opinions, revoked her pro hac vice admission, and referred her to the Washington State Bar Association. A California appellate court issued a historic fine after discovering 21 of 23 quotes in an opening brief were fake.

Morgan & Morgan—the 42nd largest law firm by headcount—faced a $5,000 sanction when attorneys filed a motion citing eight nonexistent cases generated by an internal AI platform. The court divided the sanction among three attorneys, with the signing attorney bearing the largest portion. The firm's response acknowledged "great embarrassment" and promised reforms, but the reputational damage extends beyond the individual case.

What Attorneys Must Do: A Seven-Step Protocol

Legal professionals who discover AI-generated errors in filed documents must act decisively. The following protocol aligns with ethical requirements and minimizes sanctions risk:

First, immediately cease relying on the affected research. Do not file additional briefs or make oral arguments based on potentially fabricated citations. If a hearing is imminent, notify the court that you are withdrawing specific legal arguments pending verification.

Second, conduct a comprehensive audit. Review every citation in the affected filing. Retrieve and read the full text of each case or statute cited. Verify that quoted language appears in the source and that the legal propositions match the authority's actual holding. Check citation accuracy using Shepard's or KeyCite to confirm cases remain good law. This process cannot be delegated to the AI tool that generated the original errors.

Third, assess the materiality of errors. Determine whether fabricated citations formed the basis for legal arguments or appeared as secondary support. In Integrity Investment Fund, counsel noted that "the main precedents...and the...statutory citations are correct, and none of the Plaintiffs' claims were based on the mis-cited cases". This distinction affects the appropriate remedy but does not eliminate the obligation to correct the record.

Fourth, notify opposing counsel immediately. Candor extends to adversaries. Explain that you have discovered citation errors and are taking corrective action. This transparency may forestall sanctions motions and demonstrates good faith to the court.

Fifth, file a corrective pleading or motion. In Integrity Investment Fund, counsel filed a Motion to Amend Complaint under Federal Rule of Civil Procedure 15(a)(2). Alternative vehicles include motions to correct the record, errata sheets, or supplemental briefs. The filing should acknowledge the errors explicitly, explain how they occurred without shifting blame to technology, take personal responsibility, and specify the corrections being made.

Sixth, notify the court in writing. Even if opposing counsel does not move for sanctions, attorneys have an independent duty to inform the tribunal of material misstatements. The notification should be factual and direct. In cases where fabricated citations attributed opinions to real judges, courts have required attorneys to send personal letters to those judges clarifying that the citations were fictitious.

Seventh, implement systemic reforms. Review firm-wide AI usage policies. Provide training on verification requirements. Establish mandatory review checkpoints for AI-assisted work product. Consider technology solutions such as citation validation software that flags cases not found in authoritative databases. Document these reforms in any correspondence with the court or bar authorities to demonstrate that the incident prompted institutional change.
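For readers curious what the citation-validation software mentioned in step seven does under the hood, here is a minimal sketch in Python. The reporter patterns are deliberately simplified, and the database lookup is a stub rather than any vendor's actual API; the point is the extract-then-verify loop that such tools automate:

```python
import re

# Matches common reporter patterns such as "597 U.S. 215" or "143 F.4th 1012".
# Real tools use far more complete reporter tables.
CITATION_RE = re.compile(
    r"\b\d{1,4}\s+"
    r"(?:U\.S\.|S\. Ct\.|F\. Supp\.(?: 2d| 3d)?|F\.(?:2d|3d|4th)?|N\.E\.(?:2d|3d)?)"
    r"\s+\d{1,4}\b"
)

def found_in_database(citation: str) -> bool:
    """Placeholder: wire this to an authoritative source, such as your
    research platform's citation lookup or a public docket search."""
    raise NotImplementedError("Connect to an authoritative citation source.")

def flag_unverified_citations(brief_text: str) -> list[str]:
    """Return every citation string that could not be confirmed."""
    flagged = []
    for cite in sorted(set(CITATION_RE.findall(brief_text))):
        try:
            if not found_in_database(cite):
                flagged.append(cite)
        except NotImplementedError:
            flagged.append(cite)  # treat "unchecked" the same as "unverified"
    return flagged
```

Even with a tool like this, the flagged list only tells you where to start reading; it never substitutes for pulling and reading the authority itself.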

The Duty to Supervise: Training the Humans and the Machines

The Integrity Investment Fund case involved an experienced attorney assisting with drafting, yet errors reached the court. This pattern appears throughout AI hallucination cases. In the Chicago Housing Authority litigation, the responsible attorney had previously published an article on ethical considerations of AI in legal practice, yet still submitted a brief citing the nonexistent case Mack v. Anderson. Knowledge about AI risks does not automatically translate into effective verification practices.

Law firms must treat AI tools as they would junior associates—competent at discrete tasks but requiring supervision. Partners should review AI-generated research as they would first-year associate work, assuming errors exist and exercising vigilant attention to detail. Unlike human associates who learn from corrections, AI systems may perpetuate errors across multiple matters until their underlying models are retrained.

Training programs should address specific hallucination patterns. AI tools frequently fabricate case citations with realistic-sounding names, accurate-appearing citation formats, and plausible procedural histories. They misrepresent legal holdings, confuse arguments made by litigants with court rulings, and fail to respect the hierarchy of legal authority. They cite proposed legislation as enacted law and rely on overturned precedents as current authority. Attorneys must learn to identify these red flags.

Supervisory duties extend to non-lawyer staff. If a paralegal uses an AI grammar checker on a document containing confidential case strategy, the supervising attorney bears responsibility for any confidentiality breach. When legal assistants use AI research tools, attorneys must verify their work with the same rigor applied to traditional research methods.

Client Communication and Informed Consent

Watch out for AI hallucinations!

Ethical obligations to clients intersect with AI usage in multiple ways. ABA Model Rule 1.4 requires attorneys to keep clients reasonably informed and to explain matters to the extent necessary for clients to make informed decisions. Several state bar opinions suggest that attorneys should obtain informed consent before inputting confidential client information into AI tools, particularly those that use data for model training.

The confidentiality analysis turns on the AI tool's data-handling practices. Many general-purpose AI platforms explicitly state in their terms of service that they use input data for model training and improvement. This creates significant privilege and confidentiality risks. Even legal-specific platforms may share data with third-party vendors or retain information on servers outside the firm's control. Attorneys must review vendor agreements, understand data flow, and ensure adequate safeguards exist before using AI tools on client matters.

When AI-generated errors reach a court filing, clients deserve prompt notification. The errors may affect litigation strategy, settlement calculations, or case outcome predictions. In extreme cases, such as when a court dismisses claims or imposes sanctions, malpractice liability may arise. Transparent communication preserves the attorney-client relationship and demonstrates that the lawyer prioritizes the client's interests over protecting their reputation.

Jurisdictional Variations: Illinois Sets the Standard

While the ABA Model Rules provide a national framework, individual jurisdictions have begun addressing AI-specific issues. Illinois, where the Integrity Investment Fund case was filed, has taken proactive steps.

The Illinois Supreme Court adopted a Policy on Artificial Intelligence effective January 1, 2025. The policy recognizes that AI presents challenges for protecting private information, avoiding bias and misrepresentation, and maintaining judicial integrity. The court emphasized "upholding the highest ethical standards in the administration of justice" as a primary concern.

In September 2025, Judge Sarah D. Smith of Madison County Circuit Court issued a Standing Order on Use of Artificial Intelligence in Civil Cases, later extended to other Madison County courtrooms. The order "embraces the advancement of AI" while mandating that tools "remain consistent with professional responsibilities, ethical standards and procedural rules". Key provisions include requirements for human oversight and legal judgment, verification of all AI-generated citations and legal statements, disclosure of expert reliance on AI to formulate opinions, and potential sanctions for submissions including "case law hallucinations, [inappropriate] statements of law, or ghost citations".

Arizona has been particularly active given the high number of AI hallucination cases in the state—second only to the Southern District of Florida. The State Bar of Arizona issued guidance calling on lawyers to verify all AI-generated research before submitting it to courts or clients. The Arizona Supreme Court's Steering Committee on AI and the Courts issued similar guidance emphasizing that judges and attorneys, not AI tools, are responsible for their work product.

Other states are following suit. California issued Formal Opinion No. 2015-193 interpreting technological competence requirements. The District of Columbia Bar issued Ethics Opinion 388 in April 2024, specifically addressing generative artificial intelligence in client matters. These opinions converge on several principles: competence includes understanding AI technology sufficiently to be confident it advances client interests, all AI output requires verification before use, and technology assistance does not diminish attorney accountability.

The Path Forward: Responsible AI Integration

The legal profession stands at a crossroads. AI tools offer genuine efficiency gains—automated document review, pattern recognition in discovery, preliminary legal research, and jurisdictional surveys. Rejecting AI entirely would place practitioners at a competitive disadvantage and potentially violate the duty to provide competent, efficient representation.

Yet uncritical adoption invites the disasters documented in hundreds of cases nationwide. The middle path, exemplified by the Illinois courts' approach, requires human oversight and legal judgment at every stage.

Attorneys should adopt a "trust but verify" approach. Use AI for initial research, document drafting, and analytical tasks, but implement mandatory verification protocols before any work product leaves the firm. Treat AI-generated citations as provisional until independently confirmed. Read cases rather than relying on AI summaries. Check the currency of legal authorities. Confirm that quotations appear in the cited sources.

Law firms should establish tiered AI usage policies. Low-risk applications such as document organization or calendar management may require minimal oversight. High-risk applications, including legal research, brief writing, and client advice, demand multiple layers of human review. Some uses—such as inputting highly confidential information into general-purpose AI platforms—should be prohibited entirely.

Billing practices must evolve. If AI reduces the time required for legal research from eight hours to two hours, the efficiency gain should benefit clients through lower fees rather than inflating attorney profits. Clients should not pay both for AI tool subscriptions and for the same number of billable hours as traditional research methods would require. Transparent billing practices build client trust and align with fiduciary obligations.

Lessons from Integrity Investment Fund

The Integrity Investment Fund case offers several instructive elements. First, the attorney used a reputable legal database rather than a general-purpose AI. This demonstrates that brand name and subscription fees do not guarantee accuracy. Second, the attorney discovered the errors and voluntarily sought to amend the complaint rather than waiting for opposing counsel or the court to raise the issue. This proactive approach likely mitigated potential sanctions. Third, the attorney took personal responsibility, describing himself as "horrified" rather than deflecting blame to the technology.

The court's response also merits attention. Rather than immediately imposing sanctions, the court directed defendants to respond to the motion to amend and address the effect on pending motions to dismiss. This measured approach recognizes that not all AI-related errors warrant the most severe consequences, particularly when counsel acts promptly to correct the record. Defendants agreed that "the striking of all miscited and non-existent cases [is] proper", suggesting that cooperation and candor can lead to reasonable resolutions.

The fact that "the main precedents...and the...statutory citations are correct" and "none of the Plaintiffs' claims were based on the mis-cited cases" likely influenced the court's analysis. This underscores the importance of distinguishing between errors in supporting citations versus errors in primary authorities. Both require correction, but the latter carries greater risk of case-dispositive consequences and sanctions.

The Broader Imperative: Preserving Professional Judgment

Lawyers must verify their AI work!

Judge Castel's observation in Mata v. Avianca that "many harms flow from the submission of fake opinions" captures the stakes. Beyond individual case outcomes, AI hallucinations threaten systemic values: judicial efficiency, precedential reliability, adversarial fairness, and public confidence in legal institutions.

Attorneys serve as officers of the court with special obligations to the administration of justice. This role cannot be automated. AI lacks the judgment to balance competing legal principles, to assess the credibility of factual assertions, to understand client objectives in their full context, or to exercise discretion in ways that advance both client interests and systemic values.

The attorney in Integrity Investment Fund learned a costly lesson that the profession must collectively absorb: reputable databases, sophisticated algorithms, and expensive subscriptions do not eliminate the need for human verification. AI remains a tool—powerful, useful, and increasingly indispensable—but still just a tool. The attorney who signs a pleading, who argues before a court, and who advises a client bears professional responsibility that technology cannot assume.

As AI capabilities expand and integration deepens, the temptation to trust automated output will intensify. The profession must resist that temptation. Every citation requires verification. Every legal proposition demands confirmation. Every AI-generated document needs human review. These are not burdensome obstacles to efficiency but essential guardrails protecting clients, courts, and the justice system itself.

When errors occur—and the statistics confirm they will occur with disturbing frequency—attorneys must act immediately to correct the record, accept responsibility, and implement reforms preventing recurrence. Horror at one's mistakes, while understandable, satisfies no ethical obligation. Action does.

MTC

“How To” Happy New Year 2026 Edition! 🎉 Future-Proof Your Firm: The Essential Guide to Law Firm Technology for 2026

Future-proof your firm and make sure you have the right technology to get your legal work done in 2026!

The year 2025 was a wake-up call for the legal industry. We watched Artificial Intelligence move from a shiny toy to a serious business tool. We saw cybersecurity threats evolve faster than our firewalls. And we faced the reality of aging infrastructure as the "Windows 10 era" officially ended in October.

Now we look toward 2026. The theme for the coming year is not just adoption. It is integration and security. You do not need to be a coder to run a modern law firm. You just need to make smart, practical decisions.

This guide aggregates lessons from 2025, including insights from my blog, The Tech-Savvy Lawyer.Page, and top legal tech reporters. Here is how to prepare your firm for 2026.

1. The Hardware Reality Check: Windows 11 or Bust

The most critical lesson from 2025 was the "End of Life" for Windows 10. Microsoft stopped supporting it on October 14, 2025. If your firm is still running Windows 10 in 2026, you are driving a car without brakes. You have no security updates. You are non-compliant with most client data protection mandates.

The Action Plan:

  • Audit Your Fleet: Check every laptop and desktop. If it cannot run Windows 11, replace it. Do not try to bypass the requirements.

  • The 2026 Standard Spec: When buying new computers, ignore the "minimum" requirements. You need longevity.

    • Processor: Intel Core i7 (13th gen or newer) or AMD Ryzen 7.

  • RAM: 32GB is the new 16GB. AI tools built into Windows (like Copilot) consume significant memory. Treat 32GB as the working minimum and 64GB as future-proof.

    • Storage: 1TB NVMe SSD. Cloud storage is great, but local speed still matters for caching large case files. 2TB gives you breathing room; 4TB will help you in the years to come.

  • Monitors: Dual monitors are standard. But for 2026, consider a single 34-inch ultrawide curved monitor. It eliminates the bezel gap. It simplifies cable management. Or consider a three-monitor setup with the center monitor a little better than the other two.

2. Software: The Shift from "Open" to "Closed" AI

In 2025, we learned the hard way about "shadow AI." This happens when staff paste client data into public tools like the free version of ChatGPT. That is a major ethics violation.

For 2026, you must pivot to "Closed" AI systems.

The Action Plan:

  • Define "Closed" AI: These are tools where your data is not used to train the public model. Microsoft 365 Copilot is a prime example. Most practice management platforms (like Clio or MyCase) now have embedded AI features. These are generally safe "closed" environments.

  • Enable Copilot (Carefully): Microsoft 365 Copilot is likely already in your subscription options. It can summarize email threads. It can draft initial responses. Turn it on, but train your team on "The Review Rule."

  • The Review Rule: The Tech-Savvy Lawyer.Page emphasizes this constantly. AI is a drafter, not a lawyer. Every output must be verified. Human verification is the standard for 2026.

3. Security: The "Triple-E" Framework

Cybersecurity is no longer just for the IT department. It is a core competency for every lawyer. The "Triple-E" Framework is perfect for 2026 planning: Educate, Empower, Elevate.

The Action Plan:

Be confident with your technology and make sure everything is up to date for 2026!

  • Educate: Run phishing simulations monthly. The attacks are getting smarter. AI is being used to write convincing phishing emails. Your team needs to see examples of these AI-generated scams.

  • Empower: Force the use of Password Managers (like 1Password or Bitwarden). Stop letting partners save passwords in their browsers; browser password storage is not secure.

  • Elevate: Implement "Zero Trust" access. This means verifying identity at every step, not just at the front door. Multi-Factor Authentication (MFA) must be on everything. No exceptions for senior partners.

4. The Cloud Ecosystem: Consolidation

In 2024 and 2025, firms bought too many separate apps. One app for billing. One for intake. One for signatures. This created "subscription fatigue."

The trend for 2026 is Platformization.

The Action Plan:

  • Audit Your Subscriptions: Look at your credit card statement. Do you have three tools that do the same thing?

  • Lean on Your Core Platform: If you use a major practice management system, check their new features. They likely added texting, e-signatures, or payments recently. Use the built-in tools. It is cheaper. It keeps your data in one place. It reduces security risks.

5. Mobile Lawyering: Professionalism Anywhere

Remote work is not "new" anymore. It is just "work." But looking unprofessional on Zoom is no longer acceptable.

The Action Plan:

  • Audio: Buy noise-canceling headsets for everyone. Laptop microphones are not good enough for court records. There are plenty of wired and Bluetooth noise-canceling headsets on the market; find the one that is best for you. Most Bluetooth headphones work with any operating system (Windows, Apple, Android, etc.), and yes, Apple AirPods work on Windows and Android devices.

  • Connectivity: Stop relying on public Wi-Fi. It is dangerous. Equip your lawyers with mobile hotspots or 5G-enabled laptops. Consider having phones/hotspots from two different providers in case one provider is down or if it just doesn’t have the signal strength necessary at a particular location.

  • The "ScanSnap" Standard: Every remote lawyer needs a dedicated scanner. The Ricoh (fka “Fujitsu”) ScanSnap remains the gold standard. It is reliable. It is fast. It keeps your paperless office actually paperless. But don’t forget about your smart device. Our phones’ cameras take great pictures, and there is plenty of scanning software that lets you capture a few pages easily when you are on the go.

Final Thoughts

Advances in technology ARE going to require some tech updates for your law practice - are you ready?

Preparing for 2026 is not about buying the most expensive futuristic gadgets. It is about solidifying your foundation. Upgrade your hardware to handle Windows 11. Move your AI use into secure, paid channels. Consolidate your software.

Technology is the nervous system of your firm. It can get out of control and even overly expensive. Treat it with the same care you treat your case files.

📖 WORD OF THE WEEK YEAR🥳:  Verification: The 2025 Word of the Year for Legal Technology ⚖️💻

All lawyers need to remember to check AI-generated legal citations.

After reviewing a year's worth of content from The Tech-Savvy Lawyer.Page blog and podcast, one word emerged as the defining concept for 2025: Verification. This term captures the essential duty that separates competent legal practice from dangerous shortcuts in the age of artificial intelligence.

Throughout 2025, The Tech-Savvy Lawyer consistently emphasized verification across multiple contexts. The blog covered proper redaction techniques following the Jeffrey Epstein files disaster. The podcast explored hidden AI in everyday legal tools. Every discussion returned to one central theme: lawyers must verify everything. 🔍

Verification means more than just checking your work. The concept encompasses multiple layers of professional responsibility. Attorneys must verify AI-generated legal research to prevent hallucinations. Courts have sanctioned lawyers who submitted fictitious case citations created by generative AI tools. One study found error rates of 33% in Westlaw AI and 17% in Lexis+ AI. The underlying study dates to May 2024, but a 2025 update confirms the findings remain current; the risk of not checking has not gone away. "Verification" cannot be ignored.

The duty extends beyond research. Lawyers must verify that redactions actually remove confidential information rather than simply hiding it under black boxes. The DOJ's failed redaction of the Epstein files demonstrated what happens when attorneys skip proper verification steps. Tech-savvy readers simply copied text from beneath the visual overlays. ⚠️
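That failure mode is easy to test for yourself. The sketch below (Python, using the open-source pypdf library; the file name and search terms are hypothetical examples) extracts the text layer of a "redacted" PDF the same way a curious reader's copy-and-paste would, and reports any sensitive strings that survived:

```python
from pypdf import PdfReader  # pip install pypdf

def redaction_survives(pdf_path: str, terms: list[str]) -> list[str]:
    """Return any supposedly redacted terms still present in the text layer."""
    reader = PdfReader(pdf_path)
    full_text = "\n".join((page.extract_text() or "") for page in reader.pages)
    return [t for t in terms if t.lower() in full_text.lower()]

# Hypothetical filing and the strings that should have been removed.
leaks = redaction_survives("filed_exhibit_redacted.pdf", ["Jane Doe", "123-45-6789"])
if leaks:
    print("REDACTION FAILED; still extractable:", leaks)
```

If anything prints, the black boxes are cosmetic and the document must be re-redacted with a tool that actually deletes the underlying text.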

Use of AI-generated legal work requires “verification”, “Verification”, “Verification”!

ABA Model Rule 1.1 requires technological competence. Comment 8 specifically mandates that lawyers understand "the benefits and risks associated with relevant technology." Verification sits at the heart of this competence requirement. Attorneys cannot claim ignorance about AI features embedded in Microsoft 365, Zoom, Adobe, or legal research platforms. Each tool processes client data differently. Each requires verification of settings, outputs, and data handling practices. 🛡️

The verification duty also applies to cybersecurity. Zero Trust Architecture operates on the principle "never trust, always verify." This security model requires continuous verification of user identity, device health, and access context. Law firms can no longer trust that users inside their network perimeter are authorized. Remote work and cloud-based systems demand constant verification.
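As a conceptual illustration only (not any vendor's policy engine), the "never trust, always verify" decision might be sketched like this in Python; note that network location is collected but deliberately never used to grant access:

```python
from dataclasses import dataclass

@dataclass
class AccessRequest:
    user_mfa_verified: bool      # fresh MFA check, not a remembered session
    device_compliant: bool       # patched OS, disk encryption, managed device
    inside_office_network: bool  # collected, but never trusted below
    resource_sensitivity: str    # "low" or "high"

def allow(request: AccessRequest) -> bool:
    """Grant access only when identity AND device checks pass; being inside
    the office network never substitutes for verification."""
    if not request.user_mfa_verified:
        return False
    if not request.device_compliant:
        return False
    # High-sensitivity resources could demand extra factors here
    # (hardware key, time-of-day limits); omitted for brevity.
    return True
```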

Hidden AI poses another verification challenge. Software updates automatically activate AI features in familiar tools. These invisible assistants process confidential client data by default. Lawyers must verify which AI systems operate in their technology stack. They must verify data retention policies. They must verify that AI processing does not waive attorney-client privilege. 🤖

ABA Formal Opinion 512 eliminates the "I didn't know" defense. Lawyers bear responsibility for understanding how their tools use AI. Rule 5.3 requires attorneys to supervise software with the same care they supervise human staff members. Verification transforms from a good practice into an ethical mandate.

Verify your AI-generated work like your bar license depends on it!

The year 2025 taught legal professionals that technology competence means verification competence. Attorneys must verify redactions work properly. They must verify AI outputs for accuracy. They must verify security settings protect confidential information. They must verify that hidden AI complies with ethical obligations. ✅

Verification protects clients, preserves attorney licenses, and maintains the integrity of legal practice. As The Tech-Savvy Lawyer demonstrated throughout 2025, every technological advancement creates new verification responsibilities. Attorneys who master verification will thrive in the AI era. Those who skip verification steps risk sanctions, malpractice claims, and disciplinary action.

The legal profession's 2025 Word of the Year is verification. Master it or risk everything. 💼⚖️

ANNOUNCEMENT (BOOK RELEASE): The Lawyer’s Guide to Podcasting: The Simple, Ethics-Aware Playbook to Launch a Professional Podcast (Release: mid-January 2026)

Anticipated release is mid-January 2026.

🎙️📘 Podcasting is still one of the fastest ways to build trust. It works for lawyers, legal professionals, and any expert who needs to explain complex topics in plain language.

On January 12, 2026, I’m releasing The Lawyer’s Guide to Podcasting. This book is designed for busy professionals who want a podcast that sounds credible, protects confidentiality, and fits into a real workflow. No studio required. No tech overwhelm.

✅ Inside the book, you’ll learn:

  • How to pick a podcast format that matches your goals 🎯

  • The “minimum viable setup” that sounds professional 🎤

  • Recording workflows that reduce editing time ⏱️

  • Practical ethics and risk habits for public content 🔐

  • Repurposing steps so one episode becomes a week of marketing ♻️

📩 Get the release link: Email Admin@TheTechSavvyLawyer.Page with the subject line “Podcasting Book Link” and I’ll send the link as soon as the book is released. 📩🎙️

MTC: 2025 Year in Review: The "AI Squeeze," Redaction Disasters, and the Return of Hardware!

As we close the book on 2025, the legal profession finds itself in a dramatically different landscape than the one we predicted back in January. If 2023 was the year of "AI Hype" and 2024 was the year of "AI Experimentation," 2025 has undeniably been the year of the "AI Reality Check."

Here at The Tech-Savvy Lawyer.Page, we have spent the last twelve months documenting the friction between rapid innovation and the stubborn realities of legal practice. From our podcast conversations with industry leaders like Seth Price and Chris Dralla to our deep dives into the ethics of digital practice, one theme has remained constant: Competence is no longer optional; it is survival.

Looking back at our coverage from this past year, three specific highlights stand out as defining moments for legal technology in 2025. These aren't just news items; they are signals of where our profession is heading.

Highlight #1: The "Black Box" Redaction Wake-Up Call

Just days ago, on December 23, 2025, the legal world learned of a catastrophic failure of basic technological competence. As we covered in our recent post, How To: Redact PDF Documents Properly and Recover Data from Failed Redactions: A Guide for Lawyers After the DOJ Epstein Files Release “Leak”, the Department of Justice’s release of the Jeffrey Epstein files became a case study in what not to do.

The failure was simple but devastating: relying on visual "masks" rather than true data sanitization. Tech-savvy readers—and let’s be honest, anyone with a basic knowledge of copy-paste—were able to lift the "redacted" names of associates and victims directly from the PDF.

Why this matters for you: This event shattered the illusion that "good enough" tech skills are acceptable in high-stakes litigation. In 2025, we learned that the duty of confidentiality (Model Rule 1.6) is inextricably linked to the duty of technical competence (Model Rule 1.1 and its Comment 8). As we move into 2026, firms must move beyond basic PDF tools and invest in purpose-built redaction software that "burns in" changes and scrubs metadata. If the DOJ can fail this publicly, your firm is not immune.

Highlight #2: The "AI Squeeze" on Hardware

Throughout the year, we’ve heard complaints about sluggish laptops and crashing applications. In our December 22nd post, The 2026 Hardware Hike: Why Law Firms Must Budget for the 'AI Squeeze' Now, we identified the culprit. It isn’t just your imagination—it’s the supply chain.

We are currently facing a global shortage of DRAM (Dynamic Random Access Memory), driven by the insatiable appetite of data centers powering the very AI models we use daily. Manufacturers like Dell and Lenovo are pivoting their supply to these high-profit enterprise clients, leaving consumer and business laptops with a supply deficit.

Why this matters for you: The era of the 16GB RAM laptop for lawyers is dead. Running local, privacy-focused AI models (a major trend in 2025) and heavy eDiscovery platforms now requires 32GB or even 64GB of RAM as a baseline (which means you may want more than the “baseline”). The "AI Squeeze" means that in 2026, hardware will be 15-20% more expensive and harder to find. The lesson? Buy now. If your firm has a hardware refresh cycle planned for Q2 2026, accelerate it to Q1. Budgeting for technology is no longer just about software subscriptions; it’s about securing the physical silicon needed to do your job.

Highlight #3: From "Chat" to "Doing" (The Rise of Agentic AI)

Earlier this year, on the Tech-Savvy Lawyer Podcast, we spoke with Chris Dralla of TypeLaw and discussed the evolution of AI tools. 2025 marked the shift from "Chatbot AI" (asking a bot a question) to "Agentic AI" (telling a bot to do a job).

Tools like TypeLaw didn't just "summarize" cases this year; they actively formatted briefs, checked citations against local court rules, and built tables of authorities with minimal human intervention. This is the "boring" automation we have always advocated for—technology that doesn't try to be a robot lawyer, but acts as a tireless paralegal.

Why this matters for you: The novelty of chatting with an LLM has worn off. The firms winning in 2025 were the ones adopting tools that integrated directly into Microsoft Word and Outlook to automate specific, repetitive workflows. The "Generalist AI" is being replaced by the "Specialist Agent."

Moving Forward: What We Can Learn Today for 2026

As we look toward the new year, the profession must internalize a critical lesson: Technology is a supply chain risk.

Whether it is the supply of affordable memory chips or the supply of secure software that properly handles redactions, you are dependent on your tools. The "Tech-Savvy" lawyer of 2026 is not just a user of technology but a manager of technology risk.

What to Expect in 2026:

Is your firm budgeted for the anticipated 2026 hardware price hike?

  1. The Rise of the "Hybrid Builder": I predict that mid-sized firms will stop waiting for vendors to build the perfect tool and start building their own "micro-apps" on top of secure, private AI models.

  2. Mandatory Tech Competence CLEs: Rigorous enforcement of tech competence rules will likely follow the high-profile data breaches and redaction failures of 2025.

  3. The Death of the Billable Hour (Again?): With "Agentic AI" handling the grunt work of drafting and formatting, clients will aggressively push back on bills for "document review" or "formatting." 2026 will force firms to bill for judgment, not just time.

As we sign off for the last time in 2025, remember our motto: Technology should make us better lawyers, not lazier ones. Check your redactions, upgrade your RAM, and we’ll see you in 2026.

Happy Lawyering and Happy New Year!

🚨BOLO: Last-Minute Procurement Scams Targeting Firms on Christmas Eve🎄

It is Christmas Eve! The pressure to secure last-minute client gifts, finalize year-end office supply orders, or purchase personal items is at its peak. Scammers anticipate this desperation. They are currently flooding social media and search engines with "Out-of-Stock" Purchase Scams designed to exploit your urgency.

Whether you are ordering toner for year-end filings or a rush gift for a partner, the mechanism remains the same. You locate a vendor promising immediate delivery of a hard-to-find item. You purchase it. Minutes later, an email arrives claiming the item is "out of stock" due to holiday volume.

This notification is the trap. It promises an instant refund but requires you to click a link to "confirm" your details. This link does not lead to a payment processor; it leads to a credential-harvesting site. By trying to recoup your funds, you may inadvertently hand over firm credit card data or banking login credentials to a threat actor.

Immediate Risk Mitigation:

  • Verify the Vendor: If a deal appears for an item sold out everywhere else, it is likely a lure. Stick to established, major retailers today.

  • Isolate Transactions: Do not mix firm procurement with personal panic buying. Use a dedicated credit card for any new vendor.

  • Pause Before Clicking: If you receive a refund link, do not click it. Legitimate refunds happen automatically; they never require you to log in again. (See the quick check below.)
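One habit that makes the trap visible: look at the domain a link actually points to before anything else. A tiny illustration in Python (the link is a made-up example of the scam pattern, not a real site):

```python
from urllib.parse import urlparse

# Hypothetical "instant refund" link from a scam email.
link = "https://refunds.retailer-support-center.top/confirm?order=4821"

# The hostname is what receives anything you type; it is not the
# retailer's domain you purchased from.
print(urlparse(link).hostname)  # refunds.retailer-support-center.top
```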

Stay safe. Do not let a shipping deadline become a security breach. 🎄🔒

🎙️ Ep. #127: Mastering Legal Storytelling and AI Automation with Joshua Altman 🎙️⚖️

In Episode 127, I sit down with Joshua Altman, Managing Director of Beltway.Media, to decode the intersection of legal expertise and narrative strategy. 🏛️ We dive deep into the tech stack that powers a modern communications firm and explore how lawyers can leverage AI without losing their unique professional voice. Joshua shares actionable insights on using tools like Gumloop and Abacus.AI to automate workflows, the critical mistakes to avoid during high-stakes crisis management, and the real metrics you need to track to prove marketing ROI. 📊 Whether you are a solo practitioner or part of a large firm, this conversation bridges the gap between complex legal work and compelling public communication.

Join Joshua Altman and me as we discuss the following three questions and more!

  1. What are the top three technology tools or platforms you recommend that would help attorneys transform a single piece of thought leadership into multiple content formats across channels, and how can they use AI to accelerate this process without sacrificing their professional voice?

  2. What are the top three mistakes attorneys and law firms make when communicating during high-stakes situations—whether that’s managing negative publicity, navigating a client crisis, or pitching to potential investors—and how can technology help them avoid these pitfalls while maintaining their ethical obligations?

  3. What are the top three metrics for their online marketing technology investments that attorneys should actually be tracking to demonstrate return on investment, and what affordable technology solutions would you recommend to help them capture and analyze this data?

In our conversation, we cover the following:

  • [00:00] Introduction to Joshua Altman and Beltway.Media.

  • [01:06] Joshua’s current secure tech stack: From Mac setups to encrypted communications.

  • [03:52] Strategic content repurposing: Using AI as a tool, not a replacement for your voice.

  • [05:30] The "Human in the Loop" necessity: Why lawyers must proofread AI content.

  • [10:00] Tech Recommendation #1: Using Abacus.AI and Root LLM for model routing.

  • [11:00] Tech Recommendation #2: Automating workflows with Gumloop.

  • [15:43] Tech Recommendation #3: The "Low Tech" solution of human editors.

  • [16:47] Crisis Communications: Navigating the Court of Public Opinion vs. the Court of Law.

  • [20:00] Using social listening tools for litigation support and witness tracking.

  • [24:30] Metric #1: Analyzing Meaningful Engagement (comments vs. likes).

  • [26:40] Metric #2: Understanding Impressions and network reach (1st vs. 2nd degree).

  • [28:40] Metric #3: Tracking Clicks to validate interest and sales funnels.

  • [31:15] How to connect with Joshua.

RESOURCES:


Software & Cloud Services mentioned in the conversation

  • Abacus.AI - AI platform mentioned for its "Root LLM" model routing feature.

  • ChatGPT - AI language model.

  • Claude - AI language model.

  • Constant Contact - Email marketing platform.

  • Gumloop - AI automation platform for newsletters and social listening.

  • LinkedIn - Professional social networking.

  • MailChimp - Email marketing platform.

  • Proton Mail - Encrypted email service.

  • Tresorit - End-to-end encrypted file sharing (secure Dropbox alternative).