MTC: Are Lawyers Really Ready for a Wallet‑Free Future? Digital Wallets, ABA Ethics, and the Reality of Going Fully Cashless 💳⚖️

Tech-savvy lawyers should not leave their physical wallets at home, but they can probably pare them down.

When previous podcast guest David Sparks over at MacSparky shared his recent post about accidentally going out without his physical wallet—and still making it through the day just fine on his iPhone and Apple Wallet—it captured a quiet shift many of us in the legal profession are grappling with. He walked into his appointment armed only with a digital ID, digital insurance card, and Apple Pay, and everything worked. For a growing number of professionals, that is the new normal. The question for lawyers is more specific: not can we go wallet‑free, but should we—ethically, practically, and professionally—given our obligations under the ABA Model Rules?

Digital wallets are no longer niche tools reserved for tech enthusiasts. Apple Wallet and similar platforms have matured into robust ecosystems that can store payment cards, IDs, insurance cards, transit passes, and even car keys. They sit at the intersection of convenience, security, and risk. As attorneys, we have to examine that intersection with greater rigor than the average consumer, because our technology choices are framed by duties of competence, confidentiality, and client service.

The promise of a wallet‑free practice

On paper, the case for a full digital wallet is compelling. Digital payments can reduce friction at the courthouse café, client lunches, and bar events. Digital IDs eliminate worries about misplacing a physical card. Many platforms add layers of biometric security that traditional wallets can’t match. David notes that Apple Wallet has “been quietly getting better for years,” allowing storage of physical card numbers behind Face ID and making peer‑to‑peer payments a tap‑away. For a solo or small‑firm lawyer, that friction reduction compounds over time into real efficiency.

From a malpractice‑avoidance standpoint, a digital wallet can be safer than a billfold. Losing a traditional wallet means scrambling to cancel credit cards, monitoring for identity theft, and possibly dealing with unauthorized use of your bar ID or access cards. A lost phone, by contrast, can be located, remotely wiped, or locked with strong authentication. Properly configured, it can reduce risk rather than increase it.

This is where ABA Model Rule 1.1 on competence, particularly Comment 8, becomes relevant. The Comment notes that competent representation includes understanding “the benefits and risks associated with relevant technology.” A digital wallet is very much “relevant technology” for a modern practitioner. Choosing not to understand or use it, especially when it offers better security and traceability than analog methods, may itself become a competence question as the bar’s expectations evolve.

The gaps: cash, IDs, and access to justice

There are plenty of reasons not to go “cashless” when leaving home or the office.

Still, David’s hesitation—“there’s a part of me that still feels compelled to carry a small wallet with my driver’s license in it”—should resonate with lawyers. There are pockets of our professional lives where the ecosystem is not ready, and those pockets matter.

First, cash. Many lawyers still tip courthouse staff, parking attendants, baristas near the courthouse, and others in cash—including, in my case, with $2 bills (yes, they are still produced, still accepted, and available at many banks across the U.S., at least as of this posting; I almost always get an excited smile when I tip my barista with a $2 bill). Cash remains the lowest‑friction, most universally accepted “protocol” for small‑scale human interactions. Refusing to carry any cash at all can put you in awkward social and professional situations, especially in older courthouses or local establishments that either do not take cards or resent micro‑transactions by card. For those committed to cash tipping as a personal or professional habit, a purely digital wallet is not yet a substitute.

Second, physical IDs. While TSA and some states are piloting and accepting digital IDs, acceptance is not universal, and the rules are in flux. David notes he has a state digital ID that “shows up nicely” in Apple Wallet. That is great—until you encounter an agency, judge, clerk, or officer who simply will not accept it. Not all jurisdictions recognize mobile driver’s licenses or digital IDs, and some procedures (e.g., certain filings or in‑person notarizations) still presume a physical, inspectable card. The risk is not hypothetical: show up with the wrong form of ID for a flight or a court security checkpoint, and you may face delay, additional fees, or outright denial of entry.

From the TSA website: “If you are unable to provide the required acceptable ID, such as a passport or REAL ID, you can pay a $45 fee to use TSA ConfirmID. TSA will then attempt to verify your identity so you can go through security; however, there is no guarantee TSA can do so.”


For lawyers, this is not just an inconvenience—it is a competence and diligence issue under Model Rules 1.1 and 1.3. If your failure to carry an accepted ID means you miss a hearing, delay a filing, or cannot visit a client, you have a professional problem, not just a tech annoyance. Likewise, local court rules and security policies may require a specific bar card or government‑issued ID to enter restricted areas. A digital ID on your phone will not help if the sheriff’s deputy at the door has not been trained or authorized to accept it.

Third, connectivity. A digital wallet that is fully dependent on live internet access is a fragile tool in old courthouses with thick stone walls, in rural jurisdictions, or during emergencies. Many modern digital wallets do allow offline transactions at NFC terminals using stored tokens, but not all. If your payment method, ID, or membership pass depends on a cloud verification step and you are in a dead zone—or your battery dies—you effectively have no wallet. Lawyers who rely on public transit, rideshares, or mobile office setups need to consider this in contingency planning, particularly when punctuality is essential.

Digital wallets and legal ethics

From an ethics perspective, digital wallets intersect with several core duties.

Under Model Rule 1.6, protecting client confidentiality extends to how you pay for and manage client‑related expenses. If you are using peer‑to‑peer payment apps or storing client‑related account details in a digital wallet, you must understand their privacy and data‑sharing practices. Some services expose transaction histories, social feeds, or metadata that could inadvertently reveal client relationships or matter details. Configuring strict privacy settings and separating personal from firm accounts is not optional; it is part of your duty of confidentiality.

Model Rule 1.15 on safekeeping property also comes into play if you ever use digital tools to handle client funds, reimbursements, or settlement distributions. While most bars still require traditional trust accounts and closely regulate payment processors, the trend toward digital payments will continue. Using any digital payment or wallet solution around client funds requires careful vetting, written policies, and—ideally—consultation with your malpractice carrier and bar ethics guidance.

Finally, Model Rule 5.3 on responsibilities regarding nonlawyer assistance extends to IT providers and wallet platforms. If your firm relies on third‑party providers to manage mobile device management (MDM), security, or payment integrations, you must make reasonable efforts to ensure their conduct aligns with your professional obligations. Managing digital wallets on firm‑owned or BYOD devices should be governed by a clear policy that addresses encryption, remote wipe, lock‑screen settings, and acceptable use.

Practical guidance: a hybrid, not a cliff

As advanced as our digital wallets are, legal professionals should carry a combination of digital and physical identification, payment methods, and cash!

Given these realities, are we “truly there” yet for lawyers to go fully wallet‑free? Not quite. For most practitioners, the prudent path is a hybrid approach:

  • Carry a slim physical wallet with a government‑issued ID, bar card (if used locally), a minimal backup payment card, and a small amount of cash for tipping and edge cases.

  • Use a digital wallet as your primary payment and convenience layer, especially in environments where it is well‑supported and secure.

  • Confirm, in advance, what IDs your courthouse, correctional facilities, and agencies accept, and do not assume your digital ID will suffice.

  • Harden your digital wallet: enable strong biometrics, ensure a reputable MDM or security solution manages any firm devices, and separate personal from professional payment flows where possible.

This hybrid approach aligns with Model Rule 1.1’s requirement to understand and responsibly adopt relevant technology while honoring the practical demands of courtroom work and client service. It allows you to benefit from the security and efficiency of digital wallets without betting your professional obligations on the most fragile parts of the ecosystem: universal acceptance and ubiquitous connectivity.

David ends his reflection by asking whether he will ever “truly go out knowingly wallet‑free” and whether he is alone in his hesitation. Lawyers should feel no pressure to be first in line to abandon physical wallets entirely. Our job is to advocate, counsel, and appear—on time, properly identified, and fully prepared. That may mean, for the foreseeable future, living comfortably in both worlds: with a well‑tuned digital wallet in your hand and a minimal, carefully curated physical wallet in your pocket.

MTC

WoW: “Telephobia” in Law Practice: How Fear of Phone Calls Hurts Lawyers, Clients, and Cases 📞⚖️

Fear of phone 📞 calls creates anxiety and impacts legal competence. ⚖️

Telephobia is the fear or intense anxiety associated with making or receiving phone calls, and it shows up more often in law practice than many lawyers admit. 😬📱 Telephobia is not a dislike of the telephone as an object; it is a form of social anxiety centered on real‑time verbal communication, fear of judgment, and the pressure to respond quickly without the safety net of drafting and editing. Lawyers who excel in written advocacy can still feel a spike of anxiety when the phone lights up with a client, partner, or opposing counsel. This reluctance to pick up or dial out is not a character flaw; it is a risk factor that can affect competence, communication, and client service.

What Telephobia Looks Like for Lawyers

Telephobia often appears as avoidance rather than obvious panic. Lawyers may let calls go to voicemail, delay returning calls, or delegate phone calls whenever possible. You might recognize behaviors such as over‑reliance on email, extensively scripting what you plan to say before dialing, or replaying conversations in your head for hours after hanging up. These patterns are common in people with phone anxiety and can exist on a spectrum from mild discomfort to significant impairment.

In legal practice, that avoidance has concrete consequences. Time‑sensitive issues sit in the inbox instead of getting resolved in a five‑minute call. Misunderstandings grow because no one is willing to pick up the phone and clarify. Judges and clients may perceive “radio silence” as a lack of diligence, even when the real issue is anxiety about the call itself. Over time, telephobia can contribute to bottlenecks in case management, strained relationships, and missed opportunities to resolve disputes early.

Telephobia, Opposing Counsel, and Professionalism

Telephone conversations with opposing counsel are still one of the most effective tools for narrowing issues, avoiding motion practice, and reaching practical solutions. Many experienced litigators emphasize the value of “picking up the phone” instead of escalating via email volleys. Yet telephobia can make newer or more anxious lawyers dread direct calls with adversaries, especially those who are aggressive, fast‑talking, or prone to “verballing” (misstating or spinning what was said in the conversation).

Avoiding phone contact with opposing counsel can have several impacts:

  • It can prolong discovery disputes that might have been resolved in a short meet‑and‑confer call.

  • It can increase the tone and temperature of written communications because nuance and rapport are missing.

  • It can reduce opportunities to build professional relationships that later help with scheduling, stipulations, or informal resolutions.

On the other hand, telephobia does not mean a lawyer should accept every unscheduled call or tolerate abusive conversations. Thoughtful boundaries are appropriate. Some practitioners manage risk by taking (or perhaps returning) calls only at set times, ensuring a colleague is nearby, or contemporaneously documenting the substance of the call in a follow‑up email. The key is intentional management, not blanket avoidance.

Telephobia and Client Communication Duties

Avoiding phone calls strains client relations and risks professionalism failures.

Telephobia directly intersects with your ethical duty to communicate with clients. ABA Model Rule 1.4 requires lawyers to keep clients reasonably informed and to promptly comply with reasonable requests for information. Modern guidance recognizes that “client communications” include phone calls, emails, and other electronic channels. If anxiety leads to chronic delay in returning calls or to a pattern of pushing every interaction into email when a call would be more effective, the lawyer may be edging toward a communication problem, not just a preference.

Clients often interpret unanswered calls as a sign of indifference. Many clients—especially those under stress—need a live conversation to feel heard and to understand their case strategy. While written follow‑up is essential, a short, empathetic phone call can prevent distrust and complaints. Telephobia can also create inequity: clients who are comfortable with email may get robust contact, while those who rely on the phone feel neglected.

At the same time, ethics authorities acknowledge that lawyers can use multiple communication tools, not just phone calls, as long as communication is prompt, understandable, and appropriate to the client’s needs. For some neurodivergent lawyers or lawyers with genuine anxiety disorders, establishing a communication plan that mixes scheduled calls, video meetings, and structured emails can satisfy both client needs and the lawyer’s mental health needs. Clear expectation‑setting is critical.

Technology Competence and the Phone in a Digital Age

ABA Model Rule 1.1, Comment 8, emphasizes that competence now includes understanding the benefits and risks associated with relevant technology. Many lawyers hear “technology competence” and think about e‑discovery platforms or cybersecurity, not the humble phone. Yet modern telephony—VoIP, softphones, smartphone apps, call‑recording tools, and integrated practice‑management systems—is very much part of that competence landscape.

For lawyers with telephobia, technology can both help and hinder:

  • VoIP and softphone systems can route calls through your laptop, support call notes, and provide voicemail‑to‑email transcripts, which can reduce anxiety about missing key points.

  • Scheduled video or audio calls through secure platforms can feel more controlled, especially when combined with a shared agenda.

  • Over‑reliance on text‑based channels (email, messaging) because they feel safer can, however, undermine the advantages of real‑time voice communication.

Competence does not require you to love the phone. It does require that you understand the tools available, use them to communicate effectively, and avoid letting anxiety silently undercut your ability to serve clients and manage cases.

Practical Strategies to Manage Telephobia in Practice

Telephobia is manageable, and many of the strategies come from established approaches to phone anxiety. The aim is not to turn every lawyer into an extroverted caller. The aim is to reduce the anxiety enough that telephony becomes a functional, ethical communication tool rather than a source of procrastination.

Practical steps include:

  • Use structured call plans. Before a client or opposing‑counsel call, sketch a brief outline: goals, key points, and closing next steps. This reduces the “blank mind” fear and keeps calls efficient.

  • Start with low‑stakes calls. Build tolerance by making brief, simple calls (e.g., scheduling, confirmations) rather than jumping straight into high‑conflict negotiations.

  • Schedule instead of surprise. Use calendar invites or quick emails: “Can we set a 10‑minute call at 2:30 p.m. to discuss X?” Predictability lowers anxiety for both you and the other side.

  • Pair calls with written follow‑up. After important calls, send a confirming email summarizing agreements and action items. This supports clarity, protects the record, and reassures anxious lawyers who worry they misspoke.

  • Leverage firm support. For very difficult conversations, consider having a colleague present (on the call or in the room), both for support and as a witness.

  • Seek professional help when needed. When anxiety is persistent, intense, or interfering with your practice, consulting a mental health professional familiar with social anxiety or telephobia is a sign of professionalism, not weakness.

These techniques align with ethical duties rather than conflict with them. They help ensure prompt, clear communication (Model Rule 1.4) and support technological and practical competence (Model Rule 1.1) in a digital environment.

Telephobia, Wellness, and Culture in the Profession

Avoiding phone calls leads to miscommunication, delays, and frustration!

Finally, telephobia is also a wellness issue. The legal profession already carries high rates of stress, depression, and anxiety. Telephobia can add another layer of dread to a typical workday, as lawyers watch call notifications with a racing pulse. Open conversation about phone anxiety—especially among younger lawyers and those trained in email‑first environments—can normalize the experience and lead to practical accommodations.

Mentors and firm leaders can help by modeling balanced behavior. That includes choosing calls when they will truly advance the matter, avoiding unnecessary surprise calls that feel performative, and encouraging associates to prepare for and debrief difficult conversations. Thoughtful phone use, supported by technology and grounded in ethics, can turn telephobia from a hidden liability into a manageable professional challenge.

If you or someone you know is suffering from an imminent mental health crisis, call 988 (in the United States) or 911 or equivalent in the relevant jurisdiction!


MTC: Even Though AI Hallucinations Are Down: Lawyers STILL MUST Verify AI, Guard PII, and Follow ABA Ethics Rules ⚖️🤖

A Tech-Savvy Lawyer MUST REVIEW AI-Generated Legal Documents

AI hallucinations are reportedly down across many domains. Still, previous podcast guest Dorna Moini is right to warn that legal remains the unnerving exception—and that is where our professional duties truly begin, not end. Her article, “AI hallucinations are down 96%. Legal is the exception,” helpfully shifts the conversation from “AI is bad at law” to “lawyers must change how they use AI,” yet from the perspective of ethics and risk management, we need to push her three recommendations much further. This is not only a product‑design problem; it is a competence, confidentiality, and candor problem under the ABA Model Rules. ⚖️🤖

Her first point—“give AI your actual documents”—is directionally sound. When we anchor AI in contracts, playbooks, and internal standards, we move from free‑floating prediction to something closer to reading comprehension, and hallucinations usually fall. That is a genuine improvement, and Moini is right to emphasize it. But as soon as we start uploading real matter files, we are squarely inside Model Rule 1.6 territory: confidential information, privileged communications, trade secrets, and dense pockets of personally identifiable information. The article treats document‑grounding primarily as an accuracy-and-reliability upgrade, but lawyers and the legal profession must insist that it is first and foremost a data‑governance decision.

Before a single contract is uploaded, a lawyer must know where that data is stored, who can access it, how long it is retained, whether it is used to train shared models, and whether any cross‑border transfers could complicate privilege or regulatory compliance. That analysis should involve not just IT, but also risk management and, in many cases, outside vendors. “Give AI your actual documents” is safe only if your chosen platform offers strict access controls, clear no‑training guarantees, encryption in transit and at rest, and, ideally, firm‑controlled or on‑premise storage. Otherwise, you may be trading a marginal reduction in hallucinations for a major confidentiality incident or regulatory investigation. In other words, feeding AI your documents can be a smart move, but only after you read the terms, negotiate the data protection, and strip or tokenize unnecessary PII. 🔐

Lawyers need to monitor the data security and PII compliance policies of the AI platforms they use in their legal work.

Moini’s second point—“know which tasks your tool handles reliably”—is also excellent as far as it goes. Document‑grounded summarization, clause extraction, and playbook‑based redlines are indeed safer than open‑ended legal research, and she correctly notes that open‑ended research still demands heavy human verification. Reliability, however, cannot be left to vendor assurances, product marketing, or a single eye‑opening demo. For purposes of Model Rule 1.1 (competence) and 1.3 (diligence), the relevant question is not “Does this tool look impressive?” but “Have we independently tested it, in our own environment, on tasks that reflect our real matters?”

The corollary is that reliability has to be measured, not assumed. Firms should sandbox these tools on closed matters, compare AI outputs with known correct answers, and have experienced lawyers systematically review where the system fails. Certain categories of work—final cites in court filings, complex choice‑of‑law questions, nuanced procedural traps—should remain categorically off‑limits to unsupervised AI, because a hallucinated case there is not just an internal mistake; it can rise to misrepresentation to the court under Model Rule 3.3. Knowing what your tool does well is only half of the equation; you must also draw bright, documented lines around what it may never do without human review. 🧪

Her third point—“build verification into the workflow”—is where the article most clearly aligns with emerging ethics guidance from courts and bars, and it deserves strong validation. Judges are already sanctioning lawyers who submit AI‑fabricated authorities, and bar regulators are openly signaling that “the AI did it” will not excuse a lack of diligence. Verification, though, cannot remain an informal suggestion reserved for conscientious partners. It has to become a systematic, auditable process that satisfies the supervisory expectations in Model Rules 5.1 and 5.3.

That means written policies, checklists, training sessions, and oversight. Associates and staff should receive simple, non‑negotiable rules:

✅ Every citation generated with AI must be independently confirmed in a trusted legal research system;

✅ Every quoted passage must be checked against the original source; 

✅ Every factual assertion must be tied back to the record.

Supervising attorneys must periodically spot‑check AI‑assisted work for compliance with those rules. Moini is right that verification matters; the editorial extension is that verification must be embedded into the culture and procedures of the firm. It should be as routine as a conflict check.

Stepping back from her three‑point framework, the broader thesis—that legal hallucinations can be tamed by better tooling and smarter usage—is persuasive, but incomplete. Even as hallucination rates fall, our exposure is rising because more lawyers are quietly experimenting with AI on live matters. Model Rule 1.4 on communication reminds us that, in some contexts, clients may be entitled to know when significant aspects of their work product are generated or heavily assisted by AI, especially when it impacts cost, speed, or risk. Model Rule 1.2 on scope of representation looms in the background as we redesign workflows: shifting routine drafting to machines does not narrow the lawyer’s ultimate responsibility for the outcome.

Attorneys must verify AI-generated case law.

For practitioners with limited to moderate technology skills, the practical takeaway should be both empowering and sobering. Moini’s article offers a pragmatic starting structure—ground AI in your documents, match tasks to tools, and verify diligently. But you must layer ABA‑informed safeguards on top: treat every AI term of service as a potential ethics document; never drop client names, medical histories, addresses, Social Security numbers, or other PII into systems whose data‑handling you do not fully understand; and assume that regulators may someday scrutinize how your firm uses AI. Every AI‑assisted output must be reviewed line by line.

Legal AI is no longer optional, yet ethics and PII protection are not. The right stance is both appreciative and skeptical: appreciative of Moini’s clear, practitioner‑friendly guidance, and skeptical enough to insist that we overlay her three points with robust, documented safeguards rooted in the ABA Model Rules. Use AI, ground it in your documents, and choose tasks wisely—but do so as a lawyer first and a technologist second. Above all, review your work, stay relentlessly wary of the terms that govern your tools, and treat PII and client confidences as if a bar investigator were reading over your shoulder. In this era, one might be. ⚖️🤖🔐

MTC

TSL Labs 🧪 Initiative: Attorney-Client Privilege vs. Public AI: The Hoeppner Decision Lawyers Need to Understand in 2026 ⚖️🤖

Join us for an AI-powered deep dive into the ethical challenges facing legal professionals in the age of generative AI. 🤖 We unpack the February 23, 2026, editorial “AI may not be your co‑counsel—and a recent SDNY decision just made that painfully clear.” ⚖️🤖 Our Google NotebookLM hosts break down why a single click on a public AI tool’s Terms of Use can trigger a privilege waiver, and what “tech competence” really means in 2026—especially after United States v. Hoeppner and Judge Jed Rakoff’s wake-up-call analysis of confidentiality and third-party disclosure risk.

🔗 Read the full editorial on The Tech-Savvy Lawyer.Page and share this episode with a colleague who is experimenting with AI in client matters.

In our conversation, we cover the following:

  • 00:00 — The “superhuman assistant” promise, and the procedural nightmare risk. 🧠⚖️

  • 00:01 — The core warning: AI use can “blow a hole” in privilege.

  • 00:02 — Editorial overview: “The AI Privilege Trap” by Michael D.J. Eisenberg.

  • 00:02 — The case: United States v. Hoeppner (SDNY) and why it matters.

  • 00:03 — Why Judge Jed Rakoff’s opinion gets attention (tech-literate, influential).

  • 00:03 — The facts: defendant drafts with a public AI tool, then sends outputs to counsel.

  • 00:04 — The court’s conclusion: no attorney-client privilege, no work product protection.

  • 00:05 — Privilege basics applied to AI: “confidential + lawyer” and why AI fails that test.

  • 00:06 — The Terms-of-Use problem: inputs/outputs may be collected and shared. 🧾

  • 00:07 — The “stranger on the street” analogy: you can’t retroactively make it confidential.

  • 00:08 — PII and client facts: why pasting sensitive data into public AI is high-risk.

  • 00:08 — ABA Model Rule 1.1: competence includes understanding tech risks.

  • 00:09 — ABA Model Rule 1.6: confidentiality and waiver risk with public AI.

  • 00:10 — “Reasonable safeguards”: read policies, adjust settings, and know training/logging.

  • 00:11 — Public vs. enterprise AI: why contracts and “walled gardens” matter.

  • 00:11 — Legal research AI examples discussed: Lexis/Westlaw-style AI offerings.

  • 00:12 — ABA Model Rules 5.1 & 5.3: supervise AI like a nonlawyer assistant/vendor.

  • 00:13 — Redefining “tech-savvy lawyer” in 2026: judgment and restraint. 🧭

  • 00:14 — The “straight-face test”: could you defend confidentiality after a judge reads the policy?

  • 00:15 — Client-side risk: clients can sabotage privilege before contacting counsel.

  • 00:16 — Practical takeaway: check settings, read the fine print, keep true secrets offline (for now). 🔒

RESOURCES

Mentioned in the episode

Software & Cloud Services mentioned in the conversation

Word 📖 of the Week: Why Lawyers Need to Know the Term “Constitutional AI”

“Constitutional AI” is a design framework for artificial intelligence that aims to make AI systems helpful, harmless, and honest by training them to follow a defined set of higher‑level rules, much like a constitution. 🤖📜 For lawyers, this is not abstract theory; it connects directly to duties of technological competence, confidentiality, and supervision under the ABA Model Rules.

Most legal professionals now rely on AI‑enabled tools in research, drafting, e‑discovery, document automation, and client communication. These tools may use generative AI in the background even when the marketing materials do not emphasize “AI.” Constitutional AI gives you a practical way to evaluate those tools: are they structured to avoid hallucinations, protect confidential data, and resist being prompted into unethical behavior?

At a high level, a Constitutional AI system is trained to follow explicit principles, such as “do not fabricate legal citations,” “do not disclose confidential information,” and “do not assist in unlawful conduct.” The model learns to critique and revise its own outputs against those principles. For law firms, that aligns with the core expectations in ABA Model Rule 1.1 (competence) and its Comment 8, which require lawyers to understand the benefits and risks of relevant technology and stay current with changes in how these systems work. ⚖️

Constitutional AI also intersects with ABA Model Rule 1.6 on confidentiality. If an AI tool is not designed with strong guardrails, prompts, and outputs can expose sensitive client information to external systems or vendors. When you evaluate an AI platform, you should ask where data is stored, how prompts are logged, whether training data will include your matters, and whether the provider has implemented “constitutional” safeguards against data leakage and unsafe uses.

Supervision is another critical angle. ABA Formal Opinion 512 and Model Rules 5.1 and 5.3 stress that supervising lawyers must set policies and training for how attorneys and staff use generative AI. Constitutional AI can reduce risk, yet it does not replace supervisory duties. You still must review AI‑generated work product, confirm citations, validate factual assertions, and ensure the output is consistent with Rules 3.1, 3.3, and 8.4(c) on meritorious claims, candor to the tribunal, and avoiding dishonesty or misrepresentation.

For practitioners with limited to moderate tech skills, the key is to treat Constitutional AI as a practical checklist rather than a buzzword. ✅ Ask three questions about any AI tool you use:

  1. Is this AI actually helpful to the client’s matter, or is it just saving time while adding risk?

  2. Could this output harm the client through inaccuracy, bias, or disclosure of confidential data?

  3. Is the AI acting honestly, meaning it is not hallucinating cases or claiming certainty where none exists?

If the answer to the first or third question is “no,” or the answer to the second is “yes,” you must pause, verify, and revise before relying on the AI output.

In the AI era, your ethical risk often turns on how you select, supervise, and document the use of AI in your practice. Constitutional AI will not make you bulletproof, but it gives you a structured way to align your technology choices with ABA Model Rules while protecting your clients, your license, and your reputation. 
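Because the critique‑and‑revise loop is what makes “Constitutional AI” more than a buzzword, a toy sketch may help make the mechanism concrete. Everything below is a stand‑in: the marker strings, the redaction step, and the principle list are hypothetical, and a real system uses a language model as both critic and reviser rather than simple string matching.

```python
# Toy sketch of a Constitutional AI "critique and revise" pass.
# The critic and reviser here are deliberate stand-ins, not a real model.

PRINCIPLES = {
    "do not fabricate legal citations": "[UNVERIFIED CITATION]",
    "do not disclose confidential information": "[CLIENT NAME]",
}

def critique(draft: str) -> list[str]:
    """Return the principles the draft appears to violate."""
    return [p for p, marker in PRINCIPLES.items() if marker in draft]

def revise(draft: str) -> str:
    """Rewrite flagged content; a real reviser regenerates, this toy redacts."""
    for marker in PRINCIPLES.values():
        draft = draft.replace(marker, "[REDACTED]")
    return draft

def constitutional_pass(draft: str) -> str:
    """Critique the draft against each principle, then revise only if needed."""
    return revise(draft) if critique(draft) else draft

draft = "Our client [CLIENT NAME] should cite [UNVERIFIED CITATION] here."
print(constitutional_pass(draft))
```

The point of the sketch is the shape of the loop, not the redaction itself: the system checks its own output against written principles before anyone relies on it, which is exactly the habit Rules 1.1 and 1.6 ask of the supervising lawyer.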

⭐ First Five-Star Amazon Review for “The Lawyer’s Guide to Podcasting” – Why Tech-Savvy Lawyers Should Care About ABA Ethics, Client Trust, and Smart Marketing 🎙️⚖️

“The Lawyer’s Guide to Podcasting” by your favorite blogger/podcaster just earned its first five-star Amazon review, and it’s a milestone worth your attention. 🎉📘 The reviewer highlights what many of us in legal tech have been saying: podcasting is no longer a fringe hobby; it is a strategic, ethics-aware marketing channel for modern law practice. 🎙️

For lawyers with limited to moderate tech skills, this book demystifies microphones, workflows, and publishing tools without assuming you want to become an engineer. Instead, it walks you through practical steps to share your expertise in a format today’s clients already trust—long-form, authentic audio. 🔊

From a professional responsibility perspective, the guidance aligns with ABA Model Rule 1.1 on technology competence and Model Rule 1.6 on confidentiality by emphasizing the use of secure platforms, thoughtful content planning, and careful handling of client-identifying details. The book reinforces that podcasting can showcase your substantive knowledge while staying within the guardrails of Model Rule 7.1, avoiding misleading claims about your services. ⚖️

QR Code for Amazon book link

The first five-star review underlines two themes: listeners want real conversations, and they quickly recognize when a lawyer respects both the audience’s time and the profession’s ethical duties. That is exactly the posture this book encourages—credible, compliant, and client-centered. 🌟

If you are ready to build authority, differentiate your practice, and satisfy your tech-competence obligations without drowning in jargon, now is the perfect time to get your copy of “The Lawyer’s Guide to Podcasting” on Amazon and start planning your first ethically sound episode. 🚀

MTC: AI may not be your co‑counsel—and a recent SDNY decision just made that painfully clear. ⚖️🤖

SDNY Heppner Ruling: Public AI Use Breaks Attorney-Client Privilege!

In United States v. Heppner, Judge Jed Rakoff of the Southern District of New York ruled that documents a criminal defendant generated with a publicly accessible AI tool and later sent to his lawyers were not protected by either attorney‑client privilege or the work‑product doctrine. That decision should be a wake‑up call for every lawyer who has ever dropped client facts into a public chatbot.

The court’s analysis followed traditional privilege principles rather than futuristic AI theory. Privilege requires confidential communication between a client and a lawyer made for the purpose of obtaining legal advice. In Heppner, the AI tool was “obviously not an attorney,” and there was no “trusting human relationship” with a licensed professional who owed duties of loyalty and confidentiality. Moreover, the platform’s privacy policy disclosed that user inputs and outputs could be collected and shared with third parties, undermining any reasonable expectation of confidentiality. In short, the defendant’s AI‑generated drafts looked less like protected client notes and more like research entrusted to a third‑party service.

For some time now, I’ve warned practitioners on The Tech‑Savvy Lawyer.Page not to paste client PII or case‑specific facts into generative AI tools, particularly public models whose terms of use and training practices erode confidentiality. We have consistently framed AI as an extension of a lawyer’s existing ethical duties, not a shortcut around them. I have encouraged readers to treat these systems like any other non‑lawyer vendor that must be vetted, contractually constrained, and configured before use. That perspective aligns squarely with Heppner’s outcome: once you treat a public AI as a casual brainstorming partner, you risk treating your client’s confidences as discoverable data.

A Tech-Savvy Lawyer Avoids AI Privilege Waiver With Confidentiality Safeguards!

For lawyers, this has immediate implications under the ABA Model Rules. Model Rule 1.1 on competence now explicitly includes understanding the “benefits and risks associated” with relevant technology, and recent ABA guidance on generative AI emphasizes that uncritical reliance on these tools can breach the duty of competence. A lawyer who casually uses public AI tools with client facts—without reading the terms of use, configuring privacy, or warning the client—may fail the competence test in both technology and privilege preservation. The Tech‑Savvy Lawyer.Page repeatedly underscores this point, translating dense ethics opinions into practical checklists and workflows so that even lawyers with only moderate tech literacy can implement safer practices.

Model Rule 1.6 on confidentiality is equally implicated. If a lawyer discloses confidential client information to a public AI platform that uses data for training or reserves broad rights to disclose to third parties, that disclosure can be treated like sharing with any non‑necessary third party, risking waiver of privilege. Ethical guidance stresses that lawyers must understand whether an AI provider logs, trains on, or shares client data and must adopt reasonable safeguards before using such tools. That means reading privacy policies, toggling enterprise settings, and, in many cases, avoiding consumer tools altogether for client‑specific prompts.

Does a private, paid AI make a difference? Possibly, but only if it is structured like other trusted legal technology. Enterprise or legal‑industry tools that contractually commit not to train on user data and to maintain strict confidentiality can better support privilege claims, because confidentiality and reasonable expectations are preserved. Tools like Lexis‑style or Westlaw‑style AI offerings, deployed under robust business associate and security agreements, look more like traditional research platforms or litigation support vendors within Model Rules 5.1 and 5.3, which govern supervisory duties over non‑lawyer assistants. The Tech‑Savvy Lawyer.Page has emphasized this distinction, encouraging lawyers to favor vetted, enterprise‑grade solutions over consumer chatbots when client information is involved.

Enterprise AI Vetting Checklist for Lawyers: Contracts, NDA, No Training

The tech‑savvy lawyer in 2026 is not the one who uses the most AI; it is the one who knows when not to use it. Before entering client facts into any generative AI, lawyers should ask: Is this tool configured to protect client confidentiality? Have I satisfied my duties of competence and communication by explaining the risks to my client (Model Rules 1.1 and 1.4)? And if a court reads this platform’s privacy policy the way Judge Rakoff did, will I be able to defend my privilege claims with a straight face before that court or a disciplinary board?

AI may be a powerful drafting partner, but it is not your co‑counsel and not your client’s confidant. The tech‑savvy lawyer—of the sort championed by The Tech‑Savvy Lawyer.Page—treats it as a tool: carefully vetted, contractually constrained, and ethically supervised, or not used at all. 🔒🤖

📌 Too Busy to Read This Week’s Editorial? “Lawyers and AI Oversight: What the VA’s Patient Safety Warning Teaches About Ethical Law Firm Technology Use!” ⚖️🤖

Join us for an AI-powered deep dive into the ethical challenges facing legal professionals in the age of generative AI. 🤖 In this episode, we discuss our February 16, 2026, editorial, “Lawyers and AI Oversight: What the VA’s Patient Safety Warning Teaches About Ethical Law Firm Technology Use! ⚖️🤖” and explore why treating AI-generated drafts as hypotheses—not answers—is quickly becoming a survival skill for law firms of every size. We connect a real-world AI failure risk at the Department of Veterans Affairs to the everyday ways lawyers are using tools like chatbots, and we translate ABA Model Rules into practical oversight steps any practitioner can implement without becoming a programmer.

In our conversation, we cover the following:

  • 00:00:00 – Why conversations about the future of law default to Silicon Valley, and why that’s a problem ⚖️

  • 00:01:00 – How a crisis at the U.S. Department of Veterans Affairs became a “mirror” for the legal profession 🩺➡️⚖️

  • 00:03:00 – “Speed without governance”: what the VA Inspector General actually warned about, and why it matters to your practice

  • 00:04:00 – From patient safety risk to client safety and justice risk: the shared AI failure pattern in healthcare and law

  • 00:06:00 – Shadow AI in law firms: staff “just trying out” public chatbots on live matters and the unseen risk this creates

  • 00:07:00 – Why not tracking hallucinations, data leakage, or bias turns risk management into wishful thinking

  • 00:08:00 – Applying existing ABA Model Rules (1.1, 1.6, 5.1, 5.2, and 5.3) directly to AI use in legal practice

  • 00:09:00 – Competence in the age of AI: why “I’m not a tech person” is no longer a safe answer 🧠

  • 00:09:30 – Confidentiality and public chatbots: how you can silently lose privilege by pasting client data into a text box

  • 00:10:30 – Supervision duties: why partners cannot safely claim ignorance of how their teams use AI

  • 00:11:00 – Candor to tribunals: the real ethics problem behind AI-generated fake cases and citations

  • 00:12:00 – From slogan to system: why “meaningful human engagement” must be operationalized, not just admired 

  • 00:12:30 – The key mindset shift: treating AI-assisted drafts as hypotheses, not answers 🧪

  • 00:13:00 – What reasonable human oversight looks like in practice: citations, quotes, and legal conclusions under stress test

  • 00:14:00 – You don’t need to be a computer scientist: the essential due diligence questions every lawyer can ask about AI 

  • 00:15:00 – Risk mapping: distinguishing administrative AI use from “safety-critical” lawyering tasks

  • 00:16:00 – High-stakes matters (freedom, immigration, finances, benefits, licenses) and heightened AI safeguards

  • 00:16:45 – Practical guardrails: access controls, narrow scoping, and periodic quality audits for AI use

  • 00:17:00 – Why governance is not “just for BigLaw” and how solos can implement checklists and simple documentation 📋

  • 00:17:45 – Updating engagement letters and talking to clients about AI use in their matters

  • 00:18:00 – Redefining the “human touch” as the safety mechanism that makes AI ethically usable at all 🤝

  • 00:19:00 – AI as power tool: why lawyers must remain the “captain of the ship” even when AI drafts at lightning speed 🚢

  • 00:20:00 – Rethinking value: if AI creates the first draft, what exactly are clients paying lawyers for?

  • 00:20:30 – Are we ready to bill for judgment, oversight, and safety instead of pure production time?

  • 00:21:00 – Final takeaways: building a practice where human judgment still has the final word over AI

RESOURCES

Mentioned in the episode

Software & Cloud Services mentioned in the conversation

🎙️ My Law School Library Adds The Lawyer’s Guide to Podcasting to Empower Ethical, Tech-Savvy Attorneys ⚖️

https://law-capital.libguides.com/SpecialCollections/NewBooks

I’m thrilled to share that my alma mater, Capital University Law School, has added my book, The Lawyer’s Guide to Podcasting, to its Law Library Special Collections. 🎉📚 Seeing this guide on the same shelves where I learned to think like a lawyer underscores how central ethical technology use has become to modern advocacy. 🎙️ Written for attorneys with limited to moderate tech skills, it walks readers through planning, recording, and promoting a law‑firm podcast while honoring ABA Model Rules on technology competence, confidentiality, and attorney advertising, helping you communicate confidently, credibly, and compliantly. ⚖️🚀

You can pick up your copy on Amazon today!

🎙️ Ep. #131, Supercharging Litigation With AI: How StrongSuit Helps Lawyers Transform Research, Doc Review, and Drafting 💼⚖️

My next guest is Justin McCallan, founder of StrongSuit, an AI-powered litigation platform built to transform how litigators handle legal research, document review, and drafting while keeping lawyers firmly in control. In this episode, Justin and I dig into practical, real-world workflows that solos, small firms, and big-firm litigators can use today and over the next few years to change the economics, pace, and strategy of litigation—without sacrificing accuracy, ethics, or the quality of advocacy.

Join Justin and me as we discuss the following three questions and more!

  1. What are the top three ways litigators should be using AI tools like StrongSuit right now to change the economics and pace of litigation without sacrificing accuracy, ethics, or quality of advocacy?

  2. What are the top three mistakes lawyers make when adopting AI for litigation, and what practical workflows help lawyers stay in the loop and use AI as a force multiplier instead of a risk? 

  3. Looking ahead to 2026 and beyond, what are the top three AI-driven workflows every litigator should master to stay competitive, and how can platforms like StrongSuit help build those capabilities into day-to-day practice? 

In our conversation, we cover the following:

  • 00:00 – Welcome and guest introduction

    • Justin joins the show and shares his current tech setup at his desk. 

  • 00:00–01:00 – Justin’s current tech stack

    • Lenovo laptop, ultra-wide monitor, and regular use of StrongSuit, ChatGPT, and Gemini for different AI tasks.

    • Everyday tools: Microsoft Word and Power BI for analytics and fast decision-making.

  • 01:00–02:00 – Android vs. iPhone for AI use

    • Why Justin has been on Android for 17 years and how UI/UX familiarity often drives device choice more than AI capability.

  • 02:00–05:30 – Q1: Top three ways litigators should be using AI right now

    • Using AI for end-to-end legal research across 11 million precedential U.S. cases to build litigation outlines and identify key authorities.

    • Scaling document review so AI surfaces relevant documents and synthesizes insights while lawyers focus on strategy and judgment.

    • Leveraging AI for drafting and editing—improving style, clarity, and consistency beyond traditional spelling and grammar checks.

  • 05:30–07:30 – StrongSuit vs. basic tools like Word grammar check

    • How StrongSuit aims to “up-level” a lawyer’s writing, not just catch typos.

    • Stylistic improvements, clarity enhancements, and catching subtle inconsistencies in legal documents.

  • 06:00–08:00 – AI context limits and scaling doc review

    • Constraints of large models’ context windows (roughly 1M tokens, or about 750 pages).

    • How StrongSuit runs multiple AI agents in parallel, each handling small page sets with heuristics to maintain cohesion and share insights.

  • 08:00–09:00 – Handling tens of thousands of documents

    • How StrongSuit can handle roughly 10,000–50,000 pages at a time, with the ability to scale further for enterprise matters.

  • 09:00–11:30 – Origin story of StrongSuit

    • Why Justin saw a once-in-a-generation opportunity when large language models emerged and how law, with its precedent and text-heavy nature, is especially suited to AI.

    • StrongSuit’s focus on litigators: supporting lawyers from intake through trial while keeping them in the loop at every step.

  • 11:30–13:30 – From intake to brief drafting in minutes

    • Generating full litigation outlines, research, and analysis in about ten minutes, then moving directly into drafting memos, briefs, complaints, and motions.

    • StrongSuit’s long-term goal: automating 50–99% of major litigation workflows by the end of 2026 while preserving lawyer control and judgment.

  • 12:00–14:30 – How StrongSuit tackles hallucinations

    • Building a full database of all precedential U.S. cases enriched with metadata: parties, summaries, holdings, and more.

    • Validating citations by checking whether the Bluebook citation actually exists in StrongSuit’s case database before surfacing it to the user.

    • Why lawyers should still review cases on-platform before filing, even when AI has filtered out hallucinations.

  • 14:30–16:30 – Coverage and jurisdictions

    • Coverage of all U.S. jurisdictions, federal and state, focused on precedential cases.

    • Handling most regulations from administrative agencies, and limits around local ordinances.

    • Uploading your own case files and using complaints and prior research as inputs into StrongSuit workflows.

  • 15:00–17:00 – Security and confidentiality for litigators

    • SOC 2 compliance and industry-standard encryption at rest and in transit.

    • No model training on user data.

    • Optional end-to-end encryption that can even prevent developers from accessing case content, using local encryption keys.

  • 16:30–20:30 – Q2: Top mistakes lawyers make when adopting AI for litigation

    • Mistake #1: Talking about AI instead of diving in with structured experiments and sanitized documents.

    • Using a framework to identify high-impact tasks: high volume, repetitive work, and heavy data/analysis (e.g., doc review, research, contract drafting).

    • How to shortlist tools: look for SOC 2, real product depth, awards, and a focus on your specific workflows.

    • Mistake #2: Expecting immediate mastery instead of moving through predictable adoption stages—from learning the tool, to daily use, to stringing workflows together.

  • 20:30–22:30 – Building firm-wide AI workflows over time

    • Moving from isolated experiments to integrated, low-friction workflows, such as automatic intake-to-research pipelines.

    • Using client intake audio or transcripts to automatically extract facts, issues, and research paths.

  • 22:30–24:30 – Time constraints and “no-time” lawyers

    • Why lawyers don’t need to be “technical” to use StrongSuit.

    • Reframing AI as text-based tools where lawyers’ writing skills and analytical thinking are assets, not obstacles. 

  • 24:00–26:00 – Practical workflows beyond intake

    • Using AI to prepare for expert depositions, including reviewing valuation analyses, flagging departures from market consensus, and generating targeted questions.

    • Reinforcing the value of AI-enhanced legal research and drafting as core litigation workflows.

  • 26:00–29:30 – Q3: 2026 and beyond – AI-driven workflows every litigator should master

    • Rapid improvement of baseline models (e.g., jumping from single-digit to high double-digit performance on difficult benchmarks year over year). 

    • The idea of “tipping points,” where small performance gains turn AI from marginally useful to essential in specific tasks.

    • Why legal research is a great training ground for understanding where AI excels, where it falls short, and how to divide labor between human and machine.

    • The value of learning basic prompting skills to get more from AI systems, even when platforms offer visual workflows.

  • 29:30–32:30 – Will workflows actually change—or just get better?

    • Why Justin expects familiar litigation workflows (doc review, research, drafting) to remain structurally similar, but become far faster and more sophisticated.

    • AI agents handling the grind work while lawyers focus on synthesis, judgment, and strategy.

    • A future where “AI + lawyer vs. AI + lawyer” resembles high-level chess: same rules, but much deeper thinking on both sides.

  • 32:30–End – Where to find Justin and StrongSuit

    • How to connect with Justin and learn more about StrongSuit’s litigation tools.

Resources

Connect with Justin

Hardware mentioned in the conversation

Software & Cloud Services mentioned in the conversation