MTC: Hidden AI, GEO, and the ABA Model Rules: What Every Lawyer Needs to Know Before Their Next Client Finds Them Online ⚖️🤖

Generative AI is already talking about you, your law firm, and your practice area—even if you have never opened ChatGPT. 😳 Clients ask AI tools legal questions in natural language, and those systems answer by pulling from whatever content they trust online. For lawyers, that raises two intertwined issues: “hidden AI” inside everyday tools and the rise of Generative Engine Optimization (GEO). Together, they sit squarely in the path of your duties under the ABA Model Rules.

Legal Ethics Meets GEO and Hidden AI!

Hidden AI is everywhere in modern law practice tools. Microsoft 365 suggests text, summarizes long email threads, and drafts documents. Zoom transcribes and sometimes “enhances” meetings. Practice‑management platforms now market AI assistants that review documents, summarize matters, and even suggest next steps. Much of this AI runs quietly in the background, so it is easy to forget it exists—or to assume it is “just another feature.” Yet under ABA Model Rule 1.1, technological competence now includes understanding the benefits and risks of the technology you choose for your clients’ work. You cannot competently supervise what you do not even realize is there.

At the same time, AI tools sit on the front end of client development. When a potential client types, “How does a New Jersey divorce work and when should I hire a lawyer?” into an AI chatbot, that system gives an answer based on content it considers reliable. GEO—Generative Engine Optimization—is about making your content understandable, quotable, and safe for those systems to lift into the response. Where SEO asks, “How do I rank in Google’s blue links?”, GEO asks, “How do I become the answer AI gives when someone in my jurisdiction asks a real client question?” 🧠

Where the ABA Model Rules Fit

GEO and hidden AI are not just marketing trends; they are ethics issues.

  • Model Rule 1.1 (Competence). Comment 8 extends competence to relevant technology. ABA guidance on AI (including Formal Opinion 512) explains that lawyers must understand how AI tools work in broad strokes, their limitations, and their failure modes. If you expect clients to find you through AI‑generated answers, you should know what those systems are likely to say about your area of law and how your own content feeds into that ecosystem. ⚖️

  • Model Rule 1.6 (Confidentiality). You do not need to paste client facts into AI tools to do GEO. Good GEO content relies on hypotheticals and public law, not on confidential stories. But when you use AI inside Word, your practice platform, or a browser‑based assistant, you must know where the data goes, whether it is used for training, and whether additional client consent or stronger safeguards are required. 🔐

  • Model Rule 1.4 (Communication). When AI tools materially affect how you handle a matter—such as drafting, research, or review—you may need to explain that to clients in clear, non‑technical terms. In marketing, that same communication duty supports honest disclaimers: your GEO‑optimized articles must state that they are general information, not legal advice, and that AI summaries of your content are no substitute for a direct attorney‑client consultation.

  • Model Rules 7.1–7.3 (Advertising and Solicitation). GEO content must still be truthful and non‑misleading. You cannot let AI‑targeted content slide into promises of “guaranteed results” or vague claims of being “the best.” The fact that you are writing for AI as well as humans does not relax your duties under the advertising rules—it amplifies them, because misstatements can get replicated and amplified by AI tools. 📢

Handled thoughtfully, GEO can actually help you satisfy these rules. It encourages you to publish accurate, current, and jurisdiction‑specific explanations that educate the public and reduce confusion. Done poorly, it can push you into ethically dangerous territory where AI retells your overbroad claims to countless readers you never see.

What Is “Hidden AI” in Law Practice?

How AI Shapes Legal Ethics and Client Discovery

For many lawyers with limited or moderate tech skills, the biggest risk is not exotic AI research—it is quiet defaults.

Examples:

  • Word processors that turn on AI‑assisted drafting by default.

  • Email services that summarize conversations using third‑party models.

  • Cloud DMS (cloud‑based document management systems) or practice platforms that offer “smart” suggestions based on client documents.

These tools can be legitimate productivity boosts, but under Rules 1.1 and 1.6, you must understand enough about them to decide when and how to use them. That includes asking:

  • Does this feature send client content to an external provider?

  • Is that provider training on my data?

  • Can I turn that training off?

  • Is there a business or enterprise version with better confidentiality terms?

You do not need to become a software engineer. You do need to know the basic data‑flow story well enough to make an informed risk judgment and to explain that judgment if a client or disciplinary authority asks. 🙋‍♀️

Moving from SEO to GEO—Ethically

Traditional SEO still matters. You still want clear titles, descriptive meta tags, fast and mobile‑friendly pages, and basic schema markup so search engines can understand your site. GEO builds on that foundation and asks you to go one step further: write in a way that large language models can safely quote.

GEO‑friendly legal content usually has:

✅   An answer‑first summary at the top: a short, plain‑English overview of the main question.

✅   Strong jurisdiction signals: repeated references to the state, province, or country, relevant courts, and applicable statutes.

✅   Specific client questions: headings written in the same conversational style clients use (“How long do I have to sue after a car accident in Ohio?”).

✅   Trust signals: bylines, credentials, bar memberships, links to statutes and court sites, and recent update dates.

For example, if you serve veterans in disability benefits work, your GEO page might be titled “How VA Disability Claims Work for [Your State] Veterans” and open with a five‑sentence, answer‑first summary in plain English. You would clearly note that you practice in specific jurisdictions, link to the VA and governing statutes, and spell out when someone should seek legal counsel. An AI system looking for a safe, jurisdiction‑clear answer is more likely to treat that content as a reliable source.
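For firms that want to make those trust and jurisdiction signals machine‑readable, basic schema markup is one practical lever. The following is a minimal, hypothetical sketch in Python that generates FAQPage JSON‑LD for a page like the VA disability example above; the question text, answer text, and file handling are placeholders, not a required format.

```python
import json

# Hypothetical GEO page data; replace with your own jurisdiction-specific content.
faq_schema = {
    "@context": "https://schema.org",
    "@type": "FAQPage",
    "mainEntity": [
        {
            "@type": "Question",
            "name": "How do VA disability claims work for [Your State] veterans?",
            "acceptedAnswer": {
                "@type": "Answer",
                "text": (
                    "A short, plain-English, answer-first summary of the claims process, "
                    "with a note that this is general information, not legal advice, "
                    "and that readers should consult a lawyer about their own situation."
                ),
            },
        },
    ],
}

# Emit JSON-LD you can paste into a <script type="application/ld+json"> tag on the page.
print(json.dumps(faq_schema, indent=2))
```

Most website platforms also let you add this kind of structured data through built‑in SEO settings or plugins, so you do not have to hand‑code it.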

From an ethics standpoint, this structure helps you:

  • Stay in your lane (Rule 1.1) by emphasizing your actual jurisdiction and practice scope.

  • Provide accurate, non‑misleading information (Rules 7.1–7.3).

  • Communicate clearly about what your content is—and is not (Rule 1.4).

Practical First Steps for Non‑Techy Lawyers

You do not need to rebuild your entire site this week. A focused, incremental approach works well, especially if you are still building your tech confidence. Here is a practical sequence that maintains compliance with the Model Rules:


  1. Audit your “hidden AI.” With your IT provider or vendor reps, identify where AI is already in use in your stack: Microsoft 365, Google Workspace, Zoom, your case‑management system, research tools, and any browser extensions. Turn off any features you cannot yet explain to yourself in basic terms. 🛠️

  2. Pick one practice area to GEO‑optimize. Choose the area that drives most of your matters. List the 10 most common client questions you actually hear. Those are the headings for your first GEO page.

  3. Write answer‑first, jurisdiction‑specific content. Use short paragraphs and plain language, and embed jurisdiction cues and citations to official sources. Include clear disclaimers about general information, no legal advice, and the need for a consultation.

  4. Refresh and expand over time. Revisit that page whenever law or practice changes, add FAQs, and link related posts. This keeps content current for both search engines and AI tools.

  5. Document your choices. If you decide to use specific AI tools in drafting content or in client work, note your reasoning: confidentiality safeguards, vendor terms, and how you supervise outputs. This helps show that you approached AI use thoughtfully under Rules 1.1, 1.4, 1.6, 5.1, and 5.3. 📚

The core message is simple: you do not have to master every technical detail to be a tech‑savvy lawyer, but you do have to stop pretending that AI is optional. Your clients are already using it; your vendors are already embedding it; and AI systems are already shaping how clients find you. Taking a deliberate, ethics‑aware approach to hidden AI and GEO is no longer extra credit—it is part of protecting your clients, your reputation, and your license. 🚀⚖️

MTC

📰 ABA TECHSHOW 2026 Recap: From AI Hype to LLM Reality, Google Workspace, and Ethical Lawyering in the Age of Bots ⚖️🤖

The Real Story Behind ABA TECHSHOW 2026

The TECHSHOW is the conference to attend if you want to keep your pulse on the technology lawyers should be using every day!

Walking into ABA TECHSHOW 2026 this year, I wasn’t thinking about shiny gadgets; I was thinking about competence, client service, and what it will mean to practice law in an era dominated not just by “AI,” but by large language models (LLMs) quietly shaping almost everything we see and share online. During my work on The Tech-Savvy Lawyer.Page blog and podcast, I keep running into the same pattern: lawyers know they should understand legal technology, yet they worry they’ll break something, breach a rule, or look foolish in front of their staff. TECHSHOW 2026 aimed directly at that anxiety — but this year, the conversation needs to go beyond what AI and generative AI can do and toward how LLMs and search bots are already shaping our professional identities online and offline. ⚖️💻

Keynotes: The “AI Dividend” and Your Time

The keynote lineup captured the tension between promise and risk. Legal market analysts highlighted what some called the “AI Dividend”: when machines take over routine drafting and research, lawyers gain time to think, advise, and advocate at a higher level. The real question — one I’ve been hammering on The Tech-Savvy Lawyer.Page for years — is what you will do with the time technology gives back (some of that time should include reviewing your work, e.g., your case citations). Tech-savvy speakers pushed attendees to look past vendor hype and focus on the broader digital environment, where consumer-facing tools, search engines, and recommendation algorithms are setting new expectations for speed, transparency, and availability.

Practical AI in the Sessions

Inside the conference rooms, the “Taming the Machines” and related AI tracks addressed baseline concerns, some through hands-on workshops, and focused on realistic use cases: assisted drafting, pattern spotting in discovery, and summarizing voluminous documents. These sessions were built for lawyers who live in Word, Outlook, Google Workspace, and practice management systems and who simply want to stop retyping the same paragraphs. The faculty hammered home a critical point: generative AI is an assistant, not a decision-maker; you remain the lawyer, responsible for accuracy, judgment, and ethics under the ABA Model Rules. 🤖📄

Google Workspace, Microsoft 365, and Using What You Already Own

Mathew Krebis’ session on Google Workspace drove that message home in very practical terms. He showed how many firms are only scratching the surface of tools they already pay for: shared Drives with well-structured permissions, real-time collaboration in Google Docs, Gmail automation for intake and follow-up, and Google Calendar combined with Tasks to keep matter timelines under control. When you layer in emerging AI features in Workspace — smart replies, document summaries, suggested outlines — you see how even modest use of these tools can dramatically reduce friction in daily practice, and the tools Mathew discussed are not isolated to “law practice management” systems.
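As a concrete, purely illustrative picture of what “Gmail automation for intake and follow‑up” can look like, the sketch below uses the Gmail API’s official Python client to list intake messages that have sat for two days without a reply. The label names, search query, and token file are my assumptions, not part of Mathew’s session.

```python
from google.oauth2.credentials import Credentials
from googleapiclient.discovery import build

# Assumes you have already completed the OAuth flow and saved a token.json
# with read-only Gmail scope for the intake mailbox.
creds = Credentials.from_authorized_user_file(
    "token.json", ["https://www.googleapis.com/auth/gmail.readonly"]
)
gmail = build("gmail", "v1", credentials=creds)

# Hypothetical labels: "intake" is applied by a Gmail filter; "replied" is added
# once the firm has responded.
query = "label:intake -label:replied older_than:2d"
resp = gmail.users().messages().list(userId="me", q=query).execute()

for msg in resp.get("messages", []):
    detail = gmail.users().messages().get(
        userId="me", id=msg["id"], format="metadata",
        metadataHeaders=["Subject", "From"],
    ).execute()
    headers = {h["name"]: h["value"] for h in detail["payload"]["headers"]}
    print(f"Needs follow-up: {headers.get('Subject')} ({headers.get('From')})")
```

The same pattern could feed a task list or calendar reminder instead of a console printout; the point is that modest scripting against tools the firm already owns can close real intake gaps.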

The takeaway was powerful: before you chase a new platform, fully exploit the ecosystem you already have. For many firms, “being more tech-savvy” starts with properly configuring their Google Workspace, Microsoft 365, or other SaaS platform, rather than buying yet another service.

Podcasting, Social Media, and LLM-Driven Visibility

Meanwhile, another important frontier — and one that still feels underexplored — is what happens when LLMs and search bots become the primary lens through which clients, colleagues, and even opposing counsel discover you. That’s where my panel, 🎧 Podcasting for Lawyers: The Truth Behind the Mic, came in.

Ruby L. Powers, Gyi Tsakalakis, Stephanie Everett, and I discussed podcasting and social media not just as marketing channels, but as structured signals fed into LLM-driven engines that are constantly indexing, ranking, and inferring who is an authority on a given topic. Whether you talk about appellate practice, family law, or even a hobby outside the law, your content becomes training data for Generative Engine Optimization/LLM bots that decide which voices surface first when someone types a question into an AI chatbox. 🎙️🌐

In other words, your digital footprint is no longer static. It is being interpreted, reassembled, and presented as answers — often without you ever seeing the intermediate steps. That reality raises a new layer of ethical questions under the ABA Model Rules. Model Rule 7.1’s prohibition on false or misleading communications about the lawyer or the lawyer’s services takes on a new twist when LLMs remix snippets of your posts, podcasts, Google Workspace–hosted client alerts, and blog articles into composite “advice.”

You might be scrupulously accurate in your content, but if an LLM mischaracterizes it or presents it out of context, what then? TECHSHOW 2026 addressed traditional risks like hallucinated case citations, but there is room for a deeper, explicit conversation about how LLM-driven discovery intersects with advertising, communication, and competence duties.

EXPO Hall: Tools, Timekeeping, and Vendor Reality Checks

The EXPO Hall, as always, served as a laboratory of possibilities. Practice management platforms, billing tools, document automation, and a wave of AI-enhanced products competed for attention. Timekeeping tools that automatically capture activity across devices and applications and then propose draft time entries have grown dramatically since last year. For lawyers still reconstructing their days from memory and sticky notes, this is more than a marginal upgrade; it directly affects revenue, work-life balance, and accuracy.

But here is the fair warning: make sure vendors are showing you what their product can do today, not what they hope it will do someday. In the LLM era, marketing decks are often several steps ahead of deployed reality. 🧾⏱️

Remember, you have an obligation under Model Rule 1.1 (competence) and Model Rule 5.3 (responsibilities regarding non-lawyer assistance) to understand the capabilities and limitations of any tech you “delegate” work to. Asking hard questions about current functionality, data handling, and audit trails is not being difficult; it is part of your duty of care.

Cybersecurity, Confidentiality, and LLM Risk

Networking opportunities like the “Taste of TECHSHOW” are a great way to talk with and learn from other lawyers about using tech in the practice of law.

The sessions on cybersecurity and confidentiality continued to do vital work. Under Model Rule 1.6, our obligation to protect client information extends to cloud storage, email, video conferencing, and the mobile devices we casually use in airport lounges. The “Guardians of the Data” track walked through practical checklists rather than abstract fearmongering: password managers, multi-factor authentication, properly configured backups, and vendor due diligence.

For firms running on Google Workspace, that translated into concrete steps: enforcing two-step verification, tightening Drive sharing settings, using client-specific shared Drives instead of ad hoc personal folders, and monitoring admin logs for suspicious access. The move from generic “AI” to LLM-powered services on any platform increases data risk, because many tools rely on ingesting your content — sometimes including client information — to improve their models. If you don’t understand where your data is going and how it is used, you cannot credibly say you are meeting confidentiality obligations. 🔐☁️
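For firms with an administrator comfortable running a small script, here is a hedged sketch of what “monitoring admin logs” can mean in practice, using the Google Admin SDK Reports API from Python. The service‑account file, delegated admin address, and simple printout are illustrative assumptions; many firms will rely on the Admin console’s built‑in reports instead.

```python
from google.oauth2 import service_account
from googleapiclient.discovery import build

# Assumes a service account with domain-wide delegation and the audit read-only scope.
SCOPES = ["https://www.googleapis.com/auth/admin.reports.audit.readonly"]
creds = service_account.Credentials.from_service_account_file(
    "service-account.json", scopes=SCOPES
).with_subject("admin@examplefirm.com")  # hypothetical delegated admin

reports = build("admin", "reports_v1", credentials=creds)

# Pull recent login events for all users; review for unfamiliar IPs or locations.
activities = reports.activities().list(
    userKey="all", applicationName="login", maxResults=50
).execute()

for event in activities.get("items", []):
    actor = event.get("actor", {}).get("email", "unknown user")
    ip = event.get("ipAddress", "unknown IP")
    when = event.get("id", {}).get("time", "unknown time")
    print(f"{when}  {actor}  from {ip}")
```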

Competence, Human-in-the-Loop, and Everyday Workflows

As noted above, Model Rule 1.1 (competence) and Model Rule 5.3 (responsibilities regarding non-lawyer assistance) require you to understand the capabilities and limitations of any tech you “delegate” work to; asking hard questions about current functionality, data handling, and audit trails is part of your duty of care.

Balancing this skepticism, though, is an equally important truth: becoming proficient with AI and LLM-based tools is not a spectator sport. You cannot satisfy your duty of technological competence from the sidelines. You have to use the tools first on a small scale, then progressively in more critical workflows, always with appropriate supervision and verification.

That might mean piloting an AI drafting feature in Google Docs and Microsoft Word for internal templates, or testing structured intake forms and automations inside Google Workspace or Microsoft 365 before rolling them out firm-wide. Ignoring AI because it feels uncomfortable is no longer the safer option. In some practices, failing to integrate it intelligently — while peers and opposing counsel do — may itself raise competence concerns as expectations evolve in courts and among clients. 🧩📈

Saturday Sessions: From “Use AI” to “Use AI Responsibly”

On Saturday, the 9 a.m. conversation among ABA President Michelle A. Behnke, Immediate Past President William R. “Bill” Bay, and President-Elect Barbara J. Howard underscored how all of this ties into the rule of law and access to justice, framing AI as something lawyers now have a responsibility to actually use, not simply watch from the sidelines. The 10 a.m. session with Judge Timothy S. Driscoll then shifted the focus from “use AI or be left behind” to “use AI responsibly,” making it clear that judges, too, are integrating AI into their work and that they are not immune from mistakes when they rely on it.

The message for everyone in the courtroom ecosystem was simple and blunt: “Review, review, and review” any work touched by AI, because AI is a fallible tool that does make errors and can mislead the unwary. Together, these sessions acknowledged the growing digital divide: lawyers and clients who can’t or won’t adopt technology risk falling out of the mainstream of legal services, while those who adopt it recklessly risk eroding confidence in both their own work and the justice system as a whole.

We are not merely debating convenience; we are deciding who gets effective representation and who is left out because the lawyer they might have hired never appeared in their LLM‑driven search results — or appeared with AI‑boosted visibility but poor ethical judgment. Technology, in this sense, is not optional; it is one of the few levers we have to expand meaningful access to legal help, provided we wield it with intent, humility, and rigorous human review. ⚖️🧠

LLM Literacy: The Next Core Competency

That balance — between caution and experimentation — is where TECHSHOW 2026 both excelled and showed its next frontier. Many sessions made AI approachable, breaking down concepts for lawyers with limited to moderate tech skills and providing concrete workflows they could apply on Monday. What I would like to see more explicitly next year is programming that treats LLM literacy as a core competency: understanding how LLMs are built, how they index and surface information, how your content feeds into them, and how that affects everything from client intake to reputation, whether you are working in Microsoft 365, Google Workspace, or a specialized legal platform.

From my vantage point as a legal tech ambassador at The Tech-Savvy Lawyer, the most successful sessions respected that many lawyers are highly capable professionals who simply haven’t had the time or guidance to modernize their workflows. They don’t need to become prompt engineers. They need guardrails, roadmaps, and clear examples of how to align AI, LLM tools, and mainstream platforms like Microsoft 365 and Google Workspace with the ABA Model Rules and local bar guidance. When faculty focused on incremental steps — tightening cybersecurity configurations, adding a layer of AI-assisted drafting under strict human review, building a consistent content strategy that LLMs can reliably recognize — the room leaned in.

A Tough-Love Takeaway for Lawyers

If you are a lawyer who still feels behind, here’s the core message I took away from TECHSHOW 2026, with a bit of tough love: you don’t need to chase every new tool, but you can’t afford to ignore LLM-driven AI and the platforms you already live in, like Microsoft 365 and Google Workspace, any longer. Understand the basics; pilot one or two well-vetted tools to start improving your efficiency without sacrificing the need for a true human-in-the-loop.

SEE YOU IN CHICAGO FOR ABA TECHSHOW 2027!!!

Read your jurisdiction’s ethics opinions on AI and technology. Build habits that protect client data by default. Use your own content — whether blog posts, newsletters, or podcasts — to train the bots to see you as a trusted authority rather than a digital afterthought. Ultimately, your bar license may be at more risk from not engaging with AI than from engaging with it carefully and intelligently.

The future of legal practice will not wait until we are all comfortable; it is here now, embedded in the search boxes, recommendation engines, and tools your clients already use. TECHSHOW 2026 made that clear. The next move is yours. 🚀⚖️

MTC

MTC: Staying Ahead of the Curve: Why ABA Techshow Is Not Optional for Today's Practicing Lawyer

The ABA TECHSHOW is the perfect place for lawyers to learn the skills they need to meet the ABA requirement to stay abreast of the benefits and risks associated with relevant technology used in the practice of law!

Let me be direct: technology is no longer a "nice-to-have" in legal practice. It is an ethical obligation. 🎯

The American Bar Association made that clear in 2012 when it amended Comment 8 to Model Rule 1.1 — the foundational rule governing competence. That comment explicitly states that a lawyer must "keep abreast of changes in the law and its practice, including the benefits and risks associated with relevant technology." Not someday. Not when it's convenient. Now — and continuously. If you are a practicing attorney and you are not actively engaging in legal technology education, you are not just leaving efficiency on the table. You may be skating dangerously close to an ethical violation.

That is precisely why I keep coming back to ABA TECHSHOW — and precisely why I encourage every lawyer I speak with, regardless of their comfort level with technology, to attend.

🔑 ABA TECHSHOW Is Built for You — Yes, You

I want to address something head-on: the assumption that Techshow is a conference for tech enthusiasts and IT professionals. It is not. The 2026 conference, running March 25–28 at McCormick Place in Chicago, features over 100 technology vendors and programming explicitly designed for lawyers at every skill level — including those who still break into a cold sweat opening a new software interface. The sessions span everything from AI fundamentals to cybersecurity to practice management to video communication. There is a deliberate on-ramp built into the conference structure because the organizers understand that the legal profession is diverse in its relationship with technology.

I have been privileged to serve as a speaker and faculty member at TECHSHOW, and this year is no exception. At TECHSHOW 2026, I am co-presenting two sessions that I believe speak directly to where the legal profession is right now.

The first, Podcasting for Lawyers: The Truth Behind the Mic, pulls back the curtain on how lawyers can leverage podcasting as a powerful tool for building authority, deepening client relationships, and positioning themselves as thought leaders in their practice areas. In a media landscape saturated with blogs and social media posts, a podcast gives you something rare: an intimate, sustained connection with your audience. As you know, I run my own podcast — The Tech-Savvy Lawyer.Page Podcast — and in that session, alongside colleagues Ruby Powers (a previous podcast guest) and, I hope, future podcast guests Gyi Tsakalakis and Stephanie Everett, we share the real, actionable steps behind building compelling legal content. 🎙️

Learn how setting up a podcast studio carries over to help with other legal events!

In my second session, Camera Ready Anywhere: Mastering Video Meetings with Clients, Courts, and Colleagues, my co-presenter, Temi Siyanbade, and I explore the practical, professional, and ethical dimensions of virtual communication. As virtual meetings have become a permanent fixture of legal practice — whether you are conducting client consultations on Zoom, appearing remotely before a tribunal, or negotiating with opposing counsel over video — looking and sounding competent on camera is no longer optional. This session covers audio and video setup, lighting, platform best practices, and how to project professionalism in a digital environment. The irony is that many lawyers who are meticulous about their appearance in a courtroom give almost no thought to how they present themselves on a video call. That gap matters. It matters to clients. It matters to judges. And yes, it can matter to your reputation.

⚖️ The ABA Model Rules Are Not Suggestions

Let us return to the ethics piece, because I think it deserves more than a passing mention. ABA Model Rule 1.1 sets the standard for competent representation. Most lawyers understand this in terms of legal knowledge — knowing the law, understanding procedure, being prepared. Fewer appreciate that the ABA's 2012 amendment has extended that standard to technology.

As of today, 40-plus states have adopted some version of the technology competence obligation articulated in Comment 8. The District of Columbia most recently joined that group in 2025. This is not a fringe interpretation. It is a growing national consensus about what it means to be a competent lawyer in the modern era.

Rule 1.6 — governing confidentiality — also carries technology implications. A lawyer who fails to understand how their email system works, who stores client data on unsecured devices, or who falls victim to a phishing attack that exposes client files has potentially breached their duty of confidentiality. Rule 5.3 requires that supervisors ensure non-lawyer staff are also compliant with the Rules — and that includes how they use firm technology. The tentacles of technology competence reach throughout the Model Rules.

Conferences like TECHSHOW exist, in part, to help you satisfy these obligations in a practical, hands-on way. The ABA Law Practice Division has consistently described Techshow as an opportunity to understand the "benefits and risks" of technology — the exact language of Comment 8. This is not accidental. It is intentional alignment between the programming and your professional duties.

🚀 The Future Is Already Here — Are You Ready?

The 2026 theme — Innovation That Protects the Rule of Law — reflects something I have believed for years: technology, when adopted thoughtfully, does not undermine the legal profession. It strengthens it. AI tools are transforming how lawyers research, draft, and communicate. Wearable technology and augmented reality are beginning to reshape how we work and collaborate. Deposition technology is being revolutionized by AI-powered transcript tools and remote video platforms. None of this is science fiction. It is happening right now, in law firms across the country.

The question is not whether you will engage with these tools. The question is whether you will engage with them proactively — understanding their benefits and their risks — or reactively, scrambling to catch up after a client complaint or a disciplinary inquiry.

I am not here to alarm you. I am here to invite you. 🤝

Your podcast studio setup impacts how you are perceived in the virtual legal landscape!

Whether you are a solo practitioner trying to figure out which AI tool is worth your subscription fee, or a partner at a mid-size firm wondering how to lead your team through a technology transition, Techshow offers you a safe, supportive, and genuinely energizing environment to learn. Most of the sessions are CLE-eligible. The vendors are accessible and eager to demonstrate — not sell. The community is collaborative.

More than four decades of working with technology, nearly 30 of those years in the legal arena, have taught me one thing above all else: the lawyers who thrive are not necessarily the most tech-savvy. They are the most tech-willing — the ones who stay curious, stay engaged, and never stop learning. 💡

TECHSHOW is where that learning happens. I will see you there.

REGISTER HERE!

MTC

MTC: Is Apple’s MacBook Neo the Real Game Changer for Lawyers Stuck Between Windows and Mac? 🤔💼

A lawyer’s choice between the MacBook Neo and Windows is not only a strategic business decision but a professional ethics one too!

For years, many lawyers have treated the move from Windows to Mac as a luxury upgrade rather than a strategic business decision. 💻⚖️ Apple’s new MacBook Neo, with its $599 starting price (and lower with education discounts), directly challenges that mindset by bringing a true macOS laptop into the same budget range as many mid-tier Windows machines. The question for lawyers on the fence is no longer “Can I justify a Mac?” but “Is the Neo a responsible, ethically sound choice for my law practice, under both my budget and my professional duties?”

From a hardware and price perspective, the Neo matters because it compresses the long‑standing price gap between Windows laptops and MacBooks. At around $599, it lives squarely in the territory where most solos and small firms previously defaulted to Windows PCs or even Chromebooks, not because they preferred them, but because MacBooks seemed out of reach. Apple is using its Apple Silicon and tight supply chain control to keep Neo’s price relatively stable even as RAM, SSD, and CPU prices push other laptop prices up as much as 40 percent. In an environment where many PC makers must raise prices or cut corners, the Neo offers lawyers a predictable, brand‑name option that is less vulnerable to component price spikes in the short to mid term.

Tech‑Savvy Lawyers: If your workflow already runs on Microsoft 365, webmail like Gmail, cloud‑based practice management, and browser‑based legal research tools, your computer’s operating system is now just invisible plumbing 🧑‍🔧 —focus on security, value, and productivity, not whether it’s Windows or Mac. 🔔


That said, lawyers should not mistake the Neo for a no‑compromise replacement for every Windows laptop. The device cannot run Windows natively, and running Windows in a virtual machine on Apple Silicon is possible but not ideal as a core strategy. If your practice still depends on a specific legacy Windows desktop app that has no modern web or Mac equivalent—think an older on‑premises case management system or niche desktop timekeeping tool—you must factor that in, because the Neo is not the machine for you. For everyone else, especially those whose workflow is already centered on Microsoft 365, webmail (e.g., Google), cloud practice management, and browser‑based research tools, the operating system is increasingly just the plumbing under the hood.

This is where today’s SaaS‑driven legal stack changes the analysis. Many of the core tools lawyers now rely on—cloud practice management, document automation, e‑signature, e‑billing, calendaring, and research platforms—are delivered through the browser or platform‑agnostic apps. 🌐 Most modern law‑focused SaaS platforms are built to be OS‑agnostic so they can serve both Windows and Mac firms with a single codebase, and they function similarly across Chrome, Edge, and Safari. That means the historical “Windows has all the legal software” argument is rapidly losing relevance for general practice, especially for solos and small firms that choose mainstream platforms over custom legacy systems.

The ABA Model Rules, however, keep this from being just a hardware shopping discussion. ABA Model Rule 1.1, and especially Comment 8, recognizes that competence now includes understanding “the benefits and risks associated with relevant technology.” That duty of technological competence does not require you to buy the most expensive device, but it does require you to make informed, reasonable choices about the systems you use to handle client information and conduct your practice. When you evaluate the Neo, you are not just deciding what laptop you prefer—you are deciding whether this platform lets you meet your obligations around confidentiality, reliability, uptime, and data handling in a way that is at least as competent as what you have on Windows.

Short‑term costs are where the MacBook Neo is most obviously attractive. At its launch price, it competes directly with mid‑range Windows laptops that often sacrifice build quality, thermals, or battery life to hit a number on the sticker. The Neo offers a brighter display, premium build, and Apple Silicon performance in that same price band, which can translate into less time fighting sluggish hardware and more time focused on client work. For a lawyer with limited to moderate tech skills, that smoother baseline experience can reduce friction, support better document handling, and lower the odds of user‑induced system instability. 🚀

Can attorneys juggle a MacBook Neo, their firm’s SaaS tools, and their ethical duties?

Mid‑term costs—three to five years—are where Apple’s supply chain and design decisions become relevant. Industry reports suggest that rising memory and CPU costs could force many Windows laptop manufacturers to push prices up sharply, while Apple’s long‑term supplier agreements help buffer its MacBooks from the worst of these increases. At the same time, the Neo introduces a more modular, repair‑friendly design than previous MacBooks, with lower out‑of‑warranty battery replacement costs, making mid‑life repairs less painful. For a law firm budgeting over the life of a device, this combination of more stable pricing and more manageable repair costs can make the total cost of ownership more predictable than a similarly priced Windows machine that may face steeper price hikes or cheaper construction.

Long‑term expenses involve more than just hardware. You must consider training, support, integration, and the risk of vendor lock‑in or disruptive platform changes. The Neo ties you more deeply into the macOS ecosystem, which can be a strength if you commit to it, but may introduce friction in a mixed Windows–Mac environment. On the Windows side, there are signs that Microsoft may move more aggressively toward subscription‑driven Windows licensing, especially for Pro editions, which could affect firms that rely heavily on Windows‑specific features. Lawyers already shoulder subscriptions for research services, practice management, and office suites, so a shift toward OS‑level subscription pricing could make the Mac’s relatively stable OS model more attractive over time.

From an ethical perspective, the operating system decision intersects directly with data security and confidentiality. ABA technology‑competence guidance stresses that lawyers must understand the risks of the tools they use, including operating systems, cloud storage, and third‑party services. macOS offers strong sandboxing, disk encryption, and built‑in security protections, but Windows has mature security controls as well, especially in managed environments. The real question is whether, given your own tech comfort level, you can configure and maintain a secure environment more reliably on Windows or on macOS. For many small firms without dedicated IT, the Neo’s controlled hardware–software stack may reduce complexity and thereby reduce risk. (One added, but separate, benefit is the option to purchase AppleCare, Apple’s well-regarded extended warranty program, which can alleviate some of your concerns about future repairs.)

Still, the Neo is not a universal solution. If you are a litigator embedded in a court system that mandates Windows‑only e‑filing tools, if your firm uses an on‑prem Windows server that depends on Windows‑only integrations, or if you rely on specialized Windows‑only deposition or trial software, you will either need to keep a Windows machine in parallel or stay with Windows as your primary platform. Under Model Rule 1.1, knowingly moving to a platform that breaks critical parts of your workflow without a realistic workaround would raise competence concerns. In that sense, the Neo’s OS limitations force you to map your actual workflow—software, integrations, court requirements—rather than treating this as a purely personal preference decision.

Can a lawyer leverage a MacBook Neo and cloud platforms for secure practice?

So does the MacBook Neo qualify as a true “game changer” for lawyers sitting on the Windows‑to‑Mac fence? For a large subset of practitioners—especially solos and small firms who primarily use browser‑based SaaS tools, Microsoft 365, PDF software, and mainstream practice management platforms—the answer is increasingly yes. ✅ The Neo dramatically lowers the entry cost of joining the Mac ecosystem while offering a stable supply‑chain story and credible mid‑term repairability, all within a security model that can satisfy ABA technology‑competence expectations when used thoughtfully.

For others—those deeply tied to legacy Windows software or court‑mandated tools—the Neo may be more of a secondary device than a replacement. But even in those cases, its presence will pressure Windows OEMs to improve build quality, pricing transparency, and long‑term value, which benefits the legal profession regardless of which platform individual lawyers choose. In short, the MacBook Neo is less about abandoning Windows and more about forcing every lawyer to ask a more sophisticated, ethics‑aware question: which platform—Windows, Mac, or a hybrid—best supports competent, secure, and sustainable representation for my clients in the decade ahead?

MTC

MTC: Are Lawyers Really Ready for a Wallet‑Free Future? Digital Wallets, ABA Ethics, and the Reality of Going Fully Cashless 💳⚖️

Tech-savvy lawyers should not leave their physical wallets at home, but they can probably pare them down some.

When previous podcast guest David Sparks over at MacSparky shared his recent post about accidentally going out without his physical wallet—and still making it through the day just fine on his iPhone and Apple Wallet—it captured a quiet shift many of us in the legal profession are grappling with. He walked into his appointment armed only with a digital ID, digital insurance card, and Apple Pay, and everything worked. For a growing number of professionals, that is the new normal. The question for lawyers is more specific: not can we go wallet‑free, but should we—ethically, practically, and professionally—given our obligations under the ABA Model Rules?

Digital wallets are no longer niche tools reserved for tech enthusiasts. Apple Wallet and similar platforms have matured into robust ecosystems that can store payment cards, IDs, insurance cards, transit passes, and even car keys. They sit at the intersection of convenience, security, and risk. As attorneys, we have to examine that intersection with greater rigor than the average consumer, because our technology choices are framed by duties of competence, confidentiality, and client service.

The promise of a wallet‑free practice

On paper, the case for a full digital wallet is compelling. Digital payments can reduce friction at the courthouse café, client lunches, and bar events. Digital IDs eliminate worries about misplacing a physical card. Many platforms add layers of biometric security that traditional wallets can’t match. David notes that Apple Wallet has “been quietly getting better for years,” allowing storage of physical card numbers behind Face ID and making peer‑to‑peer payments a tap away. For a solo or small‑firm lawyer, that friction reduction compounds over time into real efficiency.

From a malpractice‑avoidance standpoint, a digital wallet can be safer than a billfold. Losing a traditional wallet means scrambling to cancel credit cards, monitoring for identity theft, and possibly dealing with unauthorized use of your bar ID or access cards. A lost phone, by contrast, can be located, remotely wiped, or locked with strong authentication. Properly configured, it can reduce risk rather than increase it.

This is where ABA Model Rule 1.1 on competence, particularly Comment 8, becomes relevant. The Comment notes that competent representation includes understanding “the benefits and risks associated with relevant technology.” A digital wallet is very much “relevant technology” for a modern practitioner. Choosing not to understand or use it, especially when it offers better security and traceability than analog methods, may itself become a competence question as the bar’s expectations evolve.

The gaps: cash, IDs, and access to justice

There are plenty of reasons not to go “cashless” when leaving home or the office.

Still, David’s hesitation—“there’s a part of me that still feels compelled to carry a small wallet with my driver’s license in it”—should resonate with lawyers. There are pockets of our professional lives where the ecosystem is not ready, and those pockets matter.

First, cash. Many lawyers still tip courthouse staff, parking attendants, baristas near the courthouse, and others in cash—including, in my case, with $2 bills (yes, they are still produced, still accepted, and available at many banks across the U.S., at least as of this posting; I almost always get an excited smile when I tip my barista with a $2 bill). Cash remains the lowest‑friction, most universally accepted “protocol” for small-scale human interactions. Refusing to carry any cash at all can put you in awkward social and professional situations, especially in older courthouses or local establishments that either do not take cards or resent micro‑transactions by card. For those committed to cash tipping as a personal or professional habit, a purely digital wallet is not yet a substitute.

Second, physical IDs. While TSA and some states are piloting and accepting digital IDs, acceptance is not universal, and the rules are in flux. David notes he has a state digital ID that “shows up nicely” in Apple Wallet. That is great—until you encounter an agency, judge, clerk, or officer who simply will not accept it. Not all jurisdictions recognize mobile driver’s licenses or digital IDs, and some procedures (e.g., certain filings or in‑person notarizations) still presume a physical, inspectable card. The risk is not hypothetical: show up with the wrong form of ID for a flight or a court security checkpoint, and you may face delay, additional fees, or outright denial of entry.

FROM TSA WEBSITE - “If you are unable to provide the required acceptable ID, such as a passport or REAL ID, you can pay a $45 fee to use TSA ConfirmID. TSA will then attempt to verify your identity so you can go through security; however, there is no guarantee TSA can do so.”

✈️ 🌎 ‼️

For lawyers, this is not just an inconvenience—it is a competence and diligence issue under Model Rules 1.1 and 1.3. If your failure to carry an accepted ID means you miss a hearing, delay a filing, or cannot visit a client, you have a professional problem, not just a tech annoyance. Likewise, local court rules and security policies may require a specific bar card or government‑issued ID to enter restricted areas. A digital ID on your phone will not help if the sheriff’s deputy at the door has not been trained or authorized to accept it.

Third, connectivity. A digital wallet that is fully dependent on live internet access is a fragile tool in old courthouses with thick stone walls, in rural jurisdictions, or during emergencies. Many modern digital wallets do allow offline transactions at NFC terminals using stored tokens, but not all. If your payment method, ID, or membership pass depends on a cloud verification step and you are in a dead zone—or your battery dies—you effectively have no wallet. Lawyers who rely on public transit, rideshares, or mobile office setups need to consider this in contingency planning, particularly when punctuality is essential.

Digital wallets and legal ethics

From an ethics perspective, digital wallets intersect with several core duties.

Under Model Rule 1.6, protecting client confidentiality extends to how you pay for and manage client‑related expenses. If you are using peer‑to‑peer payment apps or storing client‑related account details in a digital wallet, you must understand their privacy and data‑sharing practices. Some services expose transaction histories, social feeds, or metadata that could inadvertently reveal client relationships or matter details. Configuring strict privacy settings and separating personal from firm accounts is not optional; it is part of your duty of confidentiality.

Model Rule 1.15 on safekeeping property also comes into play if you ever use digital tools to handle client funds, reimbursements, or settlement distributions. While most bars still require traditional trust accounts and closely regulate payment processors, the trend toward digital payments will continue. Using any digital payment or wallet solution around client funds requires careful vetting, written policies, and—ideally—consultation with your malpractice carrier and bar ethics guidance.

Finally, Model Rule 5.3 on responsibilities regarding nonlawyer assistance extends to IT providers and wallet platforms. If your firm relies on third‑party providers to manage mobile device management (MDM), security, or payment integrations, you must make reasonable efforts to ensure their conduct aligns with your professional obligations. Managing digital wallets on firm‑owned or BYOD devices should be governed by a clear policy that addresses encryption, remote wipe, lock‑screen settings, and acceptable use.

Practical guidance: a hybrid, not a cliff

As advanced as our digital wallets are, the legal professional should carry a combination of digital and physical identification, means of payment, and cash!

Given these realities, are we “truly there” yet for lawyers to go fully wallet‑free? Not quite. For most practitioners, the prudent path is a hybrid approach:

  • Carry a slim physical wallet with a government‑issued ID, bar card (if used locally), a minimal backup payment card, and a small amount of cash for tipping and edge cases.

  • Use a digital wallet as your primary payment and convenience layer, especially in environments where it is well‑supported and secure.

  • Confirm, in advance, what IDs your courthouse, correctional facilities, and agencies accept, and do not assume your digital ID will suffice.

  • Harden your digital wallet: enable strong biometrics, ensure a reputable MDM or security solution manages any firm devices, and separate personal from professional payment flows where possible.

This hybrid approach aligns with Model Rule 1.1’s requirement to understand and responsibly adopt relevant technology while honoring the practical demands of courtroom work and client service. It allows you to benefit from the security and efficiency of digital wallets without betting your professional obligations on the most fragile parts of the ecosystem: universal acceptance and ubiquitous connectivity.

David ends his reflection by asking whether he will ever “truly go out knowingly wallet‑free” and whether he is alone in his hesitation. Lawyers should feel no pressure to be first in line to abandon physical wallets entirely. Our job is to advocate, counsel, and appear—on time, properly identified, and fully prepared. That may mean, for the foreseeable future, living comfortably in both worlds: with a well‑tuned digital wallet in your hand and a minimal, carefully curated physical wallet in your pocket.

MTC

MTC: Even Though AI Hallucinations Are Down: Lawyers STILL MUST Verify AI, Guard PII, and Follow ABA Ethics Rules ⚖️🤖

A Tech-Savvy Lawyer MUST REVIEW AI-Generated Legal Documents

AI hallucinations are reportedly down across many domains. Still, previous podcast guest Dorna Moini is right to warn that legal remains the unnerving exception—and that is where our professional duties truly begin, not end. Her article, “AI hallucinations are down 96%. Legal is the exception,” helpfully shifts the conversation from “AI is bad at law” to “lawyers must change how they use AI,” yet from the perspective of ethics and risk management, we need to push her three recommendations much further. This is not only a product‑design problem; it is a competence, confidentiality, and candor problem under the ABA Model Rules. ⚖️🤖

Her first point—“give AI your actual documents”—is directionally sound. When we anchor AI in contracts, playbooks, and internal standards, we move from free‑floating prediction to something closer to reading comprehension, and hallucinations usually fall. That is a genuine improvement, and Moini is right to emphasize it. But as soon as we start uploading real matter files, we are squarely inside Model Rule 1.6 territory: confidential information, privileged communications, trade secrets, and dense pockets of personally identifiable information. The article treats document‑grounding primarily as an accuracy-and-reliability upgrade, but lawyers and the legal profession must insist that it is first and foremost a data‑governance decision.

Before a single contract is uploaded, a lawyer must know where that data is stored, who can access it, how long it is retained, whether it is used to train shared models, and whether any cross‑border transfers could complicate privilege or regulatory compliance. That analysis should involve not just IT, but also risk management and, in many cases, outside vendors. “Give AI your actual documents” is safe only if your chosen platform offers strict access controls, clear no‑training guarantees, encryption in transit and at rest, and, ideally, firm‑controlled or on‑premise storage. Otherwise, you may be trading a marginal reduction in hallucinations for a major confidentiality incident or regulatory investigation. In other words, feeding AI your documents can be a smart move, but only after you read the terms, negotiate the data protection, and strip or tokenize unnecessary PII. 🔐
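To make “strip or tokenize unnecessary PII” concrete, here is a minimal Python sketch of the idea: obvious identifiers are swapped for stable tokens before a document leaves the firm, and the mapping stays behind. The regular expressions and file name are simplified assumptions and no substitute for a genuine redaction review.

```python
import re

# Simplified, illustrative patterns; real documents contain far more PII than this.
PATTERNS = {
    "SSN": re.compile(r"\b\d{3}-\d{2}-\d{4}\b"),
    "EMAIL": re.compile(r"\b[\w.+-]+@[\w-]+\.[\w.]+\b"),
    "PHONE": re.compile(r"\b\d{3}[-.\s]\d{3}[-.\s]\d{4}\b"),
}

def tokenize_pii(text: str) -> tuple[str, dict[str, str]]:
    """Replace each matched identifier with a token; return the text and the mapping."""
    mapping: dict[str, str] = {}
    for label, pattern in PATTERNS.items():
        for i, value in enumerate(sorted(set(pattern.findall(text)))):
            token = f"[{label}_{i}]"
            mapping[token] = value
            text = text.replace(value, token)
    return text, mapping

if __name__ == "__main__":
    original = open("contract.txt", encoding="utf-8").read()  # hypothetical file
    safe_text, token_map = tokenize_pii(original)
    # safe_text can go to the vetted AI tool; token_map never leaves the firm.
    print(safe_text[:500])
```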

Lawyers need to monitor the data security and PII compliance policies of the AI platforms they use in their legal work.

Moini’s second point—“know which tasks your tool handles reliably”—is also excellent as far as it goes. Document‑grounded summarization, clause extraction, and playbook‑based redlines are indeed safer than open‑ended legal research, and she correctly notes that open‑ended research still demands heavy human verification. Reliability, however, cannot be left to vendor assurances, product marketing, or a single eye‑opening demo. For purposes of Model Rule 1.1 (competence) and 1.3 (diligence), the relevant question is not “Does this tool look impressive?” but “Have we independently tested it, in our own environment, on tasks that reflect our real matters?”

A counterpoint is that reliability has to be measured, not assumed. Firms should sandbox these tools on closed matters, compare AI outputs with known correct answers, and have experienced lawyers systematically review where the system fails. Certain categories of work—final cites in court filings, complex choice‑of‑law questions, nuanced procedural traps—should remain categorically off‑limits to unsupervised AI, because a hallucinated case there is not just an internal mistake; it can rise to misrepresentation to the court under Model Rule 3.3. Knowing what your tool does well is only half of the equation; you must also draw bright, documented lines around what it may never do without human review. 🧪

Her third point—“build verification into the workflow”—is where the article most clearly aligns with emerging ethics guidance from courts and bars, and it deserves strong validation. Judges are already sanctioning lawyers who submit AI‑fabricated authorities, and bar regulators are openly signaling that “the AI did it” will not excuse a lack of diligence. Verification, though, cannot remain an informal suggestion reserved for conscientious partners. It has to become a systematic, auditable process that satisfies the supervisory expectations in Model Rules 5.1 and 5.3.

That means written policies, checklists, training sessions, and oversight. Associates and staff should receive simple, non‑negotiable rules:

✅ Every citation generated with AI must be independently confirmed in a trusted legal research system;

✅ Every quoted passage must be checked against the original source; 

✅ Every factual assertion must be tied back to the record.

Supervising attorneys must periodically spot‑check AI‑assisted work for compliance with those rules. Moini is right that verification matters; the editorial extension is that verification must be embedded into the culture and procedures of the firm. It should be as routine as a conflict check.
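A small script can support that spot‑checking. The hedged sketch below flags reporter‑style citations in an AI‑assisted draft that do not yet appear in a firm‑maintained log of citations confirmed in a trusted research system; the citation pattern and file names are illustrative assumptions, and the exact‑string matching is deliberately naive.

```python
import re

# Rough pattern for common reporter citations (e.g., "576 U.S. 644", "950 F.3d 101").
CITATION_RE = re.compile(
    r"\b\d{1,4}\s+(?:U\.S\.|S\. Ct\.|F\.(?:2d|3d|4th)|F\. Supp\.(?: 2d| 3d)?)\s+\d{1,4}\b"
)

def unverified_citations(draft_text: str, verified: set[str]) -> list[str]:
    """Return citations found in the draft that are not in the verified log."""
    found = set(CITATION_RE.findall(draft_text))
    return sorted(c for c in found if c not in verified)

if __name__ == "__main__":
    draft = open("ai_assisted_brief.txt", encoding="utf-8").read()        # hypothetical
    verified = {line.strip() for line in open("verified_citations.txt")}  # hypothetical
    for cite in unverified_citations(draft, verified):
        print(f"NOT YET CONFIRMED in a trusted research system: {cite}")
```

The output is a to‑do list for a human reviewer, not a verdict; the lawyer still reads every flagged (and unflagged) authority.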

Stepping back from her three‑point framework, the broader thesis—that legal hallucinations can be tamed by better tooling and smarter usage—is persuasive, but incomplete. Even as hallucination rates fall, our exposure is rising because more lawyers are quietly experimenting with AI on live matters. Model Rule 1.4 on communication reminds us that, in some contexts, clients may be entitled to know when significant aspects of their work product are generated or heavily assisted by AI, especially when it impacts cost, speed, or risk. Model Rule 1.2 on scope of representation looms in the background as we redesign workflows: shifting routine drafting to machines does not narrow the lawyer’s ultimate responsibility for the outcome.

Attorneys must verify AI-generated case law.

For practitioners with limited to moderate technology skills, the practical takeaway should be both empowering and sobering. Moini’s article offers a pragmatic starting structure—ground AI in your documents, match tasks to tools, and verify diligently. But you must layer ABA‑informed safeguards on top: treat every AI term of service as a potential ethics document; never drop client names, medical histories, addresses, Social Security numbers, or other PII into systems whose data‑handling you do not fully understand; and assume that regulators may someday scrutinize how your firm uses AI. Every AI‑assisted output must be reviewed line by line.

Legal AI is no longer optional, yet ethics and PII protection are not. The right stance is both appreciative and skeptical: appreciative of Moini’s clear, practitioner‑friendly guidance, and skeptical enough to insist that we overlay her three points with robust, documented safeguards rooted in the ABA Model Rules. Use AI, ground it in your documents, and choose tasks wisely—but do so as a lawyer first and a technologist second. Above all, review your work, stay relentlessly wary of the terms that govern your tools, and treat PII and client confidences as if a bar investigator were reading over your shoulder. In this era, one might be. ⚖️🤖🔐

MTC

MTC: AI may not be your co‑counsel—and a recent SDNY decision just made that painfully clear. ⚖️🤖

SDNY Heppner Ruling: Public AI Use Breaks Attorney-Client Privilege!

In United States v. Heppner, Judge Jed Rakoff of the Southern District of New York ruled that documents a criminal defendant generated with a publicly accessible AI tool and later sent to his lawyers were not protected by either attorney‑client privilege or the work‑product doctrine. That decision should be a wake‑up call for every lawyer who has ever dropped client facts into a public chatbot.

The court’s analysis followed traditional privilege principles rather than futuristic AI theory. Privilege requires confidential communication between a client and a lawyer made for the purpose of obtaining legal advice. In Heppner, the AI tool was “obviously not an attorney,” and there was no “trusting human relationship” with a licensed professional who owed duties of loyalty and confidentiality. Moreover, the platform’s privacy policy disclosed that user inputs and outputs could be collected and shared with third parties, undermining any reasonable expectation of confidentiality. In short, the defendant’s AI‑generated drafts looked less like protected client notes and more like research entrusted to a third‑party service.

For some time now, I have warned practitioners on The Tech‑Savvy Lawyer.Page not to paste client PII or case‑specific facts into generative AI tools, particularly public models whose terms of use and training practices erode confidentiality. We have consistently framed AI as an extension of a lawyer’s existing ethical duties, not a shortcut around them. I have encouraged readers to treat these systems like any other non‑lawyer vendor that must be vetted, contractually constrained, and configured before use. That perspective aligns squarely with Heppner’s outcome: once you treat a public AI as a casual brainstorming partner, you risk treating your client’s confidences as discoverable data.

A Tech-Savvy Lawyer Avoids AI Privilege Waiver With Confidentiality Safeguards!

For lawyers, this has immediate implications under the ABA Model Rules. Model Rule 1.1 on competence now explicitly includes understanding the “benefits and risks associated” with relevant technology, and recent ABA guidance on generative AI emphasizes that uncritical reliance on these tools can breach the duty of competence. A lawyer who casually uses public AI tools with client facts—without reading the terms of use, configuring privacy, or warning the client—may fail the competence test in both technology and privilege preservation. The Tech‑Savvy Lawyer.Page repeatedly underscores this point, translating dense ethics opinions into practical checklists and workflows so that even lawyers with only moderate tech literacy can implement safer practices.

Model Rule 1.6 on confidentiality is equally implicated. If a lawyer discloses client confidential information to a public AI platform that uses data for training or reserves broad rights to disclose to third parties, that disclosure can be treated like sharing with any non‑necessary third party, risking waiver of privilege. Ethical guidance stresses that lawyers must understand whether an AI provider logs, trains on, or shares client data and must adopt reasonable safeguards before using such tools. That means reading privacy policies, toggling enterprise settings, and, in many cases, avoiding consumer tools altogether for client‑specific prompts.

Does a private, paid AI make a difference? Possibly, but only if it is structured like other trusted legal technology. Enterprise or legal‑industry tools that contractually commit not to train on user data and to maintain strict confidentiality can better support privilege claims, because confidentiality and reasonable expectations are preserved. Lexis‑ and Westlaw‑style AI offerings, deployed under robust business associate and security agreements, look more like traditional research platforms or litigation support vendors under Model Rules 5.1 and 5.3, which govern supervisory duties over non‑lawyer assistants. The Tech‑Savvy Lawyer.Page has emphasized this distinction, encouraging lawyers to favor vetted, enterprise‑grade solutions over consumer chatbots when client information is involved.

Enterprise AI Vetting Checklist for Lawyers: Contracts, NDA, No Training

The tech‑savvy lawyer in 2026 is not the one who uses the most AI; it is the one who knows when not to use it. Before entering client facts into any generative AI, lawyers should ask: Is this tool configured to protect client confidentiality? Have I satisfied my duties of competence and communication by explaining the risks to my client (Model Rules 1.1 and 1.4)? And if a court reads this platform’s privacy policy the way Judge Rakoff did, will I be able to defend my privilege claims with a straight face, whether to that court or to a bar disciplinary authority?

AI may be a powerful drafting partner, but it is not your co‑counsel and not your client’s confidant. The tech‑savvy lawyer—of the sort championed by The Tech‑Savvy Lawyer.Page—treats it as a tool: carefully vetted, contractually constrained, and ethically supervised, or not used at all. 🔒🤖

MTC: Lawyers and AI Oversight: What the VA’s Patient Safety Warning Teaches About Ethical Law Firm Technology Use! ⚖️🤖

Human-in-the-loop is the point: Effective oversight happens where AI meets care—aligning clinical judgment, privacy, and compliance with real-world workflows.

The Department of Veterans Affairs’ experience with generative AI is not a distant government problem; it is a mirror held up to every law firm experimenting with AI tools for drafting, research, and client communication. I recently listened to Terry Gerton of the Federal News Network interview Charyl Mason, Inspector General of the Department of Veterans Affairs, in “VA rolled out new AI tools quickly, but without a system to catch mistakes, patient safety is on the line,” and came away with insights into how lawyers can learn from this perhaps hastily implemented AI program. VA clinicians are using AI chatbots to document visits and support clinical decisions, yet a federal watchdog has warned that there is no formal mechanism to identify, track, or resolve AI‑related risks—a “potential patient safety risk” created by speed without governance. In law, that same pattern translates into “potential client safety and justice risk,” because the core failure is identical: deploying powerful systems without a structured way to catch and correct their mistakes.

The oversight gap at the VA is striking. There is no standardized process for reporting AI‑related concerns, no feedback loop to detect patterns, and no clearly assigned responsibility for coordinating safety responses across the organization. Clinicians may have helpful tools, but the institution lacks the governance architecture that turns “helpful” into “reliably safe.” When law firms license AI research platforms, enable generative tools in email and document systems, or encourage staff to “try out” chatbots on live matters without written policies, risk registers, or escalation paths, they recreate that same governance vacuum. If no one measures hallucinations, data leakage, or embedded bias in outputs, risk management has given way to wishful thinking.

Existing ethics rules already tell us why that is unacceptable. Under ABA Model Rule 1.1, competence now includes understanding the capabilities and limitations of AI tools used in practice, or associating with someone who does. Model Rule 1.6 requires lawyers to critically evaluate what client information is fed into self‑learning systems and whether informed consent is required, particularly when providers reuse inputs for training. Model Rules 5.1, 5.2, and 5.3 extend these obligations across partners, supervising lawyers, and non‑lawyer staff: if a supervised lawyer or paraprofessional relies on AI in a way that undermines client protection, firm leadership cannot plausibly claim ignorance. And rules on candor to tribunals make clear that “the AI drafted it” is never a defense to filing inaccurate or fictitious authority.

Explaining the algorithm to decision-makers: Oversight means making AI risks understandable to judges, boards, and the public—clearly and credibly.

What the VA story adds is a vivid reminder that effective AI oversight is a system, not a slogan. The inspector general emphasized that AI can be “a helpful tool” only if it is paired with meaningful human engagement: defined review processes, clear routes for reporting concerns, and institutional learning from near misses. For law practice, that points directly toward structured workflows. AI‑assisted drafts should be treated as hypotheses, not answers. Reasonable human oversight includes verifying citations, checking quotations against original sources, stress‑testing legal conclusions, and documenting that review—especially in high‑stakes matters involving liberty, benefits, regulatory exposure, or professional discipline.

For lawyers with limited to moderate tech skills, this should not be discouraging; done correctly, AI governance actually makes technology more approachable. You do not need to understand model weights or training architectures to ask practical questions: What data does this tool see? When has it been wrong in the past? Who is responsible for catching those errors before they reach a client, a court, or an opposing party? Thoughtful prompts, standardized checklists for reviewing AI output, and clear sign‑off requirements are all well within reach of every practitioner.

The VA’s experience also highlights the importance of mapping AI uses and classifying their risk. In health care, certain AI use cases are obviously safety‑critical; in law, the parallel category includes anything that could affect a person’s freedom, immigration status, financial security, public benefits, or professional license. Those use cases merit heightened safeguards: tighter access control, narrower scoping of AI tasks, periodic sampling of outputs for quality, and specific training for the lawyers who use them. Importantly, this is not a “big‑law only” discipline. Solo and small‑firm lawyers can implement proportionate governance with simple written policies, matter‑level notes showing how AI was used, and explicit conversations with clients where appropriate.

Critically, AI does not dilute core professional responsibility. If a generative system inserts fictitious cases into a brief or subtly mischaracterizes a statute, the duty of candor and competence still rests squarely on the attorney who signs the work product. The VA continues to hold clinicians responsible for patient care decisions, even when AI is used as a support tool; the law should be no different. That reality should inform how lawyers describe AI use in engagement letters, how they supervise junior lawyers and staff, and how they respond when AI‑related concerns arise. In some situations, meeting ethical duties may require forthright client communication, corrective filings, and revisions to internal policies.

AI oversight starts at the desk: Lawyers must be able to interrogate model outputs, data quality, and risk signals—before technology impacts the clients they serve.

The practical lesson from the VA’s AI warning is straightforward. The “human touch” in legal technology is not a nostalgic ideal; it is the safety mechanism that makes AI ethically usable at all. Lawyers who embrace AI while investing in governance—policies, training, and oversight calibrated to risk—will be best positioned to align with the ABA’s evolving guidance, satisfy courts and regulators, and preserve hard‑earned client trust. Those who treat AI as a magic upgrade and skip the hard work of oversight are, knowingly or not, accepting that their clients may become the test cases that reveal where the system fails. In a profession grounded in judgment, the real innovation is not adopting AI; it is designing a practice where human judgment still has the final word.

MTC

MTC: Everyday Tech, Extraordinary Evidence—Again: How Courts Are Punishing Fake Digital and AI Data ⚖️📱

Check your AI work: AI fraud can meet courtroom consequences.

In last month’s editorial, “Everyday Tech, Extraordinary Evidence,” we walked through how smartphones, dash cams, and wearables turned the Minnesota ICE shooting into a case study in modern evidence practice, from rapid preservation orders to multi‑angle video timelines.📱⚖️ We focused on the positive side: how deliberate intake, early preservation, and basic synchronization tools can turn ordinary devices into case‑winning proof.📹 This follow‑up tackles the other half of the equation—what happens when “evidence” itself is fake, AI‑generated, or simply unverified slop, and how courts are starting to respond with serious sanctions.⚠️

From Everyday Tech to Everyday Scrutiny

The original article urged you to treat phones and wearables as critical evidentiary tools, not afterthoughts: ask about devices at intake, cross‑reference GPS trails, and treat cars as rolling 360‑degree cameras.🚗⌚ We also highlighted the Minnesota Pretti shooting as an example of how rapid, court‑ordered preservation of video and other digital artifacts can stop crucial evidence from “disappearing” before the facts are fully understood.📹 Those core recommendations still stand—if anything, they are more urgent now that generative AI makes it easier to fabricate convincing “evidence” that never happened.🤖

The same tools that helped you build robust, data‑driven reconstructions—synchronized bystander clips, GPS logs, wearables showing movement or inactivity—are now under heightened scrutiny for authenticity.📊 Judges and opposing counsel are no longer satisfied with “the video speaks for itself”; they want to know who created it, how it was stored, whether metadata shows AI editing, and what steps counsel took to verify that the file is what it purports to be.📁

When “Evidence” Is Fake: Sanctions Arrive

We have moved past the hypothetical stage. Courts are now issuing sanctions—sometimes terminating sanctions—when parties present fake or AI‑generated “evidence” or unverified AI research.💥

These sanction orders are not “techie” footnotes; they are vivid warnings that falsified or unverified digital and AI data can end careers and destroy cases.🚨

ABA Model Rules: The Safety Rails You Ignore at Your Peril

Train to verify—defend truth in the age of AI.

Your original everyday‑tech playbook already fits neatly within ABA Model Rule 1.1 and Comment 8’s duty of technological competence; the new sanctions landscape simply clarifies the stakes.📚

  • Rule 1.1 (Competence): You must understand the benefits and risks of relevant technology, which now clearly includes generative AI and deepfake tools.⚖️ Using AI to draft or “enhance” without checking the output is not a harmless shortcut—it is a competence problem.

  • Rule 1.6 (Confidentiality): Uploading client videos, wearable logs, or sensitive communications to consumer‑grade AI sites can expose them to unknown retention and training practices, risking confidentiality violations.🔐

  • Rule 3.3 (Candor to the Tribunal) and Rule 4.1 (Truthfulness): Presenting AI‑altered video or fake citations as if they were genuine is the very definition of misrepresentation, as the New York and California sanction orders make clear.⚠️ Even negligent failure to verify can be treated harshly once the court’s patience for AI excuses runs out.

  • Rules 5.1–5.3 (Supervision): Supervising lawyers must ensure that associates, law clerks, and vendors understand that AI outputs are starting points, not trustworthy final products, and that fake or manipulated digital evidence will not be tolerated.👥

Bridging Last Month’s Playbook With Today’s AI‑Risk Reality

In last month’s editorial, we urged three practical habits: ask about devices, move fast on preservation, and build a vendor bench for extraction and authentication.📱⌚🚗 This month, the job is to wrap those habits in explicit AI‑risk controls that lawyers with modest tech skills can realistically follow.🧠

  1. Never treat AI as a silent co‑counsel. If you use AI to draft research, generate timelines, or “enhance” video, you must independently verify every factual assertion and citation, just as you would double‑check a new associate’s memo.📑 “The AI did it” is not a defense; courts have already said so.

  2. Preserve the original, disclose the enhancement. Our earlier advice to keep raw smartphone files and dash‑cam footage now needs one more step: if you use any enhancement (AI or otherwise), label it clearly and be prepared to explain what was done, why, and how you ensured that the content did not change.📹

  3. Use vendors and examiners as authenticity firewalls. Just as we suggested bringing in digital forensics vendors to extract phone and wearable data, you should now consider them for authenticity challenges as well—especially where the opposing side may have incentives or tools to create deepfakes.🔍 A simple expert declaration that a file shows signs of AI manipulation can be the difference between a credibility battle and a terminating sanction.

  4. Train your team using real sanction orders. Nothing clarifies the risk like reading Judge Castel’s order in the ChatGPT‑citation case or Judge Kolakowski’s deepfake ruling in Mendones.⚖️ Incorporate those cases into short internal trainings and CLEs; they translate abstract “AI ethics” into concrete, courtroom‑tested consequences.

  5. Document your verification steps. For everyday tech evidence, a simple log—what files you received, how you checked metadata, whether you compared against other sources, which AI tools (if any) you used, and what you did to confirm their outputs—can demonstrate good faith if a judge later questions your process.📋 A minimal sketch of what such an intake log could look like appears after this list.
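
Because that last step is the easiest to skip under deadline pressure, here is a minimal sketch of what an evidence‑intake and enhancement log could entail. It is illustrative only: the helper names, fields, and file names are assumptions rather than a forensic standard, and a digital‑forensics vendor may well supply a more rigorous chain‑of‑custody record.

```python
"""Minimal sketch of an evidence-intake integrity record.

Illustrative only: the helper names, log fields, and file names are assumptions,
not a forensic standard. The idea is to fingerprint each raw file on receipt
(SHA-256) and note any later enhancement, so you can show that the original was
preserved and the processed version was disclosed.
"""
import hashlib
import json
from datetime import datetime, timezone
from pathlib import Path


def sha256_of(path: Path) -> str:
    """Hash the file in chunks so large video files do not exhaust memory."""
    digest = hashlib.sha256()
    with path.open("rb") as f:
        for chunk in iter(lambda: f.read(1 << 20), b""):
            digest.update(chunk)
    return digest.hexdigest()


def record_intake(file_name: str, source: str, enhancement: str | None = None,
                  log_path: str = "evidence_intake_log.jsonl") -> dict:
    """Append one JSON line per file: where it came from, its hash, any processing."""
    entry = {
        "received_utc": datetime.now(timezone.utc).isoformat(),
        "file": file_name,
        "source": source,                      # e.g., client phone, dash cam, bystander clip
        "sha256": sha256_of(Path(file_name)),
        "enhancement": enhancement or "none",  # describe any AI or manual processing
    }
    with open(log_path, "a") as log:
        log.write(json.dumps(entry) + "\n")
    return entry


if __name__ == "__main__":
    # Hypothetical file names for illustration only; skip any that are not present.
    examples = [
        ("dashcam_raw.mp4", "client vehicle dash cam", None),
        ("dashcam_stabilized.mp4", "vendor deliverable",
         "stabilization only; content unchanged per vendor report"),
    ]
    for name, src, note in examples:
        if Path(name).exists():
            record_intake(name, source=src, enhancement=note)
```

Even lawyers who never run a script can adopt the underlying habit: fingerprint (or have a vendor fingerprint) every raw file on receipt, keep the raw file untouched, and log any enhancement right next to the original.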

Final Thoughts: Authenticity as a First‑Class Question

Be the rock star! Know how to use AI responsibly in your work!

In the first editorial, the core message was that everyday devices are quietly turning into your best witnesses.📱⌚ The new baseline is that every such “witness” will be examined for signs of AI contamination, and you will be expected to have an answer when the court asks, “What did you do to make sure this is real?”🔎

Lawyers with limited to moderate tech skills do not need to reverse‑engineer neural networks or master forensic software. Instead, they must combine the practical habits from January’s piece—asking, preserving, synchronizing—with a disciplined refusal to outsource judgment to AI.⚖️ In an era of deepfakes and hallucinated case law, authenticity is no longer a niche evidentiary issue; it is the moral center of digital advocacy.✨

Handled wisely, your everyday tech strategy can still deliver “extraordinary evidence.” Handled carelessly, it can just as quickly produce extraordinary sanctions.🚨

MTC

MTC: Clio–Alexi Legal Tech Fight: What CRM Vendor Litigation Means for Your Law Firm, Client Data and ABA Model Rule Compliance ⚖️💻

Competence, Confidentiality, Vendor Oversight!

When the companies behind your CRM and AI research tools start suing each other, the dispute is not just “tech industry drama” — it can reshape the practical and ethical foundations of your practice. At a basic to moderate level, the Clio–Alexi fight is about who controls valuable legal data, how that data can be used to power AI tools, and whether one side is using its market position unfairly. Clio (a major practice‑management and CRM platform) is tied to legal research tools and large legal databases. Alexi is a newer AI‑driven research company that depends on access to caselaw and related materials to train and deliver its products. In broad strokes, one side claims the other misused or improperly accessed data and technology; the other responds that the litigation is “sham” or anticompetitive, designed to limit a smaller rival and protect a dominant ecosystem. There are allegations around trade secrets, data licensing, and antitrust‑style behavior. None of that may sound like your problem — until you remember that your client data, workflows, and deadlines live inside tools these companies own, operate, or integrate with.

For lawyers with limited to moderate technology skills, you do not need to decode every technical claim in the complaints and counterclaims. You do, however, need to recognize that vendor instability, lawsuits, and potential regulatory scrutiny can directly touch: your access to client files and calendars, the confidentiality of matter information stored in the cloud, and the long‑term reliability of the systems you use to serve clients and get paid. Once you see the dispute in those terms, it becomes squarely an ethics, risk‑management, and governance issue — not just “IT.”

ABA Model Rule 1.1: Competence Now Includes Tech and Vendor Risk

Model Rule 1.1 requires “competent representation,” which includes the legal knowledge, skill, thoroughness, and preparation reasonably necessary for the representation. In the modern practice environment, that has been interpreted to include technology competence. That does not mean you must be a programmer. It does mean you must understand, in a practical way, the tools on which your work depends and the risks they bring.

If your primary CRM, practice‑management system, or AI research tool is operated by a company in serious litigation about data, licensing, or competition, that is a material fact about your environment. Competence today includes: knowing which mission‑critical workflows rely on that vendor (intake, docketing, conflicts, billing, research, etc.); having at least a baseline sense of how vendor instability could disrupt those workflows; and building and documenting a plan for continuity — how you would move or access data if the worst‑case scenario occurred (for example, a sudden outage, injunction, or acquisition). Failing to consider these issues can undercut the “thoroughness and preparation” the Rule expects. Even if your firm is small or mid‑sized, and even if you feel “non‑technical,” you are still expected to think through these risks at a reasonable level.

ABA Model Rule 1.6: Confidentiality in a Litigation Spotlight

Model Rule 1.6 is often front of mind when lawyers think about cloud tools, and the Clio–Alexi dispute reinforces why. When a technology company is sued, its systems may become part of discovery. That raises questions like: what types of client‑related information (names, contact details, matter descriptions, notes, uploaded files) reside on those systems; under what circumstances that information could be accessed, even in redacted or aggregate form, by litigants, experts, or regulators; and how quickly and completely you can remove or export client data if a risk materializes.

You remain the steward of client confidentiality, even when data is stored with a third‑party provider. A reasonable, non‑technical but diligent approach includes: understanding where your data is hosted (jurisdictions, major sub‑processors, data‑center regions); reviewing your contracts or terms of service for clauses about data access, subpoenas, law‑enforcement or regulatory requests, and notice to you; and ensuring you have clearly defined data‑export rights — not only if you voluntarily leave, but also if the vendor is sold, enjoined, or materially disrupted by litigation. You are not expected to eliminate all risk, but you are expected to show that you considered how vendor disputes intersect with your duty to protect confidential information.

ABA Model Rule 5.3: Treat Vendors as Supervised Non‑Lawyer Assistants

ABA Rules for Modern Legal Technology can be a factor when legal tech companies fight!

Model Rule 5.3 requires lawyers to make reasonable efforts to ensure that non‑lawyer assistants’ conduct is compatible with professional obligations. In 2026, core technology vendors — CRMs, AI research platforms, document‑automation tools — clearly fall into this category.

You are not supervising individual programmers, but you are responsible for: performing documented diligence before adopting a vendor (security posture, uptime, reputation, regulatory or litigation history); monitoring for material changes (lawsuits like the Clio–Alexi matter, mergers, new data‑sharing practices, or major product shifts); and reassessing risk when those changes occur and adjusting your tech stack or contracts accordingly. A litigation event is a signal that “facts have changed.” Reasonable supervision in that moment might mean: having someone (inside counsel, managing partner, or a trusted advisor) read high‑level summaries of the dispute; asking the vendor for an explanation of how the litigation affects uptime, data security, and long‑term support; and considering whether you need contractual amendments, additional audit rights, or a backup plan with another provider. Again, the standard is not perfection, but reasoned, documented effort.

How the Clio–Alexi Battle Can Create Problems for Users

A dispute at this scale can create practical, near‑term friction for everyday users, quite apart from any final judgment. Even if the platforms remain online, lawyers may see more frequent product changes, tightened integrations, shifting data‑sharing terms, or revised pricing structures as companies adjust to litigation costs and strategy. Any of these changes can disrupt familiar workflows, create confusion around where data actually lives, or complicate internal training and procedures.

There is also the possibility of more subtle instability. For example, if a product roadmap slows down or pivots under legal pressure, features that firms were counting on — for automation, AI‑assisted drafting, or analytics — may be delayed or re‑scoped. That can leave firms who invested heavily in a particular tool scrambling to fill functionality gaps with manual workarounds or additional software. None of this automatically violates any rule, but it can introduce operational risk that lawyers must understand and manage.

In edge cases, such as a court order that forces a vendor to disable key features on short notice or a rapid sale of part of the business, intense litigation can even raise questions about long‑term continuity. A company might divest a product line, change licensing models, or settle on terms that affect how data can be stored, accessed, or used for AI. Firms could then face tight timelines to accept new terms, migrate data, or re‑evaluate how integrated AI features operate on client materials. Without offering any legal advice about what an individual firm should do, it is fair to say that paying attention early — before options narrow — is usually more comfortable than reacting after a sudden announcement or deadline.

Practical Steps for Firms at a Basic–Moderate Tech Level

You do not need a CIO to respond intelligently. For most firms, a short, structured exercise will go a long way:

Practical Tech Steps for Today’s Law Firms

  1. Inventory your dependencies. List your core systems (CRM/practice management, document management, time and billing, conflicts, research/AI tools) and note which vendors are in high‑profile disputes or under regulatory or antitrust scrutiny; a minimal inventory sketch appears after this list.

  2. Review contracts for safety valves. Look for data‑export provisions, notice obligations if the vendor faces litigation affecting your data, incident‑response timelines, and business‑continuity commitments; capture current online terms.

  3. Map a contingency plan. Decide how you would export and migrate data if compelled by ethics, client demand, or operational need, and identify at least one alternative provider in each critical category.

  4. Document your diligence. Prepare a brief internal memo or checklist summarizing what you reviewed, what you concluded, and what you will monitor, so you can later show your decisions were thoughtful.

  5. Communicate without alarming. Most clients care about continuity and confidentiality, not vendor‑litigation details; you can honestly say you monitor providers, have export and backup options, and have assessed the impact of current disputes.
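
As a starting point for step 1, here is a minimal sketch of a vendor‑dependency inventory kept as structured data. It is illustrative only: the vendor names, categories, and notes are placeholders rather than assessments of any real product, and the same columns work just as well in a shared spreadsheet.

```python
"""Minimal sketch of a vendor-dependency inventory.

Illustrative only: the vendor names, categories, and risk notes below are
placeholders, not assessments of any real product. The goal is one structured
list of mission-critical systems, the client data they hold, and the fallback
if a vendor is disrupted.
"""
import csv

INVENTORY = [
    {
        "system": "practice management / CRM",
        "vendor": "Vendor A (placeholder)",
        "client_data_stored": "contacts, matter notes, calendars, billing",
        "export_method": "built-in CSV export; confirm it covers documents",
        "fallback": "Vendor B trial account; manual calendar export",
        "watch_items": "pending litigation, ownership changes, terms-of-service updates",
    },
    {
        "system": "AI legal research",
        "vendor": "Vendor C (placeholder)",
        "client_data_stored": "research queries only; no client documents uploaded",
        "export_method": "saved research memos live in document management",
        "fallback": "traditional research platform",
        "watch_items": "training-data terms, data-sharing changes",
    },
]


def save_inventory(rows: list[dict], path: str = "vendor_inventory.csv") -> None:
    """Write the inventory to CSV so it can be dated, reviewed, and updated over time."""
    with open(path, "w", newline="") as f:
        writer = csv.DictWriter(f, fieldnames=list(rows[0].keys()))
        writer.writeheader()
        writer.writerows(rows)


if __name__ == "__main__":
    save_inventory(INVENTORY)
    print(f"Saved {len(INVENTORY)} systems to vendor_inventory.csv")
```

The value is not in the file format; it is in forcing the firm to write down, in one place, what data each system holds, how it exports, and what the fallback would be, and then revisiting that page whenever a vendor makes the news.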

From “IT Problem” to Core Professional Skill

The Clio–Alexi litigation is a prominent reminder that law practice now runs on contested digital infrastructure. The real message for working lawyers is not to flee from technology but to fold vendor risk into ordinary professional judgment. If you understand, at a basic to moderate level, what the dispute is about — data, AI training, licensing, and competition — and you take concrete steps to evaluate contracts, plan for continuity, and protect confidentiality, you are already practicing technology competence in a way the ABA Model Rules contemplate. You do not have to be an engineer to be a careful, ethics‑focused consumer of legal tech. By treating CRM and AI providers as supervised non‑lawyer assistants, rather than invisible utilities, you position your firm to navigate future lawsuits, acquisitions, and regulatory storms with far less disruption. That is good risk management, sound ethics, and, increasingly, a core element of competent lawyering in the digital era. 💼⚖️