MTC: Should Lawyers Host Their Own AI (or Hybrid AI)?

Lawyers need to weigh hosting AI against ABA ethics in modern practice.

Lawyers are being pushed to decide whether to host their own artificial intelligence systems, rely entirely on cloud tools, or adopt a hybrid model that uses both local and cloud-based AI.🌐 At the same time, the American Bar Association’s Formal Opinion 512 makes clear that AI use sits squarely inside existing duties of competence, confidentiality, communication, candor, supervision, and fees under the Model Rules of Professional Conduct.

Perplexity’s new “Personal Computer” platform is a vivid example of how this can work in practice: it can run as an always‑on AI agent on a Mac mini, with access to local files, native apps, and cloud models, effectively turning a spare Mac into a dedicated digital worker. For lawyers, that kind of setup is appealing because a Mac mini can sit in the office as a sandboxed machine, disconnected from the main network and primary cloud file storage, to tightly control what AI can see and where client data goes.🧱

Why Lawyers Are Tempted to Host Their Own or Hybrid AI

There are several practical reasons lawyers and law firms are looking at running AI locally, or in a hybrid configuration that blends on‑premise and cloud tools:

  • Control over client data. Running AI on a dedicated Mac mini or similar device gives the firm direct control over where data is stored, which apps it can touch, and whether it ever leaves the office environment.

  • 24/7 “digital worker.” Platforms like Perplexity’s Personal Computer can operate continuously, orchestrating multiple models, moving between local files and the web, and even continuing work that you start on your phone while you are away.⚙️

  • Integration with local files and apps. A local or hybrid agent can read your document management folders, draft or revise motions in your word processor, and compare local files with online sources without sending entire client datasets to a general‑purpose cloud chatbot.

  • Potential cost and performance benefits. For some workflows, once the hardware is in place, local or hybrid AI can be more predictable in cost and latency than pure pay‑per‑token cloud services, especially when workloads are steady and repetitive.💸

From an ethics standpoint, these benefits map directly onto Model Rule 1.1’s requirement that lawyers maintain technological competence, which now includes a duty to understand both the capabilities and the limitations of AI tools they deploy in practice. If you can explain how your on‑premise or hybrid AI is configured, what data it sees, and why you chose that architecture, you are already moving toward satisfying that duty of competence in your technology choices.

ABA Model Rules: Key Considerations for Self‑Hosted and Hybrid AI

The ABA’s Formal Opinion 512 does not mandate or prohibit self‑hosting, but it does identify core ethical duties that must guide any AI deployment. For lawyers thinking about a sandboxed computer or hybrid AI, several Model Rules are especially important:

  • Model Rule 1.1 (Competence). You must understand enough about the AI system—local or cloud—to evaluate its reliability, security, and appropriate use, including risks like hallucinations, outdated information, and bias.

  • Model Rule 1.4 (Communication). In many situations, you may need to tell clients that you are using generative AI—and how—so they can make informed decisions about the representation.

  • Model Rule 1.5 (Fees). If you bill for AI‑assisted work, your fees still must be reasonable; you cannot simply pass through AI costs without regard to value, and you cannot charge as if the work were done entirely by hand.

  • Model Rule 1.6 (Confidentiality). Client information must be protected whether it is processed on‑premise or in the cloud, which means assessing encryption, access controls, logging, and whether AI vendors can use your data to train their models.

  • Model Rules 3.3 and 4.1 (Candor and Truthfulness). You must not present AI‑generated work product that you have not verified, and you must correct any false or misleading statements to tribunals or others if AI contributes to those errors.

  • Model Rules 5.1 and 5.3 (Supervision). Partners and managing lawyers must implement reasonable policies, training, and oversight to ensure that both lawyers and non‑lawyer staff use AI tools in compliance with ethical obligations. 

Formal Opinion 512 underscores that using generative AI does not reduce any of these obligations; rather, it adds new vectors for potential violations, including inadvertent disclosure through “self‑learning” tools that retain prompts to improve their models. A self‑hosted or sandboxed system can reduce some of these risks but does not eliminate the need for careful configuration, testing, and ongoing oversight.🔍

The Case for a Sandboxed Mac Mini or Similar Setup

Attorneys can test sandboxed computers for ABA-compliant, secure AI workflows.

A compelling middle road is to run your AI assistant as an always‑on agent on a dedicated, sandboxed machine—such as a Mac mini—segregated from your primary network and cloud storage, and then carefully curate what you allow it to access. Perplexity’s Personal Computer is designed to run 24/7 on a Mac mini, with secure sandboxed file creation, visible actions, and a kill switch, which can help align AI use with ethical expectations of control and auditability.🧑‍💻

For law practices with limited to moderate technology skills, this architecture offers practical advantages:

  • You can keep the AI’s working directory separate from your main document management system, copying in only those files you want it to analyze.

  • You can disconnect the sandbox machine from your firm’s primary VPN and file‑syncing tools, reducing the attack surface for client data.💽

  • You can log and periodically review what the AI agent is doing—what files it opens, what tasks it runs—to support your supervisory duties under Rules 5.1 and 5.3. A minimal sketch of such a review follows this list.
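
To make that supervisory review concrete, here is a minimal Python sketch of a periodic log check, assuming the agent writes a plain‑text activity log; the log path, log format, and sandbox folder are hypothetical placeholders, not Perplexity’s actual configuration.

```python
from pathlib import Path

# Hypothetical paths -- point these at wherever your agent actually logs
# and at the one folder the agent is permitted to touch.
AGENT_LOG = Path("/Users/aiagent/Library/Logs/agent-activity.log")
ALLOWED_DIR = Path("/Users/aiagent/Sandbox")

def review_agent_log(log_path: Path, allowed_dir: Path) -> list[str]:
    """Return log lines that mention file paths outside the approved sandbox."""
    flagged = []
    for line in log_path.read_text().splitlines():
        # Assumes any file the agent touches appears in the log as an absolute path.
        for token in line.split():
            if token.startswith("/") and not token.startswith(str(allowed_dir)):
                flagged.append(line)
                break
    return flagged

if __name__ == "__main__":
    for entry in review_agent_log(AGENT_LOG, ALLOWED_DIR):
        print("REVIEW:", entry)
```

Even a simple check like this, run on a regular schedule and kept with your records, helps document active supervision rather than passive trust.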

Because Perplexity’s Personal Computer can orchestrate teams of models and interact with local files and cloud services in one system, it embodies the hybrid AI idea: use local control for sensitive matters, and selectively rely on cloud models for broader research or drafting where appropriate safeguards are in place. That kind of hybrid strategy aligns well with the ABA’s focus on risk‑based analysis rather than a one‑size‑fits‑all prohibition.⚖️

Why Some Lawyers Should Not Host Their Own AI (At Least Not Yet)

Self‑hosting or running a hybrid computer‑based AI platform is not the right answer for every firm, and in some practices, it may actually increase risk. If your firm cannot realistically manage updates, patches, access controls, and backups for a dedicated AI machine, a reputable cloud provider with strong security and clear contractual commitments may be a safer option. Many lawyers underestimate the work required to securely configure and maintain specialized systems, which can lead to misconfigurations that expose confidential information or disable audit logs you may need for internal investigations or regulatory inquiries.

There is also a risk of overconfidence: having an AI agent running on your own hardware can create a false sense that everything processed on that machine is automatically safe and ethically sound.😬 Formal Opinion 512 warns that self‑learning AI tools can leak information across matters, even within a single firm, if they are not properly isolated; that risk exists whether the system runs on your computer or in the cloud. For many small firms and solos, the most ethical and efficient path may be to use vetted, well‑documented cloud AI tools under strict internal policies rather than trying to build and secure a home‑grown AI infrastructure.

Finally, if you lack even moderate technology literacy, jumping straight to a self‑hosted AI environment can distract from more foundational tasks like implementing a written AI policy, training staff on prompt hygiene, and integrating AI use into your conflict checks and quality control processes. In those cases, simpler deployments—such as using browser‑based AI tools with no client identifiers and careful manual review—can be more defensible under the Model Rules.

Practical Takeaways for Ethics‑Focused AI Adoption

An ethics-focused lawyer can consider using a hybrid AI under the ABA Model Rules.

For lawyers and firms considering self‑hosted or hybrid AI, several practical steps emerge from the ABA guidance and from the new generation of self‑hosted AI platforms:

  • Start with a written AI policy that maps to Model Rules 1.1, 1.4, 1.5, 1.6, 3.3, 4.1, 5.1, and 5.3 and distinguishes between internal experimentation and client‑facing use.

  • If you deploy a sandboxed Mac mini or similar, define precisely which files and apps it may access, how it will be backed up, and who has administrative control.🔐

  • Treat AI outputs as drafts that require human review, not as final work product, and document your review in a way that aligns with your quality‑control procedures.

  • Train all users—not just IT—on how the Personal Computer or other AI system operates, what logs are available, and how to shut it down if it behaves unexpectedly.

  • Revisit your configuration and vendor contracts regularly, including any terms about data retention, training, and breach notification, to ensure ongoing compliance with revised ethics guidance and state‑level opinions.📜

In that light, the question is not whether lawyers should or should not host their own AI, but whether they can do so in a way that satisfies the ABA’s expectations for competence, confidentiality, and supervision while delivering real value to clients. For some, a carefully configured sandboxed Mac mini running a hybrid AI agent will be a powerful, ethical accelerator; for others, the more responsible choice is to rely on well‑governed cloud tools until their internal capabilities catch up.

MTC

MTC: Smart Recording, Client Secrets, and HeyPocket: What Every Lawyer Needs to Know in 2026 📱⚖️

Your smartphone and AI note‑taking tools now sit in on more client conversations than many junior associates.📱 They track where you are, who you talk to, and—if you let them—what you and your clients say in real time. For lawyers, that convenience comes with concrete privilege, confidentiality, and compliance risks that cannot be ignored.⚖️

Smart Devices, AI Note‑Takers, and Constant Surveillance 📍

Modern smart devices already log GPS coordinates, Wi‑Fi networks, Bluetooth connections, and app activity, creating a rich behavioral profile of you and your clients. Smart speakers and voice assistants listen for wake words, but they sometimes capture snippets of nearby conversations and send them to remote servers for processing. Fitness wearables, in‑car systems, and “always‑on” microphones further increase the volume of ambient data that can be collected.

Against that background, AI‑enabled recorders and summarizers like HeyPocket add a new layer: deliberate recording, transcription, and AI analysis of your conversations. HeyPocket is marketed as an AI‑powered “thought companion” and conversation recorder that creates searchable summaries and action items; by design it captures each conversation as its own object to improve clarity and support consent‑based use. For a busy lawyer, this is appealing—automatic notes, organized insights, and fewer missed follow‑ups.🤖

Yet the same capabilities that make HeyPocket useful also make it ethically sensitive. You are no longer just allowing your phone to passively log metadata; you are actively routing client speech through a third‑party AI stack that stores and processes that data, subject to its own privacy policy, security posture, and retention rules.

ABA Model Rules: Competence, Confidentiality, and Truthfulness ⚖️

The ABA Model Rules already give you a clear framework for evaluating whether and how to use tools like HeyPocket in practice.

  • Model Rule 1.1 (Competence) and Comment 8 require lawyers to understand “the benefits and risks associated with relevant technology.” In this context, “relevant technology” includes AI‑driven recorders, their data flows, and their vendor terms. Using a tool you do not understand can be a competence problem, not just a convenience choice.⚠️

  • Model Rule 1.6 (Confidentiality) requires “reasonable efforts” to prevent unauthorized access or disclosure of client information, which now includes avoiding casual sharing of contacts, calendars, and conversations with apps or cloud services that may let humans review or monetize the data. Several state bar opinions already warn that lawyers may not simply click “Allow” when apps request access to contacts or case‑related data unless they determine the information will not be viewed by humans or transferred without client consent.

  • ABA Formal Opinion 477R outlines a risk‑based analysis for electronic communications, asking you to weigh sensitivity, likelihood of disclosure, cost of safeguards, impact on representation, client expectations, and requests for enhanced security. That same method applies directly to AI recorders: you must ask whether routing privileged discussions through an AI vendor is “reasonable” given the stakes of the matter.

  • ABA Formal Opinion 498 specifically calls out always‑listening smart devices and recommends disabling them during client communications to avoid unnecessary exposure to third parties. If you would mute Alexa for an intake call, you should think even more carefully before inviting an AI recording service into the room.

Model Rules 5.1 and 5.3 (supervision of lawyers and non‑lawyer assistants) also matter. If you roll out AI note‑takers firmwide, you must implement policies, training, and oversight to ensure that lawyers, staff, and vendors handle client data consistently with confidentiality obligations. And Rule 8.4(c) (prohibition on dishonesty or deception) can be implicated if you secretly record clients, witnesses, or opposing parties even in one‑party consent jurisdictions; at least one ethics authority has treated undisclosed recordings as unethical despite being legal.

When AI Recordings and Smart Data Become Evidence 🧾

Courts have already embraced smart‑device data as evidence: location records, communication metadata, calendar entries, and app logs routinely appear in both criminal and civil litigation. Forensic tools can image a device and surface location histories, messages, and app‑generated artifacts that can reconstruct events with surprising precision.

AI tools are now entering that evidentiary picture. In United States v. Heppner (S.D.N.Y. 2026), a defendant’s use of a public AI platform to analyze his legal situation—and the documents he generated from those conversations—was held not to be protected by attorney‑client privilege or the work‑product doctrine. The court emphasized that the AI provider’s terms of service allowed collection and disclosure of prompts and outputs, so the defendant had no reasonable expectation of confidentiality.

The lesson for lawyers is direct: if you or your clients feed sensitive matter details into an AI recorder or note‑taker whose policies allow human review, secondary uses, or disclosure to third parties, privilege can be placed at risk. Vendor marketing language about security cannot substitute for a real review of actual terms, retention practices, and opt‑out mechanisms.

Using HeyPocket and Similar Tools Ethically in Practice 🎙️

Ethical use of HeyPocket and similar tools is possible, but it is not “plug‑and‑play.” You should treat these platforms more like outsourced e‑discovery vendors than like harmless productivity apps.✅

Key practical steps include:

  1. Perform a documented vendor risk review (a documentation sketch follows this list). Read the privacy policy and data‑processing terms to see what is recorded, how long it is stored, whether data is used to train models, and what rights you and your clients have to delete or export recordings. Confirm that access is logged and limited, and that data is encrypted in transit and at rest.

  2. Limit what you record. Default to not recording privileged conversations unless you have a clear, articulable reason, a defensible risk assessment, and—in higher‑risk matters—informed client consent. Use tools like HeyPocket in lower‑sensitivity contexts (internal debriefs, CLE notes, public presentations) rather than as an automatic recorder of all client meetings.

  3. Use explicit disclosures and consent. In many jurisdictions, recording requires the consent of all parties; even where only one‑party consent is required, an undisclosed recording can still trigger ethical concerns. A short, plain‑language explanation (“We use an AI note‑taking assistant that will record and transcribe this call; here is how we protect your information…”) respects client autonomy and supports informed consent under Model Rules 1.4 and 1.6.

  4. Segment data and control access. Configure firm accounts so that recordings are tied to matters, not to individuals’ personal devices wherever possible. Restrict who can review recordings and summaries, and enforce role‑based permissions consistent with your obligations under Rules 5.1 and 5.3.

  5. Define bright‑line “no AI” categories. Certain matters—criminal defense, internal investigations, sensitive family or immigration cases, high‑value trade secret disputes—may warrant a categorical ban on AI recorders because the downside of any leak is catastrophic. Document these categories in your technology and confidentiality policies.

  6. Train your team and your clients. Explain to lawyers, staff, and key clients that not every AI interaction is confidential or privileged and that using consumer‑grade tools on their own may waive important protections. Encourage clients to avoid entering matter‑specific facts into public AI systems without discussing it with you first.
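
To make step 1 concrete, the following Python sketch shows one way to record a vendor review in a structured, repeatable form; the field names and example answers are illustrative assumptions, not HeyPocket’s actual terms.

```python
from dataclasses import dataclass, field
from datetime import date

@dataclass
class VendorRiskReview:
    """One documented review of an AI recording vendor (fields are illustrative)."""
    vendor: str
    reviewed_on: date
    data_recorded: str             # what the tool captures
    retention_period: str          # how long recordings and transcripts are kept
    used_for_model_training: bool  # does the vendor train on your data?
    deletion_and_export_rights: str
    encrypted_in_transit_and_at_rest: bool
    access_logged_and_limited: bool
    open_questions: list[str] = field(default_factory=list)

    def approved_for_low_sensitivity_use(self) -> bool:
        """A conservative gate: block use until the basics check out."""
        return (not self.used_for_model_training
                and self.encrypted_in_transit_and_at_rest
                and self.access_logged_and_limited
                and not self.open_questions)

review = VendorRiskReview(
    vendor="ExampleRecorder",  # hypothetical vendor
    reviewed_on=date.today(),
    data_recorded="audio, transcripts, summaries",
    retention_period="90 days per published policy",
    used_for_model_training=False,
    deletion_and_export_rights="self-service delete and export",
    encrypted_in_transit_and_at_rest=True,
    access_logged_and_limited=True,
)
print("Cleared for low-sensitivity use:", review.approved_for_low_sensitivity_use())
```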

Approached this way, a tool like HeyPocket can be used as a controlled, auditable note‑taking assistant rather than a stealth surveillance risk. The ethical question is not “AI recorder: yes or no?” but “Under what conditions, with what safeguards, and in which matters, if any, is this tool a reasonable choice?”

Technology Competence as a Continuous Obligation 🚀

Technology will only grow more invasive, more ambient, and more tightly integrated with everyday law practice.📈 ABA and state bar guidance increasingly treats technology competence as an ongoing duty, tied directly to confidentiality, supervision, and even malpractice exposure. Smart devices and AI platforms are not going away, so opting out entirely is rarely realistic.

For lawyers with limited to moderate technical skills, the path forward is practical: build a short, repeatable checklist for evaluating tools; lean on reputable vendors with clear, lawyer‑friendly terms; seek help from cybersecurity professionals when stakes are high; and treat client confidentiality as the non‑negotiable anchor for every technology decision. When you do that, you can leverage products like HeyPocket to improve focus and memory while still honoring the core promise that underlies every engagement letter: your client’s secrets stay safe.🔐

MTC

MTC: Why 2026’s PC Price Hikes Put Law Firms at Risk 💻⚖️ (and Why Many Lawyers Are Quietly Switching to Macs)

2026 PC price hikes threaten law firm budgets, performance, and ethical compliance!

Lawyers and Legal Professionals, the warning signs have been flashing for more than a year: 2026 was never going to be a normal hardware refresh cycle for law firms. 💸 Economists tracking the global memory crunch and AI‑driven demand have been clear that PCs and laptops would see double‑digit price hikes as Dynamic Random-Access Memory (DRAM) and other components were redirected to lucrative data‑center workloads. For lawyers who depend on reliable, reasonably priced computers to run practice‑critical applications, this is not an abstract macroeconomic story; it is a direct hit to margins, access to justice, and even ethical compliance.

Recent moves by Microsoft have made the problem impossible to ignore. In mid‑April, Microsoft sharply raised prices across its Surface lineup, including the Surface Pro and Surface Laptop families that many lawyers and law firms rely on for their Windows‑based workflows. Entry‑level machines that once started under $1,000 now begin well above that mark, with some configurations jumping several hundred dollars over their launch prices. In some cases, high‑end Surface laptops now cost more than roughly comparable MacBook Pro configurations, erasing the longstanding assumption that Windows hardware is always the cheaper option.

Here at The Tech‑Savvy Lawyer.Page, I have been chronicling these developments for months, noting that major PC manufacturers signaled 15–20 percent price increases thanks to the AI‑driven memory squeeze and ongoing geopolitical tariff pressures. Those predictions are now a reality. For solo practitioners, small firms, and even midsize practices with thin IT budgets, the message is simple: if you are buying new Windows hardware in 2026, expect to pay more for the same level of performance, or accept underpowered machines that will age badly under AI‑enhanced workflows. 🧾

Apple, by contrast, has maneuvered itself into a relatively stronger position, even though it is not completely immune to component inflation. By tightly integrating Apple Silicon, storage, and other components under its own supply chain, Apple has been able to hold the line on some key configurations in a way that many PC Original Equipment Manufacturers (OEMs) cannot. Commentators focusing on the legal market have already highlighted products like the MacBook Neo as examples of Apple using its vertical control to keep pricing relatively stable while competitors raise prices or quietly cut specifications. At the same time, Apple’s M‑series and M5‑generation chips continue to deliver strong performance per watt, especially for on‑device AI tasks and productivity applications, which matters when you are running multiple research tools, document management systems, videoconferencing platforms, and AI assistants on a single machine.

This does not mean Apple has avoided all price movement. Newer MacBook Air and MacBook Pro models with M5 chips have seen list price increases of around $100–$400, depending on configuration. However, when Microsoft’s updated Surface pricing pushes many midrange Windows machines into the same or higher price tiers than comparable Macs, the calculus for lawyers becomes more nuanced. A Windows laptop that used to be the “budget” choice can now be as expensive as, or more expensive than, a MacBook that delivers similar or better performance and longer support life.

MacBooks outperform rising-cost Windows laptops for lawyers seeking value and security!

For the legal sector, this convergence of price and performance has three important implications.

First, hardware purchasing is no longer a purely IT or “back office” concern. It is an integral part of risk management and client‑service strategy. The ABA Model Rules, particularly Model Rule 1.1 on competence and Comment 8 to that rule, make clear that lawyers have a duty to maintain competence in relevant technology. Using outdated, underpowered hardware can impair your ability to use secure videoconferencing, e‑discovery tools, AI‑driven research platforms, and document automation systems. That, in turn, can compromise both efficiency and the quality of representation. ⚖️ When price hikes push firms toward “cheap but weak” machines, they risk falling behind on this duty of technological competence.

Second, Model Rule 1.6 on confidentiality and related ethics opinions underscore the importance of protecting client information in digital environments. In an era when AI tools increasingly run on‑device, machines that can perform more work locally reduce reliance on cloud processing and third‑party data transfers. Apple’s integrated hardware and on‑device AI capabilities, combined with its strong security posture, can make Macs appealing from a confidentiality standpoint, especially for sensitive practices such as criminal defense, family law, and complex commercial litigation. That does not mean Windows machines are inherently less secure, but when high‑end, well‑secured Windows hardware costs significantly more than it used to, some firms may find that Apple’s offerings now deliver a stronger security‑to‑cost ratio.

Third, long‑term budgeting must adapt to the new reality that technology lifecycles will cost more. Economists and industry groups have projected that tariffs and component shortages could add hundreds of dollars to the average laptop by the time those costs are fully passed through. For law firms, this means that hardware refresh cycles should be planned more deliberately, with strategic staggering of purchases, careful evaluation of total cost of ownership, and perhaps a willingness to stretch the lifecycle of existing machines that still meet performance and security requirements. 🗓️

So where does this leave the practicing lawyer or small firm managing technology with limited internal IT support? 🤔

One practical approach is to stop treating the Windows versus Mac decision as a matter of habit and start treating it as a structured, documented evaluation. Build a simple matrix that compares specific models—such as a midrange Surface Laptop and a MacBook Air or MacBook Neo—on price, performance, storage, memory, security features, support life, and compatibility with your core practice software. Involving firm leadership in these decisions and tying them explicitly to ABA Model Rule 1.1 and 1.6 considerations will help demonstrate that you are exercising reasonable diligence in technology selection.
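
For firms that want to operationalize that matrix, here is a minimal Python sketch of a weighted scoring comparison; the weights and the 1–10 scores are placeholders illustrating the method, not benchmark data for any real machine.

```python
# Criteria weights must sum to 1.0; adjust to your firm's priorities.
WEIGHTS = {
    "price": 0.25, "performance": 0.20, "storage": 0.10, "memory": 0.10,
    "security": 0.15, "support_life": 0.10, "software_compat": 0.10,
}

# Placeholder 1-10 scores -- fill in from vendor quotes and spec sheets.
candidates = {
    "Midrange Surface Laptop": {"price": 6, "performance": 7, "storage": 7,
                                "memory": 7, "security": 8, "support_life": 6,
                                "software_compat": 9},
    "MacBook Air":             {"price": 7, "performance": 8, "storage": 7,
                                "memory": 7, "security": 9, "support_life": 8,
                                "software_compat": 7},
}

def weighted_score(scores: dict[str, int]) -> float:
    return sum(WEIGHTS[criterion] * score for criterion, score in scores.items())

for model, scores in sorted(candidates.items(),
                            key=lambda item: weighted_score(item[1]),
                            reverse=True):
    print(f"{model}: {weighted_score(scores):.2f}")
```

Keeping the filled-in matrix with your purchasing records is an easy way to show the diligence the Model Rules contemplate.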

At the same time, lawyers should not assume that Apple is the default winner. Many legal‑industry tools, case management systems, and document workflows remain optimized for Windows, especially in litigation and specialized practice areas. If your practice depends heavily on Windows‑only software, the cost of moving to Macs (including virtualization or remote desktop solutions) may outweigh hardware price advantages. However, even in a Windows‑centric environment, the new pricing landscape may push firms to consider non‑Surface OEMs or to buy fewer, higher‑quality machines and share them across teams rather than treating laptops as disposable commodities.

Strategic legal tech planning improves performance, security, and long-term cost control for lawyers!

Ultimately, the predicted—and now visible—price hikes on PCs are not just a story about higher invoices from vendors. They are a stress test of how seriously law firms take technological competence, security, and long‑term planning. The firms that respond by proactively reassessing their hardware standards, considering platforms like Apple that have weathered the pricing storm more gracefully, and explicitly aligning purchasing decisions with ABA Model Rules will not only control costs; they will position themselves as trustworthy, efficient, and forward‑looking in a market where clients increasingly notice the difference. 🚀

MTC

When AI Falls Short: What Legal Professionals Must Know Before Relying on Microsoft Copilot and Similar Embedded AIs

AI Errors in Legal Practice Demand Vigilant Attorney Oversight!

Any reader of my blog should realize by now that artificial intelligence is no longer a novelty in law practice; it is embedded in research platforms, document automation, e‑discovery, and now in tools like Microsoft Copilot that appear inside the same Microsoft 365 ecosystem lawyers already live in. Yet Copilot’s own terms of use long described it as being “for entertainment purposes only,” while Microsoft has simultaneously marketed it as an enterprise‑grade productivity assistant and is now backing away from prominent Copilot buttons in several Windows 11 apps. For lawyers who must live under the ABA Model Rules of Professional Conduct, this tension is not an amusing footnote; it is an ethics problem waiting to happen. 

Microsoft’s Copilot terms have advised that the service “can make mistakes,” “may not work as intended,” and should not be relied on for important advice. At the same time, Microsoft has begun removing or rebranding Copilot buttons from Notepad, Snipping Tool, Photos, and Widgets in Windows 11, framing this move as an effort to reduce “unnecessary Copilot entry points” and be “more intentional” about where AI shows up. The features, or at least the underlying AI, are not disappearing entirely; they are simply becoming less conspicuous. For the practicing lawyer, the message is clear: powerful AI is being woven into everyday tools, but its creators still do not want you to rely on it the way you rely on a human associate. 🤖

When AI falls short, it is the lawyer—not the software vendor—who will have to answer to clients, courts, and regulators. ⚠️

That is precisely where the ABA Model Rules step in. Model Rule 1.1 requires competent representation and, through Comment 8, includes a duty to keep abreast of the benefits and risks of relevant technology. Using AI in law practice is increasingly seen as part of that competence obligation, but competence does not mean blind trust in unvetted outputs from a system whose own terms warn you not to rely on it. A lawyer who treats Copilot’s draft as a finished research memo, brief, or contract without independent verification risks violating the duty of competence every bit as much as a lawyer who never learned to use electronic research tools in the first place.

Model Rule 1.6 on confidentiality presents a second, and in many ways more pressing, concern. Generative AI systems may store, log, or otherwise use prompt content for analysis and improvement, which means uncritical copying and pasting of confidential client information into Copilot can create a non‑trivial risk of exposure. The ABA and commentators have emphasized that before entering client data into a generative AI tool, lawyers must assess whether that data could be disclosed or accessed by others, including through unintended re‑use in future outputs to different users. That risk analysis is not optional; it is part of your obligation to make reasonable efforts to prevent unauthorized access or disclosure.

Fake citations from AI tools can threaten accuracy and legal ethics!

Model Rules 5.1 and 5.3, which govern the responsibilities of partners, managers, supervisory lawyers, and non‑lawyer assistants, also apply to AI use. When you deploy Copilot in your firm, you are functionally introducing a new category of “assistant” whose work product must be supervised like that of a junior lawyer or paralegal. Policies, training, and review procedures are needed so that AI‑drafted content is consistently checked for accuracy, bias, hallucinations, and improper legal conclusions before it ever reaches a client, court, or counterparty. Ignoring Copilot’s disclaimers and Microsoft’s own hedging around reliability is, in effect, ignoring red flags that any reasonable supervising attorney would address.

Model Rule 1.4 on communication adds yet another dimension: transparency with clients about how you are using AI in their matters. Authorities interpreting the Model Rules have stressed that lawyers should keep clients reasonably informed, which includes explaining when and how AI tools are utilized to assist in their cases. This is particularly important where AI may affect cost, turnaround time, or the nature of the work performed, such as using Copilot to generate a first draft instead of assigning that task to an associate. Engagement letters and fee agreements are increasingly incorporating language about AI use, both to set expectations and to align with evolving ethical guidance.

The “for entertainment purposes only” language is more than a curiosity; it is a signal about allocation of risk. Microsoft’s disclaimer mirrors language historically used by psychic hotlines and other services seeking to avoid responsibility for inaccurate advice. When such a disclaimer is attached to a tool you might be tempted to use for legal analysis, the tool is telling you that you assume the risks of errors. Under the Model Rules, those risks ultimately translate into potential malpractice, sanctions, or disciplinary action if AI‑generated errors make their way into filed documents or client counseling.

Recent real‑world incidents involving lawyers who submitted briefs containing AI‑fabricated citations demonstrate how quickly misuse of generative AI can cross ethical lines. In those cases, the core problem was not that AI was used; it was that the lawyers failed to verify the content and then misrepresented fictitious cases as genuine authority to the court. That behavior implicates Model Rules 3.3 (candor toward the tribunal) and 8.4 (misconduct) along with competence. Copilot’s warnings about possible mistakes do not excuse a lawyer from the duty to check every citation, quote, and legal conclusion that AI produces before relying on it.

Lawyers must assess whether that data could be disclosed or accessed by others. ⚠️

For practitioners with limited to moderate technology skills, the answer is not to abandon AI entirely, but to approach it with structured safeguards. A practical workflow might involve using Copilot to outline a research plan or draft a first pass at a contract clause, followed by standard legal research in trusted databases and rigorous review by a human lawyer before anything is finalized. Firms should configure Copilot and other AI tools in ways that minimize data exposure, such as disabling, where possible, cross‑tenant learning (a feature that lets the system learn from patterns across multiple organizations’ environments) and restricting which matters and users can access certain features. Training sessions can focus less on technical jargon and more on concrete do’s and don’ts tied directly to the Model Rules, which is the language most lawyers already speak. 🧠

Always protect client confidentiality when using AI in modern law practice!

Governance is also essential. Written AI policies should address acceptable use cases, prohibited content for prompts, mandatory review standards, logging and auditing of AI‑assisted work, and incident response if an AI‑related error is discovered. These policies should be backed by regular training and by leadership that models appropriate use, rather than quietly delegating AI experimentation to the most tech‑savvy associates. Vendors’ evolving terms of use—including Microsoft’s move to revise its “entertainment purposes” language and adjust Copilot integration in Windows—should be monitored and incorporated into risk assessments over time.

In short, when AI falls short, it is the lawyer—not the software vendor—who will have to answer to clients, courts, and regulators. Copilot and similar tools can be valuable allies in a modern legal practice, but only if they are treated as fallible assistants whose work must be checked, not as oracles. The ABA Model Rules already provide the framework: competence, confidentiality, supervision, and honest communication. The task for today’s legal professionals is to apply that framework thoughtfully to AI, recognizing both its promise and its very real limitations before letting it anywhere near client work or court filings. ⚖️🤖

MTC: Hidden AI, GEO, and the ABA Model Rules: What Every Lawyer Needs to Know Before Their Next Client Finds Them Online ⚖️🤖

Generative AI is already talking about you, your law firm, and your practice area—even if you have never opened ChatGPT. 😳 Clients ask AI tools legal questions in natural language, and those systems answer by pulling from whatever content they trust online. For lawyers, that raises two intertwined issues: “hidden AI” inside everyday tools and the rise of Generative Engine Optimization (GEO). Together, they sit squarely in the path of your duties under the ABA Model Rules.

Legal Ethics Meets GEO and Hidden AI!

Hidden AI is everywhere in modern law practice tools. Microsoft 365 suggests text, summarizes long email threads, and drafts documents. Zoom transcribes and sometimes “enhances” meetings. Practice‑management platforms now market AI assistants that review documents, summarize matters, and even suggest next steps. Much of this AI runs quietly in the background, so it is easy to forget it exists—or to assume it is “just another feature.” Yet under ABA Model Rule 1.1, technological competence now includes understanding the benefits and risks of the technology you choose for your clients’ work. You cannot competently supervise what you do not even realize is there.

At the same time, AI tools sit on the front end of client development. When a potential client types, “How does a New Jersey divorce work and when should I hire a lawyer?” into an AI chatbot, that system gives an answer based on content it considers reliable. GEO—Generative Engine Optimization—is about making your content understandable, quotable, and safe for those systems to lift into the response. Where SEO asks, “How do I rank in Google’s blue links?”, GEO asks, “How do I become the answer AI gives when someone in my jurisdiction asks a real client question?” 🧠

Where the ABA Model Rules Fit

GEO and hidden AI are not just marketing trends; they are ethics issues.

  • Model Rule 1.1 (Competence). Comment 8 extends competence to relevant technology. ABA guidance on AI (including Formal Opinion 512) explains that lawyers must understand how AI tools work in broad strokes, their limitations, and their failure modes. If you expect clients to find you through AI‑generated answers, you should know what those systems are likely to say about your area of law and how your own content feeds into that ecosystem. ⚖️

  • Model Rule 1.6 (Confidentiality). You do not need to paste client facts into AI tools to do GEO. Good GEO content relies on hypotheticals and public law, not on confidential stories. But when you use AI inside Word, your practice platform, or a browser‑based assistant, you must know where the data goes, whether it is used for training, and whether additional client consent or stronger safeguards are required. 🔐

  • Model Rule 1.4 (Communication). When AI tools materially affect how you handle a matter—such as drafting, research, or review—you may need to explain that to clients in clear, non‑technical terms. In marketing, that same communication duty supports honest disclaimers: your GEO‑optimized articles must state that they are general information, not legal advice, and that AI summaries of your content are no substitute for a direct attorney‑client consultation.

  • Model Rules 7.1–7.3 (Advertising and Solicitation). GEO content must still be truthful and non‑misleading. You cannot let AI‑targeted content slide into promises of “guaranteed results” or vague claims of being “the best.” The fact that you are writing for AI as well as humans does not relax your duties under the advertising rules—it amplifies them, because misstatements can get replicated and amplified by AI tools. 📢

Handled thoughtfully, GEO can actually help you satisfy these rules. It encourages you to publish accurate, current, and jurisdiction‑specific explanations that educate the public and reduce confusion. Done poorly, it can push you into ethically dangerous territory where AI retells your overbroad claims to countless readers you never see.

What Is “Hidden AI” in Law Practice?

How AI Shapes Legal Ethics and Client Discovery

For many lawyers with limited or moderate tech skills, the biggest risk is not exotic AI research—it is quiet defaults.

Examples:

  • Word processors that turn on AI‑assisted drafting by default.

  • Email services that summarize conversations using third‑party models.

  • Cloud document management systems (DMS) or practice platforms that offer “smart” suggestions based on client documents.

These tools can be legitimate productivity boosts, but under Rules 1.1 and 1.6, you must understand enough about them to decide when and how to use them. That includes asking:

  • Does this feature send client content to an external provider?

  • Is that provider training on my data?

  • Can I turn that training off?

  • Is there a business or enterprise version with better confidentiality terms?

You do not need to become a software engineer. You do need to know the basic data‑flow story well enough to make an informed risk judgment and to explain that judgment if a client or disciplinary authority asks. 🙋‍♀️

Moving from SEO to GEO—Ethically

Traditional SEO still matters. You still want clear titles, descriptive meta tags, fast and mobile‑friendly pages, and basic schema markup so search engines can understand your site. GEO builds on that foundation and asks you to go one step further: write in a way that large language models can safely quote.

GEO‑friendly legal content usually has:

✅   An answer‑first summary at the top: a short, plain‑English overview of the main question.

✅   Strong jurisdiction signals: repeated references to the state, province, or country, relevant courts, and applicable statutes.

✅   Specific client questions: headings written in the same conversational style clients use (“How long do I have to sue after a car accident in Ohio?”).

✅   Trust signals: bylines, credentials, bar memberships, links to statutes and court sites, and recent update dates.

For example, if you serve veterans in disability benefits work, your GEO page might be titled “How VA Disability Claims Work for [Your State] Veterans” and open with a five‑sentence, answer‑first summary in plain English. You would clearly note that you practice in specific jurisdictions, link to the VA and governing statutes, and spell out when someone should seek legal counsel. An AI system looking for a safe, jurisdiction‑clear answer is more likely to treat that content as a reliable source.
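
For lawyers comfortable editing their site templates, here is a minimal Python sketch that emits the kind of schema.org FAQPage markup described above; the question and answer are placeholders adapted from the Ohio example, and the generated snippet is general information, not legal advice.

```python
import json

# Illustrative FAQPage JSON-LD for a GEO-optimized practice-area page.
faq_schema = {
    "@context": "https://schema.org",
    "@type": "FAQPage",
    "mainEntity": [{
        "@type": "Question",
        "name": "How long do I have to sue after a car accident in Ohio?",
        "acceptedAnswer": {
            "@type": "Answer",
            "text": ("In most Ohio car-accident cases the statute of limitations "
                     "is two years from the date of injury, though exceptions "
                     "exist; this is general information, not legal advice."),
        },
    }],
}

# Paste the printed block into the <head> of the relevant page.
print('<script type="application/ld+json">')
print(json.dumps(faq_schema, indent=2))
print("</script>")
```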

From an ethics standpoint, this structure helps you:

  • Stay in your lane (Rule 1.1) by emphasizing your actual jurisdiction and practice scope.

  • Provide accurate, non‑misleading information (Rules 7.1–7.3).

  • Communicate clearly about what your content is—and is not (Rule 1.4).

Practical First Steps for Non‑Techy Lawyers

You do not need to rebuild your entire site this week. A focused, incremental approach works well, especially if you are still building your tech confidence. Here is a practical sequence that maintains compliance with the Model Rules:


  1. Audit your “hidden AI.” With your IT provider or vendor reps, identify where AI is already in use in your stack: Microsoft 365, Google Workspace, Zoom, your case‑management system, research tools, and any browser extensions. Turn off any features you cannot yet explain to yourself in basic terms. 🛠️

  2. Pick one practice area to GEO‑optimize. Choose the area that drives most of your matters. List the 10 most common client questions you actually hear. Those are the headings for your first GEO page.

  3. Write answer‑first, jurisdiction‑specific content. Use short paragraphs and plain language, and embed jurisdiction cues and citations to official sources. Include clear disclaimers about general information, no legal advice, and the need for a consultation.

  4. Refresh and expand over time. Revisit that page whenever law or practice changes, add FAQs, and link related posts. This keeps content current for both search engines and AI tools.

  5. Document your choices. If you decide to use specific AI tools in drafting content or in client work, note your reasoning: confidentiality safeguards, vendor terms, and how you supervise outputs. This helps show that you approached AI use thoughtfully under Rules 1.1, 1.4, 1.6, 5.1, and 5.3; a minimal logging sketch follows this list. 📚
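
As a concrete illustration of step 5, the sketch below appends a structured decision record to a simple log file; the field names and the example tool are assumptions for illustration, not a required format.

```python
import json
from datetime import date

# Illustrative decision record -- adapt the fields to your firm's policy.
ai_tool_decision = {
    "tool": "Example AI drafting assistant",  # hypothetical tool name
    "decided_on": str(date.today()),
    "approved_uses": ["marketing content drafts", "internal memos"],
    "prohibited_uses": ["client-identifying facts", "unreviewed court filings"],
    "confidentiality_safeguards": "enterprise tier; vendor training disabled",
    "vendor_terms_reviewed": True,
    "supervision": "attorney review of all AI-assisted output",
    "rules_considered": ["1.1", "1.4", "1.6", "5.1", "5.3"],
}

# One JSON record per line keeps the log easy to append to and to audit.
with open("ai-tool-decisions.jsonl", "a", encoding="utf-8") as log:
    log.write(json.dumps(ai_tool_decision) + "\n")

print("Logged decision for:", ai_tool_decision["tool"])
```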

The core message is simple: you do not have to master every technical detail to be a tech‑savvy lawyer, but you do have to stop pretending that AI is optional. Your clients are already using it; your vendors are already embedding it; and AI systems are already shaping how clients find you. Taking a deliberate, ethics‑aware approach to hidden AI and GEO is no longer extra credit—it is part of protecting your clients, your reputation, and your license. 🚀⚖️

MTC

📰 ABA TECHSHOW 2026 Recap: From AI Hype to LLM Reality, Google Workspace, and Ethical Lawyering in the Age of Bots ⚖️🤖

The Real Story Behind ABA TECHSHOW 2026

ABA TECHSHOW is the conference to attend to keep your pulse on the technology lawyers should be using every day!

Walking into ABA TECHSHOW 2026 this year, I wasn’t thinking about shiny gadgets; I was thinking about competence, client service, and what it will mean to practice law in an era dominated not just by “AI,” but by large language models (LLMs) quietly shaping almost everything we see and share online. In my work on The Tech-Savvy Lawyer.Page blog and podcast, I keep running into the same pattern: lawyers know they should understand legal technology, yet they worry they’ll break something, breach a rule, or look foolish in front of their staff. TECHSHOW 2026 aimed directly at that anxiety — but this year, the conversation needs to go beyond what AI and generative AI can do and toward how LLMs and search bots are already shaping our professional identities online and offline. ⚖️💻

Keynotes: The “AI Dividend” and Your Time

The keynote lineup captured the tension between promise and risk. Legal market analysts highlighted what some called the “AI Dividend”: when machines take over routine drafting and research, lawyers gain time to think, advise, and advocate at a higher level. The real question — one I’ve been hammering on The Tech-Savvy Lawyer.Page for years — is what you will do with the time technology gives back (some of that time should include reviewing your work, e.g., your case citations). Tech-savvy speakers pushed attendees to look past vendor hype and focus on the broader digital environment, where consumer-facing tools, search engines, and recommendation algorithms are setting new expectations for speed, transparency, and availability.

Practical AI in the Sessions

Inside the conference rooms, the “Taming the Machines” and related AI tracks addressed baseline concerns (some sessions with hands-on workshops) and focused on realistic use cases: assisted drafting, pattern spotting in discovery, and summarizing voluminous documents. These sessions were built for lawyers who live in Word, Outlook, Google Workspace, and practice management systems and who simply want to stop retyping the same paragraphs. The faculty hammered home a critical point: generative AI is an assistant, not a decision-maker; you remain the lawyer, responsible for accuracy, judgment, and ethics under the ABA Model Rules. 🤖📄

Google Workspace, Microsoft 365, and Using What You Already Own

Mathew Krebis’ session on Google Workspace drove that message home in very practical terms. He showed how many firms are only scratching the surface of tools they already pay for: shared Drives with well-structured permissions, real-time collaboration in Google Docs, Gmail automation for intake and follow-up, and Google Calendar combined with Tasks to keep matter timelines under control. When you layer in emerging AI features in Workspace — smart replies, document summaries, suggested outlines — you see how even modest use of these tools can dramatically reduce friction in daily practice, and the tools Mathew discussed are not isolated to “law practice management” systems.

The takeaway was powerful: before you chase a new platform, fully exploit the ecosystem you already have. For many firms, “being more tech-savvy” starts with properly configuring their Google Workspace, Microsoft 365, or other SaaS platform, rather than buying yet another service.

Podcasting, Social Media, and LLM-Driven Visibility

Meanwhile, another important frontier — and one that still feels underexplored — is what happens when LLMs and search bots become the primary lens through which clients, colleagues, and even opposing counsel discover you. That’s where my panel, 🎧 Podcasting for Lawyers: The Truth Behind the Mic, came in.

Ruby L. Powers, Gyi Tsakalakis, Stephanie Everett, and I discussed podcasting and social media not just as marketing channels, but as structured signals fed into LLM-driven engines that are constantly indexing, ranking, and inferring who is an authority on a given topic. Whether you talk about appellate practice, family law, or even a hobby outside the law, your content becomes training data for Generative Engine Optimization/LLM bots that decide which voices surface first when someone types a question into an AI chatbox. 🎙️🌐

In other words, your digital footprint is no longer static. It is being interpreted, reassembled, and presented as answers — often without you ever seeing the intermediate steps. That reality raises a new layer of ethical questions under the ABA Model Rules. Model Rule 7.1’s prohibition on false or misleading communications about the lawyer or the lawyer’s services takes on a new twist when LLMs remix snippets of your posts, podcasts, Google Workspace–hosted client alerts, and blog articles into composite “advice.”

You might be scrupulously accurate in your content, but if an LLM mischaracterizes it or presents it out of context, what then? TECHSHOW 2026 addressed traditional risks like hallucinated case citations, but there is room for a deeper, explicit conversation about how LLM-driven discovery intersects with advertising, communication, and competence duties.

EXPO Hall: Tools, Timekeeping, and Vendor Reality Checks

The EXPO Hall, as always, served as a laboratory of possibilities. Practice management platforms, billing tools, document automation, and a wave of AI-enhanced products competed for attention. Timekeeping tools that automatically capture activity across devices and applications and then propose draft time entries have grown dramatically since last year. For lawyers still reconstructing their days from memory and sticky notes, this is more than a marginal upgrade; it directly affects revenue, work-life balance, and accuracy.

But the fair warning comes here: make sure vendors are showing you what their product can do today, not what they hope it will do someday. In the LLM era, marketing decks are often several steps ahead of deployed reality. 🧾⏱️

Remember, you have an obligation under Model Rule 1.1 (competence) and Model Rule 5.3 (responsibilities regarding non-lawyer assistance) to understand the capabilities and limitations of any tech you “delegate” work to. Asking hard questions about current functionality, data handling, and audit trails is not being difficult; it is part of your duty of care.

Cybersecurity, Confidentiality, and LLM Risk

Networking opportunities like the “Taste of TECHSHOW” are a great way to talk with and learn from other lawyers about using tech in the practice of law.

The sessions on cybersecurity and confidentiality continued to do vital work. Under Model Rule 1.6, our obligation to protect client information extends to cloud storage, email, video conferencing, and the mobile devices we casually use in airport lounges. The “Guardians of the Data” track walked through practical checklists rather than abstract fearmongering: password managers, multi-factor authentication, properly configured backups, and vendor due diligence.

For firms running on Google Workspace, that translated into concrete steps: enforcing two-step verification, tightening Drive sharing settings, using client-specific shared Drives instead of ad hoc personal folders, and monitoring admin logs for suspicious access. The move from generic “AI” to LLM-powered services on any platform increases data risk, because many tools rely on ingesting your content — sometimes including client information — to improve their models. If you don’t understand where your data is going and how it is used, you cannot credibly say you are meeting confidentiality obligations. 🔐☁️
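
For Workspace administrators, here is a minimal Python sketch of the kind of periodic review described above, assuming you have exported a Drive audit log to CSV from the Admin console; the column names and visibility values below are assumptions to check against your actual export.

```python
import csv
from collections import Counter

# Visibility values that usually warrant a second look; confirm the exact
# strings your export uses before relying on this list.
BROAD_VISIBILITY = {"public_on_the_web", "people_with_link"}

def flag_risky_shares(csv_path: str) -> Counter:
    """Count broadly shared items per owner in an exported Drive audit log."""
    counts: Counter = Counter()
    with open(csv_path, newline="", encoding="utf-8") as f:
        for row in csv.DictReader(f):
            # "Visibility" and "Owner" are assumed header names.
            if row.get("Visibility", "").strip().lower() in BROAD_VISIBILITY:
                counts[row.get("Owner", "unknown")] += 1
    return counts

if __name__ == "__main__":
    for owner, n in flag_risky_shares("drive_audit_export.csv").most_common():
        print(f"{owner}: {n} broadly shared item(s) -- review sharing settings")
```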

Competence, Human-in-the-Loop, and Everyday Workflows

As noted above, Model Rule 1.1 (competence) and Model Rule 5.3 (responsibilities regarding non-lawyer assistance) oblige you to understand the capabilities and limitations of any tech you “delegate” work to, and to keep asking hard questions about current functionality, data handling, and audit trails.

Balancing this skepticism, though, is an equally important truth: becoming proficient with AI and LLM-based tools is not a spectator sport. You cannot satisfy your duty of technological competence from the sidelines. You have to use the tools first on a small scale, then progressively in more critical workflows, always with appropriate supervision and verification.

That might mean piloting an AI drafting feature in Google Docs and Microsoft Word for internal templates, or testing structured intake forms and automations inside Google Workspace or Microsoft 365 before rolling them out firm-wide. Ignoring AI because it feels uncomfortable is no longer the safer option. In some practices, failing to integrate it intelligently — while peers and opposing counsel do — may itself raise competence concerns as expectations evolve in courts and among clients. 🧩📈

Saturday Sessions: From “Use AI” to “Use AI Responsibly”

On Saturday, the 9 a.m. conversation among ABA President Michelle A. Behnke, Immediate Past President William R. “Bill” Bay, and President-Elect Barbara J. Howard underscored how all of this ties into the rule of law and access to justice, framing AI as something lawyers now have a responsibility to actually use, not simply watch from the sidelines. The 10 a.m. session with Judge Timothy S. Driscoll then shifted the focus from “use AI or be left behind” to “use AI responsibly,” making it clear that judges, too, are integrating AI into their work and that they are not immune from mistakes when they rely on it.

The message for everyone in the courtroom ecosystem was simple and blunt: “Review, review, and review” any work touched by AI, because AI is a fallible tool that does make errors and can mislead the unwary. Together, these sessions acknowledged the growing digital divide: lawyers and clients who can’t or won’t adopt technology risk falling out of the mainstream of legal services, while those who adopt it recklessly risk eroding confidence in both their own work and the justice system as a whole.

We are not merely debating convenience; we are deciding who gets effective representation and who is left out because the lawyer they might have hired never appeared in their LLM‑driven search results — or appeared with AI‑boosted visibility but poor ethical judgment. Technology, in this sense, is not optional; it is one of the few levers we have to expand meaningful access to legal help, provided we wield it with intent, humility, and rigorous human review. ⚖️🧠

LLM Literacy: The Next Core Competency

That balance — between caution and experimentation — is where TECHSHOW 2026 both excelled and showed its next frontier. Many sessions made AI approachable, breaking down concepts for lawyers with limited to moderate tech skills and providing concrete workflows they could apply on Monday. What I would like to see more explicitly next year is programming that treats LLM literacy as a core competency: understanding how LLMs are built, how they index and surface information, how your content feeds into them, and how that affects everything from client intake to reputation, whether you are working in Microsoft 365, Google Workspace, or a specialized legal platform.

From my vantage point as a legal tech ambassador at The Tech-Savvy Lawyer, the most successful sessions respected that many lawyers are highly capable professionals who simply haven’t had the time or guidance to modernize their workflows. They don’t need to become prompt engineers. They need guardrails, roadmaps, and clear examples of how to align AI, LLM tools, and mainstream platforms like Microsoft 365 and Google Workspace with the ABA Model Rules and local bar guidance. When faculty focused on incremental steps — tightening cybersecurity configurations, adding a layer of AI-assisted drafting under strict human review, building a consistent content strategy that LLMs can reliably recognize — the room leaned in.

A Tough-Love Takeaway for Lawyers

If you are a lawyer who still feels behind, here’s the core message I took away from TECHSHOW 2026, with a bit of tough love: you don’t need to chase every new tool, but you can’t afford to ignore LLM-driven AI and the platforms you already live in, like Microsoft 365 and Google Workspace, any longer. Understand the basics; pilot one or two well-vetted tools to start improving your efficiency without sacrificing the need for a true human-in-the-loop.

SEE YOU IN CHICAGO FOR ABA TECHSHOW 2027!!!

Read your jurisdiction’s ethics opinions on AI and technology. Build habits that protect client data by default. Use your own content — whether blog posts, newsletters, or podcasts — to train the bots to see you as a trusted authority rather than a digital afterthought. Ultimately, your bar license may be at more risk from not engaging with AI than from engaging with it carefully and intelligently.

The future of legal practice will not wait until we are all comfortable; it is here now, embedded in the search boxes, recommendation engines, and tools your clients already use. TECHSHOW 2026 made that clear. The next move is yours. 🚀⚖️

MTC

MTC: Staying Ahead of the Curve: Why ABA Techshow Is Not Optional for Today's Practicing Lawyer

The ABA TECHSHOW is the perfect place for lawyers to learn the skills they need to meet ABA requirements to stay abreast of the benefits and risks associated with relevant technology used in the practice of law!

Let me be direct: technology is no longer a "nice-to-have" in legal practice. It is an ethical obligation. 🎯

The American Bar Association made that clear in 2012 when it amended Comment 8 to Model Rule 1.1 — the foundational rule governing competence. That comment explicitly states that a lawyer must "keep abreast of changes in the law and its practice, including the benefits and risks associated with relevant technology." Not someday. Not when it's convenient. Now — and continuously. If you are a practicing attorney and you are not actively engaging in legal technology education, you are not just leaving efficiency on the table. You may be skating dangerously close to an ethical violation.

That is precisely why I keep coming back to ABA TECHSHOW — and precisely why I encourage every lawyer I speak with, regardless of their comfort level with technology, to attend.

🔑 ABA TECHSHOW Is Built for You — Yes, You

I want to address something head-on: the assumption that Techshow is a conference for tech enthusiasts and IT professionals. It is not. The 2026 conference, running March 25–28 at McCormick Place in Chicago, features over 100 technology vendors and programming explicitly designed for lawyers at every skill level — including those who still break into a cold sweat opening a new software interface. The sessions span everything from AI fundamentals to cybersecurity to practice management to video communication. There is a deliberate on-ramp built into the conference structure because the organizers understand that the legal profession is diverse in its relationship with technology.

I have been privileged to serve as a speaker and faculty member at TECHSHOW, and this year is no exception. At TECHSHOW 2026, I am co-presenting two sessions that I believe speak directly to where the legal profession is right now.

The first, Podcasting for Lawyers: The Truth Behind the Mic, pulls back the curtain on how lawyers can leverage podcasting as a powerful tool for building authority, deepening client relationships, and positioning themselves as thought leaders in their practice areas. In a media landscape saturated with blogs and social media posts, a podcast gives you something rare: an intimate, sustained connection with your audience. As you know, I run my own podcast — The Tech-Savvy Lawyer.Page Podcast — and in that session, alongside previous podcast guest Ruby Powers and, I hope, future podcast guests Gyi Tsakalakis and Stephanie Everett, we share the real, actionable steps behind building compelling legal content. 🎙️

Learn how setting up a podcast studio carries over to help with other legal events!

In my second session, Camera Ready Anywhere: Mastering Video Meetings with Clients, Courts, and Colleagues, my co-presenter, Temi Siyanbade, and I explore the practical, professional, and ethical dimensions of virtual communication. As virtual meetings have become a permanent fixture of legal practice — whether you are conducting client consultations on Zoom, appearing remotely before a tribunal, or negotiating with opposing counsel over video — looking and sounding competent on camera is no longer optional. This session covers audio and video setup, lighting, platform best practices, and how to project professionalism in a digital environment. The irony is that many lawyers who are meticulous about their appearance in a courtroom give almost no thought to how they present themselves on a video call. That gap matters. It matters to clients. It matters to judges. And yes, it can matter to your reputation.

⚖️ The ABA Model Rules Are Not Suggestions

Let us return to the ethics piece, because I think it deserves more than a passing mention. ABA Model Rule 1.1 sets the standard for competent representation. Most lawyers understand this in terms of legal knowledge — knowing the law, understanding procedure, being prepared. Fewer appreciate that the ABA's 2012 amendment has extended that standard to technology.

As of today, 40-plus states have adopted some version of the technology competence obligation articulated in Comment 8. The District of Columbia most recently joined that group in 2025. This is not a fringe interpretation. It is a growing national consensus about what it means to be a competent lawyer in the modern era.

Rule 1.6 — governing confidentiality — also carries technology implications. A lawyer who fails to understand how their email system works, who stores client data on unsecured devices, or who falls victim to a phishing attack that exposes client files has potentially breached their duty of confidentiality. Rule 5.3 requires that supervisors ensure non-lawyer staff are also compliant with the Rules — and that includes how they use firm technology. The tentacles of technology competence reach throughout the Model Rules.

Conferences like TECHSHOW exist, in part, to help you satisfy these obligations in a practical, hands-on way. The ABA Law Practice Division has consistently described Techshow as an opportunity to understand the "benefits and risks" of technology — the exact language of Comment 8. This is not accidental. It is intentional alignment between the programming and your professional duties.

🚀 The Future Is Already Here — Are You Ready?

The 2026 theme — Innovation That Protects the Rule of Law — reflects something I have believed for years: technology, when adopted thoughtfully, does not undermine the legal profession. It strengthens it. AI tools are transforming how lawyers research, draft, and communicate. Wearable technology and augmented reality are beginning to reshape how we work and collaborate. Deposition technology is being revolutionized by AI-powered transcript tools and remote video platforms. None of this is science fiction. It is happening right now, in law firms across the country.

The question is not whether you will engage with these tools. The question is whether you will engage with them proactively — understanding their benefits and their risks — or reactively, scrambling to catch up after a client complaint or a disciplinary inquiry.

I am not here to alarm you. I am here to invite you. 🤝

Your podcast studio setup impacts how you are perceived in the virtual legal landscape!

Whether you are a solo practitioner trying to figure out which AI tool is worth your subscription fee, or a partner at a mid-size firm wondering how to lead your team through a technology transition, Techshow offers you a safe, supportive, and genuinely energizing environment to learn. Most of the sessions are CLE-eligible. The vendors are accessible and eager to demonstrate — not sell. The community is collaborative.

More than four decades of working with technology, nearly 30 of those years in the legal arena, have taught me one thing above all else: the lawyers who thrive are not necessarily the most tech-savvy. They are the most tech-willing — the ones who stay curious, stay engaged, and never stop learning. 💡

TECHSHOW is where that learning happens. I will see you there.

REGISTER HERE!

MTC

MTC: Is Apple’s MacBook Neo the Real Game Changer for Lawyers Stuck Between Windows and Mac? 🤔💼

A lawyer’s choice between the MacBook Neo and Windows is not only a strategic business decision but a professional ethics one too!

For years, many lawyers have treated the move from Windows to Mac as a luxury upgrade rather than a strategic business decision. 💻⚖️ Apple’s new MacBook Neo, with its $599 starting price (and lower with education discounts), directly challenges that mindset by bringing a true macOS laptop into the same budget range as many mid-tier Windows machines. The question for lawyers on the fence is no longer “Can I justify a Mac?” but “Is the Neo a responsible, ethically sound choice for my law practice, under both my budget and my professional duties?”

From a hardware and price perspective, the Neo matters because it compresses the long‑standing price gap between Windows laptops and MacBooks. At around $599, it lives squarely in the territory where most solos and small firms previously defaulted to Windows PCs or even Chromebooks, not because they preferred them, but because MacBooks seemed out of reach. Apple is using its Apple Silicon and tight supply chain control to keep the Neo’s price relatively stable even as RAM, SSD, and CPU prices push other laptop prices up as much as 40 percent. In an environment where many PC makers must raise prices or cut corners, the Neo offers lawyers a predictable, brand‑name option that is less vulnerable to component price spikes in the short to mid term.

Tech‑Savvy Lawyers: If your workflow already runs on Microsoft 365, webmail like Gmail, cloud‑based practice management, and browser‑based legal research tools, your computer’s operating system is now just invisible plumbing 🧑‍🔧 — focus on security, value, and productivity, not whether it’s Windows or Mac. 🔔

That said, lawyers should not mistake the Neo for a no‑compromise replacement for every Windows laptop. The device cannot run Windows natively, and running Windows in a virtual machine on Apple Silicon is possible but not ideal as a core strategy. If your practice still depends on a specific legacy Windows desktop app that has no modern web or Mac equivalent—think an older on‑premises case management system or niche desktop timekeeping tool—the Neo, standing alone, is not the machine for you. For everyone else, especially those whose workflow is already centered on Microsoft 365, webmail (e.g., Gmail), cloud practice management, and browser‑based research tools, the operating system is increasingly just the plumbing under the hood.

This is where today’s SaaS‑driven legal stack changes the analysis. Many of the core tools lawyers now rely on—cloud practice management, document automation, e‑signature, e‑billing, calendaring, and research platforms—are delivered through the browser or platform‑agnostic apps. 🌐 Most modern law‑focused SaaS platforms are built to be OS‑agnostic so they can serve both Windows and Mac firms with a single codebase, and they function similarly across Chrome, Edge, and Safari. That means the historical “Windows has all the legal software” argument is rapidly losing relevance for general practice, especially for solos and small firms that choose mainstream platforms over custom legacy systems.

The ABA Model Rules, however, keep this from being just a hardware shopping discussion. ABA Model Rule 1.1, and especially Comment 8, recognizes that competence now includes understanding “the benefits and risks associated with relevant technology.” That duty of technological competence does not require you to buy the most expensive device, but it does require you to make informed, reasonable choices about the systems you use to handle client information and conduct your practice. When you evaluate the Neo, you are not just deciding what laptop you prefer—you are deciding whether this platform lets you meet your obligations around confidentiality, reliability, uptime, and data handling in a way that is at least as competent as what you have on Windows.

Short‑term costs are where the MacBook Neo is most obviously attractive. At its launch price, it competes directly with mid‑range Windows laptops that often sacrifice build quality, thermals, or battery life to hit a number on the sticker. The Neo offers a brighter display, premium build, and Apple Silicon performance in that same price band, which can translate into less time fighting sluggish hardware and more time focused on client work. For a lawyer with limited to moderate tech skills, that smoother baseline experience can reduce friction, support better document handling, and lower the odds of user‑induced system instability. 🚀

Can attorneys juggle a MacBook Neo, their firm’s SaaS tools, and their ethical duties?

Mid‑term costs—three to five years—are where Apple’s supply chain and design decisions become relevant. Industry reports suggest that rising memory and CPU costs could force many Windows laptop manufacturers to push prices up sharply, while Apple’s long‑term supplier agreements help buffer its MacBooks from the worst of these increases. At the same time, the Neo introduces a more modular, repair‑friendly design than previous MacBooks, with lower out‑of‑warranty battery replacement costs, making mid‑life repairs less painful. For a law firm budgeting over the life of a device, this combination of more stable pricing and more manageable repair costs can make the total cost of ownership more predictable than a similarly priced Windows machine that may face steeper price hikes or cheaper construction.
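
To see why that predictability matters, it helps to run the numbers yourself. Here is a back‑of‑the‑envelope sketch; every figure in it is a hypothetical placeholder rather than a quote from Apple or any OEM, so substitute your own purchase prices, support costs, repair estimates, and resale assumptions:

```python
# Hedged total-cost-of-ownership comparison over a four-year refresh cycle.
# All numbers are hypothetical placeholders -- plug in your own figures.

def tco(purchase: float, yearly_support: float, midlife_repair: float,
        resale: float, years: int = 4) -> float:
    """Lifetime cost = purchase + support over the cycle + one mid-life repair - resale."""
    return purchase + yearly_support * years + midlife_repair - resale

neo = tco(purchase=599, yearly_support=50, midlife_repair=99, resale=150)
pc = tco(purchase=649, yearly_support=75, midlife_repair=180, resale=75)
print(f"Hypothetical 4-year TCO -- MacBook Neo: ${neo:,.0f} | mid-range Windows: ${pc:,.0f}")
```

The point is not the specific totals; it is the exercise. A firm that models all four inputs, rather than the sticker price alone, makes a far more defensible purchasing decision.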

Long‑term expenses involve more than just hardware. You must consider training, support, integration, and the risk of vendor lock‑in or disruptive platform changes. The Neo ties you more deeply into the macOS ecosystem, which can be a strength if you commit to it, but may introduce friction in a mixed Windows–Mac environment. On the Windows side, there are signs that Microsoft may move more aggressively toward subscription‑driven Windows licensing, especially for Pro editions, which could affect firms that rely heavily on Windows‑specific features. Lawyers already shoulder subscriptions for research services, practice management, and office suites, so a shift toward OS‑level subscription pricing could make the Mac’s relatively stable OS model more attractive over time.

From an ethical perspective, the operating system decision intersects directly with data security and confidentiality. ABA technology‑competence guidance stresses that lawyers must understand the risks of the tools they use, including operating systems, cloud storage, and third‑party services. macOS offers strong sandboxing, disk encryption, and built‑in security protections, but Windows has mature security controls as well, especially in managed environments. The real question is whether, given your own tech comfort level, you can configure and maintain a secure environment more reliably on Windows or on macOS. For many small firms without dedicated IT, the Neo’s controlled hardware–software stack may reduce complexity and thereby reduce risk. (One added, but separate, benefit is the option to purchase AppleCare, Apple’s well‑regarded extended warranty program, which can alleviate some concerns about future repairs.)
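
If you want to confirm that security baseline on your own Mac, the sketch below shells out to three of macOS’s built‑in, read‑only status tools (fdesetup, spctl, and csrutil all ship with macOS, though their output can vary slightly by OS release; the Python wrapper is mine). It reports posture and changes nothing:

```python
import subprocess

def status(cmd: list[str]) -> str:
    """Run a read-only macOS status command and return whatever it prints."""
    try:
        result = subprocess.run(cmd, capture_output=True, text=True, timeout=15)
        return (result.stdout or result.stderr).strip()
    except FileNotFoundError:
        return "command not found (is this macOS?)"

# FileVault full-disk encryption -- expect "FileVault is On."
print("Disk encryption:", status(["fdesetup", "status"]))
# Gatekeeper code-signing assessments -- expect "assessments enabled"
print("Gatekeeper:", status(["spctl", "--status"]))
# System Integrity Protection -- expect "enabled"
print("SIP:", status(["csrutil", "status"]))
```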

Still, the Neo is not a universal solution. If you are a litigator embedded in a court system that mandates Windows‑only e‑filing tools, if your firm uses an on‑prem Windows server that depends on Windows‑only integrations, or if you rely on specialized Windows‑only deposition or trial software, you will either need to keep a Windows machine in parallel or stay with Windows as your primary platform. Under Model Rule 1.1, knowingly moving to a platform that breaks critical parts of your workflow without a realistic workaround would raise competence concerns. In that sense, the Neo’s OS limitations force you to map your actual workflow—software, integrations, court requirements—rather than treating this as a purely personal preference decision.

Can a lawyer leverage a MacBook Neo and cloud platforms for secure practice?

So does the MacBook Neo qualify as a true “game changer” for lawyers sitting on the Windows‑to‑Mac fence? For a large subset of practitioners—especially solos and small firms who primarily use browser‑based SaaS tools, Microsoft 365, PDF software, and mainstream practice management platforms—the answer is increasingly yes. ✅ The Neo dramatically lowers the entry cost of joining the Mac ecosystem while offering a stable supply‑chain story and credible mid‑term repairability, all within a security model that can satisfy ABA technology‑competence expectations when used thoughtfully.

For others—those deeply tied to legacy Windows software or court‑mandated tools—the Neo may be more of a secondary device than a replacement. But even in those cases, its presence will pressure Windows OEMs to improve build quality, pricing transparency, and long‑term value, which benefits the legal profession regardless of which platform individual lawyers choose. In short, the MacBook Neo is less about abandoning Windows and more about forcing every lawyer to ask a more sophisticated, ethics‑aware question: which platform—Windows, Mac, or a hybrid—best supports competent, secure, and sustainable representation for my clients in the decade ahead?

MTC

MTC: Are Lawyers Really Ready for a Wallet‑Free Future? Digital Wallets, ABA Ethics, and the Reality of Going Fully Cashless 💳⚖️

Tech-savvy lawyers should not leave their physical wallets at home, but they can probably pare them down some.

When previous podcast guest David Sparks over at MacSparky shared his recent post about accidentally going out without his physical wallet—and still making it through the day just fine on his iPhone and Apple Wallet—it captured a quiet shift many of us in the legal profession are grappling with. He walked into his appointment armed only with a digital ID, digital insurance card, and Apple Pay, and everything worked. For a growing number of professionals, that is the new normal. The question for lawyers is more specific: not can we go wallet‑free, but should we—ethically, practically, and professionally—given our obligations under the ABA Model Rules?

Digital wallets are no longer niche tools reserved for tech enthusiasts. Apple Wallet and similar platforms have matured into robust ecosystems that can store payment cards, IDs, insurance cards, transit passes, and even car keys. They sit at the intersection of convenience, security, and risk. As attorneys, we have to examine that intersection with greater rigor than the average consumer, because our technology choices are framed by duties of competence, confidentiality, and client service.

The promise of a wallet‑free practice

On paper, the case for a full digital wallet is compelling. Digital payments can reduce friction at the courthouse café, client lunches, and bar events. Digital IDs eliminate worries about misplacing a physical card. Many platforms add layers of biometric security that traditional wallets can’t match. David notes that Apple Wallet has “been quietly getting better for years,” allowing storage of physical card numbers behind Face ID and making peer‑to‑peer payments a tap away. For a solo or small‑firm lawyer, that friction reduction compounds over time into real efficiency.

From a malpractice‑avoidance standpoint, a digital wallet can be safer than a billfold. Losing a traditional wallet means scrambling to cancel credit cards, monitoring for identity theft, and possibly dealing with unauthorized use of your bar ID or access cards. A lost phone, by contrast, can be located, remotely wiped, or locked with strong authentication. Properly configured, it can reduce risk rather than increase it.

This is where ABA Model Rule 1.1 on competence, particularly Comment 8, becomes relevant. The Comment notes that competent representation includes understanding “the benefits and risks associated with relevant technology.” A digital wallet is very much “relevant technology” for a modern practitioner. Choosing not to understand or use it, especially when it offers better security and traceability than analog methods, may itself become a competence question as the bar’s expectations evolve.

The gaps: cash, IDs, and access to justice

There are plenty of reasons not to go “cashless” when leaving home or the office.

Still, David’s hesitation—“there’s a part of me that still feels compelled to carry a small wallet with my driver’s license in it”—should resonate with lawyers. There are pockets of our professional lives where the ecosystem is not ready, and those pockets matter.

First, cash. Many lawyers still tip courthouse staff, parking attendants, baristas near the courthouse, and others in cash—including, in my case, using $2 bills (yes, they are still produced, still accepted, and can be obtained at many banks across the U.S. [At least as of the time of this posting]. I almost always get an excited smile when I tip my barista for his/her work with a $2 bill). Cash remains the lowest‑friction, most universally accepted “protocol” for small-scale human interactions. Refusing to carry any cash at all can put you in awkward social and professional situations, especially in older courthouses or local establishments that either do not take cards or resent micro‑transactions by card. For those committed to cash tipping as a personal or professional habit, a purely digital wallet is not yet a substitute.

Second, physical IDs. While TSA and some states are piloting and accepting digital IDs, acceptance is not universal, and the rules are in flux. David notes he has a state digital ID that “shows up nicely” in Apple Wallet. That is great—until you encounter an agency, judge, clerk, or officer who simply will not accept it. Not all jurisdictions recognize mobile driver’s licenses or digital IDs, and some procedures (e.g., certain filings or in‑person notarizations) still presume a physical, inspectable card. The risk is not hypothetical: show up with the wrong form of ID for a flight or a court security checkpoint, and you may face delay, additional fees, or outright denial of entry.

FROM TSA WEBSITE - “If you are unable to provide the required acceptable ID, such as a passport or REAL ID, you can pay a $45 fee to use TSA ConfirmID. TSA will then attempt to verify your identity so you can go through security; however, there is no guarantee TSA can do so.” ✈️ 🌎 ‼️

For lawyers, this is not just an inconvenience—it is a competence and diligence issue under Model Rules 1.1 and 1.3. If your failure to carry an accepted ID means you miss a hearing, delay a filing, or cannot visit a client, you have a professional problem, not just a tech annoyance. Likewise, local court rules and security policies may require a specific bar card or government‑issued ID to enter restricted areas. A digital ID on your phone will not help if the sheriff’s deputy at the door has not been trained or authorized to accept it.

Third, connectivity. A digital wallet that is fully dependent on live internet access is a fragile tool in old courthouses with thick stone walls, in rural jurisdictions, or during emergencies. Many modern digital wallets do allow offline transactions at NFC terminals using stored tokens, but not all. If your payment method, ID, or membership pass depends on a cloud verification step and you are in a dead zone—or your battery dies—you effectively have no wallet. Lawyers who rely on public transit, rideshares, or mobile office setups need to consider this in contingency planning, particularly when punctuality is essential.

Digital wallets and legal ethics

From an ethics perspective, digital wallets intersect with several core duties.

Under Model Rule 1.6, protecting client confidentiality extends to how you pay for and manage client‑related expenses. If you are using peer‑to‑peer payment apps or storing client‑related account details in a digital wallet, you must understand their privacy and data‑sharing practices. Some services expose transaction histories, social feeds, or metadata that could inadvertently reveal client relationships or matter details. Configuring strict privacy settings and separating personal from firm accounts is not optional; it is part of your duty of confidentiality.

Model Rule 1.15 on safekeeping property also comes into play if you ever use digital tools to handle client funds, reimbursements, or settlement distributions. While most bars still require traditional trust accounts and closely regulate payment processors, the trend toward digital payments will continue. Using any digital payment or wallet solution around client funds requires careful vetting, written policies, and—ideally—consultation with your malpractice carrier and bar ethics guidance.

Finally, Model Rule 5.3 on responsibilities regarding nonlawyer assistance extends to IT providers and wallet platforms. If your firm relies on third‑party providers to manage mobile device management (MDM), security, or payment integrations, you must make reasonable efforts to ensure their conduct aligns with your professional obligations. Managing digital wallets on firm‑owned or BYOD devices should be governed by a clear policy that addresses encryption, remote wipe, lock‑screen settings, and acceptable use.

Practical guidance: a hybrid, not a cliff

As advanced as our digital wallets are, legal professionals should carry a combination of digital and physical identification, means of payment, and cash!

Given these realities, are we “truly there” yet for lawyers to go fully wallet‑free? Not quite. For most practitioners, the prudent path is a hybrid approach:

  • Carry a slim physical wallet with a government‑issued ID, bar card (if used locally), a minimal backup payment card, and a small amount of cash for tipping and edge cases.

  • Use a digital wallet as your primary payment and convenience layer, especially in environments where it is well‑supported and secure.

  • Confirm, in advance, what IDs your courthouse, correctional facilities, and agencies accept, and do not assume your digital ID will suffice.

  • Harden your digital wallet: enable strong biometrics, ensure a reputable MDM or security solution manages any firm devices, and separate personal from professional payment flows where possible (a rough sketch of such a baseline follows this list).
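
As a loose illustration of what that hardening baseline might look like on paper, here is a hypothetical policy expressed as a Python dict with a trivial audit helper. The keys are invented for illustration and do not correspond to any MDM vendor’s actual schema:

```python
# Hypothetical firm baseline for any device that carries a digital wallet.
# Keys are illustrative only -- map them to your MDM vendor's real settings.
WALLET_DEVICE_BASELINE = {
    "biometric_or_strong_passcode": True,
    "auto_lock_enabled": True,
    "remote_wipe_enabled": True,
    "device_encryption_enabled": True,
    "firm_personal_payment_profiles_separated": True,
}

def audit(device: dict) -> list[str]:
    """Return the baseline settings a device fails to satisfy."""
    return [key for key, required in WALLET_DEVICE_BASELINE.items()
            if device.get(key) is not required]

# Example: a device with remote wipe disabled fails exactly one check.
print(audit({**WALLET_DEVICE_BASELINE, "remote_wipe_enabled": False}))
```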

This hybrid approach aligns with Model Rule 1.1’s requirement to understand and responsibly adopt relevant technology while honoring the practical demands of courtroom work and client service. It allows you to benefit from the security and efficiency of digital wallets without betting your professional obligations on the most fragile parts of the ecosystem: universal acceptance and ubiquitous connectivity.

David ends his reflection by asking whether he will ever “truly go out knowingly wallet‑free” and whether he is alone in his hesitation. Lawyers should feel no pressure to be first in line to abandon physical wallets entirely. Our job is to advocate, counsel, and appear—on time, properly identified, and fully prepared. That may mean, for the foreseeable future, living comfortably in both worlds: with a well‑tuned digital wallet in your hand and a minimal, carefully curated physical wallet in your pocket.

MTC

MTC: Even Though AI Hallucinations Are Down: Lawyers STILL MUST Verify AI, Guard PII, and Follow ABA Ethics Rules ⚖️🤖

A Tech-Savvy Lawyer MUST REVIEW AI-Generated Legal Documents

AI hallucinations are reportedly down across many domains. Still, previous podcast guest Dorna Moini is right to warn that legal remains the unnerving exception—and that is where our professional duties truly begin, not end. Her article, “AI hallucinations are down 96%. Legal is the exception,” helpfully shifts the conversation from “AI is bad at law” to “lawyers must change how they use AI,” yet from the perspective of ethics and risk management, we need to push her three recommendations much further. This is not only a product‑design problem; it is a competence, confidentiality, and candor problem under the ABA Model Rules. ⚖️🤖

Her first point—“give AI your actual documents”—is directionally sound. When we anchor AI in contracts, playbooks, and internal standards, we move from free‑floating prediction to something closer to reading comprehension, and hallucinations usually fall. That is a genuine improvement, and Moini is right to emphasize it. But as soon as we start uploading real matter files, we are squarely inside Model Rule 1.6 territory: confidential information, privileged communications, trade secrets, and dense pockets of personally identifiable information. The article treats document‑grounding primarily as an accuracy-and-reliability upgrade, but lawyers and the legal profession must insist that it is first and foremost a data‑governance decision.

Before a single contract is uploaded, a lawyer must know where that data is stored, who can access it, how long it is retained, whether it is used to train shared models, and whether any cross‑border transfers could complicate privilege or regulatory compliance. That analysis should involve not just IT, but also risk management and, in many cases, outside vendors. “Give AI your actual documents” is safe only if your chosen platform offers strict access controls, clear no‑training guarantees, encryption in transit and at rest, and, ideally, firm‑controlled or on‑premise storage. Otherwise, you may be trading a marginal reduction in hallucinations for a major confidentiality incident or regulatory investigation. In other words, feeding AI your documents can be a smart move, but only after you read the terms, negotiate the data protection, and strip or tokenize unnecessary PII. 🔐
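
To make that last step concrete, here is a minimal sketch of regex‑based PII scrubbing in Python. The patterns are illustrative only, covering a few obvious U.S. formats; names, addresses, and account numbers will slip past simple pattern matching, so a human still reviews every document before upload:

```python
import re

# Illustrative only: a few common U.S. PII patterns. Real redaction needs far
# broader coverage (names, addresses, account numbers) plus human review.
PII_PATTERNS = {
    "SSN": re.compile(r"\b\d{3}-\d{2}-\d{4}\b"),
    "EMAIL": re.compile(r"\b[\w.+-]+@[\w-]+\.[\w.]+\b"),
    "PHONE": re.compile(r"\(?\d{3}\)?[ .-]?\d{3}[ .-]?\d{4}\b"),
}

def redact(text: str) -> str:
    """Replace matched PII with labeled tokens so context survives for the model."""
    for label, pattern in PII_PATTERNS.items():
        text = pattern.sub(f"[{label}-REDACTED]", text)
    return text

sample = "Client Jane Roe, SSN 123-45-6789, reach at jroe@example.com or (555) 867-5309."
print(redact(sample))  # note that the client's name still gets through
```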

Lawyers need to monitor the data‑security and PII‑compliance policies of the AI platforms they use in their legal work.

Moini’s second point—“know which tasks your tool handles reliably”—is also excellent as far as it goes. Document‑grounded summarization, clause extraction, and playbook‑based redlines are indeed safer than open‑ended legal research, and she correctly notes that open‑ended research still demands heavy human verification. Reliability, however, cannot be left to vendor assurances, product marketing, or a single eye‑opening demo. For purposes of Model Rule 1.1 (competence) and 1.3 (diligence), the relevant question is not “Does this tool look impressive?” but “Have we independently tested it, in our own environment, on tasks that reflect our real matters?”

The corollary is that reliability has to be measured, not assumed. Firms should sandbox these tools on closed matters, compare AI outputs with known correct answers, and have experienced lawyers systematically review where the system fails. Certain categories of work—final cites in court filings, complex choice‑of‑law questions, nuanced procedural traps—should remain categorically off‑limits to unsupervised AI, because a hallucinated case there is not just an internal mistake; it can rise to misrepresentation to the court under Model Rule 3.3. Knowing what your tool does well is only half of the equation; you must also draw bright, documented lines around what it may never do without human review. 🧪
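
What might “measured, not assumed” look like in practice? Here is a minimal sketch, assuming a hypothetical ask_tool wrapper around whatever product you are piloting and a set of closed‑matter prompts whose correct answers your lawyers already know:

```python
from typing import Callable

def reliability_check(
    cases: list[tuple[str, str]],     # (closed-matter prompt, known correct answer)
    ask_tool: Callable[[str], str],   # hypothetical wrapper around the AI tool under test
) -> float:
    """Return the fraction of outputs that contain the known answer.

    String matching only catches gross failures (an invented clause, a
    missing holding); experienced lawyers still review the misses."""
    hits, failures = 0, []
    for prompt, expected in cases:
        answer = ask_tool(prompt)
        if expected.lower() in answer.lower():
            hits += 1
        else:
            failures.append((prompt, answer))
    for prompt, answer in failures:
        print(f"REVIEW NEEDED: {prompt!r} -> {answer!r}")
    return hits / len(cases) if cases else 0.0
```

Even a harness this crude produces exactly the artifact supervising attorneys need: a concrete list of where the tool failed on your own matters, before it ever touches a live one.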

Her third point—“build verification into the workflow”—is where the article most clearly aligns with emerging ethics guidance from courts and bars, and it deserves strong validation. Judges are already sanctioning lawyers who submit AI‑fabricated authorities, and bar regulators are openly signaling that “the AI did it” will not excuse a lack of diligence. Verification, though, cannot remain an informal suggestion reserved for conscientious partners. It has to become a systematic, auditable process that satisfies the supervisory expectations in Model Rules 5.1 and 5.3.

That means written policies, checklists, training sessions, and oversight. Associates and staff should receive simple, non‑negotiable rules:

✅ Every citation generated with AI must be independently confirmed in a trusted legal research system;

✅ Every quoted passage must be checked against the original source; 

✅ Every factual assertion must be tied back to the record.

Supervising attorneys must periodically spot‑check AI‑assisted work for compliance with those rules. Moini is right that verification matters; the editorial extension is that verification must be embedded into the culture and procedures of the firm. It should be as routine as a conflict check.
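
One way to operationalize the first rule above is to mechanically pull every citation‑like string out of a draft and hand the list to a human verifier. The sketch below uses a deliberately crude pattern; real reporter formats vary enormously, so treat it as a worksheet generator, never a clearance tool:

```python
import re

# Rough approximation of reporter-style citations (e.g., "410 U.S. 113" or
# "598 F.3d 1336"). Misses are expected; this errs toward human review.
CITATION = re.compile(r"\b\d{1,4}\s+[A-Z][A-Za-z0-9.]*\s+\d{1,5}\b")

def verification_worksheet(draft: str) -> list[str]:
    """List every citation-like string so each can be confirmed, by a person,
    in a trusted legal research system before anything is filed."""
    return sorted(set(CITATION.findall(draft)))

draft = "Plaintiff relies on Roe v. Wade, 410 U.S. 113 (1973), and 598 F.3d 1336."
for cite in verification_worksheet(draft):
    print("[ ] confirm in research system:", cite)
```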

Stepping back from her three‑point framework, the broader thesis—that legal hallucinations can be tamed by better tooling and smarter usage—is persuasive, but incomplete. Even as hallucination rates fall, our exposure is rising because more lawyers are quietly experimenting with AI on live matters. Model Rule 1.4 on communication reminds us that, in some contexts, clients may be entitled to know when significant aspects of their work product are generated or heavily assisted by AI, especially when it impacts cost, speed, or risk. Model Rule 1.2 on scope of representation looms in the background as we redesign workflows: shifting routine drafting to machines does not narrow the lawyer’s ultimate responsibility for the outcome.

Attorneys must verify AI‑generated case law.

For practitioners with limited to moderate technology skills, the practical takeaway should be both empowering and sobering. Moini’s article offers a pragmatic starting structure—ground AI in your documents, match tasks to tools, and verify diligently. But you must layer ABA‑informed safeguards on top: treat every AI term of service as a potential ethics document; never drop client names, medical histories, addresses, Social Security numbers, or other PII into systems whose data‑handling you do not fully understand; and assume that regulators may someday scrutinize how your firm uses AI. Every AI‑assisted output must be reviewed line by line.

Legal AI is no longer optional, yet ethics and PII protection are not. The right stance is both appreciative and skeptical: appreciative of Moini’s clear, practitioner‑friendly guidance, and skeptical enough to insist that we overlay her three points with robust, documented safeguards rooted in the ABA Model Rules. Use AI, ground it in your documents, and choose tasks wisely—but do so as a lawyer first and a technologist second. Above all, review your work, stay relentlessly wary of the terms that govern your tools, and treat PII and client confidences as if a bar investigator were reading over your shoulder. In this era, one might be. ⚖️🤖🔐

MTC