MTC: Should Lawyers Host Their Own AI (or Hybrid AI)?

Lawyers must weigh self-hosting AI against their ABA ethics obligations in modern practice.

Lawyers are being pushed to decide whether to host their own artificial intelligence systems, rely entirely on cloud tools, or adopt a hybrid model that uses both local and cloud-based AI.🌐 At the same time, the American Bar Association’s Formal Opinion 512 makes clear that AI use sits squarely inside existing duties of competence, confidentiality, communication, candor, supervision, and fees under the Model Rules of Professional Conduct.

Perplexity’s new “Personal Computer” platform is a vivid example of how this can work in practice: it can run as an always‑on AI agent on a Mac mini, with access to local files, native apps, and cloud models, effectively turning a spare Mac into a dedicated digital worker. For lawyers, that kind of setup is appealing because a Mac mini can sit in the office as a sandboxed machine, disconnected from the main network and primary cloud file storage, to tightly control what AI can see and where client data goes.🧱

Why Lawyers Are Tempted to Host Their Own or Hybrid AI

There are several practical reasons lawyers and law firms are looking at running AI locally, or in a hybrid configuration that blends on‑premise and cloud tools:

  • Control over client data. Running AI on a dedicated Mac mini or similar device gives the firm direct control over where data is stored, which apps it can touch, and whether it ever leaves the office environment.

  • 24/7 “digital worker.” Platforms like Perplexity’s Personal Computer can operate continuously, orchestrating multiple models, moving between local files and the web, and even continuing work that you start on your phone while you are away.⚙️

  • Integration with local files and apps. A local or hybrid agent can read your document management folders, draft or revise motions in your word processor, and compare local files with online sources without sending entire client datasets to a general‑purpose cloud chatbot.

  • Potential cost and performance benefits. For some workflows, once the hardware is in place, local or hybrid AI can be more predictable in cost and latency than pure pay‑per‑token cloud services, especially when workloads are steady and repetitive.💸

From an ethics standpoint, these benefits map directly onto Model Rule 1.1’s requirement that lawyers maintain technological competence, which now includes a duty to understand both the capabilities and the limitations of AI tools they deploy in practice. If you can explain how your on‑premise or hybrid AI is configured, what data it sees, and why you chose that architecture, you are already moving toward satisfying that duty of competence in your technology choices.

ABA Model Rules: Key Considerations for Self‑Hosted and Hybrid AI

The ABA’s Formal Opinion 512 does not mandate or prohibit self‑hosting, but it does identify core ethical duties that must guide any AI deployment. For lawyers thinking about a sandboxed computer or hybrid AI, several Model Rules are especially important:

  • Model Rule 1.1 (Competence). You must understand enough about the AI system—local or cloud—to evaluate its reliability, security, and appropriate use, including risks like hallucinations, outdated information, and bias.

  • Model Rule 1.4 (Communication). In many situations, you may need to tell clients that you are using generative AI—and how—so they can make informed decisions about the representation.

  • Model Rule 1.5 (Fees). If you bill for AI‑assisted work, your fees still must be reasonable; you cannot simply pass through AI costs without regard to value, and you cannot charge as if the work were done entirely by hand.

  • Model Rule 1.6 (Confidentiality). Client information must be protected whether it is processed on‑premise or in the cloud, which means assessing encryption, access controls, logging, and whether AI vendors can use your data to train their models.

  • Model Rules 3.3 and 4.1 (Candor). You must not present AI‑generated work product that you have not verified, and you must correct any false or misleading statements to tribunals or others if AI contributes to those errors. 

  • Model Rules 5.1 and 5.3 (Supervision). Partners and managing lawyers must implement reasonable policies, training, and oversight to ensure that both lawyers and non‑lawyer staff use AI tools in compliance with ethical obligations. 

Formal Opinion 512 underscores that using generative AI does not reduce any of these obligations; rather, it adds new vectors for potential violations, including inadvertent disclosure through “self‑learning” tools that retain prompts to improve their models. A self‑hosted or sandboxed system can reduce some of these risks but does not eliminate the need for careful configuration, testing, and ongoing oversight.🔍

The Case for a Sandboxed Mac Mini or Similar Setup

Attorneys can test sandboxed computers for ABA-compliant, secure AI workflows.

A compelling middle road is to run your AI assistant as an always‑on agent on a dedicated, sandboxed machine—such as a Mac mini—segregated from your primary network and cloud storage, and then carefully curate what you allow it to access. Perplexity’s Personal Computer is designed to run 24/7 on a Mac mini, with secure sandboxed file creation, visible actions, and a kill switch, which can help align AI use with ethical expectations of control and auditability.🧑‍💻

For law practices with limited to moderate technology skills, this architecture offers practical advantages:

  • You can keep the AI’s working directory separate from your main document management system, copying in only those files you want it to analyze.

  • You can disconnect the sandbox machine from your firm’s primary VPN and file‑syncing tools, reducing the attack surface for client data.💽

  • You can log and periodically review what the AI agent is doing—what files it opens, what tasks it runs—to support your supervisory duties under Rules 5.1 and 5.3.
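The separation and logging steps above can be partially automated with a short script. The sketch below is a minimal, hypothetical example — the folder names, the allow‑list of file types, and the log location are all assumptions, not part of any vendor's product. It copies only approved file types from a hand‑curated export folder into the sandbox working directory and appends a hash‑stamped audit entry for each transfer, which you can review periodically to support Rule 5.1 and 5.3 oversight.

```python
import hashlib
import json
import shutil
from datetime import datetime, timezone
from pathlib import Path

def stage_files(source_dir: Path, sandbox_dir: Path, audit_log: Path,
                allowed_suffixes=(".pdf", ".docx", ".txt")) -> list:
    """Copy approved file types into the sandbox; log each transfer for review."""
    sandbox_dir.mkdir(parents=True, exist_ok=True)
    staged = []
    with audit_log.open("a") as log:
        for src in sorted(source_dir.iterdir()):
            if src.suffix.lower() not in allowed_suffixes:
                continue  # anything outside the allow-list never reaches the AI agent
            dest = sandbox_dir / src.name
            shutil.copy2(src, dest)  # copy2 preserves timestamps for later auditing
            log.write(json.dumps({
                "time": datetime.now(timezone.utc).isoformat(),
                "file": src.name,
                "sha256": hashlib.sha256(dest.read_bytes()).hexdigest(),
            }) + "\n")
            staged.append(src.name)
    return staged
```

Running a script like this on a schedule, and skimming the audit log at the same time, turns "periodically review what the AI agent is doing" into a concrete habit rather than an aspiration.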

Because Perplexity’s Personal Computer can orchestrate teams of models and interact with local files and cloud services in one system, it embodies the hybrid AI idea: use local control for sensitive matters, and selectively rely on cloud models for broader research or drafting where appropriate safeguards are in place. That kind of hybrid strategy aligns well with the ABA’s focus on risk‑based analysis rather than a one‑size‑fits‑all prohibition.⚖️

Why Some Lawyers Should Not Host Their Own AI (At Least Not Yet)

Self‑hosting or running a hybrid computer‑based AI platform is not the right answer for every firm, and in some practices, it may actually increase risk. If your firm cannot realistically manage updates, patches, access controls, and backups for a dedicated AI machine, a reputable cloud provider with strong security and clear contractual commitments may be a safer option. Many lawyers underestimate the work required to securely configure and maintain specialized systems, which can lead to misconfigurations that expose confidential information or disable audit logs you may need for internal investigations or regulatory inquiries.

There is also a risk of overconfidence: having an AI agent running on your own hardware can create a false sense that everything processed on that machine is automatically safe and ethically sound.😬 Formal Opinion 512 warns that self‑learning AI tools can leak information across matters, even within a single firm, if they are not properly isolated; that risk exists whether the system runs on your computer or in the cloud. For many small firms and solos, the most ethical and efficient path may be to use vetted, well‑documented cloud AI tools under strict internal policies rather than trying to build and secure a home‑grown AI infrastructure.

Finally, if you lack even moderate technology literacy, jumping straight to a self‑hosted AI environment can distract from more foundational tasks like implementing a written AI policy, training staff on prompt hygiene, and integrating AI use into your conflict checks and quality control processes. In those cases, simpler deployments—such as using browser‑based AI tools with no client identifiers and careful manual review—can be more defensible under the Model Rules.

Practical Takeaways for Ethics‑Focused AI Adoption

An ethics-focused lawyer can consider using hybrid AI under the ABA Model Rules.

For lawyers and firms considering self‑hosted or hybrid AI, several practical steps emerge from the ABA guidance and from the new generation of self‑hosted AI platforms:

  • Start with a written AI policy that maps to Model Rules 1.1, 1.4, 1.5, 1.6, 3.3, 4.1, 5.1, and 5.3 and distinguishes between internal experimentation and client‑facing use.

  • If you deploy a sandboxed Mac mini or similar, define precisely which files and apps it may access, how it will be backed up, and who has administrative control.🔐

  • Treat AI outputs as drafts that require human review, not as final work product, and document your review in a way that aligns with your quality‑control procedures.

  • Train all users—not just IT—on how the Personal Computer or other AI system operates, what logs are available, and how to shut it down if it behaves unexpectedly.

  • Revisit your configuration and vendor contracts regularly, including any terms about data retention, training, and breach notification, to ensure ongoing compliance with revised ethics guidance and state‑level opinions.📜

In that light, the question is not whether lawyers should or should not host their own AI, but whether they can do so in a way that satisfies the ABA’s expectations for competence, confidentiality, and supervision while delivering real value to clients. For some, a carefully configured sandboxed Mac mini running a hybrid AI agent will be a powerful, ethical accelerator; for others, the more responsible choice is to rely on well‑governed cloud tools until their internal capabilities catch up.

MTC

TSL Labs 🧪 Bonus: Deep Dive on our April 27, 2026, Editorial, MTC: Smart Recording, Client Secrets, and HeyPocket: What Every Lawyer Needs to Know in 2026 📱⚖️

📌 Too Busy to Read This Week’s Editorial?

Join us for an AI-powered deep dive into the ethical challenges facing legal professionals in the age of generative AI. 🤖 In this episode, we unpack how AI note takers and “always-listening” devices can quietly route client secrets to third-party vendors, why that matters under the ABA Model Rules, and how a 2026 federal decision out of the Southern District of New York turned one defendant’s AI chats into discoverable evidence. Whether you are a solo practitioner, in-house counsel, or a tech-curious professional in another field, this conversation will help you balance convenience with confidentiality and avoid turning your favorite AI assistant into your biggest evidentiary risk.

👉 Before your next client meeting, listen to this episode, check out our editorial, and run your current AI tools through the checklist we outline—then subscribe and share with a colleague who is still “just trusting the app.” 🎧

In our conversation, we cover the following:

  • 00:00 – The “ambient microphone” problem: phones, smart speakers, wearables, and connected cars as a continuous surveillance layer around client conversations.

  • 01:00 – How technology competence has shifted from locking file cabinets to understanding data custody, cloud routing, and API-driven services.

  • 02:30 – What makes AI note takers like HeyPocket different from passive telemetry and why capturing the spoken “payload” changes the threat model.

  • 04:00 – The invisible “third party in the room”: routing privileged audio through external AI models and the malpractice risk of default “Allow” clicks.

  • 05:30 – Applying ABA Model Rules 1.1 and 1.6 to AI workflows: competence, confidentiality, and “reasonable efforts” in a world of automated transcription.

  • 07:00 – Risk-based analysis from ABA Formal Opinions 477R and 498: weighing sensitivity, likelihood of disclosure, and available safeguards before using AI.

  • 08:30 – Why secretly recording clients or opponents with AI tools can implicate Rule 8.4(c), even in one‑party consent jurisdictions.

  • 10:00 – Inside United States v. Heppner (S.D.N.Y. 2026): how public generative AI platforms destroyed privilege and work-product protections for a criminal defendant.

  • 12:00 – How AI training and tokenization work, and why “military‑grade encryption” does not save privilege if terms of service allow internal data use.

  • 14:00 – Treating every AI note taker like an outsourced e‑discovery vendor: NDAs, retention policies, security audits, and data destruction timelines.

  • 16:00 – Practical minimization strategies: defaulting to no recording, segmenting AI-generated content by matter, and restricting access via role‑based controls.

  • 17:30 – Establishing bright-line “no‑AI” categories (criminal defense, internal investigations, sensitive family/immigration, high‑value trade secrets).

  • 18:30 – Counseling clients not to “prep their case” with public chatbots after Heppner and why this is now part of competent representation.

  • 19:30 – Building a simple vendor-vetting checklist for law firms and professional practices adopting AI note takers.

  • 20:00 – Looking ahead: when failure to use secure, vetted AI may itself become a competence issue due to inefficiency and overbilling.

  • 21:00 – Rethinking privilege in a world where an algorithmic “third party” is always in the room and devices are never truly off.

RESOURCES

Mentioned in the episode

MTC: Smart Recording, Client Secrets, and HeyPocket: What Every Lawyer Needs to Know in 2026 📱⚖️

Your smartphone and AI note‑taking tools now sit in on more client conversations than many junior associates.📱 They track where you are, who you talk to, and—if you let them—what you and your clients say in real time. For lawyers, that convenience comes with concrete privilege, confidentiality, and compliance risks that cannot be ignored.⚖️

Smart Devices, AI Note‑Takers, and Constant Surveillance 📍

Modern smart devices already log GPS coordinates, Wi‑Fi networks, Bluetooth connections, and app activity, creating a rich behavioral profile of you and your clients. Smart speakers and voice assistants listen for wake words, but they sometimes capture snippets of nearby conversations and send them to remote servers for processing. Fitness wearables, in‑car systems, and “always‑on” microphones further increase the volume of ambient data that can be collected.

Against that background, AI‑enabled recorders and summarizers like HeyPocket add a new layer: deliberate recording, transcription, and AI analysis of your conversations. HeyPocket is marketed as an AI‑powered “thought companion” and conversation recorder that creates searchable summaries and action items; by design it captures each conversation as its own object to improve clarity and support consent‑based use. For a busy lawyer, this is appealing—automatic notes, organized insights, and fewer missed follow‑ups.🤖

Yet the same capabilities that make HeyPocket useful also make it ethically sensitive. You are no longer just allowing your phone to passively log metadata; you are actively routing client speech through a third‑party AI stack that stores and processes that data, subject to its own privacy policy, security posture, and retention rules.

ABA Model Rules: Competence, Confidentiality, and Truthfulness ⚖️

The ABA Model Rules already give you a clear framework for evaluating whether and how to use tools like HeyPocket in practice.

  • Model Rule 1.1 (Competence) and Comment 8 require lawyers to understand “the benefits and risks associated with relevant technology.” In this context, “relevant technology” includes AI‑driven recorders, their data flows, and their vendor terms. Using a tool you do not understand can be a competence problem, not just a convenience choice.⚠️

  • Model Rule 1.6 (Confidentiality) requires “reasonable efforts” to prevent unauthorized access or disclosure of client information, which now includes avoiding casual sharing of contacts, calendars, and conversations with apps or cloud services that may let humans review or monetize the data. Several state bar opinions already warn that lawyers may not simply click “Allow” when apps request access to contacts or case‑related data unless they determine the information will not be viewed by humans or transferred without client consent.

  • ABA Formal Opinion 477R outlines a risk‑based analysis for electronic communications, asking you to weigh sensitivity, likelihood of disclosure, cost of safeguards, impact on representation, client expectations, and requests for enhanced security. That same method applies directly to AI recorders: you must ask whether routing privileged discussions through an AI vendor is “reasonable” given the stakes of the matter.

  • ABA Formal Opinion 498 specifically calls out always‑listening smart devices and recommends disabling them during client communications to avoid unnecessary exposure to third parties. If you would mute Alexa for an intake call, you should think even more carefully before inviting an AI recording service into the room.

Model Rules 5.1 and 5.3 (supervision of lawyers and non‑lawyer assistants) also matter. If you roll out AI note‑takers firmwide, you must implement policies, training, and oversight to ensure that lawyers, staff, and vendors handle client data consistently with confidentiality obligations. And Rule 8.4(c) (prohibition on dishonesty or deception) can be implicated if you secretly record clients, witnesses, or opposing parties even in one‑party consent jurisdictions; at least one ethics authority has treated undisclosed recordings as unethical despite being legal.

When AI Recordings and Smart Data Become Evidence 🧾

Courts have already embraced smart‑device data as evidence: location records, communication metadata, calendar entries, and app logs routinely appear in both criminal and civil litigation. Forensic tools can image a device and surface location histories, messages, and app‑generated artifacts that can reconstruct events with surprising precision.

AI tools are now entering that evidentiary picture. In United States v. Heppner (S.D.N.Y. 2026), a defendant’s use of a public AI platform to analyze his legal situation—and the documents he generated from those conversations—was held not to be protected by attorney‑client privilege or the work‑product doctrine. The court emphasized that the AI provider’s terms of service allowed collection and disclosure of prompts and outputs, so the defendant had no reasonable expectation of confidentiality.

The lesson for lawyers is direct: if you or your clients feed sensitive matter details into an AI recorder or note‑taker whose policies allow human review, secondary uses, or disclosure to third parties, privilege can be placed at risk. Vendor marketing language about security cannot substitute for a real review of actual terms, retention practices, and opt‑out mechanisms.

Using HeyPocket and Similar Tools Ethically in Practice 🎙️

Ethical use of HeyPocket and similar tools is possible, but it is not “plug‑and‑play.” You should treat these platforms more like outsourced e‑discovery vendors than like harmless productivity apps.✅

Key practical steps include:

  1. Perform a documented vendor risk review. Read the privacy policy and data‑processing terms to see what is recorded, how long it is stored, whether data is used to train models, and what rights you and your clients have to delete or export recordings. Confirm that access is logged and limited, and that data is encrypted in transit and at rest.

  2. Limit what you record. Default to not recording privileged conversations unless you have a clear, articulable reason, a defensible risk assessment, and—in higher‑risk matters—informed client consent. Use tools like HeyPocket in lower‑sensitivity contexts (internal debriefs, CLE notes, public presentations) rather than as an automatic recorder of all client meetings.

  3. Use explicit disclosures and consent. In many jurisdictions, recording requires the consent of all parties; even where only one‑party consent is required, an undisclosed recording can still trigger ethical concerns. A short, plain‑language explanation (“We use an AI note‑taking assistant that will record and transcribe this call; here is how we protect your information…”) respects client autonomy and supports informed consent under Model Rules 1.4 and 1.6.

  4. Segment data and control access. Configure firm accounts so that recordings are tied to matters, not to individuals’ personal devices wherever possible. Restrict who can review recordings and summaries, and enforce role‑based permissions consistent with Rules 5.1 and 5.3 obligations.

  5. Define bright‑line “no AI” categories. Certain matters—criminal defense, internal investigations, sensitive family or immigration cases, high‑value trade secret disputes—may warrant a categorical ban on AI recorders because the downside of any leak is catastrophic. Document these categories in your technology and confidentiality policies.

  6. Train your team and your clients. Explain to lawyers, staff, and key clients that not every AI interaction is confidential or privileged and that using consumer‑grade tools on their own may waive important protections. Encourage clients to avoid entering matter‑specific facts into public AI systems without discussing it with you first.
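Step 1’s “documented vendor risk review” can be made repeatable with even a trivial script. In this sketch, the checklist keys and the set of required items are illustrative assumptions a firm would tailor to its own policy, not an official ABA or industry standard:

```python
# Illustrative vendor-vetting checklist for an AI note-taker.
# The keys and the REQUIRED set are hypothetical; adapt them to your firm's policy.
REQUIRED = {
    "encrypts_in_transit_and_at_rest",
    "no_training_on_customer_data",
    "deletion_and_export_rights",
    "access_logging",
}
RECOMMENDED = {"independent_security_audit", "breach_notification_terms"}

def vet_vendor(answers: dict) -> tuple:
    """Return (passes, failures): a vendor fails if any REQUIRED item is False or missing."""
    failures = sorted(k for k in REQUIRED if not answers.get(k, False))
    return (not failures, failures)

# Example: a vendor whose terms allow training on customer data fails the review.
ok, problems = vet_vendor({
    "encrypts_in_transit_and_at_rest": True,
    "no_training_on_customer_data": False,
    "deletion_and_export_rights": True,
    "access_logging": True,
})
```

Keeping the completed answers alongside the matter or vendor file preserves the documentation trail if the choice of tool is ever questioned.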

Approached this way, a tool like HeyPocket can be used as a controlled, auditable note‑taking assistant rather than a stealth surveillance risk. The ethical question is not “AI recorder: yes or no?” but “Under what conditions, with what safeguards, and in which matters, if any, is this tool a reasonable choice?”

Technology Competence as a Continuous Obligation 🚀

Technology will only grow more invasive, more ambient, and more tightly integrated with everyday law practice.📈 ABA and state bar guidance increasingly treats technology competence as an ongoing duty, tied directly to confidentiality, supervision, and even malpractice exposure. Smart devices and AI platforms are not going away, so opting out entirely is rarely realistic.

For lawyers with limited to moderate technical skills, the path forward is practical: build a short, repeatable checklist for evaluating tools; lean on reputable vendors with clear, lawyer‑friendly terms; seek help from cybersecurity professionals when stakes are high; and treat client confidentiality as the non‑negotiable anchor for every technology decision. When you do that, you can leverage products like HeyPocket to improve focus and memory while still honoring the core promise that underlies every engagement letter: your client’s secrets stay safe.🔐

MTC

TSL LABS BONUS: Dynamic Random-Access Memory (DRAM): Why It Matters for Law Firm Performance and Data Security ⚖️💻

Join us for an AI-powered deep dive into the ethical challenges facing legal professionals in the age of generative AI. 🤖 In this episode, we break down our April 20, 2026, Tech‑Savvy Lawyer editorial on how a global DRAM shortage and AI data center demand are driving up PC prices, pushing many legal professionals toward Apple hardware, and redefining what technological competence really means. We explore how unified memory, on‑device AI, and long‑term support lifecycles are changing the Mac vs. Windows calculus, and why “cheap but weak” laptops may now create serious competence and confidentiality risks for your clients.

In our conversation, we cover the following:

  • 00:00 – Why upgrading your work laptop in 2026 feels like buying a luxury vehicle, not a routine office expense.

  • 00:45 – Setting the stage: a “seismic shift” in hardware pricing hitting professional industries, with a focus on the legal field.

  • 01:30 – Introducing Michael D.J. Eisenberg’s Tech‑Savvy Lawyer editorial and its core thesis about a tech hardware crisis.

  • 02:15 – The global DRAM crunch: how AI data centers are buying up memory like airlines hoard jet fuel, and why PC OEMs are getting squeezed.

  • 03:30 – Microsoft’s April 2026 Surface price hikes and the end of the “Windows is cheaper” assumption for law firms.

  • 05:15 – The “value inversion”: when high‑end Windows laptops now cost more than roughly comparable MacBooks.

  • 06:30 – Why this isn’t a normal tech price cycle and how it breaks 20 years of corporate IT purchasing assumptions.

  • 07:15 – Apple’s structural advantage: vertical integration, unified memory, and shielding itself from spot‑market DRAM volatility.

  • 08:30 – The M‑series (M5) advantage: performance per watt, thermal behavior, battery life, and running local AI plus heavy legal workloads.

  • 09:45 – Yes, Apple prices are rising too—why the relative “security‑to‑cost” and performance story still favors Macs for many professionals.

  • 10:45 – When “cheap but weak” hardware crosses the line: connecting underpowered laptops to ABA Model Rule 1.1 (competence) and Comment 8 on tech competence.

  • 12:00 – From annoyance to ethical exposure: how sluggish systems cripple eDiscovery, AI‑driven research, and document automation.

  • 13:00 – Why laptop purchasing is now core client‑service strategy, not just a back‑office procurement task.

  • 13:45 – On‑device vs. cloud AI: where computation happens, why that matters, and how it ties into ABA Model Rule 1.6 (confidentiality).

  • 14:30 – The role of Apple’s Neural Engine and local processing in reducing reliance on external AI APIs and third‑party servers.

  • 15:30 – Clarifying the security nuance: Windows is not inherently less secure, but comparable on‑device AI capability often costs more.

  • 16:30 – Redefining security in 2026: it’s not just antivirus and passwords; it’s where the AI thinking physically happens.

  • 17:15 – Building a documented purchase matrix: price, performance, storage, memory, security, lifecycle, and critical software compatibility.

  • 18:15 – When you can’t leave Windows: legacy legal software, state e‑filing systems, and the hidden costs of moving to macOS.

  • 19:00 – Survival strategies for Windows‑locked practices: non‑Surface OEMs, staggered refresh cycles, and buying fewer but higher‑quality machines.

  • 19:45 – Treating laptops as long‑term infrastructure instead of disposable commodities.

  • 20:15 – Big‑picture recap: DRAM shortages, unified memory, ethical duties, and shifting hardware norms in law practice.

  • 20:45 – The closing question: will AI‑driven hardware requirements quietly raise the price of access to justice?

RESOURCES

Mentioned in the episode

Hardware mentioned in the conversation

Software & Cloud Services mentioned in the conversation

If you want your next laptop purchase to strengthen—not weaken—your ethical obligations, client security, and AI‑powered workflows, hit play now and learn how to build a smarter, future‑proof hardware strategy. 🎧💡

Dynamic Random-Access Memory (DRAM): Why It Matters for Law Firm Performance and Data Security ⚖️💻

DRAM powers smoother multitasking for faster legal research, drafting, and case management.

Dynamic Random-Access Memory (DRAM, aka “RAM”) is the short-term memory your computer uses to run active tasks. It holds data that your system needs right now. This includes open documents, browser tabs, and legal software processes. When you close a program or shut down your device, DRAM clears. It does not store information permanently. 📂

For legal professionals, DRAM plays a direct role in daily productivity. Every time you open a large PDF, review discovery files, or run a case management system, your computer relies on DRAM. If there is not enough memory available, your system slows down. You may notice lag, freezing, or delayed responses. 🐢 These issues interrupt workflow and increase frustration.

In a legal setting, slow systems are more than an inconvenience. They can affect client service. Delays in accessing documents or responding to communications can create risk. Under ABA Model Rule 1.1, lawyers must maintain competence. This includes understanding the benefits and risks of relevant technology (see Comment 8). 💡 Knowing how DRAM impacts performance is part of that duty.

DRAM also connects to data security. While DRAM itself is temporary, system performance influences how securely lawyers handle client information. A slow or overloaded system may lead users to adopt risky workarounds. For example, attorneys may save files locally instead of using secure systems. They may also delay updates or avoid security tools that slow performance further. 🔒 These behaviors can increase exposure to data breaches.

ABA Model Rule 1.6 requires lawyers to safeguard client confidentiality. Reliable hardware supports this obligation. Adequate DRAM helps systems run security software smoothly. It also supports encryption processes and secure cloud access. When systems perform well, lawyers are more likely to follow proper security protocols. ✅

Strong DRAM performance helps law firms protect confidential data and secure workflows.

Understanding DRAM also helps when purchasing or upgrading hardware. Many law firms invest in software but overlook system specifications. Memory is a key factor in performance. A modern legal practice often requires at least 16 GB of DRAM for standard workloads.* Larger litigation matters or heavy e-discovery tools may require more. 📊 Without sufficient memory, even the best software cannot perform effectively.
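If you want to check existing machines against a memory baseline like the 16 GB figure above, no third‑party software is required. This is a minimal sketch using only the Python standard library; it works on POSIX systems (macOS and Linux), and the default baseline is simply the number discussed in this article, not a universal rule:

```python
import os

def total_ram_gb() -> float:
    """Return installed physical memory in GiB (POSIX systems only)."""
    page_size = os.sysconf("SC_PAGE_SIZE")    # bytes per memory page
    page_count = os.sysconf("SC_PHYS_PAGES")  # number of physical pages installed
    return page_size * page_count / (1024 ** 3)

def meets_baseline(baseline_gb: float = 16.0) -> bool:
    """Check this machine against a firm's minimum-memory standard."""
    return total_ram_gb() >= baseline_gb
```

An IT professional can run the same check across the firm’s fleet and flag machines that fall below the standard before they become a bottleneck in e‑discovery review.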

Consider a common scenario. An attorney is reviewing thousands of documents in an e-discovery platform. Each file requires memory to open and process. If the system lacks DRAM, documents load slowly. Searches take longer. The attorney may lose time waiting instead of analyzing. With adequate DRAM, the same task becomes faster and more efficient. ⚡

DRAM also supports multitasking. Lawyers often run multiple applications at once. Email, document management systems, research tools, and video conferencing may all run simultaneously. Each application consumes memory. When DRAM is sufficient, switching between tasks is seamless. When it is not, the system may stall or crash.

It is important to distinguish DRAM from storage. Storage, such as a hard drive or solid-state drive, holds data long-term. DRAM handles active processes. Both are important, but they serve different purposes. Confusing the two can lead to poor purchasing decisions. 💻

Cloud computing does not eliminate the need for DRAM. Even cloud-based legal tools rely on local system memory. Your browser and operating system still require DRAM to function. A fast internet connection helps, but it does not replace adequate memory. 🌐

Law firm leaders should view DRAM as part of risk management. Investing in proper hardware reduces downtime. It improves efficiency and supports compliance with professional obligations. It also enhances the user experience, which can reduce errors caused by frustration or delay.

Smart hardware planning starts with the right DRAM for modern legal practice.

In practical terms, firms should review device specifications regularly. They should align hardware with the demands of their practice areas. Litigation, transactional work, and regulatory practices may have different requirements. IT professionals can assist with these assessments.

In summary, DRAM is a foundational component of legal technology. It affects speed, reliability, and security. Lawyers do not need deep technical knowledge, but they should understand its impact. This awareness supports better decisions and stronger compliance with ABA Model Rules. ⚖️ By prioritizing performance and security, firms can deliver more effective and responsible client service. 🚀

MTC: Why 2026’s PC Price Hikes Put Law Firms at Risk 💻⚖️ (and Why Many Lawyers Are Quietly Switching to Macs)

2026 PC price hikes threaten law firm budgets, performance, ethical compliance!

Lawyers and Legal Professionals, the warning signs have been flashing for more than a year: 2026 was never going to be a normal hardware refresh cycle for law firms. 💸 Economists tracking the global memory crunch and AI‑driven demand have been clear that PCs and laptops would see double‑digit price hikes as Dynamic Random-Access Memory (DRAM) and other components were redirected to lucrative data‑center workloads. For lawyers who depend on reliable, reasonably priced computers to run practice‑critical applications, this is not an abstract macroeconomic story; it is a direct hit to margins, access to justice, and even ethical compliance.

Recent moves by Microsoft have made the problem impossible to ignore. In mid‑April, Microsoft sharply raised prices across its Surface lineup, including the Surface Pro and Surface Laptop families that many lawyers and law firms rely on for their Windows‑based workflows. Entry‑level machines that once started under $1,000 now begin well above that mark, with some configurations jumping several hundred dollars over their launch prices. In some cases, high‑end Surface laptops now cost more than roughly comparable MacBook Pro configurations, erasing the longstanding assumption that Windows hardware is always the cheaper option.

Here at The Tech‑Savvy Lawyer blog, I have been chronicling these developments for months, noting that major PC manufacturers signaled 15–20 percent price increases thanks to the AI‑driven memory squeeze and ongoing geopolitical tariff pressures. Those predictions are now a reality. For solo practitioners, small firms, and even midsize practices with thin IT budgets, the message is simple: if you are buying new Windows hardware in 2026, expect to pay more for the same level of performance, or accept underpowered machines that will age badly under AI‑enhanced workflows. 🧾

Apple, by contrast, has maneuvered itself into a relatively stronger position, even though it is not completely immune to component inflation. By tightly integrating Apple Silicon, storage, and other components under its own supply chain, Apple has been able to hold the line on some key configurations in a way that many PC Original Equipment Manufacturers (OEMs) cannot. Commentators focusing on the legal market have already highlighted products like the MacBook Neo as examples of Apple using its vertical control to keep pricing relatively stable while competitors raise prices or quietly cut specifications. At the same time, Apple’s M‑series and M5‑generation chips continue to deliver strong performance per watt, especially for on‑device AI tasks and productivity applications, which matters when you are running multiple research tools, document management systems, videoconferencing platforms, and AI assistants on a single machine.

This does not mean Apple has avoided all price movement. Newer MacBook Air and MacBook Pro models with M5 chips have seen list price increases of around $100–$400, depending on configuration. However, when Microsoft’s updated Surface pricing pushes many midrange Windows machines into the same or higher price tiers than comparable Macs, the calculus for lawyers becomes more nuanced. A Windows laptop that used to be the “budget” choice can now be as expensive as, or more expensive than, a MacBook that delivers similar or better performance and longer support life.

MacBooks outperform rising-cost Windows laptops for lawyers seeking value, security!

For the legal sector, this convergence of price and performance has three important implications.

First, hardware purchasing is no longer a purely IT or “back office” concern. It is an integral part of risk management and client‑service strategy. The ABA Model Rules, particularly Model Rule 1.1 on competence and Comment 8 to that rule, make clear that lawyers have a duty to maintain competence in relevant technology. Using outdated, underpowered hardware can impair your ability to use secure videoconferencing, e‑discovery tools, AI‑driven research platforms, and document automation systems. That, in turn, can compromise both efficiency and the quality of representation. ⚖️ When price hikes push firms toward “cheap but weak” machines, they risk falling behind on this duty of technological competence.

Second, Model Rule 1.6 on confidentiality and related ethics opinions underscore the importance of protecting client information in digital environments. In an era when AI tools increasingly run on‑device, machines that can perform more work locally reduce reliance on cloud processing and third‑party data transfers. Apple’s integrated hardware and on‑device AI capabilities, combined with its strong security posture, can make Macs appealing from a confidentiality standpoint, especially for sensitive practices such as criminal defense, family law, and complex commercial litigation. That does not mean Windows machines are inherently less secure, but when high‑end, well‑secured Windows hardware costs significantly more than it used to, some firms may find that Apple’s offerings now deliver a stronger security‑to‑cost ratio.

Third, long‑term budgeting must adapt to the new reality that technology lifecycles will cost more. Economists and industry groups have projected that tariffs and component shortages could add hundreds of dollars to the average laptop by the time those costs are fully passed through. For law firms, this means that hardware refresh cycles should be planned more deliberately, with strategic staggering of purchases, careful evaluation of total cost of ownership, and perhaps a willingness to stretch the lifecycle of existing machines that still meet performance and security requirements. 🗓️

So where does this leave the practicing lawyer or small firm managing technology with limited internal IT support? 🤔

One practical approach is to stop treating the Windows versus Mac decision as a matter of habit and start treating it as a structured, documented evaluation. Build a simple matrix that compares specific models—such as a midrange Surface Laptop and a MacBook Air or MacBook Neo—on price, performance, storage, memory, security features, support life, and compatibility with your core practice software. Involving firm leadership in these decisions and tying them explicitly to ABA Model Rule 1.1 and 1.6 considerations will help demonstrate that you are exercising reasonable diligence in technology selection.
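Such a matrix can even be reduced to a small weighted-scoring sketch. The candidate machines, criteria weights, and 1–5 scores below are purely hypothetical placeholders that a firm would replace with its own research:

```python
# Hypothetical weighted decision matrix for a hardware purchase.
# Weights reflect the firm's priorities (they sum to 1.0); scores are 1-5.
# Every number here is a placeholder assumption, not a benchmark result.
weights = {"price": 0.25, "performance": 0.25, "security": 0.2,
           "support life": 0.15, "software compatibility": 0.15}

candidates = {
    "Midrange Windows laptop": {"price": 3, "performance": 3, "security": 4,
                                "support life": 3, "software compatibility": 5},
    "MacBook Air":             {"price": 4, "performance": 4, "security": 5,
                                "support life": 5, "software compatibility": 3},
}

# Weighted total per candidate: sum of (weight x score) over all criteria.
for name, scores in candidates.items():
    total = sum(weights[c] * scores[c] for c in weights)
    print(f"{name}: {total:.2f}")
```

Keeping the filled-in matrix (or the script and its output) with the firm’s records also documents the diligence exercise tied to Model Rules 1.1 and 1.6.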

At the same time, lawyers should not assume that Apple is the default winner. Many legal‑industry tools, case management systems, and document workflows remain optimized for Windows, especially in litigation and specialized practice areas. If your practice depends heavily on Windows‑only software, the cost of moving to Macs (including virtualization or remote desktop solutions) may outweigh hardware price advantages. However, even in a Windows‑centric environment, the new pricing landscape may push firms to consider non‑Surface OEMs or to buy fewer, higher‑quality machines and share them across teams rather than treating laptops as disposable commodities.

Strategic legal tech planning improves performance, security, and long-term cost control for lawyers!

Ultimately, the predicted—and now visible—price hikes on PCs are not just a story about higher invoices from vendors. They are a stress test of how seriously law firms take technological competence, security, and long‑term planning. The firms that respond by proactively reassessing their hardware standards, considering platforms like Apple that have weathered the pricing storm more gracefully, and explicitly aligning purchasing decisions with ABA Model Rules will not only control costs; they will position themselves as trustworthy, efficient, and forward‑looking in a market where clients increasingly notice the difference. 🚀

MTC

📖 Word of the Week: “Cross‑Tenant” Learning in Legal Practice

Cross-tenant learning helps law firms improve AI tools without exposing data

If your firm uses cloud‑based tools, you are already living in a multi‑tenant world. In that world, cross‑tenant learning is quickly becoming a key concept that every lawyer and legal operations professional should understand. 🧠⚖️

In simple terms, a “tenant” is your firm’s logically separate space inside a cloud platform: your own users, matters, documents, and settings, isolated from everyone else’s. Cross‑tenant learning refers to techniques in which a vendor’s system learns from patterns across multiple tenants (for example, many law firms) to improve its features—such as search, drafting suggestions, or document classification—without exposing any other firm’s confidential data to you or yours to them.
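As a purely conceptual illustration (not any vendor’s actual implementation), the idea can be sketched in Python: each tenant exports only anonymized, aggregated counts, never documents or client identifiers, and the vendor merges those counts to improve a shared feature:

```python
from collections import Counter

# Toy illustration of cross-tenant learning: each tenant (firm) contributes
# only aggregate counts of clause *types* it flagged -- never document text
# or client identifiers. A conceptual sketch, not a real vendor design.

def tenant_contribution(flagged_clauses):
    """A tenant exports only anonymized counts of the clause types it flagged."""
    return Counter(flagged_clauses)

# Hypothetical aggregate exports from three separate tenants.
firm_a = tenant_contribution(["indemnification", "auto-renewal", "indemnification"])
firm_b = tenant_contribution(["auto-renewal", "limitation of liability"])
firm_c = tenant_contribution(["indemnification", "auto-renewal"])

# The vendor merges counts across tenants to rank which clause types
# deserve the most prominent warnings in the shared product.
shared_model = firm_a + firm_b + firm_c
ranking = [clause for clause, _ in shared_model.most_common()]
print(ranking)
```

In a real product, the ethics questions turn on whether the exports truly are limited to safely aggregated, anonymized signals like these.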

Why cross‑tenant learning matters for law firms

Cross‑tenant learning is especially relevant as generative AI and machine‑learning tools become embedded in e‑discovery platforms, contract review tools, legal research systems, and practice‑management software. Vendors may use aggregated and anonymized usage data to:

  • Improve relevance of search results and recommendations.

  • Enhance clause and issue spotting in contracts and briefs.

  • Reduce false positives in e‑discovery or compliance alerts.

  • Optimize workflows based on how similar firms use the product.

For lawyers, the value proposition is straightforward: your tools can become “smarter” faster, based on lessons learned across many organizations, not just your own firm’s experience. Done properly, cross‑tenant learning can raise the baseline quality and efficiency of technology available to your practice. ⚙️📈

ABA Model Rules: Confidentiality and Competence

Any discussion of cross‑tenant learning for law firms must start with confidentiality and competence.

  • Model Rule 1.6 (Confidentiality of Information) requires lawyers to safeguard information relating to the representation of a client. That obligation extends to how your vendors collect, store, and use your data. You must understand whether and how client data may be used for cross‑tenant learning and ensure that any such use preserves confidentiality through anonymization, aggregation, and strong technical and contractual controls. 🔐

  • Model Rule 1.1 (Competence), including Comment 8, emphasizes that lawyers should keep abreast of the benefits and risks associated with relevant technology. Understanding cross‑tenant learning is now part of that duty. You do not need to become a data scientist, but you should be comfortable asking vendors precise questions and recognizing red flags.

  • Model Rule 5.3 (Responsibilities Regarding Nonlawyer Assistance) applies when you rely on vendors as nonlawyer assistants. You must make reasonable efforts to ensure that their conduct is compatible with your professional obligations, including how they use your data for cross‑tenant learning. 🧾

Key questions to ask your vendors

ABA Model Rules guide ethical use of cross-tenant learning technologies

When evaluating a product that relies on cross‑tenant learning, consider asking:

  1. What data is used?

    • Is it only metadata or usage logs, or are actual document contents included?

    • Is the data aggregated and anonymized before it is used to train shared models?

  2. How is confidentiality protected?

    • Can other tenants ever see prompts, documents, or client‑identifying information from our firm?

    • What technical measures (encryption, access controls, tenant isolation) are in place?

  3. Can cross‑tenant learning be limited or disabled?

    • Do we have opt‑out or configuration controls?

    • Is there a dedicated model or environment for our firm if needed?

  4. What do the contract and policies say?

    • Does the MSA or DPA clearly limit use of client data to defined purposes?

    • How long is data retained, and how is it deleted if we leave?

These questions are not merely IT concerns; they go directly to your obligations under the ABA Model Rules and your firm’s risk profile.

Practical examples in law practice

Consider a cloud‑based contract‑analysis platform used by hundreds of firms. Over time, the provider can see which clauses lawyers routinely flag as risky, which edits are typically made, and what becomes the “preferred” language for certain issues. Through cross‑tenant learning, the system can use that aggregated knowledge to highlight problematic clauses and suggest alternatives more accurately for everyone.

Another example is an e‑discovery platform that uses cross‑tenant learning to distinguish between truly relevant documents and common “noise” such as automatically generated emails. The more matters the system processes across different tenants, the better it gets at ranking documents and reducing review burdens. This can be a material efficiency gain for litigation teams. ⚖️💼

In both scenarios, your ethical comfort depends on whether underlying data is appropriately anonymized, compartmentalized, and contractually protected.

Governance steps for your firm

To align cross‑tenant learning with professional obligations, firms can:

  • Update vendor‑due‑diligence checklists to include explicit questions about cross‑tenant learning, training data use, and model isolation.

  • Involve a cross‑functional team—lawyers, IT, information security, and risk management—in vendor selection and review.

  • Document your analysis of vendor practices and how they satisfy confidentiality, competence, and supervision obligations under the ABA Model Rules.

  • Educate lawyers and staff about how AI‑enabled tools work, what kinds of data they send into the system, and how to avoid unnecessary exposure of client‑identifying details.

Takeaway for busy practitioners

Smart vendor questions reduce risk in cross-tenant legal technology adoption

You do not need to reject cross‑tenant learning to protect your clients. Instead, you should approach it as a powerful capability that demands informed oversight. When well‑implemented, cross‑tenant learning can help your firm deliver faster, more consistent, and more cost‑effective legal services, while still honoring confidentiality and ethical duties. When poorly explained or loosely governed, it becomes an unnecessary and avoidable risk.

Understanding how your tools learn—and from whom—is now part of competent, modern legal practice. ⚖️💡

TSL.P Podcast Special! Podcasting for Lawyers: The Truth Behind the Mic – ABA TECHSHOW 2026 (Special Audio‑Only Episode) 🎙️⚖️

This special episode features the audio‑only release of an ABA TECHSHOW 2026 panel I was excited to be part of: “Podcasting for Lawyers: The Truth Behind the Mic,” with moderator Ruby Powers and fellow panelists Gyi Tsakalakis and Stephanie Everett. 🎧 Instead of our usual one‑on‑one format, you will hear a live, conference‑style conversation about how lawyers can use podcasting, video, and modern legal technology to build authority, strengthen client and referral relationships, and stay aligned with legal‑ethics and professionalism rules.

Join Ruby, Gyi, Stephanie, and me as we discuss the following three questions and more!

  1. How can lawyers design and sustain a podcast that supports their practice goals and speaks to a clearly defined audience?

  2. What practical tech stacks—microphones, recording platforms, hosting services, and workflow tools—are realistic for busy attorneys and legal professionals?

  3. How do podcasting, video, and short‑form content contribute to SEO, GEO, and long‑term business development for law firms?

In our conversation, we cover the following:

  • 00:00 – Welcome to ABA TECHSHOW 2026 and introduction of the panel: Ruby Powers (moderator), Gyi Tsakalakis, Stephanie Everett, and Michael D.J. Eisenberg. 🎙️

  • 02:00 – Each panelist explains their podcast, ideal listener, and why they chose podcasting as a medium.

  • 06:00 – Publishing cadence: weekly vs. bi‑weekly, and how consistency drives listener trust and download growth.

  • 10:00 – Adding video and YouTube to audio‑only shows and how video clips improve discovery on social media.

  • 14:00 – DIY production vs. using producers, internal teams, or podcast networks, including time and cost trade‑offs.

  • 18:00 – Core tech stacks in practice: microphones, Zoom, Riverside, StreamYard, Descript, Libsyn, Calendly, Buffer, and other essentials. 💻

  • 24:00 – Guest selection, outreach, and sound checks; when to decline an appearance or reschedule due to poor audio or bad fit.

  • 30:00 – Using podcast hosting analytics and social‑platform insights to understand who is listening and what resonates.

  • 35:00 – Podcasting as networking and “virtual coffee”: building relationships with lawyers, experts, and vendors. ☕

  • 40:00 – SEO and GEO benefits: how episodes create long‑tail visibility in search, and why attribution still matters.

  • 45:00 – Ethics and professionalism: confidentiality, bar‑advertising rules, disclaimers, and avoiding client‑identifying facts. ⚖️

  • 52:00 – Final advice for lawyers on the fence about starting a podcast and how to improve with each episode instead of waiting for perfection.

RESOURCES

Connect with the panel

Mentioned in the episode (non‑hardware / non‑software)

Hardware mentioned in the conversation

Software & Cloud Services mentioned in the conversation

When AI Falls Short: What Legal Professionals Must Know Before Relying on Microsoft Copilot and Similar Embedded AIs

AI Errors in Legal Practice Demand Vigilant Attorney Oversight!

Any reader of my blog should realize by now that artificial intelligence is no longer a novelty in law practice; it is embedded in research platforms, document automation, e‑discovery, and now in tools like Microsoft Copilot that appear inside the same Microsoft 365 ecosystem lawyers already live in. Yet Copilot’s own terms of use long described it as being “for entertainment purposes only,” while Microsoft has simultaneously marketed it as an enterprise‑grade productivity assistant and is now backing away from prominent Copilot buttons in several Windows 11 apps. For lawyers who must live under the ABA Model Rules of Professional Conduct, this tension is not an amusing footnote; it is an ethics problem waiting to happen. 

Microsoft’s Copilot terms have advised that the service “can make mistakes,” “may not work as intended,” and should not be relied on for important advice. At the same time, Microsoft has begun removing or rebranding Copilot buttons from Notepad, Snipping Tool, Photos, and Widgets in Windows 11, framing this move as an effort to reduce “unnecessary Copilot entry points” and be “more intentional” about where AI shows up. The features, or at least the underlying AI, are not disappearing entirely; they are simply becoming less conspicuous. For the practicing lawyer, the message is clear: powerful AI is being woven into everyday tools, but its creators still do not want you to rely on it the way you rely on a human associate. 🤖

when AI falls short, it is the lawyer—not the software vendor—who will have to answer to clients, courts, and regulators. ⚠️

That is precisely where the ABA Model Rules step in. Model Rule 1.1 requires competent representation and, through Comment 8, includes a duty to keep abreast of the benefits and risks of relevant technology. Using AI in law practice is increasingly seen as part of that competence obligation, but competence does not mean blind trust in unvetted outputs from a system whose own terms warn you not to rely on it. A lawyer who treats Copilot’s draft as a finished research memo, brief, or contract without independent verification risks violating the duty of competence every bit as much as a lawyer who never learned to use electronic research tools in the first place.

Model Rule 1.6 on confidentiality presents a second, and in many ways more pressing, concern. Generative AI systems may store, log, or otherwise use prompt content for analysis and improvement, which means uncritical copying and pasting of confidential client information into Copilot can create a non‑trivial risk of exposure. The ABA and commentators have emphasized that before entering client data into a generative AI tool, lawyers must assess whether that data could be disclosed or accessed by others, including through unintended re‑use in future outputs to different users. That risk analysis is not optional; it is part of your obligation to make reasonable efforts to prevent unauthorized access or disclosure.

Fake Citations from AI Tools can Threaten Accuracy and Legal Ethics!

Model Rules 5.1 and 5.3, which govern the responsibilities of partners, managers, supervisory lawyers, and non‑lawyer assistants, also apply to AI use. When you deploy Copilot in your firm, you are functionally introducing a new category of “assistant” whose work product must be supervised like that of a junior lawyer or paralegal. Policies, training, and review procedures are needed so that AI‑drafted content is consistently checked for accuracy, bias, hallucinations, and improper legal conclusions before it ever reaches a client, court, or counterparty. Ignoring Copilot’s disclaimers and Microsoft’s own hedging around reliability is, in effect, ignoring red flags that any reasonable supervising attorney would address.

Model Rule 1.4 on communication adds yet another dimension: transparency with clients about how you are using AI in their matters. Authorities interpreting the Model Rules have stressed that lawyers should keep clients reasonably informed, which includes explaining when and how AI tools are utilized to assist in their cases. This is particularly important where AI may affect cost, turnaround time, or the nature of the work performed, such as using Copilot to generate a first draft instead of assigning that task to an associate. Engagement letters and fee agreements are increasingly incorporating language about AI use, both to set expectations and to align with evolving ethical guidance.

The “for entertainment purposes only” language is more than a curiosity; it is a signal about allocation of risk. Microsoft’s disclaimer mirrors language historically used by psychic hotlines and other services seeking to avoid responsibility for inaccurate advice. When such a disclaimer is attached to a tool you might be tempted to use for legal analysis, the tool is telling you that you assume the risks of errors. Under the Model Rules, those risks ultimately translate into potential malpractice, sanctions, or disciplinary action if AI‑generated errors make their way into filed documents or client counseling.

Recent real‑world incidents involving lawyers who submitted briefs containing AI‑fabricated citations demonstrate how quickly misuse of generative AI can cross ethical lines. In those cases, the core problem was not that AI was used; it was that the lawyers failed to verify the content and then misrepresented fictitious cases as genuine authority to the court. That behavior implicates Model Rules 3.3 (candor toward the tribunal) and 8.4 (misconduct) along with competence. Copilot’s warnings about possible mistakes do not excuse a lawyer from the duty to check every citation, quote, and legal conclusion that AI produces before relying on it.

lawyers must assess whether that data could be disclosed or accessed by others ⚠️

For practitioners with limited to moderate technology skills, the answer is not to abandon AI entirely, but to approach it with structured safeguards. A practical workflow might involve using Copilot to outline a research plan or draft a first pass at a contract clause, followed by standard legal research in trusted databases and rigorous review by a human lawyer before anything is finalized. Firms should configure Copilot and other AI tools in ways that minimize data exposure, such as disabling, where possible, cross‑tenant learning (a feature that lets the system learn from patterns across multiple organizations’ environments) and restricting which matters and users can access certain features. Training sessions can focus less on technical jargon and more on concrete do’s and don’ts tied directly to the Model Rules, which is the language most lawyers already speak. 🧠
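One piece of such a structured workflow can be a redaction pass before any text reaches a general-purpose assistant. The sketch below is a toy illustration under obvious assumptions (a hand-maintained client-term list and two simple identifier patterns); a real firm would rely on a vetted redaction tool rather than ad hoc regular expressions:

```python
import re

# Toy pre-prompt redaction pass: strip obvious client-identifying details
# before text is pasted into a general-purpose AI assistant. The patterns
# and placeholder tags here are illustrative assumptions only.

PATTERNS = [
    (re.compile(r"\b[\w.+-]+@[\w-]+\.[\w.]+\b"), "[EMAIL]"),       # email addresses
    (re.compile(r"\b\d{3}[-.\s]\d{3}[-.\s]\d{4}\b"), "[PHONE]"),   # US-style phone numbers
]

def redact(text, client_terms):
    """Replace known client terms, then common identifier patterns."""
    for term in client_terms:
        text = re.sub(re.escape(term), "[CLIENT]", text, flags=re.IGNORECASE)
    for pattern, tag in PATTERNS:
        text = pattern.sub(tag, text)
    return text

sample = "Summarize the dispute between Acme Corp and jane.doe@example.com, tel. 555-123-4567."
print(redact(sample, client_terms=["Acme Corp"]))
```

Even a crude pass like this makes the Rule 1.6 analysis easier, because the prompt that leaves the firm no longer carries client-identifying details.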

Always Protect Client Confidentiality When Using AI in Modern Law Practice!

Governance is also essential. Written AI policies should address acceptable use cases, prohibited content for prompts, mandatory review standards, logging and auditing of AI‑assisted work, and incident response if an AI‑related error is discovered. These policies should be backed by regular training and by leadership that models appropriate use, rather than quietly delegating AI experimentation to the most tech‑savvy associates. Vendors’ evolving terms of use—including Microsoft’s move to revise its “entertainment purposes” language and adjust Copilot integration in Windows—should be monitored and incorporated into risk assessments over time.

In short, when AI falls short, it is the lawyer—not the software vendor—who will have to answer to clients, courts, and regulators. Copilot and similar tools can be valuable allies in a modern legal practice, but only if they are treated as fallible assistants whose work must be checked, not as oracles. The ABA Model Rules already provide the framework: competence, confidentiality, supervision, and honest communication. The task for today’s legal professionals is to apply that framework thoughtfully to AI, recognizing both its promise and its very real limitations before letting it anywhere near client work or court filings. ⚖️🤖

Podcasting for Lawyers: The Truth Behind the Mic at ABA TECHSHOW 2026 🎙️⚖️

🎧 Watch the ABA TECHSHOW 2026 panel: “Podcasting for Lawyers: The Truth Behind the Mic”

Podcasting has become one of the most powerful ways for lawyers to build authority, strengthen client relationships, and stand out in a crowded online marketplace—if it is done strategically and ethically. I recently had the privilege of serving on the March 26, 2026, ABA TECHSHOW panel, “Podcasting for Lawyers: The Truth Behind the Mic,” alongside moderator Ruby Powers and fellow panelists Gyi Tsakalakis and Stephanie Everett. Together, we walked through how attorneys can use podcasting, video, and legal technology to create consistent, professional content that supports real‑world business development while staying compliant with confidentiality and bar‑advertising rules. 🎧

In this post, you’ll find the recording of our ABA TECHSHOW 2026 session, a brief overview of the topics we covered, and links to tools and resources that can help you start—or sharpen—your own law‑firm podcast.

Brief Outline

1. Why podcasting makes sense for lawyers in 2026

  • How podcasting fits into modern law‑firm marketing and thought leadership.

  • The role of podcasts in SEO, GEO, and building long‑term visibility in your practice area.

  • Why authenticity, consistency, and a clear audience matter more than fancy production tricks.

2. Choosing your podcast’s audience and goals

  • Deciding whether you’re speaking to potential clients, referral sources, or other lawyers.

  • Aligning topics, interview guests, and episode formats with your business and reputational goals.

  • Avoiding the “variety show” trap and staying focused on the problems your audience actually cares about.

3. Building a realistic podcast tech stack for busy attorneys

  • Microphones and basic audio gear that deliver professional sound without breaking the bank.

  • Recording tools such as Zoom, Riverside, and StreamYard to capture both audio and video.

  • Hosting and workflow tools like Libsyn, Descript, Calendly, and Buffer that help you publish consistently and repurpose content efficiently.

4. Ethics, professionalism, and “the truth behind the mic”

  • Key confidentiality and advertising issues to consider when discussing client work or legal topics.

  • How to think about disclaimers, legal information vs. legal advice, and jurisdictional concerns.

  • Why podcasting is not just marketing content but also a professional reflection of how you communicate and practice law.

5. Making podcasting sustainable (and enjoyable) over time

  • Scheduling systems that keep you ahead on episodes without overwhelming your calendar.

  • Guest strategies that expand your network and add value for your audience.

  • How to measure success: client feedback, referrals, and qualitative signals—not just download counts.

Resources

  • 🌐 Session description on ABA TECHSHOW
    https://www.techshow.com/sessions/podcasting-for-lawyers-the-truth-behind-the-mic/

  • 💻 The Tech‑Savvy Lawyer.Page – blog and podcast
    https://www.TheTechSavvyLawyer.page

  • 🎙️ Tools and services mentioned

    • Buffer – https://buffer.com

    • Calendly – https://calendly.com

    • Descript – https://www.descript.com

    • Libsyn – https://libsyn.com

    • Riverside – https://riverside.fm

    • StreamYard – https://streamyard.com

    • Zoom – https://zoom.us

If you’re a lawyer or legal professional considering a podcast—or looking to refine the one you already have—I invite you to watch the full ABA TECHSHOW 2026 session and explore the resources above. Then connect with me at MichaelDJ@TheTechSavvyLawyer.Page to share what you’re building, ask questions about podcasting workflows and ethics, or suggest future topics you’d like to hear covered. 🎙️⚖️

📢 Special Shout-Out and Thank You to Ruby Powers for the invitation and Gyi and Stephanie for being great co-panelists!