MTC: 🔒 Your AI Conversations Aren't as Private as You Think: What the OpenAI Court Ruling Means for Legal Professionals

A watershed moment in digital privacy has arrived, and it carries profound implications for lawyers and their clients.

The recent court ruling in In re: OpenAI, Inc., Copyright Infringement Litigation has exposed a critical vulnerability in the relationship between artificial intelligence tools and user privacy rights. On May 13, 2025, U.S. Magistrate Judge Ona T. Wang issued an order requiring OpenAI to "preserve and segregate all output log data that would otherwise be deleted on a going forward basis". This unprecedented directive affected more than 400 million ChatGPT users worldwide and fundamentally challenged assumptions about data privacy in the AI era.[1][2][3][4]

While the court modified its order on October 9, 2025, terminating the blanket preservation requirement as of September 26, 2025, the damage to user trust and the precedent for future litigation remain significant. More importantly, the ruling illuminates a stark reality for legal professionals: the "delete" button offers an illusion of control rather than genuine data protection.

The Court Order That Changed Everything ⚖️

The preservation order emerged from a copyright infringement lawsuit filed by The New York Times against OpenAI in December 2023. The Times alleged that OpenAI unlawfully used millions of its articles to train ChatGPT without permission or compensation. During discovery, concerns arose that OpenAI had been deleting user conversations that could potentially demonstrate copyright violations.

Judge Wang's response was sweeping. The court ordered OpenAI to retain all ChatGPT output logs, including conversations users believed they had permanently deleted, temporary chats designed to auto-delete after sessions, and API-generated outputs regardless of user privacy settings. The order applied retroactively, meaning conversations deleted months or even years earlier remained archived in OpenAI's systems.

OpenAI immediately appealed, arguing the order was overly broad and compromised user privacy. The company contended it faced conflicting obligations between the court's preservation mandate and "numerous privacy laws and regulations throughout the country and the world". Despite these objections, Judge Wang denied OpenAI's motion, prioritizing the preservation of potential evidence over privacy concerns.

The October 9, 2025 stipulation and order brought partial relief. OpenAI's ongoing obligation to preserve all new output log data terminated as of September 26, 2025. However, all data preserved before that cutoff remains accessible to plaintiffs (except for users in the European Economic Area, Switzerland, and the United Kingdom). Additionally, OpenAI must continue preserving output logs from specific domains identified by the New York Times and may be required to add additional domains as the litigation progresses.

Privacy Rights in the Age of AI: An Eroding Foundation 🛡️

This case demonstrates that privacy policies are not self-enforcing legal protections. Users who relied on OpenAI's representations about data deletion discovered those promises could be overridden by court order without their knowledge or consent. The "temporary chat" feature, marketed as providing ephemeral conversations, proved anything but temporary when litigation intervened.

The implications extend far beyond this single case. The ruling establishes that AI-generated content constitutes discoverable evidence subject to preservation orders. Courts now view user conversations with AI not as private exchanges but as potential legal records that can be compelled into evidence.

For legal professionals, this reality is particularly troubling. Lawyers regularly handle sensitive client information that must remain confidential under both ethical obligations and the attorney-client privilege. The court order revealed that even explicitly deleted conversations may be retained indefinitely when litigation demands it.

The Attorney-Client Privilege Crisis 👥

Attorney-client privilege protects confidential communications between lawyers and clients made for the purpose of obtaining or providing legal advice. This protection is fundamental to the legal system. However, the privilege can be waived through voluntary disclosure to third parties outside the attorney-client relationship.

When lawyers input confidential client information into public AI platforms like ChatGPT, they potentially create a third-party disclosure that destroys privilege. Many generative AI systems learn from user inputs, incorporating that information into their training data. This means privileged communications could theoretically appear in responses to other users' queries.

The OpenAI preservation order compounds these concerns. It demonstrates that AI providers cannot guarantee data will be deleted upon request, even when their policies promise such deletion. Lawyers who used ChatGPT's temporary chat feature or deleted sensitive conversations believing those actions provided privacy protection now discover their confidential client communications may be preserved indefinitely as litigation evidence.

The risk is not theoretical. In the now-famous Mata v. Avianca, Inc. case, a lawyer used a free version of ChatGPT to draft a legal brief containing fabricated citations. While the lawyer faced sanctions for submitting false information to the court, legal ethics experts noted the confidentiality implications of the increasingly specific prompts the attorney used, which may have revealed client confidential information.

ABA Model Rules and AI: What Lawyers Must Know 📋

The American Bar Association's Model Rules of Professional Conduct govern lawyer behavior, and while these rules predate generative AI, they apply with full force to its use. On July 29, 2024, the ABA Standing Committee on Ethics and Professional Responsibility issued Formal Opinion 512, providing the first comprehensive guidance on lawyers' use of generative AI.

Model Rule 1.1: Competence requires lawyers to provide competent representation, including maintaining "legal knowledge, skill, thoroughness and preparation reasonably necessary for representation". The rule's commentary [8] specifically states lawyers must understand "the benefits and risks associated with relevant technology". Opinion 512 clarifies that lawyers need not become AI experts, but must have a "reasonable understanding of the capabilities and limitations of the specific GenAI technology" they use. This is not a one-time obligation. Given AI's rapid evolution, lawyers must continuously update their understanding.

Model Rule 1.6: Confidentiality creates perhaps the most significant ethical challenge for AI use. The rule prohibits lawyers from revealing "information relating to the representation of a client" and requires them to "make reasonable efforts to prevent the inadvertent or unauthorized disclosure of, or unauthorized access to, information relating to the representation". Self-learning AI tools that train on user inputs create substantial risk of improper disclosure. Information entered into public AI systems may be stored, processed by third-party vendors, and potentially accessed by company employees or incorporated into model training. Opinion 512 recommends lawyers obtain informed client consent before inputting any information related to representation into AI systems. Lawyers must also thoroughly review the terms of use, privacy policies, and contractual agreements of any AI tool they employ.

Model Rule 1.4: Communication obligates lawyers to keep clients reasonably informed about their representation. When using AI tools, lawyers should disclose this fact to clients, particularly when the AI processes client information or could impact the representation. Clients have a right to understand how their matters are being handled and what technologies may access their confidential information.[25][22][20][21]

Model Rule 3.3: Candor Toward the Tribunal requires lawyers to be truthful in their representations to courts. AI systems frequently produce "hallucinations"—plausible-sounding but entirely fabricated information, including fake case citations. Lawyers remain fully responsible for verifying all AI outputs before submitting them to courts or relying on them for legal advice. The Mata v. Avianca case serves as a cautionary tale of the consequences when lawyers fail to fulfill this obligation.

Model Rules 5.1 and 5.3: Supervisory Responsibilities make lawyers responsible for the conduct of other lawyers and nonlawyer assistants working under their supervision. When staff members use AI tools, supervising lawyers must ensure appropriate policies, training, and oversight exist to prevent ethical violations.

Model Rule 1.5: Fees requires lawyers to charge reasonable fees. Opinion 512 addresses whether lawyers can bill clients for time "saved" through AI efficiency gains. The guidance suggests that when using hourly billing, efficiencies gained through AI should benefit clients. However, lawyers may pass through reasonable direct costs of AI services (such as subscription fees) when properly disclosed and agreed upon in advance.

State-by-State Variations: A Patchwork of Protection 🗺️

While the ABA Model Rules provide a national framework, individual states adopt and interpret ethics rules differently. Legal professionals must understand their specific state's requirements, which can vary significantly.

Lawyers must protect clients' PII from AI privacy failures!

Florida has taken a proactive stance. In January 2025, The Florida Bar Board of Governors unanimously approved Advisory Opinion 24-1, which specifically addresses generative AI use. The opinion recommends lawyers obtain "affected client's informed consent prior to utilizing a third-party generative AI program if the utilization would involve the disclosure of any confidential information". Florida's guidance emphasizes that lawyers remain fully responsible for AI outputs and cannot treat AI as a substitute for legal judgment.

Texas issued Opinion 705 from its State Bar Professional Ethics Committee in February 2025. The opinion outlines four key obligations: lawyers must reasonably understand AI technology before using it, exercise extreme caution when inputting confidential information into AI tools that might store or expose client data, verify the accuracy of all AI outputs, and avoid charging clients for time saved by AI efficiency gains. Texas also emphasizes that lawyers should consider informing clients when AI will be used in their matters.

New York has developed one of the most comprehensive frameworks through its State Bar Association Task Force on Artificial Intelligence. The April 2024 report provides a thorough analysis across the full spectrum of ethical considerations, including competence, confidentiality, client communication, billing practices, and access to justice implications. New York's guidance stands out for addressing both immediate practical considerations and longer-term questions about AI's transformation of the legal profession.

Alaska issued Ethics Opinion 2025-1 surveying AI issues with particular focus on competence, confidentiality, and billing. The opinion notes that when using non-closed AI systems (such as general consumer products), lawyers should anonymize prompts to avoid revealing client confidential information. Alaska's guidance explicitly cites to its cloud-computing predecessor opinion, treating AI data storage similarly to law firm files on third-party remote servers.

California, Massachusetts, New Jersey, and Oregon have issued guidance through their state attorneys general on how existing state privacy laws apply to AI. California's advisories emphasize that AI use must comply with the California Consumer Privacy Act (CCPA), requiring transparency, respecting individual data rights, and limiting data processing to what is "reasonably necessary and proportionate". Massachusetts focuses on consumer protection, anti-discrimination, and data security requirements. Oregon highlights that developers using personal data to train AI must clearly disclose this use and obtain explicit consent when dealing with sensitive data.[31]

These state-specific approaches create a complex compliance landscape. A lawyer practicing in multiple jurisdictions must understand and comply with each state's requirements. Moreover, state privacy laws like the CCPA and similar statutes in other states impose additional obligations beyond ethics rules.

Enterprise vs. Consumer AI: Understanding the Distinction 💼

Not all AI tools pose equal privacy risks. The OpenAI preservation order highlighted critical differences between consumer-facing products and enterprise solutions.

Consumer Plans (Free, Plus, Pro, and Team) were fully subject to the preservation order. These accounts store user conversations on OpenAI's servers with limited privacy protections. While users can delete conversations, the court order demonstrated that those deletions are not permanent. OpenAI retains the technical capability to preserve and access this data when required by legal process.

Enterprise Accounts offer substantially stronger privacy protections. ChatGPT Enterprise and Edu plans were excluded from the preservation order's broadest requirements. These accounts typically include contractual protections such as Data Processing Agreements (DPAs), commitments against using customer data for model training, and stronger data segregation. However, even enterprise accounts must preserve data when covered by specific legal orders.

Zero Data Retention Agreements provide the highest level of protection. Users who have negotiated such agreements with OpenAI are excluded from data preservation requirements. These arrangements ensure that user data is not retained beyond the immediate processing necessary to generate responses.

For legal professionals, the lesson is clear: consumer-grade AI tools are inappropriate for handling confidential client information. Lawyers who use AI must ensure they employ enterprise-level solutions with proper contractual protections, or better yet, closed systems where client data never leaves the firm's control.

Practical Steps for Legal Professionals: Protecting Privilege and Privacy 🛠️

Given these risks, what should lawyers do? Abandoning AI entirely is neither realistic nor necessary. Instead, legal professionals must adopt a risk-management approach.

Conduct thorough due diligence before adopting any AI tool. Review terms of service, privacy policies, and data processing agreements in detail. Understand exactly what data the AI collects, how long it's retained, whether it's used for model training, who can access it, and what security measures protect it. If these answers aren't clear from public documentation, contact the vendor directly for written clarification.

Implement written AI policies for your firm or legal department. These policies should specify which AI tools are approved for use, what types of information can (and cannot) be input into AI systems, required safeguards such as data anonymization, client consent requirements, verification procedures for AI outputs, and training requirements for all staff. Document these policies and ensure all lawyers and staff understand and follow them.

Default to data minimization. Before inputting any information into an AI system, ask whether it's necessary. Can you accomplish the task without including client-identifying information? Many AI applications work effectively with anonymized or hypothetical scenarios that don't reveal actual client matters. When in doubt, err on the side of caution.
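
The "minimize first" habit described above can be illustrated with a short sketch. This is purely hypothetical: the patterns, placeholder labels, and sample text are illustrative only, and real client matters require far more rigorous review than simple pattern matching.

```python
import re

# Hypothetical illustration: scrub obvious client identifiers from a prompt
# before it ever reaches a third-party AI service. The patterns below are
# examples only and will not catch every identifier.
PATTERNS = {
    "EMAIL": re.compile(r"[\w.+-]+@[\w-]+\.[\w.]+"),
    "SSN": re.compile(r"\b\d{3}-\d{2}-\d{4}\b"),
    "PHONE": re.compile(r"\b\d{3}[-.]\d{3}[-.]\d{4}\b"),
}

def minimize(prompt: str, client_names: list[str]) -> str:
    """Replace known client names and common identifiers with placeholders."""
    for name in client_names:
        prompt = prompt.replace(name, "[CLIENT]")
    for label, pattern in PATTERNS.items():
        prompt = pattern.sub(f"[{label}]", prompt)
    return prompt

safe = minimize(
    "Draft a demand letter for Jane Roe, jane.roe@example.com, SSN 123-45-6789.",
    client_names=["Jane Roe"],
)
print(safe)
```

The point is procedural, not technical: whatever tool you use, identifying information should be stripped or replaced before a prompt leaves your control, and a human should still review the result.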

Obtain informed client consent when using AI for client matters, particularly when inputting any information related to the representation. This consent should be specific about what AI tools will be used, what information may be shared with those tools, what safeguards are in place, and what risks exist despite those safeguards. General consent buried in engagement agreements is likely insufficient.

Use secure, purpose-built legal AI tools rather than consumer applications. Legal-specific AI products are designed with confidentiality requirements in mind and typically offer stronger privacy protections. Even better, consider closed-system AI that operates entirely within your firm's infrastructure without sending data to external servers.

Never assume deletion means erasure. The OpenAI case proves that deleted data may not be truly gone. Treat any information entered into an AI system as potentially permanent, regardless of what the system's privacy settings claim.

Maintain privileged communication protocols. Remember that AI is not your attorney. Communications with AI systems are not protected by attorney-client privilege. Never use AI as a substitute for consulting with qualified colleagues or outside counsel on genuinely privileged matters.

Stay informed about evolving guidance. AI technology and the regulatory landscape are both changing rapidly. Regularly review updates from your state bar association, the ABA, and other professional organizations. Consider attending continuing legal education programs on AI ethics and technology competence.

Final thoughts: The Future of Privacy Rights in an AI World 🔮

The OpenAI preservation order represents a pivotal moment in the collision between AI innovation and privacy rights. It exposes uncomfortable truths about the nature of digital privacy in 2025: privacy policies are subject to override by legal process, deletion features provide psychological comfort rather than technical and legal certainty, and third-party service providers cannot fully protect user data from discovery obligations.

For legal professionals, these realities demand a fundamental reassessment of how AI tools fit into practice. The convenience and efficiency AI provides must be balanced against the sacred duty to protect client confidences and maintain the attorney-client privilege. This is not an abstract concern or distant possibility. It is happening now, in real courtrooms, with real consequences for lawyers and clients.

State bars and regulators are responding, but the guidance remains fragmented and evolving. Federal privacy legislation addressing AI has yet to materialize, leaving a patchwork of state laws with varying requirements. In this environment, legal professionals cannot wait for perfect clarity before taking action.

The responsibility falls on each lawyer to understand the tools they use, the risks those tools create, and the steps necessary to fulfill ethical obligations in this new technological landscape. Ignorance is not a defense. "I didn't know the AI was storing that information" will not excuse a confidentiality breach or privilege waiver.

As AI becomes increasingly embedded in legal practice, the profession must evolve its approach to privacy and confidentiality. The traditional frameworks remain sound—the attorney-client privilege, the duty of confidentiality, the requirement of competence—but their application requires new vigilance. Lawyers must become technology stewards as well as legal advisors, understanding not just what the law says, but how the tools they use might undermine their ability to protect it.

The OpenAI case will not be the last time courts grapple with AI data privacy. As generative AI proliferates and litigation continues, more preservation orders, discovery disputes, and privilege challenges are inevitable. Legal professionals who fail to address these issues proactively may find themselves explaining to clients, judges, or disciplinary authorities why they treated confidential information so carelessly.

Privacy in the AI age demands more than passive reliance on vendor promises. It requires active, informed engagement with the technology we use and honest assessment of the risks we create. For lawyers, whose professional identity rests on the foundation of client trust and confidentiality, nothing less will suffice. The court ruling has made one thing abundantly clear: when it comes to AI and privacy, what you don't know can definitely hurt you—and your clients. ⚠️

🎙️ Ep. 120: AI Game Changers for Law Firms - Stephen Embry on Legal Tech Adoption and Privacy Concerns 🤖⚖️

My next guest is Stephen Embry. Steve is a legal technology expert, blogger at Tech Law Crossroads, and contributor to Above the Law. A former mass tort defense litigator with 20 years of remote practice experience, Steve specializes in AI implementation for law firms and legal technology adoption challenges. With a master's degree in civil engineering and programming experience dating back to 1980, he brings unique technical insight to legal practice. Steve provides data-driven analysis on how AI is revolutionizing law firms while addressing critical privacy and security concerns for legal professionals. 💻

Join Stephen Embry and me as we discuss the following three questions and more! 🎯

  1. What do you think are the top three game-changer announcements from the 2025 ILTA Conference for AI that will make the biggest impact for solo, small, and mid-size law firms?

  2. What are the top three security and privacy concerns lawyers should address when using AI?

  3. What are your top three hacks when it comes to using AI in legal?

In our conversation, we covered the following and more! 📝

  • [00:00:00] Episode Introduction & Guest Bio

  • [00:01:00] Steve's Current Tech Setup

  • [00:02:00] Apple Devices Discussion - MacBook Air M4, AirPods Pro

  • [00:06:00] Android Phone & Remote Practice Experience

  • [00:09:00] iPad Collection & MacBook Air Purchase Story

  • [00:12:00] Travel Tech & Backup Strategies

  • [00:15:00] Q1: AI Game Changers from ILTA 2025 Conference

  • [00:24:00] Billable Hour vs AI Adoption Challenges

  • [00:26:00] Competition & Client Demands for Technology

  • [00:35:00] Q2: AI Security & Privacy Concerns for Lawyers

  • [00:37:00] Discoverability & Privilege Waiver Issues

  • [00:44:00] Q3: Top AI Hacks for Legal Professionals

  • [00:46:00] Using AI for Document Construction & Rules Compliance

  • [00:50:00] Contact Information & Resources

Resources 📚

Connect with Stephen Embry

• Email: sembry@techlawcrossroads.com
• Blog: Tech Law Crossroads - https://techlawcrossroads.com
• Above the Law Contributions: https://abovethelaw.com
• LinkedIn: [Stephen Embry LinkedIn Profile]

Mentioned in the Episode

• ILTA (International Legal Technology Association) Conference 2025 - https://www.iltanet.org
• MacStock Conference - Chicago-area technology conference
• Consumer Electronics Show (CES) - https://www.ces.tech
• Federal Rules of Civil Procedure - https://www.uscourts.gov/rules-policies/current-rules/federal-rules-civil-procedure
• Apple Event (October 9th) - Apple's product announcement events
• Gaylord Conference Center - Washington, DC area conference venue

Hardware Mentioned in the Conversation 🖥️

• MacBook Air M4 (13-inch) - https://www.apple.com/macbook-air/
• iPad Pro - https://www.apple.com/ipad-pro/
• iPad Air - https://www.apple.com/ipad-air/
• iPad Mini - https://www.apple.com/ipad-mini/
• iPhone 16 - https://www.apple.com/iphone-16/
• Apple Watch Ultra 2 - https://www.apple.com/apple-watch-ultra-2/
• AirPods Pro - https://www.apple.com/airpods-pro/
• Samsung Galaxy (Android phone) - https://www.samsung.com/us/mobile/phones/galaxy/
• Samsung Galaxy Fold 7 - https://www.samsung.com/global/galaxy/galaxy-z-fold7/

Software & Cloud Services Mentioned in the Conversation ☁️

• Apple Intelligence - https://www.apple.com/apple-intelligence/
• ChatGPT - https://chat.openai.com
• Claude (Anthropic) - https://claude.ai
• Brock AI - AI debate and argumentation tool
• NotebookLM (Google) - https://notebooklm.google.com
• Microsoft Word - https://www.microsoft.com/en-us/microsoft-365/word
• Dropbox - https://www.dropbox.com
• Backblaze - https://www.backblaze.com
• Synology - https://www.synology.com
• Whisper AI - https://openai.com/research/whisper

Don't forget to give The Tech-Savvy Lawyer.Page Podcast a Five-Star ⭐️ review on Apple Podcasts or wherever you get your podcast feeds! Your support helps us continue bringing you expert insights on legal technology.

Our next episode will be posted in about two weeks. If you have any ideas about a future episode, please contact Michael at michaeldj@techsavvylawyer.page 📧

🚀 Shout Out to Steve Embry: A Legal Tech Visionary Tackling AI's Billing Revolution!

Legal technology expert Steve Embry has once again hit the mark with his provocative and insightful article examining the collision between AI adoption and billable hour pressures in law firms. Writing for Tech Law Crossroads, Steve masterfully dissects the DeepL survey findings that reveal 96% of legal professionals are using AI tools, with 71% doing so without organizational approval. His analysis illuminates a critical truth that many in the profession are reluctant to acknowledge: the billable hour model is facing its most serious existential threat yet.

The AI Efficiency Paradox in Legal Practice ⚖️

Steve’s article brilliantly connects the dots between mounting billable hour pressures and the rise of shadow AI use in legal organizations. The DeepL study reveals that 35% of legal professionals frequently use unauthorized AI tools, primarily driven by pressure to deliver work faster. This finding aligns perfectly with research showing that AI-driven efficiencies are forcing law firms to reconsider traditional billing models. When associates can draft contracts 70% faster with AI assistance, the fundamental economics of legal work shift dramatically.

The legal profession finds itself caught in what experts call the "AI efficiency paradox". As generative AI tools become more sophisticated at automating legal research, document drafting, and analysis, the justification for billing clients based purely on time spent becomes increasingly problematic. This creates a perfect storm when combined with the intense pressure many firms place on associates to meet billable hour quotas; some firms now demand 2,400 hours annually, with 2,000 being billable and collectible.

Shadow AI Use: A Symptom of Systemic Pressure 🔍

Steve's analysis goes beyond surface-level criticism to examine the root causes of unauthorized AI adoption. The DeepL survey data shows that unclear policies account for only 24% of shadow AI use, while pressure to deliver faster work represents 35% of the motivation. This finding supports Steve's central thesis that "the responsibility for hallucinations and inaccuracies is not just that of the lawyer. It's that of senior partners and clients who expect and demand AI use. They must recognize their accountability in creating demands and pressures to not do the time-consuming work to check cites".

This systemic pressure has created a dangerous environment where junior lawyers face impossible choices. They must choose between taking unbillable time to thoroughly verify AI outputs or risking the submission of work with potential hallucinations to meet billing targets. Recent data shows that AI hallucinations have appeared in over 120 legal cases since mid-2023, with 58 occurring in 2025 alone. The financial consequences are real: one firm faced $31,100 in sanctions for relying on bogus AI research.

The Billable Hour's Reckoning 💰

How will lawyers handle the challenge to the billable hour with AI use in their practice of law?

Multiple industry observers now predict that AI adoption will accelerate the demise of traditional hourly billing. Research indicates that 67% of corporate legal departments and 55% of law firms expect AI-driven efficiencies to impact the prevalence of the billable hour significantly. The legal profession is witnessing a fundamental shift where "[t]he less time something takes, the more money a firm can earn" once alternative billing methods are adopted.

Forward-thinking firms are already adapting by implementing hybrid billing models that combine hourly rates for complex judgment calls with flat fees for AI-enhanced routine tasks. This transition requires firms to develop what experts call "AI-informed Alternative Fee Arrangements" that embed clear automation metrics into legal pricing.

The Path Forward: Embracing Responsible AI Integration 🎯

Steve’s article serves as a crucial wake-up call for legal organizations to move beyond sanctions-focused approaches toward comprehensive AI integration strategies. The solution requires acknowledgment from senior partners and clients that AI adoption must include adequate time for verification and quality control processes. This should also serve as a reminder for any attorney, from big firm to solo, to check their work before submitting it to a court, regulatory agency, or other authority. Several state bars and courts have begun requiring certification that AI-generated content has been reviewed for accuracy, recognizing that oversight cannot be an afterthought.

The most successful firms will be those that embrace AI while building robust verification protocols into their workflows. This means training lawyers to use AI competently, establishing clear policies for AI use, and most importantly, ensuring billing practices reflect the true value delivered rather than simply time spent. As one expert noted, "AI isn't the problem, poor process is".

Final Thoughts: Technology Strategy for Modern Legal Practice 📱

Are you ready to take your law practice to the next step with AI?

For legal professionals with limited to moderate technology skills, the key is starting with purpose-built legal AI tools rather than general-purpose solutions. Specialized legal research platforms that include retrieval-augmented generation (RAG) technology can significantly reduce hallucination risks while providing the efficiency gains clients expect. These tools ground AI responses in verified legal databases, offering the speed benefits of AI with enhanced accuracy.
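
The RAG approach mentioned above can be sketched in miniature. Everything in this toy example (the two-rule corpus, the naive keyword-overlap scoring, and the prompt wording) is illustrative only and bears no resemblance to any vendor's actual implementation; real legal research platforms use far more sophisticated retrieval over verified databases.

```python
# Toy sketch of the RAG idea: retrieve passages from a trusted source first,
# then instruct the model to answer ONLY from those passages, which reduces
# the room for fabricated citations.
CORPUS = {
    "Fed. R. Civ. P. 11": "Sanctions may follow filings not grounded in law or fact.",
    "Fed. R. Civ. P. 26": "Parties must preserve and disclose discoverable information.",
}

def retrieve(question: str, k: int = 1) -> list[str]:
    """Rank sources by naive keyword overlap with the question (illustrative only)."""
    def score(text: str) -> int:
        return len(set(question.lower().split()) & set(text.lower().split()))
    ranked = sorted(CORPUS.items(), key=lambda kv: score(kv[1]), reverse=True)
    return [f"{cite}: {text}" for cite, text in ranked[:k]]

def build_prompt(question: str) -> str:
    """Ground the model's answer in retrieved passages rather than its memory."""
    passages = "\n".join(retrieve(question))
    return (
        "Answer ONLY from the passages below; say 'not found' otherwise.\n"
        f"Passages:\n{passages}\nQuestion: {question}"
    )

print(build_prompt("When must parties preserve discoverable information?"))
```

The design choice that matters for lawyers is the constraint in the prompt: the model is told to answer only from retrieved, verifiable text, which is why RAG-based tools tend to hallucinate less than open-ended chatbots.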

The profession must also recognize that competent AI use requires ongoing education. Lawyers need not become AI experts, but they must develop "a reasonable understanding of the capabilities and limitations of the specific GAI technology" they employ. This includes understanding when human judgment must predominate and how to effectively verify AI-generated content.

Steve's insightful analysis reminds us that the legal profession's AI revolution cannot be solved through individual blame or simplistic rules. Instead, it requires systemic changes that address the underlying pressures driving risky AI use while embracing the transformative potential of these technologies. The firms that succeed will be those that view AI not as a threat to traditional billing but as an opportunity to deliver greater value to clients while building more sustainable and satisfying practices for their legal professionals. 🌟

🎙️ TSL Labs: Listen as Two AI-Generated Podcast Hosts Turn the June 30, 2025, TSL Editorial Into an Engaging Discussion for Busy Legal Professionals!

🎧 Can't find time to read lengthy legal tech editorials? We've got you covered.

As part of our Tech Savvy Lawyer Labs initiative, I've been experimenting with cutting-edge AI to make legal content more accessible. This bonus episode showcases how Google's NotebookLM can transform written editorials into engaging podcast discussions.

Our latest experiment takes the editorial "AI and Legal Research: The Existential Threat to Lexis, Westlaw, and Fastcase" and converts it into a compelling conversation between two AI hosts who discuss the content as if they've thoroughly analyzed the piece.

This Labs experiment demonstrates how AI can serve as a time-saving alternative for legal professionals who prefer audio learning or lack time for extensive reading. The AI hosts engage with the material authentically, providing insights and analysis that make complex legal tech topics accessible to practitioners at all technology skill levels.

🚀 Perfect for commutes, workouts, or multitasking—get the full editorial insights without the reading time.

Enjoy!

🎙️ Bonus Episode: TSL Labs' Notebook.AI Commentary on the June 23, 2025, TSL Editorial!

Hey everyone, welcome to this bonus episode!

As you know, in this podcast we explore the future of law through engaging interviews with lawyers, judges, and legal tech professionals on the cutting edge of legal innovation. As part of our Labs initiative, I am experimenting with AI-generated discussions—this episode features two Google Notebook.AI hosts who dive deep into our latest Editorial: "Lawyers, Generative AI, and the Right to Privacy: Navigating Ethics, Client Confidentiality, and Public Data in the Digital Age." If you’re a busy legal professional, join us for an insightful, AI-powered conversation that unpacks the editorial’s key themes, ethical challenges, and practical strategies for safeguarding privacy in the digital era.

Enjoy!

In our conversation, the "Bots" covered the following:

00:00 Introduction to the Bonus Episode

01:01 Exploring Generative AI in Law

01:24 Ethical Challenges and Client Confidentiality

01:42 Deep Dive into the Editorial

09:31 Practical Strategies for Lawyers

13:03 Conclusion and Final Thoughts

Resources:

Google Notebook.AI - https://notebooklm.google/

🎙️ Ep. 107: AI Demand Pro Co-Founder Travis Easton on Fast, Effective Settlement Drafting!

My next guest is Travis Easton, Co-Managing Partner of Easton & Easton LLP and CEO of AI Demand Pro, Inc. We discuss how AI is transforming legal workflows. Travis outlines three benefits of AI: boosting efficiency, revenue, and work quality. He also explores real-world uses of ChatGPT and Claude for tasks like drafting emails while stressing data privacy and accuracy. Furthermore, Travis warns of pitfalls like AI hallucinations and over-reliance, underscoring that lawyers must always review and finalize AI-assisted work to ensure integrity.

All this and more!

Enjoy!

Join Travis and me as we discuss the following three questions and more!

  1. What are your top three AI strategies for enhancing daily legal tasks, and how can lawyers integrate them seamlessly?

  2. How does AI Demand Pro leverage AI to streamline legal processes more effectively than traditional methods, and what are the key benefits of this approach?

  3. What are the top three potential pitfalls or red flags that users of AI tools like AI Demand Pro should be aware of to ensure responsible and effective use?

In our conversation, we cover the following:

[00:56] Travis's Tech Setup

[06:29] AI Strategies for Enhancing Legal Tasks

[11:55] Real-Time Examples of AI Use in Legal Practice

[14:54] Potential Pitfalls of AI Tools in Legal Practice

[20:14] Ensuring Responsible and Effective AI Use

[22:33] Contact Information

Resources:

Connect with Travis:

LinkedIn: linkedin.com/in/travis-easton

Website: demandpro.ai/

Email: travis@demandpro.ai

Software & Cloud Services mentioned in the conversation:

  • AI Demand Pro: https://www.demandpro.ai

  • Apple iPad: https://www.apple.com/ipad/

  • Apple iPhone: https://www.apple.com/iphone/

  • Apple Keyboard: https://www.apple.com/keyboards/

  • Apple MacBook Air: https://www.apple.com/macbook-air/

  • Apple Pencil: https://www.apple.com/apple-pencil/

  • Apple TV: https://www.apple.com/tv/

  • CasePeer CRM: https://www.casepeer.com

  • ChatGPT: https://chat.openai.com

  • Claude AI: https://claude.ai

  • DocReviewPad: https://www.litsoftware.com/docreviewpad

  • ExhibitsPad: https://www.litsoftware.com/exhibitspad

  • LIT Software Suite: https://www.litsoftware.com

  • Microsoft Word: https://www.microsoft.com/en-us/microsoft-365/word

  • TimelinePad: https://www.litsoftware.com/timelinepad

  • TranscriptPad: https://www.litsoftware.com/transcriptpad

  • TrialPad: https://apps.apple.com/us/app/trialpad-trial-presentation/id1319316401

  • WordPerfect: https://www.wordperfect.com 

Transcript

[00:00:00]

Introduction

Michael D.J. Eisenberg: Episode 107: AI Demand Pro's Travis Easton on Fast, Effective Settlement Drafting.

Our next guest is Travis Easton, personal injury attorney and co-founder of AI Demand Pro. Travis shares with us his groundbreaking insights on leveraging AI for settlement demands, essential legal tech tools, and practical strategies that transform law firm efficiency. We discuss this and much more.

Enjoy.

Ad Read #1: Consider Giving The Tech-Savvy Lawyer.Page Podcast A Five-Star ⭐️ Review!

Michael D.J. Eisenberg: Have you been enjoying The Tech-Savvy Lawyer.Page Podcast? Consider giving us a five-star review on Apple Podcasts or wherever you get your podcast feeds.

Introducing Our Guest!

Michael D.J. Eisenberg: Travis, welcome to the podcast.

Travis Easton: Thanks, Michael. Nice to be here. Appreciate it.

Michael D.J. Eisenberg: I appreciate you being here. And to get things started, please tell us what your current tech setup is.

Our Guest's Tech Setup!

Travis Easton: Yeah, so we're a personal injury law firm here, and so [00:01:00] we've been with CasePeer, which is a CRM that we actually were one of the initial customers of, I think seven, eight years ago.

So that's what we use to kind of run our law firm, and it's been great. And then we use AI Demand Pro to write settlement demands. They've been awesome, and we're gonna talk a little bit more about that, since we're one of the founders of it. And then there's a company called Alert, which helps us with, it's not really necessarily tech, but that's what brings in a lot of our leads and things like that, so we can get more cases.

Michael D.J. Eisenberg: Well, tell us about your hardware. What kind of computers are you using today? What's on your desk there?

Travis Easton: We're Apple people.

Michael D.J. Eisenberg: Okay, oh, you're on Macs. Excellent. Do you know what you're using? A Mac Mini? A Mac Studio?

Travis Easton: I have a MacBook Air. I just got the new MacBook Air that came out this year.

Michael D.J. Eisenberg: Nice.

Travis Easton: We usually upgrade every couple years. My brother and I are pretty attuned to all the new Mac products that come out every year.

Michael D.J. Eisenberg: Excellent. And do you have any other devices, like for instance, with your smartphone,

Travis Easton: iPhone,

Michael D.J. Eisenberg: and do you keep up to date on that?

Travis Easton: I'm a part of the yearly renewal program, [00:02:00] yes. So, yep, every year, a new iPhone.

Michael D.J. Eisenberg: Same here. It's interesting, I was at the ABA Tech Show recently and they talked about how only 6% of law firms use Apple computers, which seems a little bit weird to me, but on the other hand, here I am listening to you. It's like, we're an Apple computer office.

Travis Easton: Yeah, no, I think we are one of the rare ones.

I would say there's more and more that are switching over. But when we did initially make the switch a number of years ago, the biggest thing was my dad, who had started our law firm, was still on WordPerfect. I wasn't at the firm at that time, I think I was still in college, but having to go from WordPerfect to Word, there was some conversion process, and it was pretty terrible. So I think that was the hardest part when they switched over from PCs to Macs. But since then it's been great.

Michael D.J. Eisenberg: I think a lot of the older attorneys, they had macros already built for WordPerfect, and they didn't wanna reinvent the wheel.

Travis Easton: Yeah.

Michael D.J. Eisenberg: And so, well, I mean...

Travis Easton: I think when my dad started, it was a [00:03:00] typewriter, to be honest. I know it was, for sure. But I'm saying even when he started our firm and kind of went out on his own, we definitely had a typewriter in the office. It's pretty crazy to think how far we've come.

Michael D.J. Eisenberg: I tell everybody that the best class I took in high school was typing.

Travis Easton: Yeah,

Michael D.J. Eisenberg: Because I've been able to get so much done because I can type.

Travis Easton: Makes a huge difference. That's for sure.

Michael D.J. Eisenberg: Are there any other tech devices that you use that help you in your day-to-day work?

Travis Easton: Yeah. I mean, it's kind of related, but Apple TVs, right? For demonstrations, things like that. When I'm giving demos or have a group setting and we want to present something, that's kind of how we use it. We use an Apple TV and flash it up onto the screen. iPads are also very big as well. We're a trial attorney firm, and so we use some technology, there's an app called TrialPad, and we've utilized that at trial. It's great with iPads, just as far as your exhibits and everything you wanna present, and so it just makes your life a lot easier in that regard.

Michael D.J. Eisenberg: So you know Brett Burney, right?

Travis Easton: I don't know Brett, to be honest.

Michael D.J. Eisenberg: You need to talk to Brett [00:04:00] Burney, who does the In the News podcast with Jeff Richardson. They are all in on that software package, and I know Brett and someone from LIT Software came and did a presentation or two at the ABA Tech Show this year, so I know how well regarded that product is. Do you use an Apple Pencil to help you with that, or is it all...

Travis Easton: I mean, sometimes. My brother, to be honest, is better with the Apple Pencil. I'm usually a finger-type guy, but I know you can use it with that as well. But yeah, I'm not as adept with the Apple Pencil.

Michael D.J. Eisenberg: Using an Apple keyboard?

Travis Easton: I have, yeah. It depends on the task I'm doing, but yeah, for things like TrialPad, I'll usually throw the keyboard on there.

Cool.

Michael D.J. Eisenberg: How do you like that in comparison when you're using your MacBook Air versus your iPad?

Travis Easton: I almost always use a computer for things. You can't use TrialPad on a computer; at least the last time we used it a little bit ago, it was just an iPad app. I don't know if they have plans to change that at any point, but anything else I'm doing, I try to always use a computer.

I'm just more familiar with that for things like typing and stuff like that.

Michael D.J. Eisenberg: Well, [00:05:00] actually, let me rephrase the question a little bit. How do you like using the MacBook Air versus the iPad, in the sense of typing and data input?

Travis Easton: I'm much quicker on the computer, on the MacBook Air. I'm much more familiar with that, and for the things I do on a daily basis, the computer is much quicker and better.

Michael D.J. Eisenberg: I tried shifting over to the iPad a little more and it's just not the same. I want the power of a computer, or I multitask a little bit more, and I just feel a bit naked when I'm trying to use the iPad.

Travis Easton: The computer, it just works a lot better.

Michael D.J. Eisenberg: Yeah. Yep. Yep. Same here. Well, let's get into the questions.

Question #1:  What are Travis' top three AI strategies for enhancing day-to-day legal tasks? How can lawyers integrate them seamlessly into their workflow?

Michael D.J. Eisenberg: Question number one, what are your top three AI strategies for enhancing day-to-day legal tasks? How can lawyers integrate them seamlessly into their workflow?

Travis Easton: Alright, well, let's get into it. So as I kind of thought about this and have pondered it, I think that the way AI should be looked at and utilized in the legal field can help in kind of three ways. One, you wanna make life easier. Two, you want to see if you [00:06:00] can utilize it to increase revenue.

And three, you want to use it to improve your work product, right? To me, as a lawyer, if you can have an AI product that can capture all three of those, or at least some of 'em, I think it's a winner. And that's something that should definitely be looked at and explored to see how you can utilize it and put it into your practice.

So going to the first one, right? How do you make life easier? Like I mentioned earlier, we've created a company called AI Demand Pro. That's the main AI product we utilize in our practice right now, so I am gonna reference it from time to time throughout these answers.

Michael D.J. Eisenberg: Absolutely. Sure.

Travis Easton: So forgive me if I kind of talk about it quite a bit, but that's my best example of how I can explain these things to you. Our firm is a personal injury law firm. We've been around for 30-plus years. My dad and three brothers and I work together here. We have, I think, seven associates now that work with us, and they're great. And then we have a demand writing department. The [00:07:00] way that our firm is made up is, you know, in a personal injury case, you sign up the client, they go and do their medical treatment, and then from there you gather all the documents, you gather the photographs, you gather all the data, and you have your demand writing department summarize all of that and put it together in what is called the demand package.

Once that is ready, it goes to the attorney's desk. He reviews it, edits it, makes it better, and then talks it over with the client, and when it's ready, it goes out the door and you send it to the insurance company. That's kind of the lifeblood of a personal injury law firm, or at least for the majority of 'em; some just go straight to litigation.

But for the majority, this is kind of the crucial first step. The insurance company responds and accepts your offer, or you enter into negotiations and you're able to, you know, resolve it and settle, and the case is done. If you're not able to, that's when you would then file the lawsuit and go into what we refer to as litigation, right?

What we've been able to do in our practice is create this company called AI Demand Pro, which we basically built in-house for our firm. It takes all of those [00:08:00] components, we put them into our software, and the AI basically writes a settlement demand in less than 30 minutes. So it's taking what would've taken hours to days for a demand writer or an attorney and turning it into a 30-minute to a couple-hour process, depending on, you know, the size of the demand. And so keep that in mind when we talk about this first one, making it easier. My sister is our head demand writer and has been leading that department for the past 15 years.

It's amazing that she stuck with that job, because it's a very tedious, very boring job. You're literally just summarizing medical records and typing them into a computer all day, and so it was our highest-turnover position at our firm. It's usually college graduates, and then they try it for a little bit, and then they're like, I want to go do something else. So it was just a tough position to keep filled.

So what has happened since we've installed this program and created AI Demand Pro within our business is that not a single [00:09:00] demand writer has left, because the process is so much easier and so much more rewarding for the demand writers. It took them from having to summarize hundreds to thousands of medical pages on each case down to now reviewing the output, reviewing the document.

They're just cross-referencing, checking it. And so it has made their lives much more enjoyable. And so I think when you're looking at an AI product, whether it's for discovery, whether it's for depositions, there are so many different products out there; those are kind of the ones that really touch home in personal injury.

You want to look for a product that can make your life easier, right? Make it so you don't have to do as much, because I think that's one of the beautiful things of machine learning.

Michael D.J. Eisenberg: Well, I think one thing that you keep saying that I really wanna emphasize to the listeners is that you do have to review your work.

Travis Easton: A hundred percent.

Michael D.J. Eisenberg: And make sure that you don't make any mistakes. And, you know, quite honestly, in our conversation here, you've sort of just automatically bled into question number two: how does AI Demand Pro leverage AI to streamline legal processes more effectively than traditional methods, and what are the key benefits of this approach? I think you answered that beautifully.

That being said, I'm gonna take it back a step. Going back to question number one, can you give us any examples of how you do use AI? I mean, like, actual real-time examples, in the sense of, you know, a particular product to do X, Y, and Z.

Travis Easton: I guess regarding your question, you mean apart from AI Demand Pro or something like that?

Michael D.J. Eisenberg: Yeah, yeah. I didn't know if you had any other examples, or is your bailiwick with AI really focused on AI Demand Pro?

Travis Easton: The majority of it is. I will use ChatGPT or Claude if I have a specific question or if I have a specific task. I'll use them for email sometimes, I'll use them for those things.

But the majority of our focus has been on developing this and on utilizing this AI, because it's so encompassing within the personal injury law firm. And the other thing, you know, we can reference this later when we talk about the pitfalls, [00:11:00] but you have to be very careful not only, like you said, to review your work, but before you upload any sensitive data. Things like medical records should not be uploaded just into ChatGPT, right? That's on the internet, right?

What we've been able to do is set up safeguards, so many privacy blockades and things like that, within our site when we developed AI Demand Pro, so that you can take those medical records and upload them and they're still HIPAA compliant, they're still safe. All of your data is double encrypted. And so, for that reason, to be honest, I am very cautious utilizing AI for anything that I would deem client-sensitive, or that should not just be uploaded to an ordinary site like ChatGPT or Claude.

Michael D.J. Eisenberg: Well, could you give us an example of how you might use one of those two AIs to help you write an email? For what purpose might you do that?

Travis Easton: Yeah, so what I would usually do [00:12:00] is take the reply, the response from the person, you know? If I'm responding to someone, I might take the response, put it in there, and then either add a little note of what I'm thinking I want my reply to be, or I would give it a prompt, right? Like, hey, I'm looking to respond in this way to this email, could you draft it for me? I think one of the best things is that it just saves you so much time, and, to be honest, brain power, and having to really focus in, because we can only muster up so much brain power throughout the day. And so if you can unload some of that onto the AI model, I think it helps tremendously. And so basically you can get the gist and the tone of what you're trying to say and put it in there, and then it can spit out its first version. Then you can either give it some more prompting and tweak it further from there, or you can just take it, edit it, and finish it up, making sure it's the tone and the verbiage that you want.

Michael D.J. Eisenberg: So I'm assuming that you take out any PII from the response email [00:13:00] that you get before you pop it into Claude or ChatGPT.

Travis Easton: Yes. No personal information or anything like that.

Michael D.J. Eisenberg: Yep.
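The redaction step Michael and Travis describe, stripping identifying details from an email before pasting it into a public chatbot, can be sketched in a few lines. This is a minimal illustration only; the regex patterns and placeholder labels are assumptions for the sketch, and real redaction of client data needs far more than this (names, addresses, case numbers, and human review).

```python
import re

# Minimal, illustrative redaction pass: swap obvious PII for placeholders
# before pasting text into a public AI chatbot. The patterns below are
# assumptions for this sketch, not a complete scrubber.
PATTERNS = {
    "[EMAIL]": re.compile(r"[\w.+-]+@[\w-]+\.\w+"),
    "[PHONE]": re.compile(r"(?:\+?1[ .-]?)?\(?\d{3}\)?[ .-]?\d{3}[ .-]?\d{4}"),
    "[SSN]": re.compile(r"\b\d{3}-\d{2}-\d{4}\b"),
}

def redact(text: str) -> str:
    """Replace each matched pattern with its placeholder label."""
    for placeholder, pattern in PATTERNS.items():
        text = pattern.sub(placeholder, text)
    return text

msg = "Call Joe at (555) 123-4567 or reply to joe.smith@example.com."
print(redact(msg))  # Call Joe at [PHONE] or reply to [EMAIL].
```

Anything the patterns miss still leaks, which is why both hosts stress reading the text yourself before it leaves your machine.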

Ad #2: Consider Buying The Tech-Savvy Lawyer a Cup of Coffee ☕️ or Two ☕️☕️!

Michael D.J. Eisenberg: Pardon the interruption. I hope you're enjoying The Tech-Savvy Lawyer.Page Podcast as much as I enjoy making it. Consider buying us a cup of coffee or two to help defray some of the production costs. Thanks, and enjoy.

Michael D.J. Eisenberg: And I've received emails from a handful of parties over the years where, quite honestly, their communication may not have made sense or wasn't very clear.

And I will copy that, I will take out the PII if there's any, and I'll say, can you, one, tell me what this person is trying to convey to me, and two, draft an appropriate response. It saves me some brain power there, 'cause sometimes you get emails from certain parties that really aren't clear, and that's me being polite.

Travis Easton: Yes, a hundred percent.

Michael D.J. Eisenberg: Well, let's get into our last question.

Question #3: What are the top three potential pitfalls or red flags that users of AI tools like AI Demand Pro should be aware of to ensure responsible and effective use?

Michael D.J. Eisenberg: What are the top three potential pitfalls or red flags that users of AI tools like AI Demand Pro should be aware of to ensure responsible and effective [00:14:00] use?

Travis Easton: Yeah, so there's several. We've kind of referenced a couple of them up to this point. So the first one I wanted to talk about is: no AI company should be using the data that you are feeding it to learn from it.

That should be the red flag when anybody is looking to sign up for an AI service or an AI product. One of the questions you should be asking is: are you using my data to train your AI model? The biggest thing there is the privacy, right? If they are using your data to train their AI model, and let's say they put in something about Joe fracturing his leg, then what happens is that the LLM will have the data that Joe fractured his leg actually baked into it. And so you're just giving up the privacy of that client, of whatever you fed into it; it's going into the system and now it's there forever. So the first thing is just make sure that no one is using your data to train their model, first and foremost. [00:15:00]

Yeah. Did you have a question?

Michael D.J. Eisenberg: Well, so, like, I'm looking at your website and I see, you know, there's a clear chart here that says HIPAA compliant, which I'm presuming is, in part, not training off of the data that you put into it.

Travis Easton: Correct.

Michael D.J. Eisenberg: Is there anything that you can think of that the listeners should be aware of and looking for when they review a site like AI Demand Pro's? Like, what key bits should they be seeing that say this is going to be something that is not gonna be learned from?

Travis Easton: I don't think anybody is necessarily gonna say that specific thing on the website. When you're in a demo, when you're in a conversation looking to use it, or maybe in the frequently asked questions, that is when you would want to bring it up. The HIPAA compliance, of course, is kind of a separate topic. That just means that your medical data is safe and secure and being safeguarded according to the HIPAA compliance rules. But whether they're training on your data is a different question; it could be unrelated to medical data, right?

And so you just wanna make sure they're not utilizing your [00:16:00] data to train their AI model. And so I would just make sure you're asking that question.

Michael D.J. Eisenberg: So the reason why I asked you specifically about HIPAA was because, going through your site, and I've seen others, I don't see products and companies like yours say, hey, we're not gonna learn from your data, or, the AI we use is not gonna learn from it.

Travis Easton: We probably should. We should probably put that on there, but I think it's because most people don't even know to ask that.

Michael D.J. Eisenberg: Which, quite frankly, don't you think that kind of violates, was it Model Rule 1.1, Comment 8, that they be reasonably up to date on the technology that they use?

Travis Easton: Probably, yeah. They should, right? They should. But it's such a new frontier. I mean, if you go to any of these conferences, everything is AI. It seems like a quarter of the talks and speeches are on AI or something related to it. Everybody is just trying to get a grasp on it as best they can. But yeah, I think the common lawyer is still very uneducated regarding these things.

Michael D.J. Eisenberg: The problem is they [00:17:00] need to be better educated, because of the excuse of, like, the attorney out of the Southern District of New York using ChatGPT to help draft his response brief without checking it. And then, hey, the judge is like, are you sure these cases are legit? And he goes back and asks ChatGPT, are these legit? And of course ChatGPT says, of course they're legit, why would I not tell you the truth?

Travis Easton: Yeah.

Michael D.J. Eisenberg: So that was one.

Travis Easton: So your great example there brings us to hallucinations, which is what happened in that case, right? And so you need to be aware of hallucinations. Going out and asking ChatGPT, Claude, any of these things, you are at significant peril of it hallucinating.

And so when we built AI Demand Pro, we did as much work as we possibly could to make sure that we are not hallucinating. I'm not the engineer behind it, but the way I explain it to people who probably aren't the most tech-minded either is that we have built what I call a closed [00:18:00] system. We are not going out onto the internet when the medical records say this person's gonna get a back surgery and saying, hey, WebMD, can you tell us what you know about back surgery? That's where people get into trouble when they go into ChatGPT and ask it things related to legal questions. It's going out to the internet and gathering all of that information, and sometimes it's gonna be accurate and sometimes it won't be. And sometimes it's gonna, what we call, hallucinate, just make things up. That's very scary if you are an attorney, and it goes back to your earlier point about always double-checking the work, and not double-checking it with ChatGPT when you get called out on it.

And so our system is closed. It does not go out and find any of that information; it only has what we have put in there, which are things like the California vehicle codes and the other states' vehicle code sections and things like that, that we need it to pull from if it pertains to that specific case. And so I would just say you need to always be aware of hallucinations, [00:19:00] ask, you know, does your product hallucinate, and try to get an understanding of how often it is hallucinating. I would say that's pitfall number two that people need to watch out for, a hundred percent.

Did you have any questions regarding that, Michael?

Michael D.J. Eisenberg: Nope. I think he did a great job explaining that. So I'm gonna say number three.

Travis Easton: Yeah, so I would just say, to be honest, it kind of goes back to what we talked about earlier: avoid outsourcing your legal responsibilities to AI companies in their totality.

AI is meant as a tool to make things more efficient, to make them faster and potentially cheaper. But you are the lawyer, you the listener. If you are a lawyer, you are ultimately responsible for the legal work product that you put out on behalf of your client, whether that's legal research or, in this case, writing settlement demands. And so there are models out there where you send in your demand documents, the company puts it all together [00:20:00] and then sends it back. And unfortunately, I know law firms that just rubber-stamp that, barely review it, and send it out as their work product.

In some cases it might be totally satisfactory, but what if they missed something? What if it was wrong? We were aware of some things like that, and ultimately that just wasn't the best model that we saw. So that was one of the reasons why we created AI Demand Pro: we're getting the demand created so quickly that we can then take the time in-house to have our demand writing department, which we still have, review it, edit it, and make it better if it needs to be. And then it still goes to the attorney, who puts his finishing touches on it. Because I like to think that every attorney has their own little style, right? And so what we've done is we've taken the tedious and longest part, really reviewing those medical records, and shortened it down to 15 to 30 minutes, so you can take the time to review the demand, make it better, and put your finishing touches on it.

Instead of that process [00:21:00] taking hours and days, it's now going out in an hour or so, and it just makes everything so much more efficient. I think that with every AI product out there, they're just getting better and better, which is great, and they will continue to get better as they're fine-tuned. But we're still the lawyers. We still have the obligation to review everything, and so I would just caution everybody: before you rubber-stamp something that AI produced, make sure you're reviewing it and you're happy and satisfied with the work product that's put together for you.

Michael D.J. Eisenberg: Excellent, Travis, I really appreciate you sharing all that.
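Travis's "closed system" answer to hallucinations, retrieval restricted to documents the firm itself loaded, with the tool abstaining rather than improvising when nothing matches, can be sketched roughly as follows. The corpus entries, overlap scoring, and abstain message here are invented for illustration; AI Demand Pro's actual architecture is not described in that level of detail.

```python
# Rough sketch of a "closed" retrieval loop: answers may only cite
# documents the firm supplied, and the system abstains when nothing
# matches instead of improvising. All contents are invented.
CORPUS = {
    "cvc-22350": "California Vehicle Code 22350 basic speed law text",
    "records-joe": "Treatment summary lumbar fracture twelve PT visits",
}

def retrieve(query: str, min_overlap: int = 2):
    """Rank corpus docs by shared-word count; drop weak matches."""
    q_words = set(query.lower().split())
    scored = []
    for doc_id, text in CORPUS.items():
        overlap = len(q_words & set(text.lower().split()))
        if overlap >= min_overlap:
            scored.append((overlap, doc_id, text))
    scored.sort(reverse=True)
    return [(doc_id, text) for _, doc_id, text in scored]

def answer(query: str) -> str:
    hits = retrieve(query)
    if not hits:
        # The "closed" part: no supporting source means no answer.
        return "No supporting document in the case file."
    doc_id, text = hits[0]
    return f"Per {doc_id}: {text}"

print(answer("basic speed law california vehicle code"))
print(answer("average settlement for dog bites"))
```

A production system would use embeddings and an LLM to synthesize the answer, but the guardrail is the same: the model only sees, and may only cite, what is in the case file.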

Where You Can Find Our Guest!

Michael D.J. Eisenberg: Tell us where can people find you?

Travis Easton: Yeah, so our website is demandpro.ai, and my email is travis@demandpro.ai. Feel free to reach out with any questions or inquiries, or anything I can do to help.

Michael D.J. Eisenberg: Excellent. I'll be sure to have that in the show notes and more.

And Travis, again, I want to thank you for being a guest today.

Travis Easton: Thanks, Michael, I appreciate it. Thanks for having me.

Michael D.J. Eisenberg: Thanks.

See You in Two Weeks!

Michael D.J. Eisenberg: Thank you for joining me on this episode of The Tech-Savvy Lawyer.Page Podcast. Our [00:22:00] next episode will be posted in about two weeks. If you have any ideas for a future episode, please contact me at michaeldj@thetechsavvylawyer.page.

Have a great day and happy lawyering.