The recent court ruling in In re: OpenAI, Inc., Copyright Infringement Litigation has exposed a critical vulnerability in the relationship between artificial intelligence tools and user privacy rights. On May 13, 2025, U.S. Magistrate Judge Ona T. Wang issued an order requiring OpenAI to "preserve and segregate all output log data that would otherwise be deleted on a going forward basis". This unprecedented directive affected more than 400 million ChatGPT users worldwide and fundamentally challenged assumptions about data privacy in the AI era.[1][2][3][4]
While the court modified its order on October 9, 2025, terminating the blanket preservation requirement as of September 26, 2025, the damage to user trust and the precedent for future litigation remain significant. More importantly, the ruling illuminates a stark reality for legal professionals: the "delete" button offers an illusion of control rather than genuine data protection.
The Court Order That Changed Everything ⚖️
The preservation order emerged from a copyright infringement lawsuit filed by The New York Times against OpenAI in December 2023. The Times alleged that OpenAI unlawfully used millions of its articles to train ChatGPT without permission or compensation. During discovery, concerns arose that OpenAI had been deleting user conversations that could demonstrate copyright violations.
Judge Wang's response was sweeping. The court ordered OpenAI to retain all ChatGPT output logs, including conversations users believed they had permanently deleted, temporary chats designed to auto-delete after sessions, and API-generated outputs regardless of user privacy settings. The order applied retroactively, meaning conversations deleted months or even years earlier remained archived in OpenAI's systems.
OpenAI immediately appealed, arguing the order was overly broad and compromised user privacy. The company contended it faced conflicting obligations between the court's preservation mandate and "numerous privacy laws and regulations throughout the country and the world". Despite these objections, Judge Wang denied OpenAI's motion, prioritizing the preservation of potential evidence over privacy concerns.
The October 9, 2025 stipulation and order brought partial relief. OpenAI's ongoing obligation to preserve all new output log data terminated as of September 26, 2025. However, all data preserved before that cutoff remains accessible to plaintiffs (except for users in the European Economic Area, Switzerland, and the United Kingdom). Additionally, OpenAI must continue preserving output logs from specific domains identified by the New York Times and may be required to add additional domains as the litigation progresses.
Privacy Rights in the Age of AI: An Eroding Foundation 🛡️
This case demonstrates that privacy policies are not self-enforcing legal protections. Users who relied on OpenAI's representations about data deletion discovered those promises could be overridden by court order without their knowledge or consent. The "temporary chat" feature, marketed as providing ephemeral conversations, proved anything but temporary when litigation intervened.
The implications extend far beyond this single case. The ruling establishes that AI-generated content constitutes discoverable evidence subject to preservation orders. Courts now view user conversations with AI not as private exchanges but as potential legal records that can be compelled into evidence.
For legal professionals, this reality is particularly troubling. Lawyers regularly handle sensitive client information that must remain confidential under both ethical obligations and the attorney-client privilege. The court order revealed that even explicitly deleted conversations may be retained indefinitely when litigation demands it.
The Attorney-Client Privilege Crisis 🔥
Attorney-client privilege protects confidential communications between lawyers and clients made for the purpose of obtaining or providing legal advice. This protection is fundamental to the legal system. However, the privilege can be waived through voluntary disclosure to third parties outside the attorney-client relationship.
When lawyers input confidential client information into public AI platforms like ChatGPT, they potentially create a third-party disclosure that destroys privilege. Many generative AI systems learn from user inputs, incorporating that information into their training data. This means privileged communications could theoretically appear in responses to other users' queries.
The OpenAI preservation order compounds these concerns. It demonstrates that AI providers cannot guarantee data will be deleted on request, even when their policies promise exactly that. Lawyers who used ChatGPT's temporary chat feature, or who deleted sensitive conversations believing those actions protected client privacy, have now discovered that those communications may be preserved indefinitely as litigation evidence.
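To make the point concrete: even at the API level, data retention is a provider-side promise rather than a client-side control. The sketch below is a minimal illustration assuming the OpenAI Python SDK and the store parameter on Chat Completions (the model name and prompt are placeholders); the most a caller can do is ask the provider not to retain an output, and as this case showed, a court order can override even that.

```python
# Minimal sketch, assuming the OpenAI Python SDK and the Chat Completions
# "store" flag. Setting store=False asks OpenAI not to retain the completion;
# it is a request to the provider, not a client-enforced guarantee.
from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment

response = client.chat.completions.create(
    model="gpt-4o-mini",  # placeholder model for illustration
    messages=[{"role": "user", "content": "Summarize this contract clause..."}],
    store=False,  # ask the provider not to retain this output
)
print(response.choices[0].message.content)

# Note: under the preservation order discussed above, output logs were
# retained on OpenAI's side regardless of user privacy settings like this.
```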
The risk is not theoretical. In the now-famous Mata v. Avianca, Inc. case, a lawyer used a free version of ChatGPT to draft a legal brief containing fabricated citations. While the lawyer faced sanctions for submitting false information to the court, legal ethics experts noted the confidentiality implications of the increasingly specific prompts the attorney used, which may have revealed confidential client information.
ABA Model Rules and AI: What Lawyers Must Know 📚
The American Bar Association's Model Rules of Professional Conduct govern lawyer behavior, and while these rules predate generative AI, they apply with full force to its use. On July 29, 2024, the ABA Standing Committee on Ethics and Professional Responsibility issued Formal Opinion 512, providing the first comprehensive guidance on lawyers' use of generative AI.
Model Rule 1.1: Competence requires lawyers to provide competent representation, including maintaining "legal knowledge, skill, thoroughness and preparation reasonably necessary for the representation". Comment [8] to the rule specifically states lawyers must understand "the benefits and risks associated with relevant technology". Opinion 512 clarifies that lawyers need not become AI experts, but must have a "reasonable understanding of the capabilities and limitations of the specific GenAI technology" they use. This is not a one-time obligation. Given AI's rapid evolution, lawyers must continuously update their understanding.
Model Rule 1.6: Confidentiality creates perhaps the most significant ethical challenge for AI use. The rule prohibits lawyers from revealing "information relating to the representation of a client" and requires them to "make reasonable efforts to prevent the inadvertent or unauthorized disclosure of, or unauthorized access to, information relating to the representation". Self-learning AI tools that train on user inputs create substantial risk of improper disclosure. Information entered into public AI systems may be stored, processed by third-party vendors, and potentially accessed by company employees or incorporated into model training. Opinion 512 recommends lawyers obtain informed client consent before inputting any information related to representation into AI systems. Lawyers must also thoroughly review the terms of use, privacy policies, and contractual agreements of any AI tool they employ.
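Opinion 512's "reasonable efforts" standard invites practical safeguards before any text leaves the firm. The sketch below is a hypothetical illustration, not a compliance solution: a redact helper (the name and patterns are the author's assumptions) scrubs obvious identifiers from a prompt before it reaches a third-party AI system. Real matters demand far more than regexes, but it shows the general shape of minimizing disclosure.

```python
import re

def redact(text: str, client_names: list[str]) -> str:
    """Mask obvious identifiers before text is sent to a third-party AI tool.

    Hypothetical illustration only: regex scrubbing catches emails, phone
    numbers, and listed names, but misses context that can still identify
    a client (addresses, docket numbers, distinctive facts).
    """
    # Replace email addresses.
    text = re.sub(r"[\w.+-]+@[\w-]+\.[\w.]+", "[EMAIL]", text)
    # Replace common US phone number formats.
    text = re.sub(
        r"\b(?:\+?1[\s.-]?)?\(?\d{3}\)?[\s.-]?\d{3}[\s.-]?\d{4}\b",
        "[PHONE]",
        text,
    )
    # Replace known client names, case-insensitively.
    for name in client_names:
        text = re.sub(re.escape(name), "[CLIENT]", text, flags=re.IGNORECASE)
    return text

prompt = redact(
    "Draft a demand letter for Jane Doe (jane.doe@example.com, 555-867-5309).",
    client_names=["Jane Doe"],
)
print(prompt)  # Draft a demand letter for [CLIENT] ([EMAIL], [PHONE]).
```

Even with client-side redaction, informed consent and a review of the tool's terms remain necessary; scrubbing identifiers reduces, but does not eliminate, the disclosure risk Rule 1.6 addresses.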
Model Rule 1.4: Communication obligates lawyers to keep clients reasonably informed about their representation. When using AI tools, lawyers should disclose this fact to clients, particularly when the AI processes client information or could impact the representation. Clients have a right to understand how their matters are being handled and what technologies may access their confidential information.[25][22][20][21]
Model Rule 3.3: Candor Toward the Tribunal requires lawyers to be truthful in their representations to courts. AI systems frequently produce "hallucinations": plausible-sounding but entirely fabricated information, including fake case citations. Lawyers remain fully responsible for verifying all AI outputs before submitting them to courts or relying on them for legal advice. The Mata v. Avianca case serves as a cautionary tale of the consequences when lawyers fail to fulfill this obligation.
Model Rules 5.1 and 5.3: Supervisory Responsibilities make lawyers responsible for the conduct of other lawyers and nonlawyer assistants working under their supervision. When staff members use AI tools, supervising lawyers must ensure appropriate policies, training, and oversight exist to prevent ethical violations.
Model Rule 1.5: Fees requires lawyers to charge reasonable fees. Opinion 512 addresses whether lawyers can bill clients for time "saved" through AI efficiency gains. The guidance suggests that when using hourly billing, efficiencies gained through AI should benefit clients. However, lawyers may pass through reasonable direct costs of AI services (such as subscription fees) when properly disclosed and agreed upon in advance.
State-by-State Variations: A Patchwork of Protection 🗺️
While the ABA Model Rules provide a national framework, individual states adopt and interpret ethics rules differently. Legal professionals must understand their specific state's requirements, which can vary significantly.