MTC: London's iPhone Theft Crisis: Critical Mobile Device Security Lessons for Traveling Lawyers šŸ“±āš–ļø

Lawyers can learn about mobile cybersecurity from the recent iPhone thefts in London.

Recent events in London should serve as a wake-up call for every legal professional who carries client data beyond the office walls. London police recently dismantled a sophisticated international theft ring responsible for smuggling approximately 40,000 stolen iPhones to China in just twelve months. This operation revealed thieves earning up to £300 per stolen device, with phones reselling overseas for as much as $5,000. With over 80,000 phones stolen in London last year alone, this crisis underscores critical vulnerabilities that lawyers must address when working remotely.

The sophistication of these operations is alarming. Criminals on electric bikes snatch phones from unsuspecting victims and immediately wrap devices in aluminum foil to block tracking signals. This industrial-scale crime demonstrates that our mobile devices—which contain privileged communications, case strategies, and confidential client data—are valuable targets for organized criminal networks operating globally.

Your Ethical Obligations Are Clear

ABA Model Rule 1.1 requires lawyers to maintain competence, including understanding "the benefits and risks associated with relevant technology". This duty of technological competence has been adopted by over 40 states and isn't optional—it's fundamental to ethical practice. Model Rule 1.6(c) mandates that lawyers "make reasonable efforts to prevent the inadvertent or unauthorized disclosure of, or unauthorized access to, information relating to the representation of a client".

When your phone disappears—whether through theft, loss, or border seizure—you face potential violations of these ethical duties. Recent data shows U.S. Customs and Border Protection searched 14,899 devices between April and June 2025, a 16.7% increase from previous surges. Lawyers traveling internationally face heightened risks, and a stolen or searched device can compromise attorney-client privilege instantly.

Essential Security Measures for Mobile Lawyers

Before leaving your office, implement these non-negotiable protections. Enable full-device encryption on all smartphones, tablets, and laptops. For iPhones, setting a passcode automatically enables encryption; most modern Android devices are also encrypted by default, but users of older devices should confirm and, if necessary, activate encryption in security settings. Strong passwords matter—use alphanumeric combinations of at least 12 characters, avoiding easily guessed patterns.

Lawyers need to know how to protect their clients' PII when crossing the border!

Two-factor authentication (2FA) adds critical protection layers. Even if someone obtains your password, 2FA requires secondary verification through your phone or authentication app. This simple step dramatically reduces unauthorized access risks. Configure remote wipe capabilities before traveling. If your device is stolen, you can erase all data remotely, protecting client information even when physical recovery is impossible.

Disable biometric authentication when traveling internationally. Face ID and fingerprint scanners can be used against you at borders where Fourth Amendment protections are diminished. Restart your device before crossing borders to force password-only access. Consider carrying a "clean" device for international travel, accessing files only through encrypted cloud storage rather than storing sensitive data locally.

Coffee Shops, Airports, and Public Spaces

Public Wi-Fi networks pose serious interception risks. Hackers create fake hotspots with legitimate-sounding names, capturing everything you transmit. As lawyers increasingly embrace cloud-based computing for their work, encryption when using public Wi-Fi becomes non-negotiable.

Always use a trusted VPN (Virtual Private Network) when connecting to public networks. VPNs encrypt your internet traffic, preventing interception even on compromised networks. Alternatively, use your smartphone's personal hotspot rather than connecting to public Wi-Fi. Turn off file sharing on all mobile devices. Avoid accessing highly sensitive client files in public spaces altogether—save detailed case work for secure, private connections.

Physical security deserves equal attention. Visual privacy screens prevent shoulder surfing. Position yourself with your back to walls in coffee shops so others cannot observe your screen. Be alert to your surroundings and maintain physical control of devices at all times. Never leave laptops, tablets, or phones unattended, even briefly.

Border Crossings and International Travel

Lawyers crossing international borders face unique challenges. Under the border search exception, CBP policies permit extensive device searches at the border and its functional equivalents (and CBP asserts authority within 100 miles of any border), significantly reducing Fourth Amendment protections. New York State Bar Association Ethics Opinion 2017-5 addresses lawyers' duties when traveling with client data across borders.

The reasonableness standard governs your obligations. Evaluate whether you truly need to bring confidential information across borders. If travel requires client data, bring only materials professionally necessary for your specific purpose. Consider these strategies: store files in encrypted cloud services rather than locally; use strong passwords and disable biometric authentication; carry your bar card to identify yourself as an attorney if questioned; identify which files contain privileged information before reaching the border.

If border agents demand device access, clearly state that you are an attorney and the device contains privileged client communications. Ask whether the request is optional or mandatory. If agents conduct a search, document what occurred and consider whether client notification is required under Rule 1.4. New York Rule 1.6 requires taking reasonable steps to prevent unauthorized disclosure, with heightened precautions necessary when government agencies are opposing parties.

Practical Implementation Today

Create firm policies addressing mobile device security. Require immediate reporting of lost or stolen devices. Implement Mobile Device Management (MDM) software to monitor, secure, and remotely wipe all connected devices. Conduct regular security awareness training covering email practices, phishing recognition, and social engineering tactics.

Develop an Incident Response Plan before breaches occur. Know which experts to contact, document cybersecurity policies, and establish notification protocols. Under various state laws and regulations, such as California Civil Code § 1798.82 and HIPAA's Breach Notification Rule, lawyers may be legally required to notify clients of data breaches.

Lawyers are on the front line of cybersecurity when on the go!

Communicate with clients about security measures. Obtain informed consent regarding electronic communications and any security limitations. Some firms include these discussions in engagement letters, setting clear expectations about communication methods and encryption use.

Stay current with evolving threats. Subscribe to legal technology security bulletins. The Tech-Savvy Lawyer blog regularly covers mobile security issues, including recent coverage of the SlopAds malware campaign that compromised 224 Android applications on Google Play Store. Technology competence requires ongoing learning as threats and safeguards evolve.

The Bottom Line

The London iPhone theft crisis demonstrates that our devices are valuable targets for sophisticated criminal networks operating internationally. Every lawyer who works outside the office—whether at coffee shops, client meetings, or international destinations—must take mobile security seriously. Your ethical obligations under Model Rules 1.1 and 1.6 demand it. Your clients' confidential information depends on it. Your professional reputation requires it.

Implementing these security measures isn't complicated or expensive. Enable encryption. Use strong passwords and 2FA. Avoid public Wi-Fi or use VPNs. Disable biometrics when traveling. Maintain physical control of devices. These straightforward steps significantly reduce risks while allowing you to work effectively from anywhere.

The legal profession has embraced mobile technology's benefits—now we must address its risks with equal commitment. Don't wait for a theft, loss, or border seizure to prompt action. Protect your clients' confidential information today.

MTC

MTC: Deepfakes, Deception, and Professional Duty - What the North Bethesda AI Incident Teaches Lawyers About Ethics in the Digital Age šŸ§ āš–ļø

Lawyers need to be aware of the potential professional and ethical consequences if they allow deepfakes to enter the courtroom.

In October 2025, a seemingly lighthearted prank spiraled into a serious legal matter that carries profound implications for every practicing attorney. A 27-year-old North Bethesda woman sent her husband an AI-generated photograph depicting a man lounging on their living room couch. Alarmed by the apparent intrusion, he called 911. The subsequent police response was swift and overwhelming: eight marked cruisers raced through daytime traffic with lights and sirens activated. When officers arrived, they found no burglar—only the woman alone at home, a cellphone mounted on a tripod and aimed at the front door, and an admission that it was all a prank.

The story might have ended as a cautionary tale about viral social media trends gone awry. But for the legal profession, it offers urgent and multifaceted lessons about technological competence, professional responsibility, and the ethical obligations that now define modern legal practice.

The woman was charged with making a false statement concerning an emergency or crime and providing a false statement to a state official. Though the charges are criminal in nature, they illuminate a landscape that the legal profession must navigate with far greater care than many currently do. The intersection of generative AI, digital deception, and legal ethics represents uncharted territory—one where professional liability and disciplinary action await those who fail to understand the technology reshaping evidence, testimony, and truth-seeking in the courtroom.

The Technology Competence Imperative

In 2012, the American Bar Association amended Comment 8 to Model Rule 1.1 (Competence) to include an explicit requirement that lawyers remain competent in "the benefits and risks associated with relevant technology." This was not a suggestion; it was a mandate. Today, 31 states have adopted or adapted this language into their own professional conduct rules. The ABA's accompanying committee report emphasized that the amendment serves as "a reminder to lawyers that they should remain aware of technology." Yet the word "reminder" should not be mistaken for optional guidance. As the digital landscape grows more sophisticated—and more legally consequential—ignorance of technology is an increasingly indefensible excuse for professional incompetence.

This case exemplifies why: An attorney representing clients in disputes involving digital media—whether custody cases, employment disputes, criminal defense, or civil litigation—cannot afford to lack foundational knowledge of how AI-generated images are created, detected, and authenticated. A lawyer who fails to distinguish authentic video evidence from a deepfake, or who presents such evidence without proper verification, may be engaging in conduct that violates not only Rule 1.1 but also Rules 3.3 and 8.4 of the ABA Model Rules of Professional Conduct.

Rule 1.1 creates a floor, not a ceiling. While most attorneys are not expected to become machine learning engineers, they must possess working knowledge of AI detection tools, image metadata analysis, forensic software, and the limitations of each. Many free and low-cost resources now exist for such training. Bar associations, CLE providers, and technology vendors offer courses specifically designed for attorneys with moderate tech proficiency. The obligation is not to achieve expertise but to make a deliberate, documented effort to stay reasonably informed.

🚨 Lawyers may argue that they "reasonably believed" the photograph was authentic and thus did not knowingly violate Rule 3.3. But this defense grows weaker as technology becomes more accessible and detection methods more readily available. 🚨

Candor, Evidence, and the Truth-Seeking Function

The Maryland incident also implicates ABA Model Rule 3.3 (Candor Toward the Tribunal). Rule 3.3(a)(3) prohibits lawyers from offering evidence that they know to be false. But what does a lawyer know when AI makes authenticity ambiguous?

Consider a hypothetical: A client provides a lawyer with a photograph purporting to show the opposing party engaged in misconduct. The lawyer accepts it at face value and presents it to the court. Later, it is discovered that the image was AI-generated. The lawyer may argue that they "reasonably believed" the photograph was authentic and thus did not knowingly violate Rule 3.3. But this defense grows weaker as technology becomes more accessible and detection methods more readily available. A lawyer's failure to employ basic verification protocols—such as checking metadata, using AI detection software, or consulting a forensic expert—may render their "belief" in authenticity unreasonable, transforming what appears to be good-faith conduct into a breach of the duty of candor.

The deeper concern is what scholars call the "Liar's Dividend": the phenomenon by which the mere existence of convincing deepfakes causes observers to distrust even genuine evidence. Lawyers can inadvertently exploit this dynamic by introducing AI-generated content without disclosure, or by sowing doubt in jurors' minds about the authenticity of real evidence. When a lawyer does so knowingly—or worse, with willful indifference—they corrupt the judicial process itself.

Rule 3.3 does not merely prevent lawyers from lying; it affirms their role as officers of the court whose duty to truth transcends client advocacy. This duty becomes more, not less, demanding in an age of manipulated media.

Dishonesty, Fraud, and the Outer Boundaries of Professional Conduct

North Bethesda deepfake prank highlights ethical gaps for attorneys.

ABA Model Rule 8.4(c) prohibits conduct involving dishonesty, fraud, deceit, or misrepresentation. On its face, Rule 8.4 seems straightforward. But its application to AI-generated evidence raises subtle questions. If a lawyer negligently fails to detect a deepfake and introduces it as genuine, are they guilty of "deceit"? Does their ignorance of the technology constitute a defense, or does it constitute a separate violation of Rule 1.1?

The answer likely depends on context. A lawyer who presents AI-generated evidence without having undertaken any effort to verify it—in a jurisdiction where technological competence is mandated, and where basic detection tools are publicly available—may struggle to argue that they acted with mere negligence rather than reckless indifference to truth. The line between incompetence and dishonesty can be perilously thin.

Consider, too, the scenario in which a lawyer becomes aware that a client has manufactured evidence using AI. Rule 8.4(c) does not explicitly prevent a lawyer from advising a client about the legal risks of doing so, nor does it require immediate disclosure to opposing counsel or the court in all circumstances. However, if the lawyer then remains silent while the falsified evidence is introduced into litigation, they may be viewed as having effectively participated in fraud. The duty to maintain client confidentiality (Rule 1.6) can conflict with the duty of candor, but Rule 3.3 clarifies that candor prevails: "The duties stated in paragraph (a) … continue to the conclusion of the proceeding, and apply even if compliance requires disclosure of information otherwise protected by Rule 1.6.ā€

Practical Safeguards and Professional Resilience

So what can lawyers do—immediately and pragmatically—to protect themselves and their clients?

First, invest in education. Most state bar associations now offer CLE courses on AI, deepfakes, and digital evidence. Many require only two to three hours. Florida has mandated three hours of technology CLE every three years; others will likely follow. Attending such courses is not an extravagance; it is the baseline of professional duty.

Second, establish verification protocols. When digital evidence is introduced in a case—particularly photographs, videos, or audio recordings—require documentation of provenance. Demand metadata. Consider retaining expert assistance to authenticate digital files. Many law firms now partner with forensic technology consultants for exactly this purpose. The cost is modest compared to the risk of professional discipline or malpractice liability.
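Even a first-pass metadata screen can be scripted before a file ever reaches a consultant. The sketch below is a minimal illustration, not a substitute for forensic tools or expert review: it simply checks whether a JPEG contains an EXIF metadata segment at all. Files produced by AI image generators (and files re-saved by many apps) frequently carry none, which is one non-conclusive red flag worth recording; the presence of EXIF proves nothing either, since those fields are trivially editable.

```python
import struct

def has_exif(jpeg_bytes: bytes) -> bool:
    """Return True if a JPEG byte stream contains an EXIF (APP1) segment.

    A missing segment is one non-conclusive red flag to note in a
    verification memo; anything suspicious still goes to a qualified
    forensic examiner.
    """
    if not jpeg_bytes.startswith(b"\xff\xd8"):  # SOI marker: not a JPEG
        return False
    i = 2
    while i + 4 <= len(jpeg_bytes):
        if jpeg_bytes[i] != 0xFF:               # lost sync with segment markers
            break
        marker = jpeg_bytes[i + 1]
        if marker == 0xDA:                      # start-of-scan: headers are over
            break
        seg_len = struct.unpack(">H", jpeg_bytes[i + 2:i + 4])[0]
        # APP1 segment whose payload begins with the EXIF signature
        if marker == 0xE1 and jpeg_bytes[i + 4:i + 10] == b"Exif\x00\x00":
            return True
        i += 2 + seg_len
    return False
```

In practice you would pass in the raw file bytes (for example, `has_exif(Path("photo.jpg").read_bytes())`) and log the result alongside file hashes and chain-of-custody notes.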

Third, disclose limitations transparently. If you lack expertise in evaluating a particular form of digital evidence, say so. Rule 1.1 permits lawyers to partner with others possessing requisite skills. Transparency about technological limitations is not weakness; it is professionalism.

Fourth, update client engagement letters and retention agreements. Explicitly discuss how your firm will handle digital evidence, what verification steps will be taken, and what the client can reasonably expect. Document these conversations. In disputes with clients later, such records can be invaluable.

Fifth, stay alert to emerging guidance. Bar associations continue to issue formal opinions on technology and ethics. Journals, conference presentations, and industry publications track the intersection of AI and law. Subscribing to alerts from your state bar's ethics committee or joining legal technology practice groups ensures you remain informed as standards evolve. You may find The Tech-Savvy Lawyer.Page a great source for alerts and guidance! šŸ¤—

Final Thoughts: The Deeper Question

Lawyers have the professional and ethical responsibility of knowing how deepfakes work!

The Maryland case is ultimately not about one woman's ill-advised prank. It is about the profession's obligation to remain trustworthy stewards of justice in an age when truth itself can be fabricated with a few keystrokes. The legal system depends on evidence, testimony, and the adversarial process to uncover truth. Lawyers are its guardians.

Technology competence is not an optional specialization or a nice-to-have skill. Under the ABA Model Rules and the rules adopted by 31 states, it is a foundational professional duty. Failure to acquire it exposes practitioners to disciplinary action, malpractice claims, and—most importantly—the real possibility of leading their clients, courts, and the public toward injustice.

The invitation to lawyers is clear: engage with the technology that is reshaping litigation, evidence, and professional practice. Understand its capabilities and risks. Invest in verification, transparency, and ongoing education. In doing so, you honor not just your professional obligations but the deeper mission of the law itself: the pursuit of truth.

MTC: šŸ”’ Your AI Conversations Aren't as Private as You Think: What the OpenAI Court Ruling Means for Legal Professionals

A watershed moment in digital privacy has arrived, and it carries profound implications for lawyers and their clients.

The recent court ruling in In re: OpenAI, Inc., Copyright Infringement Litigation has exposed a critical vulnerability in the relationship between artificial intelligence tools and user privacy rights. On May 13, 2025, U.S. Magistrate Judge Ona T. Wang issued an order requiring OpenAI to "preserve and segregate all output log data that would otherwise be deleted on a going forward basis". This unprecedented directive affected more than 400 million ChatGPT users worldwide and fundamentally challenged assumptions about data privacy in the AI era.[1][2][3][4]

While the court modified its order on October 9, 2025, terminating the blanket preservation requirement as of September 26, 2025, the damage to user trust and the precedent for future litigation remain significant. More importantly, the ruling illuminates a stark reality for legal professionals: the "delete" button offers an illusion of control rather than genuine data protection.

The Court Order That Changed Everything āš–ļø

The preservation order emerged from a copyright infringement lawsuit filed by The New York Times against OpenAI in December 2023. The Times alleged that OpenAI unlawfully used millions of its articles to train ChatGPT without permission or compensation. During discovery, concerns arose that OpenAI had been deleting user conversations that could potentially demonstrate copyright violations.

Judge Wang's response was sweeping. The court ordered OpenAI to retain all ChatGPT output logs, including conversations users believed they had permanently deleted, temporary chats designed to auto-delete after sessions, and API-generated outputs regardless of user privacy settings. The order applied retroactively, meaning conversations deleted months or even years earlier remained archived in OpenAI's systems.

OpenAI immediately appealed, arguing the order was overly broad and compromised user privacy. The company contended it faced conflicting obligations between the court's preservation mandate and "numerous privacy laws and regulations throughout the country and the world". Despite these objections, Judge Wang denied OpenAI's motion, prioritizing the preservation of potential evidence over privacy concerns.

The October 9, 2025 stipulation and order brought partial relief. OpenAI's ongoing obligation to preserve all new output log data terminated as of September 26, 2025. However, all data preserved before that cutoff remains accessible to plaintiffs (except for users in the European Economic Area, Switzerland, and the United Kingdom). Additionally, OpenAI must continue preserving output logs from specific domains identified by the New York Times and may be required to add additional domains as the litigation progresses.

Privacy Rights in the Age of AI: An Eroding Foundation šŸ›”ļø

This case demonstrates that privacy policies are not self-enforcing legal protections. Users who relied on OpenAI's representations about data deletion discovered those promises could be overridden by court order without their knowledge or consent. The "temporary chat" feature, marketed as providing ephemeral conversations, proved anything but temporary when litigation intervened.

The implications extend far beyond this single case. The ruling establishes that AI-generated content constitutes discoverable evidence subject to preservation orders. Courts now view user conversations with AI not as private exchanges but as potential legal records that can be compelled into evidence.

For legal professionals, this reality is particularly troubling. Lawyers regularly handle sensitive client information that must remain confidential under both ethical obligations and the attorney-client privilege. The court order revealed that even explicitly deleted conversations may be retained indefinitely when litigation demands it.

The Attorney-Client Privilege Crisis šŸ‘„

Attorney-client privilege protects confidential communications between lawyers and clients made for the purpose of obtaining or providing legal advice. This protection is fundamental to the legal system. However, the privilege can be waived through voluntary disclosure to third parties outside the attorney-client relationship.

When lawyers input confidential client information into public AI platforms like ChatGPT, they potentially create a third-party disclosure that destroys privilege. Many generative AI systems learn from user inputs, incorporating that information into their training data. This means privileged communications could theoretically appear in responses to other users' queries.

The OpenAI preservation order compounds these concerns. It demonstrates that AI providers cannot guarantee data will be deleted upon request, even when their policies promise such deletion. Lawyers who used ChatGPT's temporary chat feature or deleted sensitive conversations believing those actions provided privacy protection now discover their confidential client communications may be preserved indefinitely as litigation evidence.

The risk is not theoretical. In the now-famous Mata v. Avianca, Inc. case, a lawyer used a free version of ChatGPT to draft a legal brief containing fabricated citations. While the lawyer faced sanctions for submitting false information to the court, legal ethics experts noted the confidentiality implications of the increasingly specific prompts the attorney used, which may have revealed client confidential information.

ABA Model Rules and AI: What Lawyers Must Know šŸ“‹

The American Bar Association's Model Rules of Professional Conduct govern lawyer behavior, and while these rules predate generative AI, they apply with full force to its use. On July 29, 2024, the ABA Standing Committee on Ethics and Professional Responsibility issued Formal Opinion 512, providing the first comprehensive guidance on lawyers' use of generative AI.

Model Rule 1.1: Competence requires lawyers to provide competent representation, including maintaining "legal knowledge, skill, thoroughness and preparation reasonably necessary for representation". Comment [8] to the rule specifically states lawyers must understand "the benefits and risks associated with relevant technology". Opinion 512 clarifies that lawyers need not become AI experts, but must have a "reasonable understanding of the capabilities and limitations of the specific GenAI technology" they use. This is not a one-time obligation. Given AI's rapid evolution, lawyers must continuously update their understanding.

Model Rule 1.6: Confidentiality creates perhaps the most significant ethical challenge for AI use. The rule prohibits lawyers from revealing "information relating to the representation of a client" and requires them to "make reasonable efforts to prevent the inadvertent or unauthorized disclosure of, or unauthorized access to, information relating to the representation". Self-learning AI tools that train on user inputs create substantial risk of improper disclosure. Information entered into public AI systems may be stored, processed by third-party vendors, and potentially accessed by company employees or incorporated into model training. Opinion 512 recommends lawyers obtain informed client consent before inputting any information related to representation into AI systems. Lawyers must also thoroughly review the terms of use, privacy policies, and contractual agreements of any AI tool they employ.

Model Rule 1.4: Communication obligates lawyers to keep clients reasonably informed about their representation. When using AI tools, lawyers should disclose this fact to clients, particularly when the AI processes client information or could impact the representation. Clients have a right to understand how their matters are being handled and what technologies may access their confidential information.[25][22][20][21]

Model Rule 3.3: Candor Toward the Tribunal requires lawyers to be truthful in their representations to courts. AI systems frequently produce "hallucinations"—plausible-sounding but entirely fabricated information, including fake case citations. Lawyers remain fully responsible for verifying all AI outputs before submitting them to courts or relying on them for legal advice. The Mata v. Avianca case serves as a cautionary tale of the consequences when lawyers fail to fulfill this obligation.

Model Rules 5.1 and 5.3: Supervisory Responsibilities make lawyers responsible for the conduct of other lawyers and nonlawyer assistants working under their supervision. When staff members use AI tools, supervising lawyers must ensure appropriate policies, training, and oversight exist to prevent ethical violations.

Model Rule 1.5: Fees requires lawyers to charge reasonable fees. Opinion 512 addresses whether lawyers can bill clients for time "saved" through AI efficiency gains. The guidance suggests that when using hourly billing, efficiencies gained through AI should benefit clients. However, lawyers may pass through reasonable direct costs of AI services (such as subscription fees) when properly disclosed and agreed upon in advance.

State-by-State Variations: A Patchwork of Protection šŸ—ŗļø

While the ABA Model Rules provide a national framework, individual states adopt and interpret ethics rules differently. Legal professionals must understand their specific state's requirements, which can vary significantly.

Lawyers must protect clients' PII from AI privacy failures!

Florida has taken a proactive stance. In January 2025, The Florida Bar Board of Governors unanimously approved Advisory Opinion 24-1, which specifically addresses generative AI use. The opinion recommends lawyers obtain "affected client's informed consent prior to utilizing a third-party generative AI program if the utilization would involve the disclosure of any confidential information". Florida's guidance emphasizes that lawyers remain fully responsible for AI outputs and cannot treat AI as a substitute for legal judgment.

Texas issued Opinion 705 from its State Bar Professional Ethics Committee in February 2025. The opinion outlines four key obligations: lawyers must reasonably understand AI technology before using it, exercise extreme caution when inputting confidential information into AI tools that might store or expose client data, verify the accuracy of all AI outputs, and avoid charging clients for time saved by AI efficiency gains. Texas also emphasizes that lawyers should consider informing clients when AI will be used in their matters.

New York has developed one of the most comprehensive frameworks through its State Bar Association Task Force on Artificial Intelligence. The April 2024 report provides a thorough analysis across the full spectrum of ethical considerations, including competence, confidentiality, client communication, billing practices, and access to justice implications. New York's guidance stands out for addressing both immediate practical considerations and longer-term questions about AI's transformation of the legal profession.

Alaska issued Ethics Opinion 2025-1 surveying AI issues with particular focus on competence, confidentiality, and billing. The opinion notes that when using non-closed AI systems (such as general consumer products), lawyers should anonymize prompts to avoid revealing client confidential information. Alaska's guidance explicitly cites to its cloud-computing predecessor opinion, treating AI data storage similarly to law firm files on third-party remote servers.
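Alaska's anonymization advice can be partially automated before a prompt leaves the firm. The snippet below is a rough sketch under a narrow assumption: it scrubs only patterned identifiers (emails, Social Security numbers, phone numbers). Names, addresses, and case facts are not caught and still require human review or a dedicated redaction tool, so treat this as a seatbelt, not a substitute for judgment. The pattern names and placeholder labels are illustrative, not part of any bar opinion.

```python
import re

# Illustrative patterns only; a production redaction workflow needs far
# more coverage (names, addresses, account numbers) and firm review.
PATTERNS = {
    "[EMAIL]": re.compile(r"[\w.+-]+@[\w-]+\.[\w.-]+"),
    "[SSN]":   re.compile(r"\b\d{3}-\d{2}-\d{4}\b"),
    "[PHONE]": re.compile(r"\b\d{3}[-.]\d{3}[-.]\d{4}\b"),
}

def scrub(prompt: str) -> str:
    """Replace obviously patterned PII with neutral placeholders.

    Note: proper names (e.g., a client's name) are NOT detected; those
    require manual review or named-entity tooling.
    """
    for placeholder, pattern in PATTERNS.items():
        prompt = pattern.sub(placeholder, prompt)
    return prompt
```

For example, `scrub("Reach the client at jane.roe@example.com or 555-867-5309")` returns the sentence with `[EMAIL]` and `[PHONE]` in place of the identifiers, while any names in the prompt pass through untouched.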

California, Massachusetts, New Jersey, and Oregon have issued guidance through their state attorneys general on how existing state privacy laws apply to AI. California's advisories emphasize that AI use must comply with the California Consumer Privacy Act (CCPA), requiring transparency, respecting individual data rights, and limiting data processing to what is "reasonably necessary and proportionate". Massachusetts focuses on consumer protection, anti-discrimination, and data security requirements. Oregon highlights that developers using personal data to train AI must clearly disclose this use and obtain explicit consent when dealing with sensitive data.[31]

These state-specific approaches create a complex compliance landscape. A lawyer practicing in multiple jurisdictions must understand and comply with each state's requirements. Moreover, state privacy laws like the CCPA and similar statutes in other states impose additional obligations beyond ethics rules.

Enterprise vs. Consumer AI: Understanding the Distinction šŸ’¼

Not all AI tools pose equal privacy risks. The OpenAI preservation order highlighted critical differences between consumer-facing products and enterprise solutions.

Consumer Plans (Free, Plus, Pro, and Team) were fully subject to the preservation order. These accounts store user conversations on OpenAI's servers with limited privacy protections. While users can delete conversations, the court order demonstrated that those deletions are not permanent. OpenAI retains the technical capability to preserve and access this data when required by legal process.

Enterprise Accounts offer substantially stronger privacy protections. ChatGPT Enterprise and Edu plans were excluded from the preservation order's broadest requirements. These accounts typically include contractual protections such as Data Processing Agreements (DPAs), commitments against using customer data for model training, and stronger data segregation. However, even enterprise accounts must preserve data when covered by specific legal orders.

Zero Data Retention Agreements provide the highest level of protection. Users who have negotiated such agreements with OpenAI are excluded from data preservation requirements. These arrangements ensure that user data is not retained beyond the immediate processing necessary to generate responses.

For legal professionals, the lesson is clear: consumer-grade AI tools are inappropriate for handling confidential client information. Lawyers who use AI must ensure they employ enterprise-level solutions with proper contractual protections, or better yet, closed systems where client data never leaves the firm's control.

Practical Steps for Legal Professionals: Protecting Privilege and Privacy šŸ› ļø

Given these risks, what should lawyers do? Abandoning AI entirely is neither realistic nor necessary. Instead, legal professionals must adopt a risk-management approach.

Conduct thorough due diligence before adopting any AI tool. Review terms of service, privacy policies, and data processing agreements in detail. Understand exactly what data the AI collects, how long it's retained, whether it's used for model training, who can access it, and what security measures protect it. If these answers aren't clear from public documentation, contact the vendor directly for written clarification.

Implement written AI policies for your firm or legal department. These policies should specify which AI tools are approved for use, what types of information can (and cannot) be input into AI systems, required safeguards such as data anonymization, client consent requirements, verification procedures for AI outputs, and training requirements for all staff. Document these policies and ensure all lawyers and staff understand and follow them.

Default to data minimization. Before inputting any information into an AI system, ask whether it's necessary. Can you accomplish the task without including client-identifying information? Many AI applications work effectively with anonymized or hypothetical scenarios that don't reveal actual client matters. When in doubt, err on the side of caution.
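For lawyers comfortable with light scripting, the anonymization step above can be illustrated with a minimal sketch. This is a hypothetical example, not a vetted redaction tool: the patterns shown are assumptions for illustration (a real workflow would need a far more thorough set of patterns and human review before any prompt leaves the firm).

```python
import re

# Illustrative redaction patterns only -- not exhaustive, and the client
# name "Acme Widgets, Inc." is a made-up example, not a real matter.
REDACTIONS = [
    (re.compile(r"\b\d{3}-\d{2}-\d{4}\b"), "[SSN]"),          # Social Security numbers
    (re.compile(r"\b[\w.+-]+@[\w-]+\.[\w.]+\b"), "[EMAIL]"),  # email addresses
    (re.compile(r"Acme Widgets, Inc\.", re.IGNORECASE), "[CLIENT]"),  # known client name
]

def anonymize(prompt: str) -> str:
    """Replace client-identifying details with neutral placeholders
    before the text is sent to any external AI service."""
    for pattern, placeholder in REDACTIONS:
        prompt = pattern.sub(placeholder, prompt)
    return prompt

raw = "Draft a demand letter for Acme Widgets, Inc. (contact: jane@acme.com, SSN 123-45-6789)."
print(anonymize(raw))
# -> Draft a demand letter for [CLIENT] (contact: [EMAIL], SSN [SSN]).
```

Even a simple filter like this embodies the principle: the AI tool receives a hypothetical scenario, while the identifying details never leave the firm's control.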

Obtain informed client consent when using AI for client matters, particularly when inputting any information related to the representation. This consent should be specific about what AI tools will be used, what information may be shared with those tools, what safeguards are in place, and what risks exist despite those safeguards. General consent buried in engagement agreements is likely insufficient.

Use secure, purpose-built legal AI tools rather than consumer applications. Legal-specific AI products are designed with confidentiality requirements in mind and typically offer stronger privacy protections. Even better, consider closed-system AI that operates entirely within your firm's infrastructure without sending data to external servers.

Never assume deletion means erasure. The OpenAI case proves that deleted data may not be truly gone. Treat any information entered into an AI system as potentially permanent, regardless of what the system's privacy settings claim.

Maintain privileged communication protocols. Remember that AI is not your attorney. Communications with AI systems are not protected by attorney-client privilege. Never use AI as a substitute for consulting with qualified colleagues or outside counsel on genuinely privileged matters.

Stay informed about evolving guidance. AI technology and the regulatory landscape are both changing rapidly. Regularly review updates from your state bar association, the ABA, and other professional organizations. Consider attending continuing legal education programs on AI ethics and technology competence.

Final thoughts: The Future of Privacy Rights in an AI World šŸ”®

The OpenAI preservation order represents a pivotal moment in the collision between AI innovation and privacy rights. It exposes uncomfortable truths about the nature of digital privacy in 2025: privacy policies are subject to override by legal process, deletion features provide psychological comfort rather than technical and legal certainty, and third-party service providers cannot fully protect user data from discovery obligations.

For legal professionals, these realities demand a fundamental reassessment of how AI tools fit into practice. The convenience and efficiency AI provides must be balanced against the sacred duty to protect client confidences and maintain the attorney-client privilege. This is not an abstract concern or distant possibility. It is happening now, in real courtrooms, with real consequences for lawyers and clients.

State bars and regulators are responding, but the guidance remains fragmented and evolving. Federal privacy legislation addressing AI has yet to materialize, leaving a patchwork of state laws with varying requirements. In this environment, legal professionals cannot wait for perfect clarity before taking action.

The responsibility falls on each lawyer to understand the tools they use, the risks those tools create, and the steps necessary to fulfill ethical obligations in this new technological landscape. Ignorance is not a defense. "I didn't know the AI was storing that information" will not excuse a confidentiality breach or privilege waiver.

As AI becomes increasingly embedded in legal practice, the profession must evolve its approach to privacy and confidentiality. The traditional frameworks remain sound—the attorney-client privilege, the duty of confidentiality, the requirement of competence—but their application requires new vigilance. Lawyers must become technology stewards as well as legal advisors, understanding not just what the law says, but how the tools they use might undermine their ability to protect it.

The OpenAI case will not be the last time courts grapple with AI data privacy. As generative AI proliferates and litigation continues, more preservation orders, discovery disputes, and privilege challenges are inevitable. Legal professionals who fail to address these issues proactively may find themselves explaining to clients, judges, or disciplinary authorities why they treated confidential information so carelessly.

Privacy in the AI age demands more than passive reliance on vendor promises. It requires active, informed engagement with the technology we use and honest assessment of the risks we create. For lawyers, whose professional identity rests on the foundation of client trust and confidentiality, nothing less will suffice. The court ruling has made one thing abundantly clear: when it comes to AI and privacy, what you don't know can definitely hurt you—and your clients. āš ļø

MTC: Balancing Digital Transparency and Government Employee Safety: The Legal Profession's Ethical Crossroads in the Age of ICE Tracking Apps

The balance between government employee safety and the public's right to know is always in flux.

The intersection of technology, government transparency, and employee safety has created an unprecedented ethical challenge for the legal profession. Recent developments surrounding ICE tracking applications like ICEBlock, People Over Papers, and similar platforms have thrust lawyers into a complex moral and professional landscape where the traditional principle of "sunlight as the best disinfectant" collides with legitimate security concerns for government employees.

The Technology Landscape: A New Era of Crowdsourced Monitoring

The proliferation of ICE tracking applications represents a significant shift in how citizens monitor government activities. ICEBlock, developed by Joshua Aaron, allows users to anonymously report ICE agent sightings within a five-mile radius, functioning essentially as "Waze for immigration enforcement". People Over Papers, created by TikTok user Celeste, operates as a web-based platform using Padlet technology to crowdsource and verify ICE activity reports with photographs and timestamps. Additional platforms include Islip Forward, which provides real-time push notifications for Suffolk County residents, and CoquĆ­, offering mapping and alert systems for ICE activities.

These applications exist within a broader ecosystem of similar technologies. Traditional platforms like Waze, Google Maps, and Apple Maps have long enabled police speed trap reporting. More controversial surveillance tools include Fog Reveal, which allows law enforcement to track civilian movements using advertising IDs from popular apps. The distinction between citizen-initiated transparency tools and government surveillance technologies highlights the complex ethical terrain lawyers must navigate.

The Ethical Framework: ABA Guidelines and Professional Responsibilities

Legal professionals face multiple competing ethical obligations when addressing these technological developments. ABA Model Rule 1.1 requires lawyers to maintain technological competence, understanding both the benefits and risks associated with relevant technology. This competence requirement extends beyond mere familiarity to encompass the ethical implications of technology use in legal practice.

Rule 1.6's confidentiality obligations create additional complexity when lawyers handle cases involving government employees, ICE agents, or immigration-related matters. The duty to protect client information becomes particularly challenging when technology platforms may compromise attorney-client privilege or expose sensitive personally identifiable information to third parties.

The tension between advocacy responsibilities and ethical obligations becomes acute when lawyers represent clients on different sides of immigration enforcement. Attorneys representing undocumented immigrants may view transparency tools as legitimate safety measures, while those representing government employees may consider the same applications as security threats that endanger their clients.

Balancing Transparency and Safety: The Core Dilemma

Who watches whom? Exploring transparency limits in democracy.

The principle of transparency in government operations serves as a cornerstone of democratic accountability. However, the safety of government employees, including ICE agents, presents legitimate counterbalancing concerns. Federal officials have reported significant increases in assaults against ICE agents, citing these tracking applications as contributing factors.

The challenge for legal professionals lies in advocating for their clients while maintaining ethical standards that protect all parties' legitimate interests. This requires nuanced understanding of both technology capabilities and legal boundaries. Lawyers must recognize that the same transparency tools that may protect their immigrant clients could potentially endanger government employees who are simply performing their lawful duties.

Technology Ethics in Legal Practice: Professional Standards

The legal profession's approach to technology ethics must evolve to address these emerging challenges. Lawyers working with sensitive immigration cases must implement robust cybersecurity measures, understand the privacy implications of various communication platforms, and maintain clear boundaries between personal advocacy and professional obligations.

The ABA's guidance on generative AI and technology use provides relevant frameworks for addressing these issues. Legal professionals must ensure that their technology choices do not inadvertently compromise client confidentiality or create security vulnerabilities that could harm any party to legal proceedings.

Jurisdictional and Regulatory Considerations

The removal of ICEBlock from Apple's App Store and People Over Papers from Padlet demonstrates how private platforms exercise content moderation that can significantly impact government transparency tools. These actions raise important questions about the role of technology companies in mediating between transparency advocates and security concerns.

Legal professionals must understand the complex regulatory environment governing these technologies. Federal agencies like CISA recommend encrypted communications for high-value government targets while acknowledging the importance of government transparency. This creates a nuanced landscape where legitimate security measures must coexist with accountability mechanisms.

Professional Recommendations and Best Practices

Legal practitioners working in this environment should adopt several key practices. First, maintain clear separation between personal political views and professional obligations. Second, implement comprehensive cybersecurity measures that protect all client information, regardless of a client's position in legal proceedings. Third, stay informed about technological developments and their legal implications through continuing education focused on technology law and ethics.

Lawyers should also engage in transparent communication with clients about the risks and benefits of various technology platforms. This includes obtaining informed consent when using technologies that may impact privacy or security, and maintaining awareness of how different platforms handle data security and user privacy.

The legal profession must also advocate for balanced regulatory approaches that protect both government transparency and employee safety. This may involve supporting legislation that creates appropriate oversight mechanisms while maintaining necessary security protections for government workers.

The Path Forward: Ethical Technology Advocacy

The future of legal practice will require increasingly sophisticated approaches to balancing competing interests in our digital age. Legal professionals must serve as informed advocates who understand both the technological landscape and the ethical obligations that govern their profession. This includes recognizing that technology platforms designed for legitimate transparency purposes can be misused, while also acknowledging that government accountability remains essential to democratic governance.

Transparency is a balancing act that all lawyers need to be aware of in their practice!

The legal profession's response to ICE tracking applications and similar technologies will establish important precedents for how lawyers navigate future ethical challenges in our increasingly connected world. By maintaining focus on professional ethical standards while advocating effectively for their clients, legal professionals can help ensure that technological advances serve justice rather than undermining it.

Success in this environment requires lawyers to become technologically literate advocates who understand both the promise and perils of digital transparency tools. Only through this balanced approach can the legal profession effectively serve its clients while maintaining the ethical standards that define professional practice in the digital age.

MTC

MTC (Bonus): The Critical Importance of Source Verification When Using AI in Legal Practice šŸ“šāš–ļø

The Fact-Checking Lawyer vs. AI Errors!

Legal professionals face an escalating verification crisis as AI tools proliferate throughout the profession. A recent conversation I had with an AI research assistant about AOL's dial-up internet shutdown perfectly illustrates why lawyers must rigorously fact-check AI outputs. In preparing my editorial earlier today (see here), I came across a glaring error. When I corrected the AI's repeated date errors—it cited 2024 instead of 2025 for AOL's September 30 shutdown—the exchange highlighted the dangerous gap between AI confidence and AI accuracy, a gap that has produced over 410 documented AI hallucination cases worldwide. (You can also see my previous discussions on the topic here.)

This verification imperative extends beyond simple date corrections. Stanford University research reveals troubling accuracy rates across legal AI tools, with some systems producing incorrect information over 34% of the time, while even the best-performing specialized legal AI platforms still generate false information approximately 17% of the time. These statistics underscore a fundamental truth: AI tools are powerful research assistants, not infallible oracles.

AI Hallucinations in the Courtroom are not a good thing!

Editor's Note: The irony was not lost on me that while writing this editorial about AI accuracy problems, I had to correct the AI assistant multiple times for contradictory statements about error rates in this very paragraph. The AI initially claimed Westlaw had 34% errors while specialized legal platforms had only 17% errors—ignoring that Westlaw IS a specialized legal platform. This real-time experience of catching AI logical inconsistencies while drafting an article about AI verification perfectly demonstrates the critical need for human oversight that this editorial advocates.

The consequences of inadequate verification are severe and mounting. Courts have imposed sanctions ranging from $2,500 to $30,000 on attorneys who submitted AI-generated fake cases. Recent cases include Morgan & Morgan lawyers sanctioned $5,000 for citing eight nonexistent cases, and a California attorney fined $10,000 for submitting briefs where "nearly all legal quotations ... [were] fabricated". These sanctions reflect judicial frustration with attorneys who fail to fulfill their gatekeeping responsibilities.

Legal professionals face implicit ethical obligations that demand rigorous source verification when using AI tools. ABA Model Rule 1.1 (Competence) requires attorneys to understand "the benefits and risks associated with relevant technology," including AI's propensity for hallucinations. Rule 3.4 (Fairness to Opposing Party and Tribunal) prohibits knowingly making false statements of fact or law to courts. Rule 5.1 (Responsibilities Regarding Nonlawyer Assistance) extends supervisory duties to AI tools, requiring lawyers to ensure AI work product meets professional standards. Courts consistently emphasize that "existing rules impose a gatekeeping role on attorneys to ensure the accuracy of their filings".

The Tech-Savvy Lawyer should have AI Verification Protocols.

The legal profession must establish verification protocols that treat AI as sophisticated but fallible technology requiring human oversight (perhaps via an update to Comment 8 of Rule 1.1). This includes cross-referencing AI citations against authoritative databases, validating factual claims through independent sources, and maintaining detailed records of verification processes. Resources like The Tech-Savvy Lawyer blog and podcast provide valuable guidance for implementing these best practices. As one federal judge warned, "the duty to check their sources and make a reasonable inquiry into existing law remains unchanged" in the age of AI.

Attorneys who embrace AI without implementing robust verification systems risk professional sanctions, client harm, and reputational damage that could have been prevented through diligent fact-checking practices. Simply put: check your work when using AI.

MTC

MTC: The End of Dial-Up Internet: A Digital Divide Crisis for Legal Practice šŸ“”āš–ļø

Dial-up shutdown deepens rural legal digital divide.

The legal profession faces an unprecedented access to justice challenge as AOL officially terminated its dial-up internet service on September 30, 2025, after 34 years of operation. This closure affects approximately 163,401 American households that depended solely on dial-up connections as of 2023, creating barriers to legal services in an increasingly digital world. While other dial-up providers like NetZero, Juno, and DSLExtreme continue operating, they may not cover all geographic areas previously served by AOL and offer limited long-term viability.

While many view dial-up as obsolete, its elimination exposes critical technology gaps that disproportionately impact vulnerable populations requiring legal assistance. Rural residents, low-income individuals, and elderly clients who relied on this affordable connectivity option now face digital exclusion from essential legal services and court systems. The remaining dial-up options provide minimal relief as these smaller providers lack AOL's extensive infrastructure coverage.

Split Courtroom!

Legal professionals must recognize that technology barriers create access to justice issues. When clients cannot afford high-speed internet or live in areas without broadband infrastructure, they lose the ability to participate in virtual court proceedings, access online legal resources, or communicate effectively with their attorneys. This digital divide effectively creates a two-tiered justice system where technological capacity determines legal access.

The legal community faces an implicit ethical duty to address these technology barriers. While no specific ABA Model Rule mandates accommodating clients' internet limitations, the professional responsibility to ensure access to justice flows from fundamental ethical obligations.

This implicit duty derives from several ABA Model Rules that create relevant obligations. Rule 1.1 (Competence) requires attorneys to understand "the benefits and risks associated with relevant technology," including how technology barriers affect client representation. Rule 1.4 (Communication) mandates effective client communication, which encompasses understanding technology limitations that prevent meaningful attorney-client interaction. Rule 1.6 (Confidentiality) requires reasonable efforts to protect client information, necessitating awareness of technology security implications. Additionally, 41 jurisdictions have adopted technology competence requirements that obligate lawyers to stay current with technological developments affecting legal practice.

Lawyers are leaders when it comes to calls to action to help narrow the access-to-justice divide!

The legal community must advocate for affordable internet solutions and develop technology-inclusive practices to fulfill these professional responsibilities and ensure equal access to justice for all clients.

MTC

MTC: Federal Circuit's Drop Box Relocation Sends a Signal Threatening Access to Justice: Why Paper Filing Options Must Remain Accessible šŸ“āš–ļø

Midnight Filing Rights Under Threat by Federal Court Drop Box Move.

The Federal Circuit's recent decision to relocate its paper filing drop box from outside the courthouse to inside the building, with restricted hours of 8:30 AM to 7:00 PM, represents a concerning step backward for legal accessibility. This policy change, effective October 20, 2025, fundamentally undermines decades of established legal practice and creates unnecessary barriers to justice that disproportionately impact solo practitioners, small firms, and self-represented litigants.

The Critical Role of 24/7 Drop Box Access šŸ•

For generations, the legal profession has relied on midnight filing capabilities as an essential safety net. The traditional 24-hour drop box access has served as a crucial backup system when electronic filing systems fail, internet connectivity issues arise, or attorneys face last-minute technical emergencies. Federal courts have long recognized that electronic filing deadlines extend until midnight in the court's time zone, acknowledging that legal work often continues around the clock and in different time zones across the globe.

The ability to file papers at any hour has been particularly vital for attorneys handling time-sensitive matters such as emergency motions, appeals with strict deadlines, and patent applications where timing can be critical to a client's rights. Research shows that approximately 10% of federal court filings occur after 5:00 PM, with many of these representing urgent legal matters that cannot wait until the next business day.

Technology's Promise and Perils āš™ļø

While electronic filing systems have revolutionized legal practice, they are far from infallible. Court system outages occur with concerning regularity - as recently demonstrated by Washington State's two-week court system shutdown due to unauthorized network activity. When CM/ECF systems go offline, attorneys must have reliable alternative filing methods to meet critical deadlines.

The Federal Circuit's own procedures acknowledge this reality, noting that their CM/ECF system undergoes scheduled maintenance and may experience unexpected outages. During these periods, having accessible backup filing options becomes essential for maintaining the integrity of the legal process. The relocation of the drop box inside the building with limited hours eliminates this crucial failsafe, potentially leaving attorneys with no viable filing option during system emergencies outside business hours.

Digital Divide and Access to Justice Concerns šŸ“±

Tech-Savvy Lawyer Battles Drop Box Access and Justice Barrier.

The restricted drop box access exacerbates existing digital equity issues within the legal system. While large law firms have robust IT infrastructure and technical support, solo practitioners and small firms often lack these resources. Self-represented litigants, who represent approximately 75-95% of parties in many civil cases, face even greater challenges navigating electronic filing requirements.

Studies have shown that technology adoption in courts has disproportionately benefited well-resourced parties while creating additional barriers for vulnerable populations. The Federal Circuit's policy change continues this troubling trend by prioritizing operational convenience over equal access to justice.

Legal Practice Realities šŸ’¼

The Federal Circuit's restricted hours—8:30 AM to 7:00 PM, Monday through Friday—fail to recognize the realities of modern legal practice. Patent attorneys, who frequently practice before this court, often work across multiple time zones and may need to file documents outside traditional business hours due to client demands or international coordination requirements.

Moreover, the new policy requires documents to be date-stamped and security-screened before deposit, adding additional procedural steps that could create delays and complications. These requirements, while perhaps well-intentioned from a security perspective, create practical obstacles that could prevent the timely filing of critical documents.

Recommendations for Balanced Approach āœ…

The Federal Circuit should reconsider this policy change and adopt an approach that balances security with access to justice. Recommended alternatives include:

Hybrid access model: Maintain extended drop box hours (perhaps 6:00 AM to 10:00 PM) to accommodate working attorneys while addressing security concerns.

Emergency filing provisions: Establish clear procedures for after-hours emergency filings when deadlines cannot be met due to the restricted schedule.

Enhanced electronic backup systems: Invest in more robust CM/ECF infrastructure and backup systems to reduce the likelihood of system outages that would necessitate paper filing.

Stakeholder consultation: Engage with the patent bar and other frequent court users to develop solutions that balance operational needs with practitioner requirements.

Preserving the Foundation of Legal Practice āš–ļø

Drop Box Limits Highlight Digital Divide in Federal Courthouse Access.

The Federal Circuit's drop box policy change represents more than an administrative adjustment - it undermines a fundamental principle that the courthouse doors should remain open to all who seek justice. The legal profession has long operated on the understanding that filing deadlines are absolute, and courts have historically provided mechanisms to ensure compliance even under challenging circumstances.

By restricting drop box access, the Federal Circuit sends a troubling message that convenience trumps accessibility. This policy particularly harms the very practitioners who help maintain the patent system's vitality - innovative small businesses, independent inventors, and emerging technology companies that rely on accessible filing procedures.

The court should reverse this decision and either restore 24-hour drop box access or, at a minimum, extend the hours to serve the legal community and the public better. In an era where access to justice faces mounting challenges, courts must resist policies that create additional barriers to legal participation. The integrity of our judicial system depends on maintaining pathways for all parties to present their cases, regardless of their technological capabilities or the timing of their legal needs.

MTC

MTC: The AI-Self-Taught Client Dilemma: Navigating Legal Ethics When Clients Think They Know Better šŸ¤–āš–ļø

The billing battlefield: Clients question fees for AI-assisted work while attorneys defend the irreplaceable value of professional judgment.

The rise of generative artificial intelligence has created an unprecedented challenge for legal practitioners: clients who believe they understand legal complexities through AI interactions, yet lack the contextual knowledge and professional judgment that distinguishes competent legal counsel from algorithmic output. This phenomenon, which we might call the "AI-self-taught-lawyer" syndrome, has evolved beyond mere client education into a minefield of ethical obligations, fee disputes, and even bar complaints when attorneys fail to properly manage these relationships.

The Pushback Reality: When Clients Think They Know Better

Reuters has documented "AI hallucinations" in court filings that create additional work for attorneys—work, such as checking citations, that should have been performed before filing—and some clients then challenge these hours on their bills, claiming they should not pay for time spent correcting AI errors. This underscores the importance of clear communication about the distinct professional value attorneys add when verifying or refining AI-generated content.

Without clear communication, attorneys risk being accused of "padding hours" when they spend time verifying or correcting client-generated AI work. The "uninformed" client may view attorney review as unnecessary overhead rather than essential professional service. One particularly challenging scenario involves clients who present AI-generated contracts or legal briefs and expect attorneys to simply file them without substantial review, then dispute billing when attorneys perform due diligence.

The Billing Battlefield: AI Efficiency vs. Professional Value

ABA Model Rule 1.5 requires reasonable fees, but AI creates complex billing dynamics. When clients arrive with AI-generated legal research, attorneys face a paradox: they cannot charge full rates for work essentially completed by the client, yet they must invest significant time in verifying, correcting, and providing professional oversight.

Florida Bar Ethics Opinion 24-1 explicitly addresses this challenge: "lawyer[s] may not ethically engage in any billing practices that duplicate charges or that falsely inflate the lawyer's billable hours". However, the opinion also recognizes that AI verification requires substantial professional time that must be fairly compensated.

The D.C. Bar's Ethics Opinion 388 draws parallels to reused work product: when AI reduces the time needed for a task, attorneys can only bill for actual time spent, regardless of the value generated. This creates tension when clients expect discounted rates for "AI-assisted" work, while attorneys must invest more time in verification than traditional practice methods required.

The Bar Complaint Trap: Failure to Warn

The AI-self-taught dilemma: Confident clients push flawed AI legal theories, leaving attorneys to repair the damage before it reaches court.

Perhaps the most dangerous aspect of the AI-self-taught client phenomenon is the potential for bar complaints when attorneys fail to adequately warn clients about AI risks. The pattern is becoming disturbingly common: clients use AI for legal research or document preparation, suffer adverse consequences, then file complaints alleging their attorney should have warned them about AI limitations and ethical concerns.

Recent disciplinary cases illustrate this risk. In People v. Crabill, a Colorado attorney was suspended "for one year and one day, with ninety days to be served and the remainder to be stayed upon Crabill's successful completion of a two-year period of probation, with conditions" after using AI-generated fake case citations. While this involved attorney AI use, similar principles apply to client AI use that goes unaddressed by counsel. The Colorado Court of Appeals warned in Al-Hamim v. Star Hearthstone that it "will not look kindly on similar infractions in the future", suggesting that attorney oversight duties extend to client AI activities.

The New York State Bar Association's 2024 report emphasizes that attorneys have obligations to ensure paralegals and employees handle AI properly. This supervisory duty logically extends to managing client AI use that affects the representation, particularly when clients share AI-generated work as the basis for legal strategy.

Competence Requirements Under Model Rule 1.1

Comment 8 to ABA Model Rule 1.1 requires attorneys to maintain knowledge of "the benefits and risks associated with relevant technology". This obligation intensifies when clients use AI tools independently. Attorneys cannot competently represent AI-literate clients without understanding the technology's limitations and potential pitfalls.

Recent sanctions demonstrate the stakes involved. In Wadsworth v. Walmart, attorneys were fined and lost their pro hac vice admissions after submitting AI-generated fake citations, despite being apologetic and forthcoming. The court emphasized that "technology may change, but the requirements of FRCP 11 do not". This principle applies equally when clients generate problematic AI content that attorneys fail to properly verify or address.

The Tech-Savvy Lawyer blog notes that competence now requires "sophisticated technology manage[ment] while maintaining fundamental duties to provide competent, ethical representation". When clients arrive with AI-generated legal theories, attorneys must possess sufficient AI literacy to identify potential hallucinations, bias, and accuracy issues.

Confidentiality Risks and Client Education

Model Rule 1.6 prohibits attorneys from revealing client information without informed consent. However, AI-self-taught clients create unique confidentiality challenges. Many clients have already shared sensitive information with public AI platforms before consulting counsel, potentially compromising attorney-client privilege from the outset.

ZwillGen's analysis reveals that using AI tools can "place a third party – the AI provider – in possession of client information" and risk privilege waiver. When clients continue using public AI tools for legal matters during representation, attorneys face ongoing confidentiality risks that require active management.

The New York State Bar Association warns that the use of AI "must not compromise attorney-client privilege" and requires attorneys to disclose when AI tools are employed in client cases. This obligation extends to educating clients about ongoing confidentiality risks from their independent AI use.

Supervision Challenges Under Model Rule 5.3

Model Rule 5.3, governing responsibilities regarding nonlawyer assistance, has evolved to encompass AI tools. When clients use AI for legal research, attorneys must treat this as unsupervised nonlawyer assistance requiring professional verification and oversight.

The supervision challenge intensifies when clients present AI-generated legal strategies with confidence in their accuracy. As one practitioner notes, "AI isn't a human subordinate, it's a tool. And just like any tool, if a lawyer blindly relies on it without oversight, they're the one on the hook when things go sideways". This principle applies whether the attorney or client operates the AI tool.

Recent malpractice analyses identify three main AI liability risks: "(1) a failure to understand GAI's limitations; (2) a failure to supervise the use of GAI; and (3) data security and confidentiality breaches". These risks amplify when clients use AI independently without attorney guidance or oversight.

Managing Client Overconfidence and Bias

When clients proudly present AI-generated briefs, lawyers face the hidden cost of correcting errors and managing unrealistic expectations.

Research reveals that AI systems can perpetuate historical biases present in legal databases and court decisions. When clients rely on AI-generated advice, they may unknowingly adopt biased perspectives or outdated legal theories that modern practice has evolved beyond.

A recent case example illustrates this danger: an attorney received "an AI generated inquiry from a client claiming there were additional securities filing requirements associated with a transaction," but discovered "the AI model was pulling its information from a proposed change to the law from over ten years ago" that was "never enacted into law". Clients presenting such AI-generated "research" create professional responsibility challenges for attorneys who must diplomatically correct misinformation while maintaining client relationships.

The confidence with which AI presents information compounds this problem. As noted in professional guidance, "AI-generated statements are no substitute for the independent verification and thorough research that an attorney can provide". Clients often struggle to understand this distinction, leading to pushback when attorneys question or contradict their AI-generated conclusions.

Practical Strategies for Ethical Client Management

Successfully navigating AI-self-taught clients requires comprehensive communication strategies that address both ethical obligations and practical relationship management. Attorneys should implement several key practices:

Proactive Client Education: Establish clear policies regarding client AI use and provide written guidance about confidentiality risks. Include specific language in engagement letters addressing client AI activities and their potential impact on representation.

Transparent Billing Practices: Develop clear fee structures that account for AI verification work. Explain to clients that professional review of AI-generated content requires substantial time investment and represents essential professional service, not unnecessary overhead.

Documentation Requirements: Require clients to disclose any AI use related to their legal matter. Create protocols for reviewing and addressing client-generated AI content while maintaining respect for client initiative.

Regular Communication: Implement ongoing check-ins about client AI use to prevent confidentiality breaches and ensure attorney strategy remains properly informed. Address client expectations about AI capabilities and limitations throughout the representation.

The Fee Justification Challenge

When clients present AI-generated research or draft documents, attorneys face complex billing considerations that require careful navigation. They cannot charge full rates for work essentially completed by the client's AI use, yet they must invest significant time in verification and correction.

The key lies in transparent communication about the additional value provided by professional judgment, ethical compliance, and strategic thinking that AI cannot replicate. As DISCO's client communication guide suggests: "Don't position AI as the latest trend. Present it as a way to deliver stronger outcomes" by spending "more time on strategy, insight, and execution and less on repetitive manual tasks".

Successful practitioners reframe the conversation from cost to value: "AI helps us work more efficiently, which means we spend more of our time on strategy, insight, and execution and less on repetitive manual tasks". This positioning helps clients understand that attorney review of AI-generated content enhances rather than duplicates their investment.

The Bar Complaint Prevention Protocol

Verifying AI ā€˜research’ isn’t padding hours—it’s an ethical obligation that protects clients and preserves professional integrity.

To prevent bar complaints alleging failure to warn about AI risks, attorneys should implement comprehensive documentation practices:

Written AI Policies: Provide clients with written guidance about AI use risks and limitations. Document these communications in client files to demonstrate proactive risk management.

Ongoing Monitoring: Create systems for identifying when clients are using AI tools during representation. Address confidentiality and accuracy concerns promptly when such use is discovered.

Professional Education: Maintain current knowledge of AI capabilities and limitations to provide competent guidance to clients. Document continuing education efforts related to AI and legal technology.

Clear Boundaries: Establish explicit policies about when and how client AI-generated content will be used in the representation. Require independent verification of all AI-generated legal research or documents before incorporation into legal strategy.

Final Thoughts: The Future of Professional Responsibility

The AI-self-taught client phenomenon represents a permanent shift in legal practice dynamics requiring fundamental changes in how attorneys approach client relationships. The legal profession's response will define the next evolution of attorney-client dynamics and professional responsibility standards.

As the D.C. Bar recognized, "clients and counsel must proceed with what we might call a 'collaborative vigilance'". This approach requires "maintaining a shared commitment to transparency, quality, and adaptability" while recognizing both AI's efficiencies and its limitations.

Success demands that attorneys embrace their expanding role as AI educators, technology managers, and ethical guardians. As ABA Formal Opinion 512 emphasizes, lawyers remain fully accountable for all work product, no matter how it is generated. This accountability extends to managing client expectations shaped by AI interactions and ensuring that professional judgment governs all strategic decisions, regardless of their technological origins.

The legal profession must evolve beyond simply tolerating AI-empowered clients to actively managing the ethical, practical, and professional challenges they present. By maintaining ethical vigilance while embracing technological benefits, attorneys can transform this challenge into an opportunity for more informed, efficient, and ultimately more effective legal representation. The key lies in recognizing that AI tools, whether used by attorneys or clients, remain subject to the timeless ethical obligations that protect both professional integrity and client interests.

Those who fail to adapt risk not only client dissatisfaction and fee disputes but also potential disciplinary action for inadequately addressing the AI-related risks that increasingly define modern legal practice.

MTC

MTC: The Lawyer's Digital "Go Bag" — Preparing for the Unthinkable Termination

lawyers, are you ready for an untimely departure from your firm?

When a lawyer's career ends abruptly—whether through firm dissolution, partnership disputes, or sudden termination—the ethical obligations don't disappear with the pink slip. In fact, they intensify. The concept of a digital "go bag," popularized in corporate America as preparation for unexpected layoffs, takes on unique complexity in the legal profession, where client confidentiality, file ownership, and professional responsibility rules create a minefield of competing obligations.

Unlike other professionals who might download work samples or contacts before losing access, lawyers face stringent ethical constraints that make preparing for career disruption both essential and precarious.

Understanding the Legal Professional's Dilemma

The traditional digital go bag includes personal documents, performance reviews, professional contacts, and work samples. For lawyers, however, the landscape is far more treacherous. Everything in a lawyer's professional sphere potentially involves client confidentiality, creating ethical tripwires that don't exist in other professions.

When lawyers are terminated or leave firms, they cannot simply walk away with client files or even copies of their own work product if it contains client information. The ABA Model Rules create a web of continuing obligations that persist long after the employment relationship has ended.

The Ethical Framework Governing Lawyer Departures

Rule 1.6 — The Confidentiality Fortress

Rule 1.6 of the ABA Model Rules establishes that lawyers must protect client confidentiality indefinitely—even after termination or departure. This duty extends to:

  • All communications with clients;

  • Information learned during representation;

  • Strategic discussions about client matters;

  • Any data that could harm the client if disclosed.

The rule provides extremely limited exceptions, none of which include "I got fired and need this for my portfolio".

Rule 1.15 — Safeguarding Client Property

Under Rule 1.15, lawyers hold client files as property belonging to the client, not the lawyer. When employment ends, lawyers must:

  • Return client files to the firm or client immediately;

  • Surrender any client property in their possession;

  • Refrain from taking copies without explicit authorization.

The Texas State Bar's Ethics Opinion on departing lawyers is particularly stark: attorneys who delete client files from firm systems or take the only copies face potential disciplinary action under Rule 8.4 for dishonesty and deceit.

Rule 1.9 — Former Client Protections

Rule 1.9 extends confidentiality protections to former clients, meaning lawyers cannot use or disclose information learned during representation to harm former clients. This creates ongoing obligations that can span decades after a matter concludes.

What CAN Lawyers Legally Preserve?

Given these constraints, what can lawyers ethically include in their digital go bag? The answer is disappointingly narrow:

Personal Career Documents āœ…

  • Performance reviews and evaluations;

  • Salary statements and benefits records;

  • Bar admission certificates and CLE records;

  • Non-client-related correspondence with colleagues;

  • General firm policies and procedures.

Professional Development Materials āœ…

  • CLE certificates and continuing education records;

  • Bar memberships and professional association documents;

  • Personal networking contacts (non-client);

  • Industry articles and legal research (publicly available).

Limited Work Samples āš ļø

  • Publicly filed pleadings (already in public record);

  • Published articles or speeches (with proper attribution);

  • General legal forms or templates (non-client specific);

  • Redacted work samples (with all client identifying information removed).

Strictly Prohibited āŒ

  • Client files or any portion thereof;

  • Internal case strategy memos;

  • Client contact lists or information;

  • Billing records or time entries;

  • Any document containing client confidential information.

The Dangerous Middle Ground

The most perilous category involves documents that seem personal but contain client information. Consider these scenarios:

Email correspondence: Even emails that appear administrative may reference client matters, making them potentially confidential.

Calendar entries: Meeting notes and appointment records often contain client-privileged information.

Internal reports: Performance reviews that reference specific client matters or outcomes may violate confidentiality rules.

Contact lists: Professional networks built through client relationships cannot be extracted without ethical concerns.

Building an Ethically Compliant Digital Go Bag

Before Trouble Hits

Smart lawyers should prepare their digital go bag while still employed:

  1. Separate personal from professional: Use personal email accounts for career-related correspondence that doesn't involve client matters;

  2. Document your achievements carefully: Keep records of professional accomplishments without referencing client specifics;

  3. Maintain external professional networks: Build relationships through bar associations and professional groups, not just through client work;

  4. Create a non-client portfolio: Develop writing samples, CLE presentations, and other materials that showcase your skills without client data.

Emergency Protocols

If termination occurs suddenly:

  1. Don't panic-download: Resist the urge to grab files before losing access—this can lead to disciplinary action;

  2. Focus on truly personal items: Performance reviews, salary records, and personal correspondence only;

  3. Document the departure: Keep records of your termination notice and final communications for potential unemployment or wrongful termination claims;

  4. Consult ethics counsel immediately: Many state bars offer ethics hotlines for lawyers facing urgent professional responsibility questions.

Post-Departure Obligations

After leaving a firm, lawyers must:

  • Avoid using former client information: Cannot leverage previous client relationships or confidential information in new positions;

  • Maintain confidentiality indefinitely: The duty to protect client information never expires;

  • Cooperate with file transfers: Help ensure smooth transitions for ongoing client matters.

Special Considerations for Solos, Small, and Mid-Size Firms

Smaller firm lawyers face unique challenges:

Solo Practitioners

  • Own their client relationships but still must protect confidentiality when joining new firms;

  • May have limited resources for ethics consultation during crisis situations;

  • Often lack HR departments to guide appropriate departure procedures.

Small Firm Associates

  • May have developed direct client relationships that complicate file ownership issues;

  • Often handle multiple matters simultaneously, making clean departures more complex;

  • May face partner pressure to bring clients to new firms, creating ethical dilemmas.

Mid-Size Firm Lawyers

  • Navigate complex partnership agreements that may restrict post-departure activities;

  • Deal with sophisticated conflicts systems that track potential ethical violations;

  • Face partnership compensation structures that incentivize aggressive client development.

The Technology Trap

Modern law practice creates new ethical pitfalls. Cloud-based files, encrypted communications, and mobile devices blur the lines between personal and professional data. Lawyers must consider:

  • Automatic backups: Personal devices may automatically sync firm data;

  • Password management: Work-related passwords stored in personal managers;

  • Social media connections: Professional networks developed through client work;

  • Digital forensics: Firm IT systems may track all file access and downloads.

Practical Steps for Ethical Compliance

Regular Maintenance

  1. Annual digital cleanup: Review and properly categorize all professional documents;

  2. Ethics policy review: Stay current on your jurisdiction's professional responsibility rules;

  3. Malpractice consultation: Discuss departure scenarios with your professional liability insurer;

  4. Emergency contacts: Maintain relationships with ethics attorneys for urgent consultation.

Documentation Protocols

  1. Written policies: Develop clear protocols for handling departures and file transfers;

  2. Client communication: Establish procedures for notifying clients of attorney departures;

  3. Confidentiality agreements: Ensure all firm personnel understand their ongoing obligations;

  4. Regular training: Update lawyers and staff on current ethical requirements.

The High Stakes Reality

The consequences of getting this wrong extend far beyond mere employment disputes. Lawyers who improperly handle client information during departures face:

  • Disciplinary sanctions: Suspension or disbarment for ethical violations;

  • Malpractice liability: Potential lawsuits from harmed clients or former firms;

  • Criminal prosecution: Computer fraud charges for unauthorized data access;

  • Professional reputation damage: Ethics violations become public record in most jurisdictions.

Final Thoughts: Moving Forward Ethically

walk away from your last job with dignity and your mandated ethics intact!

The legal profession's emphasis on client protection means lawyers must accept that their digital go bags will be far more limited than those of other professionals. This isn't a flaw in the system—it's a feature that protects the attorney-client relationship that forms the foundation of effective legal representation.

Rather than viewing these restrictions as burdens, successful lawyers should see them as competitive advantages. Lawyers who build their reputations on ethical compliance, professional competence, and client service create sustainable careers that weather employment disruptions more effectively than those who rely on quick-fix strategies or ethical corner-cutting.

The most important item in any lawyer's digital go bag isn't a document or file—it's an unwavering commitment to professional responsibility that opens doors even when others close unexpectedly.

MTC: Small Firm AI Revolution: When Your Main Street Clients Start Expecting Silicon Valley Service šŸ“±āš–ļø

The AI revolution isn't just transforming corporate legal departments - it's creating unprecedented expectations among everyday clients who are increasingly demanding the same efficiency and innovation from their neighborhood attorneys. Just as Apple's recent automation ultimatum to suppliers demonstrates how tech industry pressures cascade through entire business ecosystems, the AI transformation is now reaching solo practitioners, small firms, and their individual clients in surprising ways.

The Expectation Shift Reaches Main Street

While corporate clients have been early adopters in demanding AI-powered legal services, individual consumers and small business owners are rapidly catching up. Personal injury clients who experience AI-powered customer service from their insurance companies now question why their attorney's document review takes weeks instead of days. Small business owners who use AI for bookkeeping and marketing naturally wonder why their legal counsel hasn't adopted similar efficiency tools.

The statistics reveal a telling gap: 72% of solo practitioners and 67% of small firm lawyers are using AI in some capacity, yet only 8% of solo practices and 4% of small firms have adopted AI widely or universally. This hesitant adoption creates a vulnerability, as client expectations continue to evolve at a faster pace than many smaller firms can adapt to.

Consumer-Driven Demand for Legal AI

Today's clients arrive at law offices with unprecedented technological literacy (and perhaps some unrealistic expectations - think of the jury's "CSI effect" during a long trial). They've experienced AI chatbots for customer service, used AI-powered apps for financial planning, and watched AI streamline other professional services. This exposure creates natural expectations for similar innovation in legal representation. The shift is particularly pronounced among younger clients who view AI integration not as an optional luxury but as basic professional competence.

Small firms report that clients increasingly ask direct questions about AI use in their cases. Unlike corporate clients, who focus primarily on cost reduction, individual clients emphasize speed, transparency, and improvements in communication. They want faster responses to emails, quicker document turnaround, and more frequent case updates - all areas where AI excels.

The Competitive Reality for Solo and Small Firms

The playing field is rapidly changing. Solo practitioners using AI tools can now deliver services that historically required teams of associates. Document review, which once consumed entire weekends, can now be completed in hours with the assistance of AI, allowing attorneys to focus on high-value client counseling and strategic work. This transformation enables smaller firms to compete more effectively with larger practices while maintaining personalized service relationships.

AI adoption among small firms is creating clear competitive advantages. Firms that began using AI tools early are commanding higher fees, earning recognition as innovative practitioners, and becoming indispensable to their clients. The technology enables solo attorneys to handle larger caseloads without sacrificing quality, effectively multiplying their capacity without the need to hire additional staff.

Technology Competence as Client Expectation

Legal ethics opinions increasingly recognize technology competence as a professional obligation. Clients expect their attorneys to understand and utilize available tools that can enhance the quality and efficiency of their representation. This expectation extends beyond simple awareness to active implementation of appropriate technologies for client benefit.

The ethical landscape supports this evolution. State bar associations from California to New York are providing guidance on the responsible use of AI, emphasizing that lawyers should consider AI tools when they can enhance client service. This regulatory support validates client expectations for technological sophistication from their legal counsel.

The Efficiency Promise Meets Client Budget Reality

AI implementation offers particular value for small firm clients who historically faced difficult choices between quality legal representation and affordability. AI tools enable attorneys to reduce routine task completion time by 50-67%, allowing them to offer more competitive pricing while maintaining service quality. This efficiency gain directly benefits clients through faster turnaround times and potentially lower costs.

The technology is democratizing access to legal services. AI-powered document drafting, legal research, and client communication tools allow small firms to deliver sophisticated services previously available only from large firms with extensive resources. Individual clients benefit from this leveling effect through improved service quality at traditional small firm pricing.

From Reactive to Proactive Service Delivery

Small firms using AI are transforming from reactive service providers to proactive legal partners. AI-powered client intake systems operate 24/7, ensuring potential clients receive immediate responses regardless of office hours. Automated follow-up systems keep clients informed about the progress of their cases, while AI-assisted research enables attorneys to identify potential issues before they become problems.

This proactive approach particularly resonates with small business clients who appreciate preventive legal guidance. AI tools enable solo practitioners to monitor regulatory changes, track compliance requirements, and alert clients to relevant legal developments - services that smaller firms previously couldn't provide consistently.

The Risk of Falling Behind

Small firms that delay AI adoption face increasing competitive pressure from both larger firms and more technologically sophisticated solo practitioners. Clients comparing legal services increasingly favor attorneys who demonstrate technological competence and efficiency. The gap between AI-enabled and traditional practices continues widening as early adopters accumulate experience and refine their implementations.

The risk extends beyond losing new clients to losing existing ones. As clients experience AI-enhanced service from other professionals, their expectations for legal representation naturally evolve. Attorneys who cannot demonstrate similar efficiency and responsiveness risk being perceived as outdated or less competent.

Strategic Implementation for Small Firms

Successful AI adoption in small firms focuses on tools that directly enhance the client experience, rather than simply reducing attorney effort. Document automation, legal research enhancement, and client communication systems provide immediate value that clients can appreciate and experience directly. These implementations create positive feedback loops where improved client satisfaction leads to referrals and practice growth.

The key is starting with client-facing improvements rather than back-office efficiency alone. When clients see faster document production, more thorough legal research, and improved communication, they recognize the value of technological investment and often become advocates for the firm's innovative approach.

🧐 Final Thoughts: The Path Forward for Small Firm Success

clients who see lawyers using ai openly will be more confident that lawyers are using ai responsibly behind the scenes.

Just as Apple's suppliers must invest in automation to maintain business relationships, solo practitioners and small firms must embrace AI to meet evolving client expectations. The technology has moved from an optional enhancement to a competitive necessity. The question is no longer whether to adopt AI, but how quickly and effectively to implement it.

The legal profession's AI transformation is creating unprecedented opportunities for small firms willing to embrace change. Those who recognize client expectations and proactively adopt appropriate technologies will thrive in an increasingly competitive marketplace. The future belongs to attorneys who view AI not as a threat to traditional practice, but as an essential tool for delivering superior client service in the modern legal landscape. Remember what previous podcast guest Bridget Mary McCormack, retired Chief Justice of the Michigan Supreme Court, shared with us in Episode #65, "Technology's Impact on Access to Justice with Bridget Mary McCormack": lawyers who don't embrace AI will be left behind by those who do!

MTC