MTC: Can Lawyers Ethically Use Generative AI with Public Documents? 🤔 Navigating Competence, Confidentiality, and Caution! ⚖️✨

Lawyers need to be concerned with their legal ethics requirements when using AI in their work!

After my recent interview with Jayne Reardon on The Tech-Savvy Lawyer.Page Podcast 🎙️ Episode 99, I found myself asking: “Can we, or can we not, use public generative AI in our legal work for clients if we rely only on publicly filed documents?” This question has become increasingly relevant as tools like ChatGPT, Google's Gemini, and Perplexity AI gain popularity and sophistication. While these technologies offer tantalizing possibilities for improving efficiency and analysis in legal practice, they also raise significant ethical concerns that lawyers must carefully navigate.

The American Bar Association (ABA) Model Rules of Professional Conduct (MRPC) provide a framework for considering the ethical implications of using generative AI in legal practice. Rule 1.1 on competence is particularly relevant, as it requires lawyers to provide competent representation to clients. Many state bar associations provide that lawyers should keep abreast of the benefits and risks associated with relevant technology. This guidance highlights AI's growing importance in the legal profession.

However, the application of this rule to generative AI is not straightforward. On one hand, using AI tools to analyze publicly filed documents and assist in brief writing could be seen as enhancing a lawyer's competence by leveraging advanced technology to improve research and analysis. On the other hand, relying too heavily on AI without understanding its limitations and potential biases could be seen as a failure to provide competent representation.

The use of generative AI carries complex ethics requirements.

The duty of confidentiality, outlined in Rule 1.6, presents another significant challenge when considering the use of public generative AI tools. Lawyers must ensure that client information remains confidential, which can be difficult when using public AI platforms that may store or learn from the data input into them. As discussed in our October 29th editorial, The AI Revolution in Law: Adapt or Be Left Behind (& where the bar associations are on the topic), state bar associations are beginning (if they have not already begun) to scrutinize lawyers' use of generative AI. Furthermore, as Jayne Reardon astutely pointed out in our recent interview, even if a lawyer anonymizes the client's personally identifiable information (PII), inputting the client's facts into a public generative AI tool may still violate the duty of confidentiality. This is because the public may be able to deduce that the entry pertains to a specific client from the context and details provided, even if they are "whitewashed." This raises important questions about the extent to which lawyers can use public AI tools without compromising client confidentiality, even when taking precautions to remove identifying information.

State bar associations have taken varying approaches to these issues. For example, the Colorado Supreme Court has formed a subcommittee to consider recommendations for amendments to their Rules of Professional Conduct to address attorney use of AI tools. Meanwhile, the Iowa State Bar Association has published resources on AI for lawyers, emphasizing the need for safeguards and human oversight.

The potential benefits of using generative AI in legal practice are significant. As Troy Doucet discussed in 🎙️Episode 92 of The Tech-Savvy Lawyer.Page Podcast, AI-driven document drafting systems can empower attorneys to efficiently create complex legal documents without needing advanced technical skills. Similarly, Mathew Kerbis highlighted in 🎙️ Episode 85 how AI can be leveraged to provide more accessible legal services through subscription models.

Do you know what your generative AI program is sharing with the public?

However, the risks are equally significant. AI hallucinations, where the AI generates false or misleading information, have led to disciplinary actions against lawyers who relied on AI-generated content without proper verification. See my editorial post My Two Cents: If you are going to use ChatGPT and its cousins to write a brief, Shepardize!!! Chief Justice John Roberts warned in his 2023 Year-End Report on the Federal Judiciary that "any use of AI requires caution and humility."

Given these considerations, a balanced approach to using generative AI in legal practice is necessary. Lawyers can potentially use these tools to analyze publicly filed documents and assist in brief writing, but with several important caveats:

1. Verification: All AI-generated content must be thoroughly verified for accuracy. Lawyers cannot abdicate their professional responsibility to ensure the correctness of legal arguments and citations.

2. Confidentiality: Extreme caution must be exercised to ensure that no confidential client information is input into public AI platforms.

3. Transparency: Lawyers should consider disclosing their use of AI tools to clients and courts, as appropriate.

The convergence of AI, its use in the practice of law, and legal ethics is here now!

4. Understanding limitations: Lawyers must have a solid understanding of the capabilities and limitations of the AI tools they use.

5. Human oversight: AI should be used as a tool to augment human expertise, not replace it.

This blog and podcast have consistently emphasized the importance of these principles. In our discussion with Katherine Porter in 🎙️ Episode 88, we explored how to maximize legal tech while avoiding common pitfalls. In my various postings, I have always emphasized the need for critical thinking and careful consideration before adopting new AI tools.

It's worth noting that the legal industry is still in the early stages of grappling with these issues. As Jayne Reardon explored in 🎙️ Episode 99 of our podcast, the ethical concerns surrounding lawyers' use of AI are complex and evolving. The legal profession will need to continue to adapt its ethical guidelines as AI technology advances.

While generative AI tools offer exciting possibilities for enhancing legal practice, their use must be carefully balanced against ethical obligations. Lawyers can potentially use these tools to analyze publicly filed documents and assist in brief writing, but they must do so with a clear understanding of the risks and limitations involved. As the technology evolves, so too must our approach to using it ethically and effectively in legal practice.

MTC

🎙️ Ep. 99: Navigating the Intersection of Law, Ethics, and Technology with Jayne Reardon.

Meet Jayne Reardon, a nationally renowned expert on legal ethics and professionalism who provides ethics, risk management, and regulatory advice to lawyers and legal service providers. Jayne is an experienced trial lawyer who has tried cases in state and federal courts across Illinois and on appeal up to the United States Supreme Court. She also sits on the national roster of the American Arbitration Association for Commercial and Consumer Arbitration. Moreover, she is a certified neutral in the Early Dispute Resolution Process. Jayne's experience includes service as Executive Director of the Illinois Supreme Court Commission on Professionalism, an organization dedicated to promoting ethics and professionalism among lawyers and judges, and disciplinary counsel for the Illinois Attorney Registration and Disciplinary Commission.

In today's conversation, Jayne explores ethical concerns for lawyers using AI, focusing on ABA Model Rules. She also discusses billing ethics, advising transparency in engagement letters and time tracking. Furthermore, Jayne highlights online civility, warning against impulsive posts and labeling, and real-life cases to underscore the importance of ethical vigilance in AI-integrated legal practice.

Join Jayne and me as we discuss the following three questions and more!

  1. What are your top three warnings to lawyers about using AI in line with the ABA model rules of ethics?

  2. Some lawyers are creating DIY services online through chatbots and AI for clients to handle their legal affairs. What are the top three ethical concerns these lawyers should be wary of when creating these services?

  3. What are your top three suggestions about lawyers being civil to one another and others online?

In our conversation, we cover the following:

[01:11] Jayne's Current Tech Setup

[04:50] Handling Tech Devices and Daily Usage

[08:51] Ethical Considerations for AI in Legal Practice

[19:21] Ethical Considerations for AI-Assisted Services

[26:37] Civility in Online Interactions

[30:58] Connect with Jayne

Resources:

Connect with Jayne:

Hardware mentioned in the conversation:

Software & Cloud Services mentioned in the conversation:

* the “W-Calendar” program I referred to apparently is no longer an active software program available for purchase.

My Two Cents: Lessons from ABA's Formal Opinion 512 - A Follow-Up!

There will be many collaborative discussions on ABA Formal Opinion 512's impact on legal practice!

This post is a follow-up to last week's editorial on my experience with the AI sessions at the American Bar Association's (ABA) 2024 Annual Meeting. Today, I'll delve deeper into ABA's Formal Opinion 512 and explore its implications for legal practitioners.

Building on Prior Model Rules

ABA's Formal Opinion 512 builds on several foundational Model Rules of Professional Conduct. These include the duties of competence, confidentiality, supervision, and communication, each discussed in turn below.

 Breakdown of ABA Formal Opinion 512 

Tech-savvy lawyer reviews ethical implications of AI under ABA Opinion 512.

 1. Competence

Formal Opinion 512 emphasizes that competence in legal practice now extends to a lawyer's understanding and use of technology. Lawyers must stay informed about changes in technology that affect their practice areas. This includes:

  • Understanding AI Capabilities: Lawyers must understand the capabilities and limitations of AI tools they use.

  • Continuing Education: Lawyers should engage in ongoing education about technological advancements relevant to their practice.

 2. Confidentiality

The opinion underscores the importance of maintaining client confidentiality when using AI tools. Key points include:

  • Risk Assessment: Lawyers must assess the risks associated with using AI tools, particularly concerning data security and privacy.

  • Vendor Due Diligence: Lawyers should conduct due diligence on AI vendors to ensure they comply with confidentiality obligations.

Lawyers will be debating AI ethics and compliance for the foreseeable future!

 3. Supervision

Lawyers are responsible for supervising the AI tools and ensuring they are used ethically. This includes:

  • Oversight: Lawyers must oversee the AI tools to ensure they are used appropriately and do not compromise ethical standards.

  • Accountability: Lawyers remain accountable for the outcomes of AI-assisted tasks, ensuring that AI tools do not replace human judgment.

 4. Communication

Effective communication with clients about the use of AI is crucial. Lawyers should:

  • Inform Clients: Clearly inform clients about the use of AI tools in their cases.

  • Obtain Consent: Obtain informed consent from clients regarding the use of AI, especially when it involves sensitive data.

ABA's Formal Opinion 512 signals that AI is now essential in legal practice, but it also underscores the importance of maintaining ethical standards when using it.

Final Thoughts

ABA's Formal Opinion 512 is a significant step in ensuring that lawyers remain competent and ethical in an increasingly digital world. By emphasizing the need for technological proficiency, confidentiality, supervision, and clear communication, the ABA reinforces that staying updated with technology is not optional—it's a matter of maintaining one's bar license. Lawyers must embrace these guidelines to provide the best possible representation in the modern legal landscape.

Lawyers who do not keep up with the evolving AI landscape will be left behind by those who do!

My Two Cents: With AI Creeping Into Our Computers, Tablets, and Smartphones, Lawyers Need to Be Diligent About The Software They Use.

Lawyers need to be wary of the computer company behind the curtain and what information it is taking from your data!

As Apple is anticipated to announce a new iPhone with AI baked into its operating system, lawyers, like Dorothy in The Wizard of Oz, can no longer stand idly by and trust that the person behind the curtain, i.e., the creator or owner of their software product, is trustworthy and will not use customers' data in ways inconsistent with the data owners' objectives or fail to protect their personally identifiable information. Per ABA Model Rule 1.6(c), lawyers must make reasonable efforts to ensure that their clients' personally identifiable information (PII) is protected. And recent events are providing a bit of a minefield, and not just for lawyers.

I use a popular subscription service called SetApp, which gives me access to over 240 applications. I use many of them daily. But one of its applications, Bartender (which helps clean up and manage your Mac's menu bar), was recently and quietly purchased by a private company about which little is known. There is a very legitimate concern that Bartender may be improperly using its customers' computer data, apparently (though not confirmed) taking unauthorized screenshots. (Note that this is not a critique of SetApp, but I am going to reevaluate my use of Bartender; here are some alternatives you may want to check out.) But this general concern does not end with just "unknown" Wizards.

It was recently discovered that Adobe changed its customers' terms of service. Lawyers should be deeply concerned about Adobe's updated terms of use for Photoshop, which grant the company broad rights to access and remove users' cloud-stored content. This raises significant privacy and confidentiality issues, particularly for legal professionals handling sensitive client data covered by non-disclosure agreements (NDAs), PII, and trial strategies. Adobe's ability to view and potentially mishandle files covered by NDAs could lead to damaging leaks and breaches of client trust. You can “opt out” by going to your account's privacy settings, selecting “Content analysis,” and making sure the “Allow my content to be analyzed by Adobe for product improvement and development purposes” option is not selected. You can also decline to upload your material to Adobe's cloud service. These steps may provide an extra layer of protection, but no one is 100% sure.

As custodians of confidential information, lawyers have an ethical duty to safeguard client secrets. Adobe's overreaching policy raises significant concerns for the legal community. These concerns extend beyond software, as computer companies now integrate AI into their hardware systems.

Many Windows PC makers are building their computers to work inherently with Microsoft's own AI, Copilot. At the time of this writing, Apple is expected to announce a new operating system with an AI built in to work with its new M4 chip. In other words, hardware and software companies are working together to make their machines work naturally with operating systems that have AI built into their software. The biggest concern on lawyers' minds should be how their data is being used to train a company's AI. What protections are being built into these systems? Can users opt out? What does this all mean for us lawyers?

This means that lawyers at any computer skill level must pay attention to the Terms of Service (ToS) for the computers and software they use for work. The warning signs are there. So, stay tuned to your Tech-Savvy Lawyer as we navigate through this together!

MTC

My Two Cents: Lawyers Need to Remember to Navigate Ethical Boundaries When Using Listservs: ABA's Guidance on Client Information Sharing.

Lawyers need to maintain client confidentiality when talking with colleagues in online forums.

The legal profession's reliance on technology continues to grow, facilitating collaboration and knowledge sharing among practitioners. Listservs, e.g., the American Bar Association's (ABA) own “solosez”, serve as an excellent medium for lawyers to discuss day-to-day law office management concerns, legal issues, and even their own cases.  But when doing so, lawyers must still remember to maintain their (former or current) clients' confidentiality when using these public forums.

The ABA recently issued Formal Opinion 511 to address ethical concerns surrounding the dissemination of client information on listservs and similar platforms.  The opinion emphasizes the need for lawyers to exercise caution when discussing client matters online, even in closed forums intended for professional discourse. Revealing confidential client information without proper consent can violate the duty of confidentiality enshrined in Model Rule 1.6.

While listservs offer a valuable resource for seeking guidance from peers, the ABA underscores that lawyers must refrain from disclosing information that could reasonably lead to the identification of a client. This includes details about the client's identity, legal issues, or other specifics that may compromise confidentiality.  To emphasize the opinion's point: it is not just the client's identity, legal issues, or other specifics that must be kept confidential; it is any information that could reasonably lead to the identification of a client.

To strike a balance between confidentiality and the benefits of professional collaboration, the opinion suggests several best practices:

Lawyers need to maintain client confidentiality with even some of the most minute details if it could “reasonably” reveal the client when talking with colleagues in online forums.

  • Anonymization: Lawyers should carefully anonymize client information by removing identifiers and altering specific facts to prevent inadvertent disclosure.

  • Client Consent: Obtaining the client's informed consent before sharing any details about their matter is the safest approach, though not always practical.

  • Forum Vetting: Evaluate the listserv's membership, policies, and security measures to ensure it provides adequate safeguards against unauthorized access or dissemination of shared information.

  • Contextual Consideration: Assess the sensitivity of the client's matter and the potential risks of disclosure before deciding whether to share information on a listserv.
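To make the anonymization step above concrete, here is a minimal, illustrative sketch of mechanical redaction before text ever leaves your machine. The function name, placeholders, and regex patterns are my own assumptions, not part of the ABA's guidance, and the patterns are deliberately naive (listed names, email addresses, US-style phone numbers). It illustrates the concept only; as the opinion stresses, even "whitewashed" facts can still identify a client, so human review remains essential.

```python
import re

def scrub(text: str, known_names: list[str]) -> str:
    """Replace listed names and common PII patterns with placeholders.

    A naive first pass only -- it cannot catch contextual identifiers
    (case facts, dates, locations) that could still reveal the client.
    """
    # Redact every name the lawyer explicitly lists, case-insensitively.
    for name in known_names:
        text = re.sub(re.escape(name), "[CLIENT]", text, flags=re.IGNORECASE)
    # Redact email addresses.
    text = re.sub(r"[\w.+-]+@[\w-]+\.[\w.]+", "[EMAIL]", text)
    # Redact US-style phone numbers like 312-555-0147.
    text = re.sub(r"\b\d{3}[-.\s]\d{3}[-.\s]\d{4}\b", "[PHONE]", text)
    return text

prompt = "Jane Doe (jane.doe@example.com, 312-555-0147) disputes the lease."
print(scrub(prompt, ["Jane Doe"]))
# [CLIENT] ([EMAIL], [PHONE]) disputes the lease.
```

A script like this can only support, never replace, the contextual judgment the best practices above call for: a human must still ask whether the remaining facts could "reasonably" identify the client.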

In today's social media age, it is easy for people to feel anonymous online. This can lead some people to let their guard down and reveal too much personal information. Or, quite frankly, to say things they would not say to others in public.  Lawyers, too, need to ensure they are not revealing client information in a way that may breach their ethical obligations to their clients (both current and former).

So, I'd like to repeat myself from above: while digital platforms facilitate knowledge sharing and professional development, lawyers must exercise vigilance to protect client confidentiality.  By adhering to the ABA's guidance and implementing robust safeguards, lawyers can leverage the benefits of online collaboration while upholding their ethical duties. Striking this balance is crucial for maintaining public trust and preserving the integrity of the legal profession in the digital age.

MTC

My Two Cents: Did a Federal Judge in NC go too far in banning Docket Management Tools?

A recent order by a federal court judge in North Carolina restricts lawyers from utilizing third-party automated docket management tools due to concerns regarding unauthorized access to sealed documents, prompting ethical and operational dilemmas within the legal community.

Read More

Sound Quality versus Privacy – What is more important to a lawyer in a smart speaker?

MacRumors came out with an article comparing the mini smart speakers currently on the market.  The candidates are Amazon's Echo, Apple's HomePod mini, and Google's Nest Audio. They all retail for about $99.  It looks like the Echo and Nest Audio hands down beat the HomePod mini for quality and depth of music.

The audio specs break down:

  • Echo: 76mm woofer and two 20mm tweeters.

  • HomePod mini: Full-range driver and dual passive radiators.

  • Nest Audio: 75mm woofer and one 19mm tweeter.

BUT IS SOUND QUALITY WORTH THE TRUE “COST” OF THE DEVICE:

Certainly, if you are invested in the Amazon Alexa or Google Assistant platforms, I can see the draw of remaining in the respective platform's ecosystem.  But sound quality and smart-assistant integration are not THE major concern for attorneys – It's Privacy!

Amazon Alexa and Google Assistant do not have a great reputation for protecting your privacy. Apple HomePods have had their share of fairly recent problems too!  But Apple's Siri is more active in protecting your privacy. The inherent "sandboxing" of its software makes it less likely that prying eyes 👀 (or, in this case, ears 👂🦻!) will obtain your private information or your client's confidential information!

PROFESSIONAL RESPONSIBILITY ALERT!
Remember, the Model Rules of Professional Conduct require you have to be both competent in your use of technology in your office Rule 1.1 [8] and take reasonable efforts to ensure your client’s information is protected, Rule 1.6 (c).

Granted, I am a Mac user in my private practice, so I would naturally gravitate toward HomePods.  But I do use Windows machines when it comes to the blog.  And IMHO, the overall risk right now in buying an Amazon Alexa or Google Assistant device is just not worth it – even with the discounts you may find on Amazon!

MTC

Happy Lawyering!!!