MTC: Can Lawyers Ethically Use Generative AI with Public Documents? 🤔 Navigating Competence, Confidentiality, and Caution! ⚖️✨
After my recent interview with Jayne Reardon on The Tech-Savvy Lawyer.Page Podcast 🎙️ Episode 99, I found myself asking: "Can we, or can we not, use public generative AI in our legal work for clients when relying only on publicly filed documents?" This question has become increasingly relevant as tools like ChatGPT, Google's Gemini, and Perplexity AI gain popularity and sophistication. While these technologies offer tantalizing possibilities for improving efficiency and analysis in legal practice, they also raise significant ethical concerns that lawyers must carefully navigate.
The American Bar Association (ABA) Model Rules of Professional Conduct (MRPC) provide a framework for considering the ethical implications of using generative AI in legal practice. Rule 1.1 on competence is particularly relevant, as it requires lawyers to provide competent representation to clients. Many state bar associations have adopted guidance providing that lawyers should keep abreast of the benefits and risks associated with relevant technology. This duty of technological competence underscores AI's growing importance in the legal profession.
However, the application of this rule to generative AI is not straightforward. On one hand, using AI tools to analyze publicly filed documents and assist in brief writing could be seen as enhancing a lawyer's competence by leveraging advanced technology to improve research and analysis. On the other hand, relying too heavily on AI without understanding its limitations and potential biases could be seen as a failure to provide competent representation.
The duty of confidentiality, outlined in Rule 1.6, presents another significant challenge when considering the use of public generative AI tools. Lawyers must ensure that client information remains confidential, which can be difficult when using public AI platforms that may store or learn from the data input into them. As discussed in our October 29th editorial, The AI Revolution in Law: Adapt or Be Left Behind (& where the bar associations are on the topic), state bar associations are beginning (if they have not already begun) to scrutinize lawyers' use of generative AI. Furthermore, as Jayne Reardon astutely pointed out in our recent interview, even if a lawyer anonymizes the client's personally identifiable information (PII), inputting the client's facts into a public generative AI tool may still violate the rule of confidentiality. This is because the public may be able to deduce that the entry pertains to a specific client based on the context and details provided, even if they are "whitewashed." This raises important questions about the extent to which lawyers can use public AI tools without compromising client confidentiality, even when taking precautions to remove identifying information.
State bar associations have taken varying approaches to these issues. For example, the Colorado Supreme Court has formed a subcommittee to consider recommendations for amendments to their Rules of Professional Conduct to address attorney use of AI tools. Meanwhile, the Iowa State Bar Association has published resources on AI for lawyers, emphasizing the need for safeguards and human oversight.
The potential benefits of using generative AI in legal practice are significant. As Troy Doucet discussed in 🎙️ Episode 92 of The Tech-Savvy Lawyer.Page Podcast, AI-driven document drafting systems can empower attorneys to efficiently create complex legal documents without needing advanced technical skills. Similarly, Mathew Kerbis highlighted in 🎙️ Episode 85 how AI can be leveraged to provide more accessible legal services through subscription models.
However, the risks are equally significant. AI hallucinations, where the AI generates false or misleading information, have led to disciplinary actions against lawyers who relied on AI-generated content without proper verification. See my editorial post My Two Cents: If you are going to use ChatGPT and its cousins to write a brief, Shepardize!!! Chief Justice John Roberts warned in his 2023 Year-End Report on the Federal Judiciary that "any use of AI requires caution and humility."
Given these considerations, a balanced approach to using generative AI in legal practice is necessary. Lawyers can potentially use these tools to analyze publicly filed documents and assist in brief writing, but with several important caveats:
1. Verification: All AI-generated content must be thoroughly verified for accuracy. Lawyers cannot abdicate their professional responsibility to ensure the correctness of legal arguments and citations.
2. Confidentiality: Extreme caution must be exercised to ensure that no confidential client information is input into public AI platforms.
3. Transparency: Lawyers should consider disclosing their use of AI tools to clients and courts, as appropriate.
4. Understanding limitations: Lawyers must have a solid understanding of the capabilities and limitations of the AI tools they use.
5. Human oversight: AI should be used as a tool to augment human expertise, not replace it.
This blog and podcast have consistently emphasized the importance of these principles. In our discussion with Katherine Porter in 🎙️ Episode 88, we explored how to maximize legal tech while avoiding common pitfalls. In my various postings, I have consistently emphasized the need for critical thinking and careful consideration before adopting new AI tools.
It's worth noting that the legal industry is still in the early stages of grappling with these issues. As Jayne Reardon explored in 🎙️ Episode 99 of our podcast, the ethical concerns surrounding lawyers' use of AI are complex and evolving. The legal profession will need to continue to adapt its ethical guidelines as AI technology advances.
While generative AI tools offer exciting possibilities for enhancing legal practice, their use must be carefully balanced against ethical obligations. Lawyers can potentially use these tools to analyze publicly filed documents and assist in brief writing, but they must do so with a clear understanding of the risks and limitations involved. As the technology evolves, so too must our approach to using it ethically and effectively in legal practice.
MTC