🚨BOLO: AI Malpractice🚨: Texas Lawyer Fined for AI-Generated Fake Citations! 😮

We’ve been reporting on lawyers using AI incorrectly in their work, but the lesson has not yet reached all practicing lawyers. Here is another cautionary tale for legal professionals!

No lawyer wants to be disciplined for using generative AI incorrectly. Check your work!

A Texas lawyer, Brandon Monk, has been fined $2,000 for using AI to generate fake case citations in a court filing. U.S. District Judge Marcia Crone of the Eastern District of Texas imposed the penalty and ordered Monk to complete a continuing legal education course on generative AI. The incident occurred in a wrongful termination case against Goodyear Tire & Rubber Co., where Monk submitted a brief containing non-existent cases and fabricated quotes. Of particular concern, he was using Lexis's AI function in his work; check out the report card a Canadian law professor gave Lexis+ AI in my editorial here. The case highlights the ethical challenges and potential pitfalls of using AI in legal practice.

The judge's ruling emphasizes that attorneys remain accountable for the accuracy of their submissions, regardless of the tools used.

Read the full article on Reuters for an in-depth look at this landmark case and its implications for the legal profession.

Be careful out there!

MTC/🚨BOLO🚨: Lexis+ AI™️ Falls Short for Legal Research!

As artificial intelligence rapidly transforms various industries, the legal profession is no exception. However, a recent evaluation of Lexis+ AI™️, a new "generative AI-powered legal assistant" from LexisNexis, raises serious concerns about its reliability and effectiveness for legal research and drafting.

Lexis+ AI™️ gets a failing grade!

In a comprehensive review, Professor Benjamin Perrin of the University of British Columbia's Peter A. Allard School of Law put Lexis+ AI™️ through its paces, testing its capabilities across multiple rounds. The results were disappointing, revealing significant limitations that should give legal professionals pause before incorporating this tool into their workflow.

Key issues identified include:

  1. Citing non-existent legislation

  2. Verbatim reproduction of case headnotes presented as "summaries"

  3. Inaccurate responses to basic legal questions

  4. Inconsistent performance and inability to complete requested tasks

Perhaps most concerning was the AI's tendency to confidently provide incorrect information, a phenomenon known as "hallucination" that poses serious risks in the legal context. For example, when asked to draft a motion, Lexis+ AI™️ referenced a non-existent section of Canadian legislation. In another instance, it confused criminal and tort law concepts when explaining causation.

These shortcomings highlight the critical need for human oversight and verification when using AI tools in legal practice. While AI promises increased efficiency, the potential for errors and misinformation underscores that these technologies are not yet ready to replace traditional legal research methods or professional judgment.

For lawyers considering integrating AI into their practice, several best practices emerge:

Lawyers need to be wary when using generative AI! 😮

  1. Understand the technology's limitations

  2. Verify all AI-generated outputs against authoritative sources

  3. Maintain client confidentiality by avoiding sharing sensitive information with AI tools

  4. Stay informed about AI developments and ethical guidelines

  5. Use AI as a supplement to, not a replacement for, human expertise

Canadian law societies and bar associations, mirroring their U.S. counterparts, are actively addressing the ethical implications of AI in legal practice. The Law Society of British Columbia has issued comprehensive guidelines that underscore the critical importance of understanding AI technology, safeguarding client confidentiality, and cautioning against excessive reliance on AI tools. Similarly, the Law Society of Ontario has established its own set of guidelines, reflecting a growing consensus on the need for ethical AI use in the legal profession.

While the structure of Canadian bar ethics codes may differ from the ABA Model Rules of Ethics, and specific provisions may vary between jurisdictions, the overarching themes regarding the use of generative AI in legal practice are strikingly similar. These common principles include:

  1. Maintaining competence in AI technologies

  2. Ensuring client confidentiality when using AI tools

  3. Exercising professional judgment and avoiding over-reliance on AI

  4. Upholding the duty of supervision when delegating tasks to AI systems

  5. Addressing potential biases in AI-generated content

Hallucinations can end a lawyer's career!

This alignment in ethical considerations across North American jurisdictions underscores the universal challenges and responsibilities that AI integration poses for the legal profession. As AI continues to evolve, ongoing collaboration between Canadian and American legal bodies will likely play a crucial role in shaping coherent, cross-border approaches to AI ethics in law.

It is crucial for legal professionals to approach these tools with a critical eye. AI has the potential to streamline certain aspects of legal work, but Professor Perrin's review of Lexis+ AI™️ serves as a stark reminder that the technology is not yet sophisticated enough to be trusted without significant human oversight.

Ultimately, the successful integration of AI in legal practice will require a delicate balance – leveraging the efficiency gains offered by technology while upholding the profession's core values of accuracy, ethics, and client service. As we navigate this new terrain, ongoing evaluation and open dialogue within the legal community will be essential to ensure AI enhances, rather than compromises, the quality of legal services.

MTC

MTC: Can Lawyers Ethically Use Generative AI with Public Documents? 🤔 Navigating Competence, Confidentiality, and Caution! ⚖️✨

Lawyers need to be concerned with their legal ethics requirements when using AI in their work!

My recent interview with Jayne Reardon on The Tech-Savvy Lawyer.Page Podcast 🎙️ Episode 99 made me think: "Can we or can we not use public generative AI in our legal work for clients by using only publicly filed documents?" This question has become increasingly relevant as tools like ChatGPT, Google's Gemini, and Perplexity AI gain popularity and sophistication. While these technologies offer tantalizing possibilities for improving efficiency and analysis in legal practice, they also raise significant ethical concerns that lawyers must carefully navigate.

The American Bar Association (ABA) Model Rules of Professional Conduct (MRPC) provide a framework for considering the ethical implications of using generative AI in legal practice. Rule 1.1 on competence is particularly relevant, as it requires lawyers to provide competent representation to clients. Many state bar associations advise that lawyers should keep abreast of the benefits and risks associated with relevant technology, an expectation that highlights AI's growing importance in the legal profession.

However, the application of this rule to generative AI is not straightforward. On one hand, using AI tools to analyze publicly filed documents and assist in brief writing could be seen as enhancing a lawyer's competence by leveraging advanced technology to improve research and analysis. On the other hand, relying too heavily on AI without understanding its limitations and potential biases could be seen as a failure to provide competent representation.

The use of generative AI can have complex ethics requirements.

The duty of confidentiality, outlined in Rule 1.6, presents another significant challenge when considering the use of public generative AI tools. Lawyers must ensure that client information remains confidential, which can be difficult when using public AI platforms that may store or learn from the data input into them. As discussed in our October 29th editorial, The AI Revolution in Law: Adapt or Be Left Behind (& where the bar associations are on the topic), state bar associations are beginning (if they have not already begun) to scrutinize lawyers' use of generative AI. Furthermore, as Jayne Reardon astutely pointed out in our recent interview, even if a lawyer anonymizes the client's personally identifiable information (PII), inputting the client's facts into a public generative AI tool may still violate the rule of confidentiality: the public may be able to deduce that the entry pertains to a specific client based on the context and details provided, even if they are "whitewashed." This raises important questions about the extent to which lawyers can use public AI tools without compromising client confidentiality, even when taking precautions to remove identifying information.

State bar associations have taken varying approaches to these issues. For example, the Colorado Supreme Court has formed a subcommittee to consider recommendations for amendments to their Rules of Professional Conduct to address attorney use of AI tools. Meanwhile, the Iowa State Bar Association has published resources on AI for lawyers, emphasizing the need for safeguards and human oversight.

The potential benefits of using generative AI in legal practice are significant. As Troy Doucet discussed in 🎙️Episode 92 of The Tech-Savvy Lawyer.Page Podcast, AI-driven document drafting systems can empower attorneys to efficiently create complex legal documents without needing advanced technical skills. Similarly, Mathew Kerbis highlighted in 🎙️ Episode 85 how AI can be leveraged to provide more accessible legal services through subscription models.

Do you know what your generative AI program is sharing with the public?

However, the risks are equally significant. AI hallucinations, where the AI generates false or misleading information, have led to disciplinary actions against lawyers who relied on AI-generated content without proper verification. See my editorial post My Two Cents: If you are going to use ChatGPT and its cousins to write a brief, Shepardize!!! Chief Justice John Roberts warned in his 2023 Year-End Report on the Federal Judiciary that "any use of AI requires caution and humility."

Given these considerations, a balanced approach to using generative AI in legal practice is necessary. Lawyers can potentially use these tools to analyze publicly filed documents and assist in brief writing, but with several important caveats:

1. Verification: All AI-generated content must be thoroughly verified for accuracy. Lawyers cannot abdicate their professional responsibility to ensure the correctness of legal arguments and citations.

2. Confidentiality: Extreme caution must be exercised to ensure that no confidential client information is input into public AI platforms.

3. Transparency: Lawyers should consider disclosing their use of AI tools to clients and courts, as appropriate.

The convergence of AI, its use in the practice of law, and legal ethics is here now!

4. Understanding limitations: Lawyers must have a solid understanding of the capabilities and limitations of the AI tools they use.

5. Human oversight: AI should be used as a tool to augment human expertise, not replace it.

This blog and podcast have consistently emphasized the importance of these principles. In our discussion with Katherine Porter in 🎙️ Episode 88, we explored how to maximize legal tech while avoiding common pitfalls. My various postings have always emphasized the need for critical thinking and careful consideration before adopting new AI tools.

It's worth noting that the legal industry is still in the early stages of grappling with these issues. As Jayne Reardon explored in 🎙️ Episode 99 of our podcast, the ethical concerns surrounding lawyers' use of AI are complex and evolving. The legal profession will need to continue to adapt its ethical guidelines as AI technology advances.

While generative AI tools offer exciting possibilities for enhancing legal practice, their use must be carefully balanced against ethical obligations. Lawyers can potentially use these tools to analyze publicly filed documents and assist in brief writing, but they must do so with a clear understanding of the risks and limitations involved. As the technology evolves, so too must our approach to using it ethically and effectively in legal practice.

MTC

My Two Cents: Embracing the Future: Navigating the Ethical Use of AI in Legal Practice.

Lawyers need to be mindful of their bar ethics when using generative AI in their practice of law.

Florida Bar Ethics Opinion 24-1, issued a couple of months ago, provides guidance for all lawyers on the ethical use of generative artificial intelligence (AI) in their practice. Here are the key teachings and reminders for lawyers, not just in Florida but potentially applicable in any jurisdiction:

1. Confidentiality and Client Information: Lawyers must ensure the protection of client confidentiality when using generative AI. This includes understanding the AI program's policies on data retention, sharing, and learning capabilities to prevent unauthorized disclosure of client information.

2. Competence and Accuracy: Lawyers are responsible for their work product and must ensure that the use of generative AI aligns with their professional judgment and ethical obligations. This includes verifying the accuracy and reliability of information generated by AI tools.

3. Billing Practices: The opinion cautions against improper billing practices, such as double-billing for AI-generated work. Lawyers must ensure that fees and costs charged to clients are reasonable and ethically justified.

Generative AI can be a positive contribution to your law firm!

4. Advertising and Communication: When using AI chatbots for client communication, lawyers must comply with advertising restrictions and clearly disclose that the chatbot is an AI program, not a human lawyer or law firm employee.

5. Technological Competence: Lawyers have a duty to maintain competence in technology, which includes understanding the risks and benefits associated with new tools like generative AI.

6. Supervision and Oversight: Lawyers must develop policies for the oversight of generative AI to ensure its use is consistent with ethical standards. This includes reviewing AI-generated work products for accuracy and sufficiency.

7. Ethical Delegation: Lawyers should carefully consider which tasks can be ethically delegated to generative AI, ensuring that the AI does not perform duties that require a lawyer's personal judgment or constitute the practice of law.

8. Client Relationships: Lawyers must be cautious when using AI for client intake or communication to avoid inadvertently creating a lawyer-client relationship or providing legal advice through AI interactions.

… a lawyer may ethically utilize generative AI technologies but only to the extent that the lawyer can reasonably guarantee compliance with the lawyer’s ethical obligations …

9. Informed Consent: In certain situations, particularly when using third-party AI services, lawyers may need to obtain informed consent from clients before disclosing confidential information to the AI.

This opinion underscores the importance of ethical considerations in the adoption and use of emerging technologies in legal practice. It encourages lawyers to embrace innovation while remaining vigilant about their professional responsibilities. I think the opinion summarizes well how lawyers can and should use AI wisely:

In sum, a lawyer may ethically utilize generative AI technologies but only to the extent that the lawyer can reasonably guarantee compliance with the lawyer’s ethical obligations. These obligations include the duties of confidentiality, avoidance of frivolous claims and contentions, candor to the tribunal, truthfulness in statements to others, avoidance of clearly excessive fees and costs, and compliance with restrictions on advertising for legal services. Lawyers should be cognizant that generative AI is still in its infancy and that these ethical concerns should not be treated as an exhaustive list. Rather, lawyers should continue to develop competency in their use of new technologies and the risks and benefits inherent in those technologies.

MTC

Happy Lawyering!