MTC: AI in Legal Email - Balancing Innovation and Ethics 💼🤖

Lawyers have an ethical duty when using AI in their work!

The integration of AI into lawyers' email systems presents both exciting opportunities and significant challenges. As legal professionals navigate this technological frontier, we must carefully weigh the benefits against potential ethical pitfalls.

Advantages of AI in Legal Email 📈

AI-powered email tools offer numerous benefits for law firms:

  • Enhanced efficiency through automation of routine tasks

  • Improved client service and satisfaction

  • Assistance in drafting responses and suggesting relevant case law

  • Flagging important deadlines

  • Improved accuracy in document review and contract analysis

These capabilities allow lawyers to focus on high-value work, potentially improving outcomes for clients and minimizing liabilities for law firms.

AI Email Assistants 🖥️

Several AI email assistants are available for popular email platforms:

  1. Microsoft Outlook:

    • Copilot for Outlook: Enhances email drafting, replying, and management using ChatGPT.

  2. Apple Mail:

    • Apple Intelligence: Provides email summarization, priority message flagging, and Smart Reply suggestions.

  3. Gmail:

    • Gemini 1.5 Pro: Offers email summarization, contextual Q&A, and suggested replies.

  4. Multi-platform:

Always Proofread Your Work and Confirm Citations! 🚨

Ethical Considerations and Challenges 🚧

Confidentiality and Data Privacy

The use of AI in legal email raises several ethical concerns, primarily regarding the duty of confidentiality outlined in ABA Model Rule 1.6. Lawyers must ensure that AI systems do not compromise client information or inadvertently disclose sensitive data to unauthorized parties.

To address this:

Lawyers should always check their work, especially when using AI!

  1. Implement robust data security measures

  2. Understand AI providers' data handling practices

  3. Review and retain copies of AI system privacy policies

  4. Make reasonable efforts to prevent unauthorized disclosure

Competence (ABA Model Rule 1.1)

ABA Model Rule 1.1, particularly Comment 8, emphasizes the need for lawyers to understand the benefits and risks associated with relevant technology. This includes:

  • Understanding AI capabilities and limitations

  • Appropriate verification of AI outputs (Check Your Work!)

  • Staying informed about changes in AI technology

  • Considering the potential duty to use AI when benefits outweigh risks

The ABA's Formal Opinion 512 further emphasizes the need for lawyers to understand the AI tools they use to maintain competence.

Client Communication

Maintaining the personal touch in client communications is crucial. While AI can streamline processes, it should not replace nuanced, empathetic interactions. Lawyers should:

  1. Disclose AI use to clients

  2. Address any concerns about privacy and security

  3. Consider including AI use disclosure in fee agreements or retention letters

  4. Read your AI-generated/assisted drafts

Striking the Right Balance ⚖️

To ethically integrate AI into legal email systems, firms should:

  1. Implement robust data security measures to protect client confidentiality

  2. Provide comprehensive training on AI tools to ensure competent use

  3. Establish clear policies on when and how AI should be used in client communications

  4. Regularly review and audit AI systems for accuracy and potential biases

  5. Maintain transparency with clients about the use of AI in their matters

  6. Verify that AI tools are not using email content to train or improve their algorithms

AI is a tool for work - not a replacement for final judgment!

By carefully navigating ⛵️ these considerations, lawyers can harness the power of AI to enhance their practice while upholding their ethical obligations. The key lies in viewing AI as a tool to augment 🤖 human expertise, not replace it.

As the legal profession evolves, embracing AI in email and other systems will likely become essential for remaining competitive. However, this adoption must always be balanced against the core ethical principles that define the practice of law.

And Remember: Always Proofread Your Work and Confirm Citations BEFORE Sending Your E-mail (With or Without AI)!!!

Editorial Follow-Up - From Apple Intelligence’s Inaccurate Summarization of BBC News to the BBC’s Study on AI’s Accuracy Problem: What Lawyers Must Know 📢⚖️

Lawyers must keep a critical eye on the AI they use in their work - failure to do so could lead to violations of the MRPC!

Earlier, we discussed how "Apple Intelligence made headlines for all the wrong reasons when it generated a false news summary attributed to the BBC 📰❌." Now, a recent BBC study has exposed serious flaws in AI-generated news summaries, confirming what many tech-savvy lawyers feared: AI can misinterpret crucial details. This raises a significant issue for attorneys relying on AI tools for legal research, document review, and case analysis.

As highlighted in our previous coverage, Apple’s AI struggles demonstrate the risks of automated legal processes. The BBC’s findings reinforce that while AI is a valuable tool, lawyers cannot blindly trust its outputs. AI lacks contextual understanding, often omits key facts, and sometimes distorts information. For legal professionals, relying on inaccurate AI-generated summaries could lead to serious ethical violations or misinformed case strategies. (Amazingly, the sanctions I’ve reported from Texas and New York seem light thus far.)

The ABA Model Rules of Professional Conduct emphasize that lawyers must ensure the accuracy of information used in their practice. See MRPC Rule 3.3: Candor Toward the Tribunal. This means AI-assisted research should be cross-checked against primary sources. Additionally, attorneys should understand how their AI tools function: what data they use, their limitations, and potential biases. See MRPC 1.1[8].

Human oversight by lawyers over the AI they use is a cornerstone of maintaining accuracy in their work and ethical compliance with the bar!

To mitigate risks, legal professionals should:
  • Verify AI-generated content before using it in legal work.

  • Choose AI solutions designed for legal practice, not general news or business applications, e.g., LawDroid.

  • Stay updated on AI advancements and legal technology ethics, and stay tuned to The Tech-Savvy Lawyer.Page Blog and Podcast for the latest news and commentary on AI’s impact on the practice of law and more!

  • Advocate for AI transparency, ensuring tech providers disclose accuracy rates.

The legal field is evolving, and AI will continue to play a role in law practice. However, as the BBC study highlights, human oversight remains essential. Lawyers who embrace AI responsibly—without over-relying on its outputs—will be best positioned to leverage technology ethically and effectively.

MTC

MTC: 🔒 Unlocked Laptop, Suspended License: How One Lawyer’s Cybersecurity Blunder Became a Near Career-Killer (And What You Must Learn).

Lawyers, don’t leave your tech unattended and accessible - it could lead to severe bar actions!

I was so astonished when I heard about this case that I needed to share it with you, The Tech-Savvy Lawyer.Page community!

A recent disciplinary case involving a Jefferson County, Missouri prosecutor’s suspension over a prank email highlights the escalating stakes of cybersecurity negligence in legal practice. The incident - in which an unattended, unlocked laptop left in an empty jury room used by attorneys to do some work allowed a mischievous actor, a prosecutor no less, to send a fake email to a sheriff about how she looked in khakis - serves as a stark reminder: basic physical safeguards are no longer sufficient in an era of sophisticated digital risks. Below, let’s discuss what NOT to do and the ethical landmines lurking in outdated tech habits.

What Went Wrong: A Breakdown of Failures

The prosecutor’s missteps reflect a cascade of poor judgments:  

1. Leaving a device unattended and unlocked in a public setting, enabling unauthorized access.  

2. Failing to implement automatic screen locks or password protections during brief absences.  

3. Ignoring encryption tools for sensitive communications, despite ABA guidance.  

This lapse violated core duties under the ABA Model Rules of Professional Conduct:  

  • Rule 1.6 (Confidentiality): Lawyers must take “reasonable precautions” to prevent unauthorized disclosure of client information. An open laptop in a public space falls far short of this standard.  

  • Rule 1.1[8] (Competence): The 2012 amendment to Comment 8 mandates that lawyers understand the “benefits and risks associated with relevant technology”. Ignoring basic device security—a well-known risk—breaches this duty.  

How Tech Security Expectations Have Evolved  

The shift from casual vigilance to rigorous tech protocols is unmistakable:  

The ABA’s Formal Opinion 477R (2017) clarifies that lawyers must assess risks based on factors like data sensitivity and network security. Public Wi-Fi and unattended devices are now red flags requiring mitigation—not mere inconveniences.  

Consequences of Complacency 

The Jefferson County case underscores the professional, legal, and reputational fallout:

  • Ethical investigations: State bars increasingly treat tech negligence as a violation of competency rules.

  • License suspension: The prosecutor faced disciplinary action for failing to safeguard confidential systems - in this case, an indefinite suspension.

  • Loss of client trust: Even non-malicious breaches erode confidence in a lawyer’s judgment.

* Interestingly, it appears the public defender got off lightly with a slap on the wrist, even though it was the public defender who left client files and working notes exposed. The fallout led to the prosecuting attorney being moved off 19 cases the two attorneys were both working on - someone got lucky! 😲

What NOT to Do: A Checklist ✅

Avoid these critical mistakes:  

Not all nefarious tech interlopers wear masks! Keep your tech secure!

❌ Assume “quick” errands are harmless. Even 30 seconds unlocked can compromise data.

❌ Use unsecured public networks without a VPN.  

❌ Skip software updates, leaving devices vulnerable to exploits.  

❌ Store sensitive data locally without encryption or cloud backups.

❌ Use someone else’s unsecured technology for malicious purposes - or even for a prank.

Secure Your Practice: Best Practices  

  1. Enable automatic screen locks (under 5 minutes of inactivity).  

  2. Adopt encryption for emails and files containing client data.  

  3. Train staff on phishing scams and physical security protocols.  

  4. Develop an incident response plan to address breaches swiftly.  

Final Thoughts 🧐

As the Lawyer Behaving Badly Podcast highlighted in their episode Silly Little Goose, even “harmless” pranks can derail careers. In a world where a single unlocked laptop can trigger ethics investigations, proactive tech competence isn’t optional—it’s survival! Lock your devices, encrypt your data, and treat every public space as a potential threat vector. Your license depends on it. 🔒  

MTC

Word of the Week: "Zoom Mullets" in Legal Practice!

Zoom Mullets: Balancing Comfort & Courtroom Credibility ⚖️💻

Office mullets can be a wardrobe option for work - just make sure it’s appropriate and that you can’t be seen below the belt!

 The "Zoom mullet"—professional tops paired with casual bottoms during virtual meetings—has become a staple for remote legal work. While 75% of professionals adopt this hybrid attire 🕴️👖, its impact on courtroom decorum demands scrutiny. James “Jamie” Holland II, featured on *The Tech-Savvy Lawyer.Page* Podcast Episode #35, pioneered the first fully virtual trial in U.S. history via Zoom 🏛️💡. His insights reveal:  

Judges notice attire—even on camera. A wrinkled shirt or unkempt background can subconsciously undermine your credibility.
— Jamie Holland

Key considerations for attorneys:  

You don’t want the judge’s ire if you can be seen dressed inappropriately for court (even through a zoom hearing)!

  • Courtroom protocols: Texas and Michigan courts conducted 1.1 million+ virtual proceedings post-2020, with strict dress codes enforced despite partial visibility.  

  • Tech setup: Holland advises testing cameras/mics pre-hearing and using neutral virtual backgrounds to mask informal spaces.  

🚨Make sure that if you are wearing a Zoom Mullet, the viewer can’t see the bottom half! You don’t want to get in trouble with the judge, your client, or the bar!

📢 Shout out to previous podcast guest Wendy Meadows for enlightening me about this word! 🤗

🚨 BOLO: Apple's Latest Update Activates AI - Lawyers, Protect Your Clients' Data! 🚨

Attention tech-savvy lawyers! 📱💼 Apple's recent iOS and macOS updates have automatically enabled Apple Intelligence, raising significant concerns about client confidentiality and data privacy. As legal professionals, we must remain vigilant in protecting our clients' sensitive information. Here's what you need to know:

The Stealth Activation 🕵️‍♂️

In the last 24 hours, Apple released iOS 18.3, iPadOS 18.3, and macOS Sequoia 15.3, which automatically activate Apple Intelligence on compatible devices. This AI-powered suite offers various features, including rewriting text, generating images, and summarizing emails. While these capabilities may seem enticing, they pose potential risks to client confidentiality. 🚨

Privacy Concerns 🔒

Apple claims that Apple Intelligence uses on-device processing to enhance privacy. However, the system still requires 7GB of local storage and may analyze user interactions to refine its functionality. This level of data access and analysis raises red flags for lawyers bound by ethical obligations to protect client information.

Ethical Obligations ⚖️

Check your Apple settings if you want to turn off “Apple Intelligence”!

The ABA Model Rules of Professional Conduct, particularly Rule 1.6, emphasize the duty of confidentiality. This rule extends to all forms of client data, including information stored on devices or accessed remotely. As tech-savvy lawyers, we must exercise reasonable care to prevent unauthorized disclosure of client information.

Potential Risks 🚫

Using AI-powered features without fully understanding their implications could lead to inadvertent breaches of client confidentiality. As we've discussed in our previous blog post, "My Two Cents: With AI Creeping Into Our Computers, Tablets, and Smartphones, Lawyers Need to Be Diligent About The Software They Use," lawyers must be cautious about adopting new technologies without proper vetting.

Lawyers MUST maintain reasonable competency in the use of technology! 🚨 ABA MRPC 1.1 [8] 🚨

Steps to Take 🛡️

  1. Disable Apple Intelligence: Navigate to Settings > Apple Intelligence & Siri to turn off specific features or disable the entire suite.

  2. Educate Your Team: Ensure all staff members are aware of the potential risks associated with AI-powered features.

  3. Review Privacy Policies: Carefully examine Apple's privacy policies and terms of service related to Apple Intelligence.

  4. Implement Additional Safeguards: Consider using encrypted communication tools and secure cloud storage solutions for client data.

Final Thoughts 🧐

As we navigate this rapidly evolving technological landscape, it's essential to balance innovation with ethical obligations. Lawyers can thrive as tech-savvy professionals by embracing technology to enhance their practice while safeguarding client trust. Remember, maintaining reasonable competency in the use of technology is not just advisable - it’s an ethical duty. See Comment 8 to ABA Model Rule 1.1.

Subscribe to The Tech-Savvy Lawyer.Page for updates on this developing situation and news on the evolving impact of AI on the practice of law. Together, we can navigate the complexities of legal technology while upholding our professional responsibilities.

Stay safe, stay informed, and stay tech-savvy! 🚀📚💻

Happy Lawyering!

MTC: When AI Stumbles: Apple's Misstep and Its Lessons for Tech-Savvy Lawyers 🍎💻⚖️

Members of the legal profession have a duty of due diligence to ensure human oversight in any of their AI-driven legal work!

Apple's recent AI blunder serves as a stark reminder that even industry leaders can falter in the rapidly evolving world of artificial intelligence 🤖. The tech giant's new AI feature, Apple Intelligence, made headlines for all the wrong reasons when it generated a false news summary attributed to the BBC 📰❌. Apple is considered a Blue Ribbon star when it comes to cutting-edge technology; this misstep tarnishes its reputation 🏅➡️💔. This incident should be a wake-up call for lawyers embracing AI in their practice ⏰👨‍⚖️.

As we've discussed in previous episodes of The Tech-Savvy Lawyer.Page Podcast 🎙️, AI tools can significantly enhance legal work efficiency. However, the Apple incident underscores a critical point: AI is not infallible 🚫💯. In Episode #92: Finding the Right Crossroads for AI Use, Success, and the Law, Troy Doucet of AI.Law emphasized the importance of preventing AI hallucinations in legal document drafting 📄🔍. This recent event proves that even tech behemoths like Apple are not immune to such issues 🍎🛡️❌.

Lawyers must approach AI with a blend of enthusiasm and caution 🤔💡. While AI can streamline tasks like document review and legal research, it should never replace human oversight 🧠👀. As highlighted in our blog post, "My Two Cents: With AI Creeping Into Our Computers, Tablets, and Smartphones, Lawyers Need to Be Diligent About The Software They Use," due diligence is crucial when incorporating AI into legal practice 🕵️‍♂️💻.

It’s the lawyer’s general responsibility to make sure that the “facts” they generate with AI are indeed facts and not fake! 🧐

The Apple Intelligence mishap involved a false headline about a high-profile murder case, demonstrating how AI errors can have serious implications 🗞️🔪❌. For lawyers, such inaccuracies in legal documents or case summaries could be catastrophic, potentially leading to malpractice claims and ethical violations ⚖️💥.

To mitigate these risks, lawyers should:

  1. Always verify AI-generated content against primary sources 🔍📚.

  2. Understand the limitations of the AI tools they use 🧠🔧.

  3. Maintain a critical eye when reviewing AI outputs 👁️📝.

  4. Keep abreast of AI developments and potential pitfalls 📈📉.

In 🎙️ Ep. 98: Streamlining legal workflows with Michael Anderson, Chief Product Officer at Filevine, on LPM evolution, Michael Anderson discussed the ethical use of AI in legal practice management 🤝💼. This conversation gains new relevance in light of Apple's misstep. Lawyers must ensure that their use of AI aligns with ethical standards and doesn't compromise client confidentiality or the integrity of their work 🔒✅.

Furthermore, as Jayne Reardon explored in 🎙️ Ep. 99: Navigating the Intersection of Law Ethics and Technology with Jayne Reardon, the ABA Model Rules of Ethics provide crucial guidance for lawyers using AI 📜👨‍⚖️. These rules emphasize the need for competence, which extends to understanding the technologies used in legal practice 🧠💻. See Comment 8 to Rule 1.1.

The Apple incident also highlights the importance of transparency 🔍. If AI is used in legal work, clients should be informed, and its role should be clearly defined 🗣️📊. This aligns with the ethical considerations discussed in podcast episodes #18: Learn How to "Do It Yourself" with DIY Software - My conversation with "Hello Divorce" creator Attorney Erin Levine! and #70: Growing your firm with Chatbots & Artificial Intelligence with Jared Jaskot, both about lawyers creating DIY legal services using AI and chatbots 🤖🛠️.

Final Thoughts

Lawyers must examine potential inaccuracies when they use AI-generated results in their work.

While AI remains a powerful tool for the legal profession, the Apple Intelligence debacle serves as a timely reminder of its limitations ⏳⚖️. As tech-savvy lawyers, we must harness the benefits of AI while remaining vigilant about its potential pitfalls 🦅👀. By doing so, we can ensure that our use of AI enhances rather than compromises the quality and integrity of our legal services 📈👍.

Remember, in the world of legal tech, an Apple a day doesn't always keep bar counsel away – but diligence and critical thinking certainly help 🍎🚫👨‍⚖️➡️🧠💡.

MTC

🚨BOLO: AI Malpractice🚨: Texas Lawyer Fined for AI-Generated Fake Citations! 😮

We’ve been reporting on lawyers incorrectly using AI in their work, but the lesson has not yet reached all practicing lawyers. Here is another cautionary tale for legal professionals!

No lawyer wants to be disciplined for using generative AI incorrectly - check your work!

A Texas lawyer, Brandon Monk, has been fined $2,000 for using AI to generate fake case citations in a court filing. U.S. District Judge Marcia Crone of the Eastern District of Texas imposed the penalty and ordered Monk to complete a continuing legal education course on generative AI. This incident occurred in a wrongful termination case against Goodyear Tire & Rubber Co., where Monk submitted a brief containing non-existent cases and fabricated quotes. Concerningly, he was using the Lexis AI function in his work - check out the report card a Canadian law professor gave Lexis+ AI in my editorial here. The case highlights the ethical challenges and potential pitfalls of using AI in legal practice.

The judge's ruling emphasizes that attorneys remain accountable for the accuracy of their submissions, regardless of the tools used.

Read the full article on Reuters for an in-depth look at this landmark case and its implications for the legal profession.

Be careful out there!

MTC/🚨BOLO🚨: Lexis+ AI™️ Falls Short for Legal Research!

As artificial intelligence rapidly transforms various industries, the legal profession is no exception. However, a recent evaluation of Lexis+ AI™️, a new "generative AI-powered legal assistant" from LexisNexis, raises serious concerns about its reliability and effectiveness for legal research and drafting.

Lexis+ AI™️ gets a failing grade!

In a comprehensive review, Professor Benjamin Perrin of the University of British Columbia's Peter A. Allard School of Law put Lexis+ AI™️ through its paces, testing its capabilities across multiple rounds. The results were disappointing, revealing significant limitations that should give legal professionals pause before incorporating this tool into their workflow.

Key issues identified include:

  1. Citing non-existent legislation

  2. Verbatim reproduction of case headnotes presented as "summaries"

  3. Inaccurate responses to basic legal questions

  4. Inconsistent performance and inability to complete requested tasks

Perhaps most concerning was the AI's tendency to confidently provide incorrect information, a phenomenon known as "hallucination" that poses serious risks in the legal context. For example, when asked to draft a motion, Lexis+ AI™️ referenced a non-existent section of Canadian legislation. In another instance, it confused criminal and tort law concepts when explaining causation.

These shortcomings highlight the critical need for human oversight and verification when using AI tools in legal practice. While AI promises increased efficiency, the potential for errors and misinformation underscores that these technologies are not yet ready to replace traditional legal research methods or professional judgment.

For lawyers considering integrating AI into their practice, several best practices emerge:

Lawyers need to be wary when using generative AI! 😮

  1. Understand the technology's limitations

  2. Verify all AI-generated outputs against authoritative sources

  3. Maintain client confidentiality by avoiding sharing sensitive information with AI tools

  4. Stay informed about AI developments and ethical guidelines

  5. Use AI as a supplement to, not a replacement for, human expertise

Canadian law societies and bar associations, mirroring their U.S. counterparts, are actively addressing the ethical implications of AI in legal practice. The Law Society of British Columbia has issued comprehensive guidelines that underscore the critical importance of understanding AI technology, safeguarding client confidentiality, and cautioning against excessive reliance on AI tools. Similarly, the Law Society of Ontario has established its own set of guidelines, reflecting a growing consensus on the need for ethical AI use in the legal profession.

While the structure of Canadian bar ethics codes may differ from the ABA Model Rules of Ethics, and specific provisions may vary between jurisdictions, the overarching themes regarding the use of generative AI in legal practice are strikingly similar. These common principles include:

  1. Maintaining competence in AI technologies

  2. Ensuring client confidentiality when using AI tools

  3. Exercising professional judgment and avoiding over-reliance on AI

  4. Upholding the duty of supervision when delegating tasks to AI systems

  5. Addressing potential biases in AI-generated content

Hallucinations can end a lawyer’s career!

This alignment in ethical considerations across North American jurisdictions underscores the universal challenges and responsibilities that AI integration poses for the legal profession. As AI continues to evolve, ongoing collaboration between Canadian and American legal bodies will likely play a crucial role in shaping coherent, cross-border approaches to AI ethics in law.

It is crucial for legal professionals to approach these tools with a critical eye. AI has the potential to streamline certain aspects of legal work, but Professor Perrin’s review of Lexis+ AI™️ serves as a stark reminder that the technology is not yet sophisticated enough to be trusted without significant human oversight.

Ultimately, the successful integration of AI in legal practice will require a delicate balance – leveraging the efficiency gains offered by technology while upholding the profession's core values of accuracy, ethics, and client service. As we navigate this new terrain, ongoing evaluation and open dialogue within the legal community will be essential to ensure AI enhances, rather than compromises, the quality of legal services.

MTC

MTC: Can Lawyers Ethically Use Generative AI with Public Documents? 🤔 Navigating Competence, Confidentiality, and Caution! ⚖️✨

Lawyers need to be concerned with their legal ethics requirements when using AI in their work!

My recent interview with Jayne Reardon on The Tech-Savvy Lawyer.Page Podcast 🎙️ Episode 99 made me think: “Can we or can we not use public generative AI in our legal work for clients by only using publicly filed documents?” This question has become increasingly relevant as tools like ChatGPT, Google's Gemini, and Perplexity AI gain popularity and sophistication. While these technologies offer tantalizing possibilities for improving efficiency and analysis in legal practice, they also raise significant ethical concerns that lawyers must carefully navigate.

The American Bar Association (ABA) Model Rules of Professional Conduct (MRPC) provide a framework for considering the ethical implications of using generative AI in legal practice. Rule 1.1 on competence is particularly relevant, as it requires lawyers to provide competent representation to clients. Many state bar associations provide that lawyers should keep abreast of the benefits and risks associated with relevant technology. This attention highlights AI’s growing importance in the legal profession.

However, the application of this rule to generative AI is not straightforward. On one hand, using AI tools to analyze publicly filed documents and assist in brief writing could be seen as enhancing a lawyer's competence by leveraging advanced technology to improve research and analysis. On the other hand, relying too heavily on AI without understanding its limitations and potential biases could be seen as a failure to provide competent representation.

The use of generative AI can come with complex ethics requirements.

The duty of confidentiality, outlined in Rule 1.6, presents another significant challenge when considering the use of public generative AI tools. Lawyers must ensure that client information remains confidential, which can be difficult when using public AI platforms that may store or learn from the data input into them. As discussed in our October 29th editorial, The AI Revolution in Law: Adapt or Be Left Behind (& where the bar associations are on the topic), state bar associations are beginning (if they have not already begun) to scrutinize lawyers' use of generative AI. Furthermore, as Jayne Reardon astutely pointed out in our recent interview, even if a lawyer anonymizes the client's personally identifiable information (PII), inputting the client's facts into a public generative AI tool may still violate the rule of confidentiality. This is because the public may be able to deduce that the entry pertains to a specific client based on the context and details provided, even if they are "whitewashed." This raises important questions about the extent to which lawyers can use public AI tools without compromising client confidentiality, even when taking precautions to remove identifying information.

State bar associations have taken varying approaches to these issues. For example, the Colorado Supreme Court has formed a subcommittee to consider recommendations for amendments to their Rules of Professional Conduct to address attorney use of AI tools. Meanwhile, the Iowa State Bar Association has published resources on AI for lawyers, emphasizing the need for safeguards and human oversight.

The potential benefits of using generative AI in legal practice are significant. As Troy Doucet discussed in 🎙️Episode 92 of The Tech-Savvy Lawyer.Page Podcast, AI-driven document drafting systems can empower attorneys to efficiently create complex legal documents without needing advanced technical skills. Similarly, Mathew Kerbis highlighted in 🎙️ Episode 85 how AI can be leveraged to provide more accessible legal services through subscription models.

Do you know what your generative AI program is sharing with the public?

However, the risks are equally significant. AI hallucinations - where the AI generates false or misleading information - have led to disciplinary actions against lawyers who relied on AI-generated content without proper verification. See my editorial post My Two Cents: If you are going to use ChatGPT and its cousins to write a brief, Shepardize!!! Chief Justice John Roberts warned in his 2023 Year-End Report on the Federal Judiciary that "any use of AI requires caution and humility".

Given these considerations, a balanced approach to using generative AI in legal practice is necessary. Lawyers can potentially use these tools to analyze publicly filed documents and assist in brief writing, but with several important caveats:

1. Verification: All AI-generated content must be thoroughly verified for accuracy. Lawyers cannot abdicate their professional responsibility to ensure the correctness of legal arguments and citations.

2. Confidentiality: Extreme caution must be exercised to ensure that no confidential client information is input into public AI platforms.

3. Transparency: Lawyers should consider disclosing their use of AI tools to clients and courts, as appropriate.

The convergence of AI, its use in the practice of law, and legal ethics is here now!

4. Understanding limitations: Lawyers must have a solid understanding of the capabilities and limitations of the AI tools they use.

5. Human oversight: AI should be used as a tool to augment human expertise, not replace it.

This blog and podcast have consistently emphasized the importance of these principles. In our discussion with Katherine Porter in 🎙️ Episode 88, we explored how to maximize legal tech while avoiding common pitfalls. My various postings have always emphasized the need for critical thinking and careful consideration before adopting new AI tools.

It's worth noting that the legal industry is still in the early stages of grappling with these issues. As Jayne Reardon explored in 🎙️ Episode 99 of our podcast, the ethical concerns surrounding lawyers' use of AI are complex and evolving. The legal profession will need to continue to adapt its ethical guidelines as AI technology advances.

While generative AI tools offer exciting possibilities for enhancing legal practice, their use must be carefully balanced against ethical obligations. Lawyers can potentially use these tools to analyze publicly filed documents and assist in brief writing, but they must do so with a clear understanding of the risks and limitations involved. As the technology evolves, so too must our approach to using it ethically and effectively in legal practice.

MTC

🎙️Ep. 99: Navigating the Intersection of Law Ethics and Technology with Jayne Reardon.

Meet Jayne Reardon, a nationally renowned expert on legal ethics and professionalism who provides ethics, risk management, and regulatory advice to lawyers and legal service providers. Jayne is an experienced trial lawyer who has tried cases in state and federal courts across Illinois and on appeal up to the United States Supreme Court. She also sits on the national roster of the American Arbitration Association for Commercial and Consumer Arbitration. Moreover, she is a certified neutral in the Early Dispute Resolution Process. Jayne's experience includes service as Executive Director of the Illinois Supreme Court Commission on Professionalism, an organization dedicated to promoting ethics and professionalism among lawyers and judges, and disciplinary counsel for the Illinois Attorney Registration and Disciplinary Commission.

In today's conversation, Jayne explores ethical concerns for lawyers using AI, focusing on the ABA Model Rules. She also discusses billing ethics, advising transparency in engagement letters and time tracking. Furthermore, Jayne highlights online civility, warning against impulsive posts and labeling, and draws on real-life cases to underscore the importance of ethical vigilance in AI-integrated legal practice.

Join Jayne and me as we discuss the following three questions and more!

  1. What are your top three warnings to lawyers about using AI in line with the ABA model rules of ethics?

  2. Some lawyers are creating DIY services online through chatbots and AI for clients to handle their legal affairs. What are the top three ethical concerns these lawyers should be wary of when creating these services?

  3. What are your top three suggestions about lawyers being civil to one another and others online?

In our conversation, we cover the following:

[01:11] Jayne's Current Tech Setup

[04:50] Handling Tech Devices and Daily Usage

[08:51] Ethical Considerations for AI in Legal Practice

[19:21] Ethical Considerations for AI-Assisted Services

[26:37] Civility in Online Interactions

[30:58] Connect with Jayne

Resources:

Connect with Jayne:

Hardware mentioned in the conversation:

Software & Cloud Services mentioned in the conversation:

* The “W-Calendar” program I referred to is apparently no longer an active software program available for purchase.