MTC/🚨BOLO🚨: Lexis+ AI™️ Falls Short for Legal Research!
Artificial intelligence is rapidly transforming industries of all kinds, and the legal profession is no exception. However, a recent evaluation of Lexis+ AI™️, a new "generative AI-powered legal assistant" from LexisNexis, raises serious concerns about its reliability and effectiveness for legal research and drafting.
In a comprehensive review, Professor Benjamin Perrin of the University of British Columbia's Peter A. Allard School of Law put Lexis+ AI™️ through its paces, testing its capabilities across multiple rounds. The results were disappointing, revealing significant limitations that should give legal professionals pause before incorporating this tool into their workflow.
Key issues identified include:
Citing non-existent legislation
Verbatim reproduction of case headnotes presented as "summaries"
Inaccurate responses to basic legal questions
Inconsistent performance and inability to complete requested tasks
Perhaps most concerning was the AI's tendency to confidently provide incorrect information, a phenomenon known as "hallucination" that poses serious risks in the legal context. For example, when asked to draft a motion, Lexis+ AI™️ referenced a non-existent section of Canadian legislation. In another instance, it confused criminal and tort law concepts when explaining causation.
These shortcomings highlight the critical need for human oversight and verification when using AI tools in legal practice. While AI promises increased efficiency, the potential for errors and misinformation underscores that these technologies are not yet ready to replace traditional legal research methods or professional judgment.
For lawyers considering integrating AI into their practice, several best practices emerge:
Understand the technology's limitations
Verify all AI-generated outputs against authoritative sources
Maintain client confidentiality by avoiding sharing sensitive information with AI tools
Stay informed about AI developments and ethical guidelines
Use AI as a supplement to, not a replacement for, human expertise
Canadian law societies and bar associations, mirroring their U.S. counterparts, are actively addressing the ethical implications of AI in legal practice. The Law Society of British Columbia has issued comprehensive guidelines that underscore the critical importance of understanding AI technology, safeguarding client confidentiality, and cautioning against excessive reliance on AI tools. Similarly, the Law Society of Ontario has established its own set of guidelines, reflecting a growing consensus on the need for ethical AI use in the legal profession.
While the structure of Canadian bar ethics codes may differ from the ABA Model Rules of Professional Conduct, and specific provisions may vary between jurisdictions, the overarching themes regarding the use of generative AI in legal practice are strikingly similar. These common principles include:
Maintaining competence in AI technologies
Ensuring client confidentiality when using AI tools
Exercising professional judgment and avoiding over-reliance on AI
Upholding the duty of supervision when delegating tasks to AI systems
Addressing potential biases in AI-generated content
This alignment in ethical considerations across North American jurisdictions underscores the universal challenges and responsibilities that AI integration poses for the legal profession. As AI continues to evolve, ongoing collaboration between Canadian and American legal bodies will likely play a crucial role in shaping coherent, cross-border approaches to AI ethics in law.
It is crucial for legal professionals to approach these tools with a critical eye. AI has the potential to streamline certain aspects of legal work, but Professor Perrin's review of Lexis+ AI™️ serves as a stark reminder that the technology is not yet sophisticated enough to be trusted without significant human oversight.
Ultimately, the successful integration of AI in legal practice will require a delicate balance: leveraging the efficiency gains offered by technology while upholding the profession's core values of accuracy, ethics, and client service. As we navigate this new terrain, ongoing evaluation and open dialogue within the legal community will be essential to ensure AI enhances, rather than compromises, the quality of legal services.
MTC