In recent months, Canadian courts have grappled with a growing issue: the use of AI-generated fake legal citations in court documents. This phenomenon, often referred to as “AI hallucinations,” has sparked intense debate about the role of artificial intelligence in legal practice. While some advocate for outright bans on AI in courtrooms, others argue that such measures would be counterproductive and overlook the root of the problem.
The controversy gained momentum after several high-profile cases where lawyers submitted documents containing fictitious case citations generated by AI tools like ChatGPT. These incidents have raised concerns about the integrity of legal proceedings and the potential for technology to undermine trust in the justice system.
However, banning AI altogether is not the solution. Instead, the focus should shift to addressing the underlying issues: the lack of proper oversight, insufficient training for legal professionals, and the need for robust verification processes. AI, when used responsibly, offers significant benefits, such as streamlining legal research and improving efficiency. The key is to strike a balance between leveraging technology and ensuring its use does not compromise the fairness or accuracy of legal outcomes.
As the legal profession navigates this uncharted territory, the answer lies not in prohibiting AI but in fostering a culture of accountability, transparency, and continuous education. The incidents that prompted this debate illustrate what is at stake.
Several such incidents are already a matter of public record in Canadian courts, each involving a lawyer who filed documents citing non-existent cases that appear to have been generated by tools like ChatGPT.
In February 2024, Justice D. M. Masuhara reprimanded lawyer Chong Ke for incorporating into a notice of application two fictitious cases that were later found to have been generated by ChatGPT. The judge characterized the errors as “alarming” and ordered the lawyer to pay court costs.
More recently, lawyer Jisuh Lee faced potential contempt proceedings after filing in the Ontario Superior Court a factum containing citations to non-existent cases. Justice Fred Myers discovered the problem when hyperlinks in the document failed to lead to the cited cases and he could not locate them on CanLII.
Courts have responded firmly. In one case, Justice Kenkel ordered a lawyer to prepare entirely new defence submissions meeting specific requirements: numbered paragraphs and pages, precise case citations with pinpoint references to the relevant paragraphs, and citations verified against links to CanLII or similar resources.
Justice Myers emphasized that “the court must quickly and firmly make clear that, regardless of technology, lawyers cannot rely on non-existent authorities or cases that say the opposite of what is submitted.”
Facing potential contempt charges, Jisuh Lee admitted to the facts, apologized to the court, and proposed remedial steps. She explained that her factum had been prepared partly using ChatGPT, resulting in AI hallucinations. Lee committed to completing at least six hours of Continuing Professional Development training in legal ethics and technology, specifically addressing the professional use and risks of AI tools in legal practice.
These Canadian cases are part of a broader international trend: French lawyer Damien Charlotin has compiled 137 instances worldwide in which generative AI produced erroneous legal content, many involving fabricated citations.
The growing prevalence of “AI hallucinations” in legal documents underscores the critical importance of lawyer oversight when using these tools. As Justice Masuhara noted, “generative AI cannot replace the professional expertise necessary within the justice system,” and “lawyers must possess competence in selecting and utilizing any technological tools, including AI-powered ones, to uphold the integrity of the justice system.”
Conclusion
The integration of generative AI into legal practice presents both opportunities and challenges. While tools like ChatGPT can enhance efficiency, the recent incidents in Canadian courts underscore the need for vigilance and ethical responsibility. Lawyers must recognize that AI cannot supplant human expertise and judgment. The consequences of AI-generated errors, from potential contempt proceedings to reputational damage, make rigorous oversight and adherence to ethical standards imperative. As the profession evolves alongside the technology, ongoing education and a commitment to accountability will be essential to preserving trust in the justice system.