
Minimizing Risk in Using AI


AI tools are positively changing how legal research is done, but lawyers should be cautious about relying on generative AI tools that can produce confabulations, better known as "hallucinations".
In a 2024 British Columbia Supreme Court case, Zhang v. Chen, 2024 BCSC 285, a lawyer, Ms. Chong Ke, filed a family law application citing two precedent cases generated by ChatGPT; the cases were later discovered to be non-existent. Ms. Ke acknowledged that she had not verified the citations. The court found this to be an "abuse of process and akin to making false statements to the court." While the court found no evidence of intentional deceit, Ms. Ke was ordered to personally pay costs to the opposing party, and the matter also triggered an investigation by the Law Society of British Columbia.
This case should not dissuade lawyers from using AI tools. Rather, it underscores the importance of choosing technology responsibly, focusing on tools that consistently deliver reliable, verifiable results.
As a legal researcher, I have come to trust Lexis+ AI. It offers enhanced reliability by grounding legal research in a continuously updated repository of verifiable case law, with citation links that are easy to check, thereby minimizing the risk of citing fictitious case law or improper propositions of law.
