Neinstein partner on why lawyers should embrace AI, but never without caution
This article was produced in partnership with Neinstein Personal Injury Lawyers
As artificial intelligence (AI) continues to reshape industries, its role in legal practice is both promising and precarious. While the former is alluring, the latter is where lawyers need to remain clear-eyed and focus their efforts. The integration of AI into law firms must be deliberate, measured, and rooted in reality, not hype.
“At first blush, like a fresh young candidate, AI interviews well – it’s impressive,” says Rose Leto, a partner and head of operations at Neinstein Personal Injury Lawyers. “The law isn’t just about crunching numbers and analyzing text. It’s about people, ingenuity and strategy. In those areas, AI just can’t compete with humans – at least not at my firm.”
The uptake of AI tools in Canadian law firms is accelerating. According to the 2024 State of the Canadian Law Firm Market report by Thomson Reuters and the Canadian Bar Association, over a quarter of lawyers are already using generative AI in their work. An additional 24% plan to follow suit within the next three years.
AI is increasingly a staple in modern legal practice, as it should be. Its productivity potential is clear — rapid document review, automatic chronologies, case summaries, and predictive analysis — but Leto warns it’s far from replacing the practice of law “the old-fashioned way.”
“At Neinstein, we’ve always engaged in technology to improve our firm’s efficiency, so engaging in AI was an organic evolution,” Leto explains, describing it as a natural step for a firm that has long been fearless in its pursuit of the best and brightest technology to streamline workflows. But as with every other tool the firm has adopted over the years, it does its homework first.
“Does this mean the end of late nights spent on legal research, document review or drafting? I’m not so sure.”
While AI can significantly enhance productivity, especially in evidence-heavy practice areas like personal injury and medical malpractice, it is far from foolproof.
Many AI platforms struggle with the complexity of medical documents, Leto explains, meaning AI’s output is only as good as the input it can understand. Most tools omit images, for example, overlooking components that are often crucial in medical malpractice litigation; without that comprehensive analysis, reliance on AI could lead to incomplete or incorrect legal assessments.
“Our medical records are not always nicely typed out – there are handwritten records that are almost impossible to decipher, flowsheets, charts and decision trees,” she says. “The AI can’t always process and analyze the full medical records – leading to limitations and issues with the reliability of the AI-generated answer.”
There’s also the troubling risk of “hallucinations”: a 2024 report from Stanford University found hallucination rates of 58–82% in general-purpose AI tools used for legal tasks. Even specialized platforms like Lexis+ AI and Westlaw AI-Assisted Research showed error rates of 17–33%. Leto poses the inevitable question: “Who is liable when a lawyer relies on a wrong AI-generated response?”
Confidentiality also remains a pressing concern. Not all AI is created equal: open models may retain and reuse data, posing risks to client privacy.
“Best practices would be to purchase a closed AI program, with proper security features to prevent data breaches,” Leto advises, stressing that AI should be trained to reflect a firm’s practice needs and warning that generic, off-the-shelf products rarely suffice.
Neinstein’s approach is intentionally conservative. AI is a tool, not a replacement for what the firm’s lawyers bring to the table. Their emphasis remains on client relationships, practical problem-solving, and advocacy – elements no algorithm can replicate.
“No AI program – no matter how advanced – can find practical solutions to our clients’ unique and non-coded problems,” Leto asserts.
Lawyers must critically assess not only what AI can do, but also what it should do. The technology offers a powerful assist, but the responsibility – and the risk – still rests squarely on human shoulders. Leto’s final message is one of cautious optimism.
“We use AI to improve efficiency where it makes sense for us, but it certainly is not a substitute for the judgement, experience or advocacy that define a great legal practice,” she sums up. “The future of AI in law is still unfolding — and we have the opportunity to shape it.”