How artificial intelligence can help navigate employee cannabis use legal issues

Ben Alarie

With the impending legalization of recreational marijuana in Canada and the rapid rise of artificial intelligence, times are changing for Canadians. Different as they may seem, these developments have something surprising in common: Artificial intelligence is being used by employers and employment lawyers to navigate the challenging question of what constitutes legally justified testing for employee cannabis use.

The availability of AI in this context is timely. Employers are wondering how to prepare for the prospect of increased cannabis use by employees and the safety issues that may result, and they are increasingly engaging counsel to identify more precisely the circumstances in which they can mandate drug testing.

Yet, questions abound. Can prospective employees be systematically screened for drug use? Can employers require all employees to submit to random drug testing? Do employers have to wait for injury or property damage to materialize before insisting on a test? If a co-worker reports that a colleague was smoking marijuana on a break, is that sufficient justification for testing?

A challenge for counsel hoping to provide clear guidance to employers is that no statutory regime provides a bright-line test for when employers can mandate drug tests. It has been left to judges and adjudicators to develop an approach through the common law. Perhaps unsurprisingly, there are hundreds of decisions addressing a variety of contexts, including pre-employment testing, reasonable-cause testing, post-incident testing, post-reinstatement testing and random testing. This often makes comprehensive research impractical or extraordinarily time-consuming. Even when thorough research is done, the resulting legal opinion is often hedged and necessarily limited to a specific set of facts and a particular context.

In the growing body of case law, judges and adjudicators grapple with two competing interests: privacy and safety. On one hand, employees understandably prefer to maintain their privacy and freedom from invasive drug testing. Drug testing is time-consuming, can impinge upon or influence an employee’s conduct while away from work and may foster an air of distrust in a workplace. On the other hand, employers have an interest in, and a duty to provide, safe workplaces. The prospect of drug testing can deter employees who might otherwise use cannabis in the workplace, making the workplace safer for everyone.

In balancing privacy and safety, courts and adjudicators examine a range of relevant factors. For example, was the employee in a safety-sensitive role? Did they show signs of impairment (abnormal speech, odour, exaggerated gait, etc.)? Was there physical evidence of drug use? What was the severity of the realized damage to person or property, if any? What alternative explanations were possible for the incident? What precautions were taken? Even if there was little realized damage, was there a near miss? Did the employer explicitly consider the worker’s privacy interests before requiring a drug test? Was the testing random or carefully targeted based on other considerations?

When judges and adjudicators present their analyses and the corresponding legal result, they are usually careful to precisely outline their findings about the relevant factors. This is, of course, helpful to lawyers attempting to identify the balance that has been struck in similar cases in the past. It is also useful for machine-learning systems that can use past decisions as a source of data to provide tailored insights about how decision-makers have traded off various factors. Indeed, software is now available that uses artificial intelligence to synthesize the findings of hundreds of relevant decisions to make predictions about how relevant factors would interact in novel circumstances.

The software uses two steps to predict whether drug testing is legally permissible in a given scenario. The first step involves gathering information from the user about the context of the drug testing at issue. The information considered by the AI includes the relevant province, whether the drug test is post-incident or based on reasonable cause, the type of test (to assess invasiveness), the nature of the employee’s work responsibilities, the circumstances surrounding the incident and the resulting harms (if there was an incident), the specific indicators of impairment (if relevant), any alternative explanations that may have been provided and so on.
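For illustration only, this intake step could be modelled as a structured record along the following lines. The Python sketch below is an assumption about how such inputs might be organized; the class, field and enum names are hypothetical and are not Blue J Legal’s actual schema.

```python
# Hypothetical sketch of the intake step. All names below are illustrative
# assumptions, not the vendor's actual data model.
from dataclasses import dataclass, field
from enum import Enum
from typing import Optional


class TestTrigger(Enum):
    POST_INCIDENT = "post_incident"
    REASONABLE_CAUSE = "reasonable_cause"
    RANDOM = "random"


class TestType(Enum):  # ordered roughly by invasiveness
    ORAL_FLUID = "oral_fluid"
    URINALYSIS = "urinalysis"
    BLOOD = "blood"


@dataclass
class DrugTestScenario:
    province: str
    trigger: TestTrigger
    test_type: TestType
    safety_sensitive_role: bool
    # e.g. "odour of cannabis", "abnormal speech", "exaggerated gait"
    impairment_indicators: list[str] = field(default_factory=list)
    incident_harm: Optional[str] = None   # realized harm, if any
    near_miss: bool = False
    alternative_explanations: list[str] = field(default_factory=list)
    privacy_considered: bool = False      # did the employer weigh privacy first?


# Example: a post-incident test of a worker in a safety-sensitive role.
scenario = DrugTestScenario(
    province="ON",
    trigger=TestTrigger.POST_INCIDENT,
    test_type=TestType.ORAL_FLUID,
    safety_sensitive_role=True,
    impairment_indicators=["odour of cannabis", "abnormal speech"],
    incident_harm="forklift collision causing minor property damage",
)
```

Note that the fields mirror the balancing factors courts weigh, which is what makes past decisions usable as structured training data in the first place.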

At the second step, the software uses AI to compare the information provided by the user to the patterns found in the data from all cases in the system and produces a report on how likely it is that the drug test would be found permissible or impermissible. The report includes a confidence level (expressed as a percentage), a plain-language explanation of the predicted outcome and a list of the past decisions most similar to the circumstances presented to the system. The user can run multiple scenarios and see how the predicted outcome changes under different assumptions. AI-based prediction systems can achieve 90 per cent or greater accuracy, measured by testing the system on decisions it has never seen before. New decisions are fed back into the AI so the system improves its predictions and keeps pace as judges and adjudicators resolve further uncertainties.
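The article does not disclose the underlying model, so the following is a minimal sketch of the second step under stated assumptions: a generic scikit-learn classifier trained on synthetic placeholder data, with a hold-out split to estimate accuracy, a predicted probability serving as the confidence level, and a nearest-neighbour lookup to surface similar past decisions.

```python
# Illustrative sketch only: a generic classifier and nearest-neighbour lookup
# standing in for whatever model the commercial system actually uses.
import numpy as np
from sklearn.ensemble import GradientBoostingClassifier
from sklearn.metrics import accuracy_score
from sklearn.model_selection import train_test_split
from sklearn.neighbors import NearestNeighbors

# X: one row of encoded factors per past decision; y: 1 = test upheld, 0 = not.
# Both are synthetic placeholders, not real case data.
rng = np.random.default_rng(0)
X = rng.random((400, 9))
y = (X[:, 0] + X[:, 3] > 1.0).astype(int)

# Hold out decisions the model has never seen to estimate accuracy,
# as described above.
X_train, X_test, y_train, y_test = train_test_split(
    X, y, test_size=0.2, random_state=0
)
model = GradientBoostingClassifier().fit(X_train, y_train)
print(f"held-out accuracy: {accuracy_score(y_test, model.predict(X_test)):.2f}")

# Score a novel scenario (the encoded DrugTestScenario from step one) and
# report the predicted probability as a percentage confidence level.
new_scenario = rng.random((1, 9))
p_upheld = model.predict_proba(new_scenario)[0, 1]
print(f"probability the test is found permissible: {p_upheld:.0%}")

# Surface the most similar past decisions for inclusion in the report.
nn = NearestNeighbors(n_neighbors=5).fit(X_train)
_, similar = nn.kneighbors(new_scenario)
print("indices of most similar past cases:", similar[0])
```

In practice the feature encoding and model family would differ, and retraining on newly released decisions would supply the feedback loop described above; the point is that hold-out evaluation and predicted probabilities map directly onto the accuracy figures and confidence levels the article mentions.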

Just as digital legal research has overtaken paper-based methods in speed and effectiveness, AI-based tools represent the next step in the evolution of legal research. The practical utility of AI-based research tools that can grapple effectively with timely and challenging legal issues is clear. Many lawyers are fearful of AI, but here’s another prediction to ponder: in five years, it will no longer be AI that lawyers fear; instead, many will be most fearful of having to advise clients on legal issues without the assistance of AI.

Benjamin Alarie is CEO of Blue J Legal and the Osler Chair in Business Law at the University of Toronto. Blue J Legal designs software that uses artificial intelligence and machine learning to predict legal outcomes.
