Artificial intelligence-powered liability shakes up the medical field

As AI expands in health care, regulation and the law on professional responsibility will evolve with the technology

Early in Brian Moher’s legal career, he acted on a case where a machine substituted for a medical decision-maker, leading to a record-breaking fine and jail time for the practitioner.

A Hamilton optician, Bruce Bergez, ran a chain of optical stores called Great Glasses. The business offered free eye tests, using a technology that measured refractive error in the customer’s vision and generated eyeglass prescriptions. The College of Optometrists went after Bergez because dispensing corrective lenses via a computer-generated prescription rather than a prescription from an optometrist or physician violated Ontario’s Regulated Health Professions Act (RHPA).


Brian Moher

Despite a Superior Court judgment ordering Bergez and his companies to comply with provincial law, he continued the illegal business practice, growing the operation from three stores to more than twenty. He appealed a contempt order unsuccessfully and was denied leave to appeal to the Supreme Court of Canada, and his continued non-compliance earned him a $17-million fine, which he also appealed, again without success. Seven years after the initial court order and still non-compliant, he served one year in prison for civil contempt.

Section 27(2) of the RHPA lists the “controlled acts” that are the exclusive domain of doctors, nurses, and other health care providers. These include delivering a baby, diagnosing an illness, and prescribing or dispensing eyeglasses other than simple magnifiers. Section 27(1) states that no one may perform a controlled act other than a member authorized to do so by a health profession act, or a person to whom such a member has delegated the act.

Contrasting the Great Glasses saga with current technology, in which artificial intelligence and machine-learning tools can read medical images and generate diagnoses, raises the question of how the RHPA now applies, says Moher, who acts as plaintiff’s counsel in medical negligence actions at Gluckstein Lawyers.

More than any other area, medicine must keep up with technological advancement, and doctors are obligated to ensure the “optimum standard of care,” he says. Yet, asks Moher, what degree of judgment, observation, and inspection is still required from the professional when relying on these sophisticated technologies?

“We are in an era when we seem to be accepting, collectively, that machines may be better equipped to perform these tasks. That does leave a question as to, if we are engaging in a period of transition, whether the current legislative framework is sufficient.”

Studies show AI can perform as well as or better than humans at several “key healthcare tasks,” according to a 2019 article in the UK’s Royal College of Physicians’ Future Healthcare Journal. Doctors use machine learning to predict treatment protocols and detect cancer in radiology images. They use natural language processing to create, analyze, and classify clinical documentation and published research. AI-embedded surgical robots help surgeons see, make precise incisions, and stitch wounds. And AI is being developed to diagnose and treat disease.

According to Paul Harte, a medical malpractice trial lawyer, AI represents an “obvious opportunity” for tasks that humans do not do well. He says that tort law may soon make the use of certain AI tools part of a doctor’s standard of care.


Paul Harte

“At some point, you’re going to see physicians and hospitals being held liable for not using AI.”

In terms of liability, current legal structures will adequately accommodate the increased use of AI in health care, says Harte. But this technology introduces a third player into the process: the developer of the AI system.

While there are well-known class actions involving failed hip implants and defective hernia meshes, product liability is “relatively uncommon” in medical malpractice, Harte says. AI will introduce new subject matter and multiple parties with overlapping liability, elements that will make an already complex area of litigation even more complicated.

Increased AI use will also affect informed consent discussions with patients, says Harte. Patients will likely be entitled to know the extent to which a doctor relies on AI. And if the doctor is using a reputable, high-quality diagnostic tool but opts to deviate from its diagnosis, the doctor will probably be required to justify that choice and allow patients to make an informed decision about whether they accept the doctor’s rationale, he says.

Harte adds that regulation is a critical component.

In the research paper “Regulating the Safety of Health-Related Artificial Intelligence,” authors Michael Da Silva, Colleen Flood, Anna Goldenberg, and Devin Singh examine Health Canada’s approach to regulating AI in health care and identify three safety-related concerns: general safety, algorithmic bias, and privacy/security.


Michael Da Silva

One potential regulatory gap lies in how the government classifies an AI technology as a medical device, says Da Silva, a senior fellow in AI and health care at the University of Ottawa’s AI + Society Initiative and permanent lecturer at the University of Southampton School of Law.

Canada’s medical device regulations govern the sale and importation of medical devices. So, if an AI tool meets the definition of a medical device, he says, it will only be regulated for its commercial uses or, in rare circumstances, for research purposes. That means doctors who developed an AI medical device and used it in their hospital could escape regulation entirely, as long as they did not sell it to anyone.

The regulations currently exclude AI from licensing requirements if the device “is ‘not intended to acquire, process, or analyze a medical image or signal,’ ‘intended to display, analyze, or print medical information,’ ‘only intended to support’ provider decision making, and ‘not intended to replace … clinical judgment,’” write Da Silva and his co-authors in the article.

“What this means is that there are certain important roles that AI tools play in the actual health care system that aren’t being regulated, either because they don’t fit under the definition and its formal version or because Health Canada doesn’t understand them as fitting under the definition,” says Da Silva. This carve-out captures administrative support, patient management, and tools with a “wellness or lifestyle application,” he adds.

Adaptive machine learning, which changes how a tool operates based on inputs acquired during use, also falls outside the current regulations, which means these tools cannot be approved for the Canadian market. But Health Canada is developing a new regulatory framework for adaptive machine-learning tools, and Da Silva says he advised on the framework as part of an external working group.

“This is a really exciting moment because we’re going to get these new regulations,” says Da Silva. “But as of this moment … we’re at a point where we can’t actually benefit from maybe some of the most exciting developments that could come along because adaptive AI could solve a lot of real problems that we have in the Canadian health care system.”

Adaptive AI could produce more accurate and efficient care and address human biases. To maximize those benefits, Canada will need to permit adaptive machine learning, “and we’re not there yet,” Da Silva says.

Harte says the fear of liability should not halt the progress of AI, which can reduce lawsuits and patient harm in a system that experiences thousands of avoidable injuries every year.

“Based on my almost 30 years of experience in medical malpractice, a lot of those errors are amenable to prevention through the use of technology, including artificial intelligence.”

AI health care applications

  • predicting treatment protocols and detecting cancer in radiology images
  • creating, analyzing, and classifying clinical documentation and published research
  • robotic surgical tools
  • diagnosis and disease treatment
  • administrative activities