When it comes to AI regulation, Canada can do better

In 2018, the Future of Life Institute — a think tank led by tech industry leaders such as the founders of DeepMind, Google executives and Elon Musk — pledged not to develop artificial intelligence weapons that could make decisions to take lives and called for regulation to prevent their development.

Christopher Alam

Most of us hope not to encounter autonomous lethal robots in our lifetime. But as AI pervades more benign fields, including autonomous vehicles, financial management and health sciences, regulatory bodies around the world continue to suggest and debate a range of approaches toward its governance.

Globally, countries are considering the guiding philosophies behind regulation.

Last year, U.S. Federal Reserve Governor Lael Brainard proposed that AI regulation in the financial services sector be balanced: it should mitigate risk without hampering innovation, and oversight of supervised institutions should not be so onerous that it drives experimentation into unsupervised areas. Almost simultaneously, the EU’s AI ethics chief, Pekka Ala-Pietila, suggested that regulation be kept to a minimum in order to give innovation room to develop.

In the same year, the Israel Innovation Authority issued a paper entitled “The Race for Technological Leadership” in which it asked the question, “To protect the public, should there be a stringent threshold that requires the production of a retrospective account of the algorithm’s decision-making process, or would it be better to lower the bar of culpability in order to promote the adoption and development of AI-based innovation with all its benefits?”

Regulatory authorities are not aligned in their philosophical approach.

Canada, despite its stated interest in innovation in recent years, has engaged far less with these questions.

In the United States, by contrast, AI regulation is a focus across a number of federal departments. An executive order issued by the Trump White House earlier this year has led to a remarkably robust road map for the U.S.’s AI strategy. This road map addresses ethical, legal and societal implications, aiming at AI that can be deployed safely and securely. It also dovetails with other legislative initiatives, such as the Open Government Data Act. In Canada, open government data remains a weak, non-legislated initiative of the Treasury Board. More recently, the Treasury Board also issued its 2019 Directive on Automated Decision-Making to begin guiding government departments in their use of automated decision systems.

In April, the U.S. Food and Drug Administration released a paper requesting comment on how best to regulate AI-driven medical devices in a way that complements standard pre-market approvals with comprehensive total product life cycle monitoring. The 2019 Canadian federal budget, on the other hand, proposes only a regulatory sandbox for the development of AI products, without much detail.

A positive Canadian example is the federal government’s approach to the regulation of driverless cars. Last year, it rolled out development guidelines, which dovetail with provincial approaches to road testing. The result is a cohesive regulatory framework that will provide those wishing to engage in product development with a firm understanding of how to proceed in Canada.

Nonetheless, Canada should make AI regulation, and engagement with the philosophies that guide it, a greater priority. Risk mitigation aside, without a regulatory framework that advances quickly, AI developers may look to jurisdictions outside Canada that inspire confidence through governance. And without contributing to the discussion on the philosophy of regulation, we may simply be dragged along by first movers.

Christopher Alam is a partner at Gowling WLG and head of the firm’s Toronto Real Estate and Financial Services department. He speaks and writes on AI regulatory matters at www.jurisai.ca. Follow him at @aijuris.
