Privacy Commissioner offers up principles for using and developing generative AI systems

Lawyer says report shows commissioners 'planting a stake' in ground for generative AI regulation


With the release of a best practices document, privacy commissioners across Canada appear to be staking a claim that regulating generative AI falls within their purview, according to Charles Morgan, the national co-leader of McCarthy Tétrault LLP’s cyber/data group.

“It almost feels a little bit like the privacy commissioners are planting a stake in the ground,” says Morgan. “The privacy commissioners seem to be saying, ‘Wait a minute. We have a big role to play here, and we’re going to assert our authority to regulate this space.’ That is significant.”

Morgan is referring to “Principles for responsible, trustworthy and privacy-protective generative AI technologies,” a guidance document released by the Office of the Privacy Commissioner of Canada and its provincial and territorial counterparts that addresses the risks to privacy posed by generative AI systems and their associated collection, use and disclosure of personal information.

He notes that Bill C-27, which includes the Consumer Privacy Protection Act (CPPA) and the Artificial Intelligence and Data Act (AIDA), is working its way through Parliament, and that embedded in the proposed legislation is the idea that a new regulatory authority could be created to oversee AI systems in Canada. Morgan adds that the federal Privacy Commissioner attempted to participate in the public consultation process on AIDA, but there were some bumps in the road.

“It was a complicated process. And on the very day that the Privacy Commissioner started to make his presentation, the process got shut down while the parties discussed whether it was appropriate to move forward with the text of the amendments or not, so there may have been some frustration on the part of the Privacy Commissioner that he wasn’t able to express himself as he might have.”

The Privacy Commissioner's guidance document comprises nine sections covering legal authority and consent; appropriate purpose; necessity and proportionality; openness; accountability; individual access; limiting the collection, use, and disclosure of personal information; accuracy; and safeguards. Morgan says taking such a broad view of the AI regulatory landscape leads to questions about its future applicability.

“It raises in my mind the concern that we could have a scenario of overlapping jurisdictions and potentially contradictory or inconsistent results [between this and AIDA],” he says, noting that the document addresses issues associated with AI use, including the creation of false and defamatory information, potential violations of human rights, and biases. “The whole gamut of concerns. And they’re legitimate concerns to be addressing but should the Privacy Commissioner of Canada be asserting jurisdiction over these matters? It’s an open question.”

The document itself poses some similarly open questions – or at least open possibilities – even if they are ones the Privacy Commissioner expects users and developers to consider seriously from a privacy point of view. The Privacy Commissioner also draws particular attention to the language used to draft the principles: “Obligations under privacy legislation in Canada will vary by nature of the organization (such as whether it is in the private, health, or public sector) as well as the activities it undertakes. As such, while we use ‘should’ throughout this document, many of the considerations listed will be required for an organization to comply with applicable privacy law.”

Similarly, the principles state that organizations considering using an AI tool should determine if it’s strictly necessary. “Consider whether the use of a generative AI system is necessary and proportionate, particularly where it may have a significant impact on individuals or groups. This means that the tool should be more than simply potentially useful. This consideration should be evidence-based and establish that the tool is both necessary and likely to be effective in achieving the specified purpose.”

Morgan says that this particular phrasing is notable.

“That notion of necessity is something that is very central to privacy law analysis. It’s very strictly interpreted, or it has been by privacy commissioners… Companies might say, ‘Well, sure, [using AI] is necessary,’ and they can make everything necessary, but my concern is the flip side: that the necessary standard, as it has been interpreted and applied in the privacy law context, is very strict, and it forces companies to make a very rigorous analysis that they can defend as to the necessity of the uses of the personal information that they’re implementing.”

However, there are some areas where the Privacy Commissioner is explicit about what is and is not considered acceptable behaviour. Even though the document acknowledges that legal findings haven’t yet come down, it sets out “no-go zones”: AI systems shouldn’t create content “(including deep fakes) for malicious purposes, such as to bypass an authentication system or to generate intimate images of an identifiable person without their consent”; use “conversational bots to deliberately nudge individuals into divulging personal information (and, in particular, sensitive personal information) that they would not have otherwise”; or “generate and publish false or defamatory information about an individual.”

The document also states that consent is required “for collection, use, disclosure and deletion of personal information that occurs as part of the training, development, deployment, operation, or decommissioning of a generative AI system.”

That idea of consent in the creation of a dataset might play an essential role in the future, as the privacy authorities for Canada, Quebec, British Columbia, and Alberta have launched an investigation into ChatGPT and whether its developers violated the individual privacy rights of Canadians by using information found online to train the system.

“That would be a significant outcome,” says Morgan. “Is this suggesting that the privacy commissioners have taken the stance that the use of personal information for the purposes of training their models without consent is in itself a privacy violation?”

If that is the case, what happens next will depend on the fate of the CPPA, Morgan explains.

“Right now, at the federal level, it’s basically a name-and-shame regime. If you violate PIPEDA [Personal Information Protection and Electronic Documents Act], there might be an adverse finding. There won’t be a big fine. If the CPPA is passed, then the Privacy Commissioner of Canada has very large enforcement powers, and big sanctions could apply – millions of dollars or a percentage of worldwide revenue.”
