Developing an AI oversight system is vital for organizations: Tara Raissi at Beneva

An in-house perspective on the importance of a proactive approach to legal and regulatory risk

Tara Raissi

Artificial intelligence is changing how organizations operate and how customers interact with them. AI-driven tools such as chatbots can process natural-language prompts and respond in fluent, human-like conversation. These systems, which range from chatbots answering customer questions to general-purpose tools that analyze massive datasets and generate insights, are easy to use and readily available.

As AI integrates into more sectors, concerns about its use and potential drawbacks become pronounced. Reliability, accountability, transparency, fairness, privacy, safety and ethical use are at the core of proposed international standards and legislation to oversee AI. Organizations must implement systems to monitor the development, deployment and use of such technology to balance gaining the benefits of AI-powered tools and minimizing their risk.

Legal and regulatory risks are inevitable without a framework for oversight throughout the life cycle of AI-driven systems. An excellent place to start is assessing the level of risk associated with each automated process to determine where human intervention is needed. The findings from this assessment will help the organization put controls in place, such as a system of governance, protocols and training programs for using and interacting with AI-driven tools. These measures can improve the likelihood of responsible, trustworthy and privacy-conscious AI use across the organization.
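To make that assessment concrete, the minimal Python sketch below scores hypothetical AI use cases into risk tiers that determine the level of human oversight. The attributes, tiers and threshold are illustrative assumptions only, not drawn from any regulatory standard; a real framework would come from legal and compliance review.

    from dataclasses import dataclass

    # Minimal sketch of a risk-tiering rule for AI use cases.
    # All criteria and thresholds below are hypothetical illustrations.

    @dataclass
    class AIUseCase:
        name: str
        handles_personal_data: bool   # processes customer or employee data
        customer_facing: bool         # output reaches the public directly
        automated_decision: bool      # decides outcomes without human input

    def risk_tier(u: AIUseCase) -> str:
        # Each True attribute adds one point to the risk score.
        score = sum([u.handles_personal_data, u.customer_facing, u.automated_decision])
        if score >= 2:
            return "high: human review required before deployment and for outputs"
        if score == 1:
            return "medium: periodic audits and documented controls"
        return "low: standard monitoring"

    print(risk_tier(AIUseCase("website chatbot", False, True, True)))

A scoring rule this simple is only a starting point, but it forces the organization to inventory its AI uses and attach an explicit oversight obligation to each one.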

“ChatGPT can make mistakes. Consider checking important information.”

OpenAI released ChatGPT in November 2022, and it was among the first of several similar chatbots made available to the public that can converse in English. ChatGPT is built on large language models (LLMs), algorithms trained on vast amounts of text to learn the underlying patterns and structures of language. An LLM processes a user’s prompt and generates new content, such as text, based on patterns and examples in the data it has been trained on.

Concern about the reliability of AI-generated content is a driving force behind the demand for increased oversight. ChatGPT warns users that it can make mistakes and that important information should be checked. Though often correct, the generated output is not rooted in fact or knowledge. Instead, it relies on probability: the algorithm processes natural-language input and predicts the next word based on patterns it has already seen. As a result, the output is not always correct. At times, “hallucinations,” in which the model produces plausible-sounding content not grounded in its training data, can even lead to nonsensical results.
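As a toy illustration of this probabilistic principle, the minimal Python sketch below builds a bigram “model” from a few words of text and samples each next word from observed frequencies. Real LLMs use neural networks trained on vast corpora, but the point is the same: output is sampled from learned statistics, not retrieved as fact. The training snippet here is invented for illustration.

    import random
    from collections import Counter, defaultdict

    # Toy "language model": predict the next word purely from how often
    # words followed each other in the (tiny, invented) training text.
    training_text = (
        "bereavement fares apply to travel after a death "
        "bereavement fares apply to immediate family"
    )

    # Count how often each word follows each other word.
    counts = defaultdict(Counter)
    words = training_text.split()
    for prev, nxt in zip(words, words[1:]):
        counts[prev][nxt] += 1

    def predict_next(word):
        """Sample the next word in proportion to how often it followed `word`."""
        followers = counts[word]
        if not followers:
            return None
        choices, weights = zip(*followers.items())
        return random.choices(choices, weights=weights)[0]

    # Generate a few words: fluent-looking, but driven only by statistics.
    word = "bereavement"
    for _ in range(5):
        word = predict_next(word)
        if word is None:
            break
        print(word, end=" ")

The sketch never consults a source of truth; it only continues a pattern. That is why fluent output can still be wrong.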

Aside from incorrect or hallucinatory outputs, AI-generated results also carry other unintended consequences, such as introducing bias or producing discriminatory or unethical responses. For example, ChatGPT may learn and repeat biases taken from a prompt or rooted in the data it was trained on. Organizational reliance on such responses can raise human rights issues and lead to litigation.

Liability for AI-driven systems cannot be evaded.

Recognizing the limitations of AI accuracy is paramount for organizational accountability. An organization using AI cannot disassociate itself from liability for generated content or its potentially harmful outcomes. This was recently exemplified in the Moffatt v. Air Canada decision. An Air Canada customer in British Columbia was misled by the company’s website chatbot about the airline’s bereavement fares. In rendering its decision, the Civil Resolution Tribunal presiding over the matter was unequivocal that Air Canada could not evade liability for its AI-driven website chatbot.

Oversight of the chatbot to ensure it communicated correct information about airline policies could have avoided the issue in the first place. The award against Air Canada to remedy the chatbot’s misrepresentation confirms that organizations are liable for representations made to the public, whether by a customer service representative or a company chatbot.

What happens to sensitive personal information?

AI governance and privacy are intrinsically linked. The increasing use of AI technology raises concerns about protecting sensitive personal information. As AI-driven systems become more sophisticated and pervasive, they can collect, analyze and process confidential and sensitive information. OpenAI’s privacy policy states that collected data includes log data, usage data, device information, cookies, user content, communication information, social media information and account information.

Aside from the usual cybersecurity risks that any chatbot is vulnerable to, an organization needs to raise awareness about the extent of ChatGPT’s data collection. ChatGPT can collect conversations and anything that is provided to it as input. It may repeat sensitive information supplied by one user in responses to other users. As a result, caution must be exercised when engaging in “conversation” with the chatbot. It should not be provided with prompts that contain personal, confidential or proprietary information.
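One practical control is to redact obvious identifiers before any text leaves the organization. The minimal Python sketch below shows the idea; the patterns are illustrative assumptions only, and production redaction would need far broader coverage (names, addresses, account numbers, free-text identifiers) plus human review.

    import re

    # Minimal sketch of pre-prompt redaction: strip obvious personal
    # identifiers before text is sent to an external chatbot.
    # These patterns are illustrative, not exhaustive.
    PATTERNS = {
        "email": re.compile(r"\b[\w.+-]+@[\w-]+\.[\w.]+\b"),
        "phone": re.compile(r"\b\d{3}[ .-]\d{3}[ .-]\d{4}\b"),
        "sin":   re.compile(r"\b\d{3}[ -]?\d{3}[ -]?\d{3}\b"),  # Canadian SIN format
    }

    def redact(prompt: str) -> str:
        """Replace matches of each pattern with a labeled placeholder."""
        for label, pattern in PATTERNS.items():
            prompt = pattern.sub(f"[{label.upper()} REDACTED]", prompt)
        return prompt

    print(redact("Client Jane, jane@example.com, 604-555-0199, SIN 123 456 789"))

Automated filtering of this kind supplements, but does not replace, training that tells staff never to paste sensitive material into a chatbot in the first place.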

Whose content is it?

The interaction of AI-driven tools, data ownership and intellectual property rights is a source of litigation and regulatory concern. Since generative AI creates text based on patterns and examples in the data it has been trained on, there may be disputes over the ownership of the content it produces, giving rise to claims of copyright infringement. Ownership concerns extend to inputs as well: if proprietary information is provided to the chatbot as part of a prompt, that information may be revealed to another user in response to a different prompt.

The rapid growth of AI-powered tools in recent years has brought concerns about their risks to the forefront. While widespread adoption of these tools is still nascent, organizations must take a proactive approach to minimize risks. A system of oversight, including controls, guidelines and guidance about AI interactions across the organization, will strengthen the company’s ability to use this technology safely and effectively.
