Gartner IDs six ChatGPT legal and compliance risks to stay ahead of

Using AI may inadvertently publicise confidential business data, among others

Gartner IDs six ChatGPT legal and compliance risks to stay ahead of

Gartner has identified six ChatGPT-related legal and compliance risks orgnisations should stay ahead of.

As large language models such as ChatGPT steadily gain popularity and their use in business operations becomes more and more acceptable, companies risk accumulating unchecked vulnerability to generative artificial intelligence-related legal and compliance risks they do not understand.

To address this, Gartner’s legal and compliance practice has issued tips on how legal and compliance officers can manage their organisation’s exposure to key ChatGPT risks and identify guardrails supporting the responsible use of generative AI resources.

“Legal and compliance leaders should assess if these issues present a material risk to their enterprise and what controls are needed, both within the enterprise and its extended enterprise of third and nth parties,” said Gartner legal and compliance practice senior director analyst Ron Friedmann. “Failure to do so could expose enterprises to legal, reputational, and financial consequences.”

Risk 1: Inaccurate answers

Friedmann said that LLM tools – including ChatGPT – tended to provide plausible-sounding but ultimately incorrect information.

“ChatGPT is … prone to ‘hallucinations,’ including fabricated answers that are wrong, and nonexistent legal or scientific citations,” he said.

Friedmann’s advice to legal and compliance leaders in this regard was to issue guidance requiring the review of all ChatGPT-generated output for accuracy and appropriateness before accepting them.

Risk 2: Data privacy

Organisation members may feed confidential information into ChatGPT – which automatically becomes part of ChatGPT’s training set for future output unless chat history is manually disabled.

“Sensitive, proprietary, or confidential information used in prompts may be incorporated into responses for users outside the enterprise,” Friedmann said. He suggested that organisations establish a company-wide compliance framework for public LLM use, which should include a clear prohibition against entering sensitive organisational or personal data into ChatGPT and other LLM tools.

Risk 3: Model and output bias

OpenAI’s efforts to minimise occurrences bias and discrimination in ChatGPT are still ongoing – and likely never to stop, Gartner suggested.

“Complete elimination of bias is likely impossible,” Friedmann said. “But legal and compliance need to stay on top of laws governing AI bias and make sure their guidance is compliant. This may involve working with subject matter experts to ensure output Is reliable and with audit and technology functions to set data quality controls.”

Risk 4: Copyright and intellectual property

ChatGPT’s near-infinite library of internet data was likely to include copyrighted material, Gartner warned, as ChatGPT did not normally cite its source references or explain how its output had been generated. ChatGPT outputs thus ran the risk of violating copyright or IP protection.

“Legal and compliance leaders should keep a keen eye on any changes to copyright law that apply to ChatGPT output and require users to scrutinize any output they generate to ensure it doesn’t infringe on copyright or IP rights,” Friedmann said.

Risk 5: Cyber security

ChatGPT and other large language models are susceptible to “prompt injection,” a hacking technique feeding the generative AI model malicious, adversarial prompts to trick it into performing tasks that it was not intended for, including writing malware codes or developing phishing sites resembling popular and well-trusted sites.

“Legal and compliance leaders should coordinate with owners of cyber risks to explore whether or when to issue memos to company cybersecurity personnel on this issue,” said Friedmann. “They should also conduct an audit of due diligence sources to verify the quality of their information.”

Risk 6: Consumer protection

Gartner pointed out that a recently passed “chatbot law” in California required organisations to clearly and conspicuously disclose each time a consumer was communicating with AI. Even without such a law, businesses that failed to disclose ChatGPT usage to consumers, e.g. as customer support, ran the risk of being charged under unfair practice laws or at the very least losing their clients’ trust.

“Legal and compliance leaders need to ensure their organisation’s ChatGPT use complies with all relevant regulations and laws and appropriate disclosures have been made to customers,” said Friedmann.

Recent articles & video

Charter applies to self-governing First Nation’s laws, but s. 25 upholds Charter-breaching law: SCC

Parks Canada partnering with Indigenous groups to implement Indigenous systems of law, governance

Canada's Finest Legal Professionals: Celebrate Excellence at the Canadian Law Awards!

Are you keeping up with the dizzying pace of innovation?

Amanda Fowler on how she balances her sports law practice and legal role at Aviva Canada

Top 25 Most Influential Lawyers 2024 - nominations now open

Most Read Articles

Canada Revenue Agency announces penalty relief for bare trusts filing late returns

Ontario Court of Appeal upholds spousal support order in 'unusual' divorce case

Ontario Superior Court awards partner share in the estate despite the absence of marriage

Developing an AI oversight system is vital for organizations: Tara Raissi at Beneva