General counsel can work on these four action points to stay on top of AI regulation: Gartner

Lawmakers may be lagging behind, but heads of legal can still keep up

While Gartner predicts it may take jurisdictions years to properly regulate artificial intelligence, the firm has identified four action points on AI that general counsel and legal leaders can consider now, as lawmakers begin to propose guidance on increasingly pervasive large language model (LLM) tools such as ChatGPT and Google Bard.

These four action points would establish AI oversight and enable an organisation to continue leveraging LLM tools while awaiting final regulation.

First: Embed transparency in AI use

“Transparency about AI use is emerging as a critical tenet of proposed legislation worldwide,” said Laura Cohn, senior principal for research at Gartner. “Legal leaders need to think about how their organisations will make it clear to any [human] when they are interacting with AI.”

Legal leaders could update the privacy notices and terms and conditions on their company’s websites to reflect AI use in marketing content or in its hiring process. But Gartner recommended two even better practices: legal leaders could either post a point-in-time notice when using AI to collect data or develop a separate section on the organisation’s online ‘trust centre’ that specifically discussed the ways the company leveraged AI.

Companies could also update their supplier code of conduct to instruct vendors to notify them about plans to use AI.

Second: Ensure risk management is continuous

Besides an organisation’s legal team, general counsel should involve data management, privacy, information security, compliance, and other relevant business units in AI risk management to get a fuller picture of the risks involved, Gartner advised. A “cross-functional effort” was especially important considering that in-house counsel and general counsel did not typically own the business processes they reviewed or helped control.

“[General counsel] and legal leaders should participate in a cross-functional effort to put in place risk management controls that span the lifecycle of any high-risk AI tool,” said Cohn. “One approach to this may be an algorithmic impact assessment that documents decision making, demonstrates due diligence, and will reduce present and future regulatory risk and other liability.”

Third: Build governance that includes human oversight and accountability

“One risk that is very clear in using LLM tools is that they can get it very wrong while sounding superficially plausible,” said Cohn. “That’s why regulators are demanding human oversight, which should provide internal checks on the output of AI tools.”

Gartner suggested that companies designate an AI point person responsible for helping technical teams design and implement human controls over AI-powered tools. This employee could be the digital workplace head, a part of the organisation’s security or privacy staff, or an employee equipped with enough technical know-how, depending on how integrated the company’s AI plans and functionalities were with the rest of the business.

Alternatively, the general counsel could initiate establishing a digital ethics advisory board of legal, operations, IT, marketing, and external experts who could help different departments and project teams manage ethical issues and alert the board of directors to important findings.

Fourth: Guard against data privacy risks

“It’s clear that regulators want to protect the data privacy of individuals when it comes to AI use,” said Cohn. “It will be key for legal leaders to stay on top of any newly prohibited practices, such as biometric monitoring in public spaces.”

To stay on top of the data privacy risks uniquely posed by AI, Gartner suggested that legal and compliance leaders apply “privacy-by-design” principles to their company’s AI initiatives. Applying these principles meant requiring privacy impact assessments early in projects, or dedicating to every project a privacy team member who would assess privacy risks before implementation.

Internally, Gartner advised that organisations inform their workforce that everything entered into the public versions of LLM tools could become part of the AI’s training dataset – meaning sensitive or proprietary information used in an LLM tool’s prompts could find its way into the same LLM’s responses for users outside the business.

To minimise their company’s vulnerability to AI data privacy risks, legal leaders and general counsel could develop manuals that adequately informed staff of AI risks and establish guidelines on how to use AI-powered tools safely.
