Human supervision, data privacy essential for legal-sector AI use: Evisort GC and chief tech officer

'With the right data and privacy guidance… it can be an accelerator for legal professionals:' CTO


For the legal sector, the rapid rise of generative artificial intelligence holds much potential and much responsibility.

“It is important for the legal space to adopt generative AI,” says Amine Anoun, founder and chief technology officer at Evisort, which makes AI-powered contract management software. “It offers a lot of functionality. It improves efficiency for legal professionals around drafting and negotiation of contracts, data extraction, and summarization on existing contracts in their repository.”

“As long as it's done with the right data and privacy guidance around it, it can be an accelerator for legal professionals.”

Before helping launch Evisort in 2018, Anoun was a data scientist at Uber and graduated from MIT’s Sloan School of Management.

The ethical considerations vary depending on the use case, says Margaret Minister, general counsel at Evisort. Lawyers are responsible for reviewing any output or work product that goes to a client. Just as they cannot delegate legal work to a paralegal without supervision, they cannot delegate it to AI without human supervision, she says.

Anoun adds that, particularly when a generative AI tool is used for decision-making, it is critical that a human can override any AI-generated decisions, outputs, or recommendations at any point.

There are also data-privacy considerations. Lawyers are obliged to protect identifying and sensitive information about data subjects under their jurisdictions' privacy legislation. “That is probably the biggest consideration because that applies across all industries and all segments,” says Minister. Before joining Evisort in 2021, Minister was a partner at Pierce Atwood and attended Harvard Law School.

When using generative AI for legal work, the generated content’s ownership raises another ethical consideration, says Anoun. Businesses adopting the technology need clear policies on who owns that content.

He says tools trained on data must also be developed and deployed with a mind toward bias and discrimination.

“Especially when it's used for decision making, it is important that the AI is trained on diverse and representative data sets and regularly tested for bias.”

There is a distinction between publicly available tools and those, like Evisort, that are available through application-programming-interface (API) integration, says Minister. The latter are trained on customer data rather than scraped from the internet, so they do not run into the same ownership and intellectual-property issues as publicly available tools that pull data from the internet.

It is also essential that customer data be anonymized to protect it and prevent it from leaking downstream, says Anoun. To benefit from the AI’s value while protecting customer data, users must continuously monitor the AI to verify that it delivers the proper outcomes and remains compliant with applicable laws. They must also assess each use case: what data is being shared, which integration methods are used, what retention policies are in place, and what anonymization techniques and safeguards are in play, he says.

“Having a clear data-retention policy is critical,” says Minister. “Because if you have the data being stored by a publicly available AI tool or even your vendor for too long, there's a potential risk of data leakage. Those policies are very important to understand, and each tool has a different policy or no policy at all.”

While many ethical questions are associated with using generative AI, there are also ethical risks in not using it, because the technology can enhance service delivery and lower client costs.

“We actually have an increasing number of attorney governance rules – these are the rules that are promulgated by various states that govern lawyers – and various states, including the American Bar Association, have called out that lawyers have a responsibility to develop competence in the risks and the benefits of new technology…Whether lawyers want to adopt it or not, they need to learn about it, they need to understand what the benefits are for their clients for their own practices, and they also need to understand what the risks are.”
