Artificial intelligence

AI will change the legal profession, just not how you are expecting.

Fernando Garcia is looking forward to the day when he can get his hands on Beagle, an automated contract analysis system powered by artificial intelligence that reads contracts in seconds, highlights key information visually with easy-to-read graphs and charts and gets “smarter” with each reviewed contract. Also on his bucket list is an offering by yet another Canadian legal tech startup, Blue J Legal, that also uses AI to scan legal documents, case files and decisions to predict how courts will rule in tax decisions. At a time when the majority of in-house counsel are under intense pressure to shave costs and run a lean team, such powerful tools are a godsend. “There’s always that pressure to do more with less, so when a tool comes along that can provide more efficiency, more risk mitigation and can let you do your job better and focus on providing value added, it is a strategic advantage,” notes Garcia, general counsel, government affairs and corporate secretary with Nissan Canada Inc. “It’s going to fundamentally change our job.”



This fundamental change has been a long time coming. Nearly two decades ago, the former justice of the High Court of Australia, Michael Kirby, remarked with uncanny prescience in a speech before the Bombay High Court in Mumbai that “it would be a bold observer” who would deny the potential of artificial intelligence to “enhance” lawyering and judicial decision-making. But even he could not have foreseen the extent to which artificial intelligence is now, in many ways, already everywhere. Ever since Watson, IBM’s AI system, captured the public imagination and blew away the tech industry six years ago when it defeated two champions on the popular television quiz show Jeopardy!, the technology has been developing at a dizzying pace and has embedded itself in business and in the daily lives of people around the world. Smartphones feature virtual personal assistants such as Siri and Google Now. Large U.S. retailers such as Amazon and Target use AI to anticipate the needs of consumers through predictive analytics. Financial institutions use it for fraud detection. Smart home devices learn a person’s behaviour patterns and adjust the settings of appliances or thermostats, while self-driving cars are inching their way to reality. And AI systems are detecting cancers. “It’s moving so quickly, it’s even a little mind-boggling for us,” remarks Aaron Courville, an AI researcher at the Montreal Institute for Learning Algorithms.

The practice of law, however, has been largely shielded from the technological developments of the past 50 years, suffering little more than glancing blows. While the way legal professionals process and share information has evolved with new technologies — primarily the emergence of personal computers, email and the Internet — those tools did not fundamentally transform the practice itself.

That may be on the cusp of changing. Fuelled by Big Data, increased computing power and more effective algorithms (step-by-step procedures for solving a problem or performing a task), AI has the potential to change the way legal work is done, the way law firms conduct business and the way lawyers deal with clients. A number of technologies under the umbrella of artificial intelligence, such as machine learning, natural language processing, expert systems (which emulate the decision-making of a human expert) and others, allow computers to perform tasks that normally require human intelligence. Artificial intelligence systems, also known as augmented intelligence or cognitive computing, can be used to do many of the tasks lawyers routinely perform in areas such as compliance, contract analysis, case prediction, document automation and e-discovery. According to proponents, the emerging technologies will do it cheaper, faster and more efficiently, a development some law practitioners find disconcerting.

“What machines give you is the option to get access to more and more data faster and cheaper — that’s the real core of it,” explains David Holme, chief executive officer and founder of Exigent Group Limited, a global provider of legal process outsourcing services that leverage machine learning technology for discovery and contract processing. “It’s like a searchlight that can look into the corners of the organization. Machine learning and better information will allow experts to make better judgments. And experts must be humble enough to realize that this is a tool that they can use rather than being threatened by it.”

Some law firms are paying heed. A number of Canadian legal tech startups are beginning to draw attention in a market that has traditionally shied away from embracing technology with much enthusiasm. ROSS Intelligence, the brainchild of a group of University of Toronto students, has become the poster child for AI’s potential in the legal world. A virtual legal assistant powered by IBM Watson and its own proprietary innovations, ROSS uses natural language processing to understand questions posed by lawyers, sifts through legislation, case law and secondary sources and returns an evidence-based answer. But ROSS does even more. It constantly monitors the law and uses its machine learning capabilities to continuously improve, returning better answers more quickly. ROSS began by learning bankruptcy law, and the company then layered intellectual property law on top of that, “which proved our hypothesis that we could scale ROSS’s learning between practice areas,” says Andrew Arruda, ROSS’s chief executive officer and one of the co-founders. “The goal is to build an entire ecosystem of legal AIs which enhance lawyers’ abilities.”
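For readers curious about what sits under the hood of a system like this, the core retrieval step can be sketched in a few lines of Python. The example below is not ROSS’s proprietary pipeline — it is a minimal illustration, with hypothetical passages standing in for real legal sources and scikit-learn assumed as the library, of how a question can be matched against a body of text through term-weighting and similarity scoring.

```python
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.metrics.pairwise import cosine_similarity

# Hypothetical stand-ins for statutory provisions, cases and commentary.
documents = [
    "A bankrupt is discharged from most debts nine months after filing unless creditors object.",
    "Secured creditors retain their rights against collateral notwithstanding a stay of proceedings.",
    "A licensee may continue to use licensed intellectual property despite the licensor's insolvency.",
]

question = "Can a company keep using licensed IP after the licensor becomes insolvent?"

# Term-weight the sources and the question, then rank sources by similarity.
vectorizer = TfidfVectorizer(stop_words="english")
doc_vectors = vectorizer.fit_transform(documents)
query_vector = vectorizer.transform([question])

scores = cosine_similarity(query_vector, doc_vectors).ravel()
best = scores.argmax()
print(f"Most relevant passage (score {scores[best]:.2f}): {documents[best]}")
```

A production system would layer far richer language understanding, ranking and citation checking on top of this basic step, but the retrieval idea is the same.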

The firm is also at the preliminary stages of applying ROSS’s “underlying learnings and technology” to internal firm documents, which would represent a “massive step forward” for knowledge management, adds Arruda. That would certainly pique the interest of law firms and legal protection insurers, noted Scott Ferrauiola, associate general counsel for IBM Watson, at a conference held last fall in Montreal. Law firms and insurers are drawn to the possibility of harnessing the power of AI to identify, capture, evaluate, retrieve and share all of an organization’s information assets, says Ferrauiola. “Who are your experts on certain legal issues? Do they have memos or briefs? Where are they? Can we access them? Can we search them? It’s almost a back-office function. It’s not quite decision-making, but it helps in decision-making,” adds Ferrauiola.

Using machine learning to predict legal outcomes is another area that may sway lawyers to explore the potential of AI, according to experts. Last year, the lord chief justice of England and Wales warned jurists that AI will be better at predicting the outcome of cases than the “most learned Queen’s Counsel” as soon as it has better statistical information. That day may have come. In a breakthrough development last fall, computer scientists using AI reached the same verdicts as judges at the European Court of Human Rights in nearly four out of five cases involving torture, degrading treatment and privacy, marking the first time that AI successfully predicted the outcomes of a major international court by analyzing case text. “This can be useful, for both lawyers and judges, as an assisting tool to rapidly identify cases and extract patterns which lead to certain decisions,” noted the authors of the study.

Blue J Legal is another player in this area. The Canadian legal tech startup boasts that its AI simulation product, Tax Foresight, a joint initiative with Thomson Reuters (publisher of Canadian Lawyer), can predict with greater than 90-per-cent accuracy what a court would hold in new circumstances. The tool has the additional allure of being simple to use: It asks questions about the client’s situation and then analyzes thousands of cases produced by the Tax Court of Canada, the Federal Court of Appeal and the Supreme Court of Canada. The AI system then provides a prediction, a tailored explanation and a list of relevant cases for further research. “It will make a prediction based on all of the cases and not just the leading cases,” explains Benjamin Alarie, the Osler Chair in business law at the University of Toronto and one of the co-founders of Blue J Legal. He maintains that such technologies will change the nature of litigation: they will increase the likelihood of settlement while the number of cases going to court falls, “save perhaps for the most ambiguous,” where further legal development will be most valuable. “These are tools that allow people to perform some elements of their jobs better, and these algorithms can do a better job in certain things,” says Alarie. “It’s a very powerful complement to human judgment.”
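To make the prediction idea described above concrete, here is a minimal sketch of outcome prediction from structured answers about a client’s situation. It is not Blue J Legal’s model: the questions, training data and labels are hypothetical, scikit-learn is assumed as the library, and a real system would learn from thousands of actual decisions rather than a toy sample.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression

# Hypothetical answers about past cases, one row per decided case:
# [written_contract, worker_sets_own_hours, worker_owns_tools, single_client]
X = np.array([
    [1, 0, 0, 1],
    [0, 1, 1, 0],
    [1, 1, 1, 0],
    [0, 0, 0, 1],
    [1, 1, 0, 0],
    [0, 0, 1, 1],
])
# Hypothetical outcomes: 1 = court found an independent contractor, 0 = employee.
y = np.array([0, 1, 1, 0, 1, 0])

model = LogisticRegression(max_iter=1000).fit(X, y)

# Answers for the new client situation, then a prediction with a probability.
new_case = np.array([[1, 1, 1, 0]])
prob = model.predict_proba(new_case)[0, 1]
print(f"Predicted likelihood of an 'independent contractor' finding: {prob:.0%}")

# Surface the most similar past cases for further research
# (here, simply the training rows closest to the new fact pattern).
distances = np.abs(X - new_case).sum(axis=1)
print("Most similar past cases (row indices):", np.argsort(distances)[:3])
```

The general shape — encode the facts, learn from past decisions, return a probability along with the closest precedents — is the same whatever algorithm a vendor actually uses.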

But law firms are proving to be a hard sell. A recent survey reveals yet again that the vast majority of law firms are uncomfortable being early adopters. According to a 2016 International Legal Technology Association-InsideLegal Technology Purchasing Survey, more than half of all firms (53 per cent) reported larger tech budgets in 2016 than in 2015, but the majority focused their efforts on bolstering cybersecurity, information governance, business continuity or disaster recovery concerns and security compliance requirements. A staggering 87 per cent of respondents said their firms are currently not evaluating or utilizing artificial intelligence technologies or systems.

In many ways, those figures are not surprising. For one thing, the legal industry spends less than one per cent on research and development compared with an average of 3.5 per cent for the typical U.S. business, according to Dan Jensen, head of Nextlaw Labs, a business accelerator focused on investing in, developing and deploying new technologies to transform the practice of law and an autonomous, wholly owned subsidiary of global law firm Dentons LLP.

Even law firms themselves acknowledge that investing in new technologies is a challenge, mainly due to the traditional partnership model. “Law firms are notoriously slow to adopt new technologies,” says Elizabeth Ellis, director of knowledge management at Torys LLP. “Our decision-making process is not what I would call optimal necessarily. We just seem to take a long time to evaluate something, to get all of the views.”

On top of that, most lawyers view AI as a threat instead of seeing it as an opportunity to help deliver better outcomes for clients, says Jordan Furlong, an analyst of the global legal market with Law21. A recent study by McKinsey & Company estimates that 23 per cent of a lawyer’s time is automatable, while similar research by the highly respected AI expert Dana Remus at the University of North Carolina School of Law concludes that just 13 per cent of lawyers’ work can be performed by computers.

“People don’t have to worry,” says Khalid Al-Kofahi, vice-president, R&D at the Thomson Reuters Centre for Cognitive Computing, a new technology centre that will focus on research in machine perception, reasoning, knowledge management and human-computer interfaces. “Most of the innovations in artificial intelligence and machine learning will introduce automation at the task level, which will allow people to focus on more complex tasks.”

But perceptions run deep, counters Furlong. “When lawyers turn their minds to AI, one of the first questions they are essentially asking is will it replace me,” says Furlong. “That is the wrong question. It’s not about the lawyer. It’s about the client. The question a client will ask is whether using AI will help me get what I need faster, more affordably or more effectively, with a better outcome.”

All of this does not bode well for traditional law firms. A recent global research study by Deloitte concluded that conventional law firms are no longer meeting today’s business needs. The majority (55 per cent) of participants in the study — legal counsel, CEOs and CFOs — have taken or are considering a significant review of their legal suppliers. The study also points out that purchasers of legal services want better and more relevant technologies, to be used and shared on integrated platforms.

Some law firms have seen the writing on the wall. “Our business is actually to make it as easy as possible for clients to solve things in the most practical, efficient way for them, and that’s why I get excited about the role that law firms can play because we should be best positioned to be the problem solver, this re-aggregator of all these different pieces and solutions so that what the client sees at the end of the day is this simple, integrated solution to the different problems that they have,” says Matthew Peters, national innovation leader at McCarthy Tétrault.

The risk some law firms run is that they will be seduced by the hype surrounding AI, erroneously believing that it will solve “all sorts of problems” without examining all of their options, adds Peters. Before an AI system is considered, attention should be turned toward legal process improvements, labour arbitrage and more efficient work tools, he suggests. A case in point is a new document automation service, complete with e-signatures and a contract management tool, that McCarthys developed in partnership with Exigent and will roll out to clients in the near future. “Let’s make sure that we are addressing what the client needs and not make this more complicated than it needs to be,” says Peters.

That doesn’t mean that Peters is not interested in AI offerings. In fact, he is now testing a series of AI products before settling on one that he intends to launch in a couple of months.

A push to meet the needs of clients also drove Osler Hoskin & Harcourt LLP to examine, try and ultimately implement new technologies, including a couple of AI offerings. Clients were demanding that the law firm provide legal services more efficiently at a lower cost, explains Mara Nickerson, Osler’s chief knowledge officer. While exploring different options to meet growing client demands, the firm kept its focus squarely on legal process management.

“The focus needs to be on where you can gain efficiencies in your process and what technology can help you,” notes Nickerson. “If it’s AI, great; but not AI for the sake of AI.”

Osler eventually settled on an AI e-discovery tool called Relativity; it has been using Kira Systems, a machine learning contract analysis system, since August 2016 and has tested Tax Foresight — all of which have yielded positive results. But even then, Osler determined that in order to get the “maximum value” out of these AI offerings, they would have to be placed in the hands of a dedicated team that spent the time to “really” learn how each works so that it could “train” the system.

“The exciting thing about AI is that it is bringing additional functionality and capabilities to technologies that we didn’t have before and so bringing exponential efficiencies to our processes in a way that we haven’t seen,” says Natalie Munroe, head of Osler Works – Transactional, a new Ottawa-based technology platform that supports corporate deal work. But, adds Nickerson, all of these new technologies need oversight by lawyers to review and grasp the nuances of the responses churned out by the machine learning systems.

Implementing new technology, especially AI systems, needs to be carefully planned and requires time, ongoing support and buy-in from associates and partners. “You need to continue to evolve your practice as the technology improves, and as you work more closely with the program, you start seeing more opportunities to use the technology that you may have not realized originally,” points out Ellis, speaking from her experience overseeing the implementation of Kira. “That all takes time and effort, and that is probably the hardest thing.”

Some would argue that the most challenging task is to convince lawyers within the firm or legal department to use the new technology. Buy-in in the middle ranks is critical, says Nextlaw Labs’ Jensen. “You have to have buy-in across the board, make sure you can drive the implementation and the integration and put project management skills against it, and then manage expectations about what the tool is and what it’s not.”

It is also crucial that the AI tool be clean, simple and intuitive; otherwise, lawyers will simply not use it, says Chuck Rothman, director of e-discovery services at Wortzmans, now a division of McCarthy Tétrault. “In order for artificial intelligence to be really adopted in the legal industry, it has to be presented in a way that lawyers can very quickly grasp what the system is saying so that they can use it, because, if they don’t understand it, they are not going to trust it and, if they don’t trust it, they won’t use it.”

The drive toward AI, however incrementally, will likely also mean that law firms are going to have to review their traditional billing model, says Furlong. The time when law firms were the only game in town, where lawyers were the “only vehicle” by which legal services could be delivered, is coming to a close, and AI is going to help to put that to an end, he says. “All of these innovations like artificial intelligence are going to reduce the amount of time and amount of effort required to obtain a legal outcome, so the very lax business model of selling time and expertise, rather than outcomes and results, is coming to an end.”

Firms such as McCarthys, Osler and Torys are paying attention to the evolving market demands. McCarthys is planning to have 50 per cent of its work charged on a non-hourly basis, while Torys is moving toward a fixed-fee billing model. “The model is changing as we incorporate these new technologies and because of the demands of the client,” says Nickerson.
In the meantime, in-house counsel like Garcia will likely have to bide their time before they witness the monumental change AI promises. As Peters puts it: “For sure, artificial intelligence is going to play a role in the future, but not as soon and not in the way that a lot of people are imagining it now.”
 

Canada: a hub for AI

When AlphaGo, Google’s artificial intelligence system, defeated an 18-time world champion at the ancient Chinese board game Go, it was not just another demonstration of a computer beating a human at a game. Go, a game with simple rules but profound complexity, has more possible positions than there are atoms in the universe, leading some to describe it as the Holy Grail of AI gaming. The feat was remarkable because AlphaGo was not taught how to play Go. It learned how to play, and win, by playing millions of games, using a form of AI called deep learning, which relies on neural networks loosely modelled on the brain to let programs learn from experience. More than that, the victory suggested that a computer can now rely on something resembling intuition, long thought to be uniquely human.

AlphaGo was developed by a group of computing scientists led by University of Alberta grads, underscoring Canada’s enviable position as a world leader in AI. The Montreal Institute for Learning Algorithms, the University of Toronto and the University of Alberta are all recognized as pioneers in deep learning. Over the past couple of years, Canada has been cementing its status as an AI hub, with the likes of Google flocking to Montreal, the Royal Bank of Canada establishing its machine learning division as part of an initial partnership with the University of Toronto and Thomson Reuters founding a research lab in the Waterloo region and a technology centre for cognitive computing in Toronto. All of this has touched off an arms race for AI talent, particularly Canadian machine learning researchers. “There’s a lot of poaching of professors that is going on,” remarks Aaron Courville, an AI professor with the Montreal Institute for Learning Algorithms. “It’s a big problem. It’s like you are biting the hand that feeds you in some ways. You are taking professors out, and who’s going to train the next generation of people with this expertise?”

On the other hand, the rapid pace of developments in AI will almost certainly lead to greater collaboration between publicly funded and private research organizations, creating a win-win situation for all researchers, says Khalid Al-Kofahi, vice-president of R&D at the Thomson Reuters Centre for Cognitive Computing. “The rate of change that is now occurring in technology, especially around machine learning and artificial intelligence, significantly exceeds our ability to learn,” says Al-Kofahi. “We don’t have to collaborate at the application level, but we can collaborate on problems that are pre-application stage.”

In the meantime, the Canadian AI scene has spawned a flurry of activity, including pioneering legal tech startups such as ROSS Intelligence, Blue J Legal and Beagle, as well as innovative AI offerings. Thomson Reuters’ Waterloo lab, which uses data science to look for new insights or analysis in order to solve customer problems, recently developed, in the space of three months, an AI tool that helps employers determine settlement offers in the case of employee termination. The AI product is a “research tool for legal professionals, not a calculator that does the job of a human — and that’s the way that the legal profession should view artificial intelligence in the first place,” says Brian Zubert, the director of the lab.
 

The dark side of AI: bias and loss of skills

Algorithms — the sets of instructions computers use to carry out a task — have become an integral part of everyday life, and they are making their way into law. In the U.S., judges in some states can use algorithms as part of the sentencing process. Many law enforcement officials in the U.S. are using them to predict when and where crimes are likely to occur. They have been used for years in law firm recruitment. And with advancements in machine learning, they are also being used to conduct legal research, predict legal outcomes and find out which lawyers win before which judges.

Most algorithms are created with good intentions, but questions have surfaced about algorithmic bias at job hunting web sites, credit reporting bureaus, social media sites and even in the criminal justice system, where sentencing and parole decisions appear to be biased against African-Americans. The issue is likely to gain traction as machine learning and predictive coding become more sophisticated, particularly since, with deep learning (which learns autonomously), algorithms can reach a point where humans can no longer explain or understand them, says Nicolas Vermeys, assistant director at the Cyberjustice Laboratory in Montreal. “We have no idea how they arrived at their decision and, therefore, cannot evaluate whether the decision has value or not,” says Vermeys, whose research institution is studying the issue. “There is a risk to relying completely on machines without necessarily understanding their reasoning.”
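One way to see the contrast Vermeys is drawing is to compare a simple, transparent model with a deep network. In the hypothetical sketch below — the features, labels and risk-scoring scenario are invented for illustration, with scikit-learn assumed as the library — a linear model’s learned weights can be read off directly, which is precisely what allows a suspect pattern to be spotted and challenged; deep learning systems offer no equivalent single place to look.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression

# Entirely hypothetical features and labels, for illustration only.
feature_names = ["prior_offences", "age", "employed", "completed_program"]
X = np.array([
    [3, 22, 0, 0],
    [0, 45, 1, 1],
    [1, 30, 1, 0],
    [5, 19, 0, 0],
    [0, 52, 1, 1],
    [2, 27, 0, 1],
])
y = np.array([1, 0, 0, 1, 0, 1])  # 1 = flagged as higher risk (hypothetical)

model = LogisticRegression(max_iter=1000).fit(X, y)

# With a linear model, each learned weight can be read off and questioned;
# a deep network offers no equally direct window into its reasoning.
for name, coef in zip(feature_names, model.coef_[0]):
    print(f"{name:>18}: {coef:+.2f}")
```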

No human is completely objective, and the same is true of algorithms, since they are written by programmers, notes Ian Kerr, a law professor at the University of Ottawa and the Canada Research Chair in Ethics, Law and Technology. Programmers operate on certain premises and presumptions that are not tested by anybody else, which leads to results based on those premises and presumptions that in turn give rise to bias, adds Kerr. On top of that, it is very difficult to challenge such decisions because “whoever owns the algorithms has trade secrets, isn’t likely to show you the source code, isn’t likely to want to talk about the secret sauce and what makes the algorithm work,” says Kerr. “What justifies the algorithm is its success or perceived success, which is very different from whether or not it operates in biased ways.”

Aaron Courville, a professor with the Montreal Institute for Learning Algorithms, shares those concerns. “We are really in a phase where these algorithms are starting to do interesting things, and we need to take seriously the issues of responsibility,” he says.

Both Kerr and Vermeys are also concerned about artificial intelligence performing more and more legal grunt work. By delegating an increasing amount of tasks to machines, there is a danger that existing skills will atrophy, says Kerr. “We have to be aware of that and make sure we make good judgments about which things to delegate and which things not to.”

Vermeys says there is some merit to performing thankless and menial tasks because it is, in many ways, how lawyers become good and experienced. Lawyers, for instance, should learn how to write contracts, tedious as the work may be, and should do it numerous times to gain a solid grasp of the skill, he says. His institute is also looking into the issue.

“We’re going to try to figure out how these artificial intelligence solutions should be used while not affecting the quality of service lawyers are giving today and will be able to give in five to 10 years.”
