Getting it right: Using consumer-facing AI tools wisely

Consumers are entitled to explanations if organizations are using AI to make decisions about them

Lisa R. Lifshitz

The use of artificial intelligence technology can be a double-edged sword. On the one hand, the growing ability of AI machines and algorithms to make predictions, recommendations, and decisions has been touted as a way to improve human productivity and increase knowledge. On the other, the use of AI presents risks, including bias, inaccuracy, the potential for unfair or discriminatory outcomes, and the perpetuation of existing socioeconomic disparities. These concerns are especially acute when AI technology, particularly data and algorithms, is used to make critical decisions about consumers.

Recently, the U.S. Federal Trade Commission sought to shed some light on these issues. On April 8, Andrew Smith, director of the FTC’s Bureau of Consumer Protection, published “Using Artificial Intelligence and Algorithms” (the FTC Guidance) on the agency’s business blog.

The overarching principles in the FTC Guidance regarding consumer protection, privacy, and data security are equally applicable to Canadian consumers, and they serve to remind prospective users of AI tools that such technology must always be transparent, explainable, fair, and empirically sound, while fostering accountability.

The U.S. has in place legislation that directly addresses certain concerns regarding the use of AI and consumers. For example, the U.S. Fair Credit Reporting Act (FCRA) and the Equal Credit Opportunity Act (ECOA) both address automated decision-making, and financial services companies have been applying these laws to machine-based credit underwriting models for decades. The FTC has also relied on its authority under the FTC Act, which prohibits unfair and deceptive practices, to address consumer injury arising from the use of AI and automated decision-making.

In the absence of consumer legislation that specifically targets AI tools, Canadians can look to existing laws that prohibit the negative impacts that might arise from the use of algorithms to make decisions. For example, provincial human rights codes, section 15 of the Canadian Charter of Rights and Freedoms (Part I of the Constitution Act, 1982) and section 10 of Québec's Charter of Human Rights and Freedoms prohibit discrimination based on various enumerated grounds, in the private and public sectors. Federal and provincial human rights codes protect against discrimination in the delivery of services, accommodation, contracts and employment.

Canada's Competition Act prohibits deceptive marketing practices, including representations to the public that are false or misleading in a material respect and made for the purpose of promoting, directly or indirectly, the supply or use of a product or any business interest. Provincial consumer protection legislation also prohibits false and misleading representations made by suppliers to consumers. In Ontario, for example, using an algorithm to change the terms of a deal could be considered a false and misleading representation, since the supplier would be providing services that do not accord with a previous representation.

The FTC Guidance offers the following useful advice for developers, vendors and potential purchasers of AI tools that affect consumers:

Be transparent. Don’t deceive consumers about how automated tools are used. While consumers may not actually see the AI operating in the background, it is important not to mislead them about the nature of the interaction, particularly where AI technologies such as chatbots interact with customers. Years ago, Ashley Madison, a “discreet married dating” service, faced criticism from both the FTC and the Canadian federal privacy commissioner for deceiving consumers by using fake “engager profiles” of attractive prospects to induce customers to sign up. As the FTC noted, if a company’s use of fake dating profiles, phony followers, “deepfakes” or AI chatbots actively misleads consumers, that organization could face an FTC enforcement action or, in Canada, investigation by the Competition Bureau.

It’s also important to be transparent when collecting sensitive data and personal information. Secretly collecting audio, visual, or other sensitive data to feed an algorithm could give rise to an FTC action and, in Canada, to complaints to regulators under various federal and provincial privacy laws.

Explain your decision to the consumer. The FTC Guidance recommends that if a company denies consumers something of value based on algorithmic decision-making, it must explain why. So, in the credit-granting world, companies are required to disclose the principal reasons why a consumer was denied credit. It’s not sufficient to simply say, “your credit score was too low,” or “you don’t meet our criteria.” Organizations must be specific (e.g., “you’ve been delinquent on your credit obligations” or “you have an insufficient number of credit references”), even if this requires the organization to know what data is used in the AI model and how that data is used to make a decision. Consumers are entitled to coherent explanations when organizations use AI to make decisions that deny them credit or other services.

The FTC Guidance also notes that if algorithms are used to assign risk scores to consumers, organizations should disclose the key factors that affected the score, rank-ordered by importance. Similarly, companies that change the terms of a deal based on automated tools should disclose the change to consumers.
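To make the idea of rank-ordered key factors concrete, the sketch below shows one common way such explanations can be generated from a linear scoring model: each feature’s contribution to the applicant’s score, relative to a population baseline, is computed and sorted so the biggest negative drivers surface first. This is purely illustrative; the feature names, weights, and values are hypothetical, and the FTC Guidance does not prescribe any particular method.

```python
# A minimal sketch (not from the FTC Guidance): one common way to produce
# rank-ordered key factors for a scored applicant using a linear model.
# Feature names, weights, and values are hypothetical.

FEATURE_WEIGHTS = {             # trained model coefficients (illustrative)
    "delinquencies_24m": -0.9,  # negative weight pushes the score down
    "credit_references": 0.4,
    "utilization_ratio": -0.6,
    "account_age_years": 0.3,
}
POPULATION_MEANS = {            # baseline feature values across applicants
    "delinquencies_24m": 0.5,
    "credit_references": 4.0,
    "utilization_ratio": 0.35,
    "account_age_years": 7.0,
}

def key_factors(applicant: dict) -> list:
    """Rank features by how far each one moved this applicant's score
    relative to the population baseline (most harmful first)."""
    contributions = {
        name: weight * (applicant[name] - POPULATION_MEANS[name])
        for name, weight in FEATURE_WEIGHTS.items()
    }
    return sorted(contributions.items(), key=lambda kv: kv[1])

applicant = {"delinquencies_24m": 3, "credit_references": 1,
             "utilization_ratio": 0.9, "account_age_years": 2}
for name, effect in key_factors(applicant)[:2]:
    print(f"Key factor: {name} (score impact {effect:+.2f})")
```

An approach along these lines yields specific, consumer-facing reasons (here, recent delinquencies and a short account history) rather than an opaque “your score was too low.”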

Ensure that your decisions are fair. Above all, the use of algorithms should not result in discrimination on the basis of what we Canadians would consider prohibited grounds: race, ethnic origin, colour, religion, national origin, sex, sexual orientation, gender identity or expression, marital status, family status, genetic characteristics, disability, age, or other factors such as socioeconomic status. If, for example, a company makes credit decisions based on consumers’ zip or postal codes, the practice could be challenged under various U.S. and Canadian laws. Organizations can save themselves headaches by rigorously testing their algorithms before using them, and periodically afterwards, to ensure that their use does not create disparate impacts on vulnerable groups or individuals.

In evaluating algorithms for fairness, the FTC notes that it looks at both inputs and outcomes. For example, when considering illegal discrimination, do the model’s inputs use ethnically based factors? Does the outcome of the model discriminate on a prohibited basis? Does a nominally neutral model end up having an illegal disparate impact on certain groups or individuals? Organizations using AI and algorithmic tools should engage in self-testing of AI outcomes to manage these consumer protection risks, and developers must always ensure that their data and models are robust and empirically sound.
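As one illustration of what self-testing of outcomes can look like in practice, the following sketch computes an adverse impact ratio for each group in a batch of automated decisions: each group’s approval rate divided by the most-favoured group’s rate. The four-fifths (80 per cent) threshold is a rule of thumb borrowed from U.S. employment-selection guidelines, not a standard set by the FTC Guidance, and the groups and decisions shown are hypothetical.

```python
# A minimal sketch of an outcome-based self-test: the adverse impact ratio
# (each group's approval rate divided by the most-favoured group's rate).
# The 0.8 threshold is the "four-fifths rule" borrowed from U.S. employment-
# selection guidelines, not an FTC standard; the data are hypothetical.

from collections import defaultdict

def adverse_impact_ratios(decisions):
    """decisions: iterable of (group_label, was_approved) pairs."""
    approved, totals = defaultdict(int), defaultdict(int)
    for group, ok in decisions:
        totals[group] += 1
        approved[group] += ok
    rates = {g: approved[g] / totals[g] for g in totals}
    best = max(rates.values())
    return {g: rate / best for g, rate in rates.items()}

sample = ([("group_a", True)] * 80 + [("group_a", False)] * 20
          + [("group_b", True)] * 50 + [("group_b", False)] * 50)
for group, ratio in adverse_impact_ratios(sample).items():
    flag = "REVIEW" if ratio < 0.8 else "ok"
    print(f"{group}: impact ratio {ratio:.2f} [{flag}]")
```

Running a check of this kind on a nominally neutral model, both before deployment and periodically afterwards, is one way to surface the disparate impacts the FTC warns about before a regulator does.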

It is also critical to give consumers access and an opportunity to correct information used to make decisions about them. Under both the FCRA and Canadian provincial consumer and credit reporting legislation, consumers are entitled to obtain the information on file about them and dispute that information if they believe it to be inaccurate. Further, adverse action notices are required to be given to consumers when that information is used to make a decision adverse to the consumer’s interests. That notice must include the source of the information that was used to make the decision and must notify consumers of their access and dispute rights. If organizations are using data obtained from others — or even directly from the consumer — to make important decisions about the consumer, then the organization should consider providing a copy of that information to the consumer and allowing the consumer to dispute the accuracy of that information.

Hold yourself accountable for compliance, ethics, fairness and non-discrimination. While it’s tempting to rush to acquire the latest shiny AI tools, prospective corporate users of AI technology should ask important questions before deploying algorithms on consumers. In 2016, the FTC issued a report entitled “Big Data: A Tool for Inclusion or Exclusion?” that advised companies using Big Data analytics and machine learning on how to reduce the opportunity for bias or other harm to consumers. The report recommends that any operator of an algorithm ask four critical questions: (i) how representative is your data set? (ii) does your data model account for biases? (iii) how accurate are your predictions based on Big Data? (iv) does your reliance on Big Data raise ethical or fairness concerns?
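The first of those questions, how representative the data set is, lends itself to a simple automated check. The sketch below compares each group’s share of the training data with its share of the population the model will actually score and flags material gaps; the group labels, shares, and five-per-cent tolerance are all hypothetical illustrations.

```python
# A minimal sketch for "how representative is your data set?": compare each
# group's share of the training data with its share of the population the
# model will score. All group labels, shares, and the 5% tolerance are
# hypothetical.

TRAINING_SHARES = {"group_a": 0.70, "group_b": 0.20, "group_c": 0.10}
POPULATION_SHARES = {"group_a": 0.50, "group_b": 0.30, "group_c": 0.20}

def representation_gaps(training, population, tolerance=0.05):
    """Flag groups whose training-data share differs from their population
    share by more than the (arbitrary, illustrative) tolerance."""
    return {
        group: training.get(group, 0.0) - share
        for group, share in population.items()
        if abs(training.get(group, 0.0) - share) > tolerance
    }

for group, gap in representation_gaps(TRAINING_SHARES, POPULATION_SHARES).items():
    direction = "over" if gap > 0 else "under"
    print(f"{group}: {direction}-represented by {abs(gap):.0%} in the training data")
```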

Build in necessary security and other protections. It is also critical to protect algorithms from unauthorized use, and developers of such tools should consider how access controls and other protective technologies can be built into the tools from the beginning to prevent abuse. The FTC Guidance notes that, thanks to machine learning, voice-cloning technologies can now use a five-second clip of a person’s actual voice to generate realistic audio of that voice saying virtually anything. This is remarkable technology that can help individuals who have difficulty speaking, but it could also be misused by fraudsters, so protective measures must be carefully considered.
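By way of illustration, the sketch below shows what building an access control into the tool itself (rather than bolting it on afterwards) might look like: the model refuses to run unless the caller presents a valid credential tied to a permitted use. The token scheme, scope names, and synthesis stub are all hypothetical.

```python
# A minimal sketch of building access control into the tool itself rather
# than around it: the model refuses to run unless the caller presents a
# valid token for a permitted use. The token scheme, scope names, and the
# synthesis stub are all hypothetical.

import hmac
import hashlib

SECRET_KEY = b"rotate-me-in-production"        # illustrative only
AUTHORIZED_SCOPES = {"clinical_speech_aid"}    # permitted use cases

def make_token(scope: str) -> str:
    """Issue a credential tied to a specific, approved use."""
    return hmac.new(SECRET_KEY, scope.encode(), hashlib.sha256).hexdigest()

def synthesize_voice(sample: bytes, text: str, scope: str, token: str) -> bytes:
    """Run voice synthesis only for authorized, scoped callers."""
    if scope not in AUTHORIZED_SCOPES:
        raise PermissionError(f"scope '{scope}' is not a permitted use")
    if not hmac.compare_digest(token, make_token(scope)):
        raise PermissionError("invalid credentials")
    # ... model inference would run here (omitted in this sketch) ...
    return b""

token = make_token("clinical_speech_aid")
audio = synthesize_voice(b"<voice sample>", "Hello", "clinical_speech_aid", token)
```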

While AI developers should consider their own accountability mechanisms, the FTC Guidance also notes that, in some instances, organizations are too close to their products and would do better to bring in independent standards or expertise to step back and evaluate the AI. Objective observers who independently test the algorithms may be better positioned to evaluate them and to pick up on patterns of biased outcomes. The guidance adds that such outside tools and services are increasingly available as AI use grows, and companies may want to consider using them.

To conclude, while the use of AI technology for consumers certainly comes with some risks and legal compliance challenges, these may be mitigated through common-sense approaches, as exemplified in the recent guidance provided by the FTC.
