Companies must bear in mind a host of privacy considerations, write Amanda Branch and Jasmine Godfrey
Artificial intelligence can be a powerful tool for organizations. AI refers to systems that can change their behaviour or adapt based on information or experience. For AI to work meaningfully, it requires data, and a lot of it.
Since data often contains personal information, there are real concerns about the privacy implications of AI, such as how personal information can be used and what privacy and security measures are required to protect that data. What privacy considerations should companies keep in mind as they implement AI? Here we look at issues relating to consent and data retention and offer some practical tips for organizations.
Consent is an important element of privacy law. Organizations must be open and transparent about their privacy practices. Consent should be obtained at the time of collection, as well as for new uses of personal information.
Consent is considered valid only if it is meaningful: that is, if it is reasonable to expect that individuals to whom a business' activities are directed would understand the nature, purpose and consequences of the collection, use or disclosure to which they are consenting. One of the major challenges organizations face is communicating their privacy practices to users in a way that supports such understanding. AI adds further challenges around consent. For example, a goal of machine learning can be to create a system that teaches itself, but how can an organization obtain meaningful consent for a use that a machine devised with limited human intervention?
Organizations should incorporate and consider privacy principles at each step in the design and implementation process (known as privacy by design). Notification to consumers should be done in a user-friendly way, such as using clear language, layered policies and just-in-time notices at the point where particularly sensitive data is being collected.
AI may depend on large volumes of data to learn. However, this can be at odds with privacy legislation.
Under the Personal Information Protection and Electronic Documents Act, the collection of personal information must be limited to that which is needed for the purposes identified by the organization. Personal information may be retained only so long as necessary to fulfil the reasonably stated purpose for which it was initially collected. Indefinite retention is generally not appropriate.
The limitation principle can create tension with AI's need to learn from all available information. Restricting certain datasets may introduce bias, because the AI is then trained on only a subset of the available data rather than the whole.
Retaining data for long periods may also create a larger data pool. In these circumstances, there is a risk that data that has been de-identified in isolation becomes capable of re-identification when combined and analyzed as part of a larger data set.
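The re-identification risk described above can be sketched in a few lines. The datasets, field names and values below are hypothetical, chosen only to illustrate how records that are "de-identified" in isolation can be re-linked to individuals through shared quasi-identifiers such as postal code and birth year:

```python
# Illustrative linkage sketch: all data here is hypothetical.
# A public dataset that pairs names with quasi-identifiers.
public_records = [
    {"name": "A. Smith", "postal": "M5V 2T6", "birth_year": 1980},
    {"name": "B. Jones", "postal": "K1A 0B1", "birth_year": 1975},
]

# A "de-identified" dataset: names removed, sensitive attribute kept.
deidentified = [
    {"postal": "M5V 2T6", "birth_year": 1980, "diagnosis": "X"},
]

# Joining on the shared quasi-identifiers re-attaches identity
# to the sensitive attribute.
reidentified = [
    {**pub, **anon}
    for anon in deidentified
    for pub in public_records
    if (pub["postal"], pub["birth_year"]) == (anon["postal"], anon["birth_year"])
]
print(reidentified)
```

The larger and more varied the retained data pool, the more quasi-identifier overlaps of this kind become available for such linkage.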
As the use of data becomes more complicated, organizations should weigh the value of the data against the potential risks. Here are some practical tips for organizations using AI:
- Retain only data that is likely to deliver a return on investment, and securely destroy any other customer data. Consider implementing a recurring document and data review to destroy data that is no longer needed.
- Provide adequate employee training in the design, function and implementation of the algorithm or automated decision-making process so that staff can review, explain and oversee its operations. In designing and training the AI, involve lawyers to help identify what may be a problem and why.
- Designate individuals to take accountability for the organization's compliance with ethical guidelines and privacy legislation.
- Employ good data governance mechanisms, such as mapping data flows and ensuring the quality and integrity of the data.
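The recurring data review suggested in the first tip can be sketched as a simple retention sweep. The field names and the 24-month retention period below are assumptions for illustration, not a prescribed standard; an organization's actual retention periods should follow its stated purposes:

```python
from datetime import date, timedelta

# Hypothetical retention policy: 24 months (an assumed figure).
RETENTION = timedelta(days=730)

def records_to_destroy(records, today):
    """Return records collected longer ago than the retention period,
    so they can be flagged for secure destruction."""
    return [r for r in records if today - r["collected_on"] > RETENTION]

# Hypothetical records with their collection dates.
records = [
    {"id": 1, "collected_on": date(2018, 1, 15)},
    {"id": 2, "collected_on": date(2020, 6, 1)},
]
print(records_to_destroy(records, today=date(2020, 9, 1)))
```

In practice such a sweep would feed a destruction workflow with appropriate approvals and logging, rather than deleting records directly.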
It is important to remember that, despite these privacy concerns, AI has many advantages, and not just to the companies that create it, but also to consumers themselves. AI has been able to improve products and services by learning what consumers want and providing a customized experience. Organizations should consider how to strike the right balance between the use of AI and protecting personal information.
Amanda Branch is an associate at Bereskin & Parr LLP with extensive experience in privacy law, including cybersecurity and data breach. Her practice focuses on copyright and digital media, as well as regulatory, advertising and marketing law.
Jasmine Godfrey is an articling student with Bereskin & Parr LLP. Her practice focuses on trademarks, copyright, regulatory, advertising & marketing and litigation.