A Regulatory Framework for AI: Recommendations for PIPEDA Reform

November 2020

Once a concept limited to the pages of science fiction novels, artificial intelligence (AI) has become a reality of the digital age, supporting many of the services individuals use in their daily lives. It marks a transformative point in society, introducing novel ways in which personal information is processed.

AI has immense promise, including helping to address some of today’s most pressing issues. For example, it can detect and analyze patterns in medical images to assist doctors in diagnosing illness, improve energy efficiency by forecasting demand on power grids, deliver highly individualized learning for students, and manage traffic flows across various modes of transport to reduce accidents and save lives.

It allows organizations to innovate in consumer products as well as in business operations, such as automating quality control and resource management. AI stands to increase efficiency, productivity, and competitiveness – factors that are critical to the economic recovery and long-term prosperity of the country.

However, uses of AI that are based on individuals’ personal information can have serious consequences for their privacy. AI models have the capability to analyze, infer and predict aspects of individuals’ behaviour, interests and even their emotions in striking ways. AI systems can use such insights to make automated decisions about individuals, including whether they get a job offer, qualify for a loan, pay a higher insurance premium, or are flagged for suspicious or unlawful behaviour. Such decisions have a real impact on individuals’ lives, and raise concerns about how they are reached, as well as issues of fairness, accuracy, bias, and discrimination. AI systems can also be used to influence, micro-target, and “nudge” individuals’ behaviour without their knowledge. Such practices can lead to troubling effects for society as a whole, particularly when used to influence democratic processes.

In January 2020, the OPC launched a public consultation on our proposals for ensuring the appropriate regulation of AI in the Personal Information Protection and Electronic Documents Act (PIPEDA). Our working assumption was that legislative changes to PIPEDA are required to help reap the benefits of AI while upholding individuals’ fundamental right to privacy.

We received 86 submissions, and held two in-person consultations with stakeholders in Montreal and Toronto. We compiled and analyzed the feedback, and drew on the knowledge of experts, including Professor Ignacio Cofone, who authored the accompanying report commissioned by the OPC, to make our recommendations.

This document, “A Regulatory Framework for AI: Recommendations for PIPEDA Reform”, provides an overview of the OPC’s final recommendations. It draws on Professor Cofone’s report, which details these and other measures, and accounts for stakeholder feedback from our consultation.

In our view, an appropriate law for AI would:

  • Allow personal information to be used for new purposes towards responsible AI innovation and for societal benefits;
  • Authorize these uses within a rights-based framework that would entrench privacy as a human right and a necessary element for the exercise of other fundamental rights;
  • Create provisions specific to automated decision-making to ensure transparency, accuracy and fairness; and
  • Require businesses to demonstrate accountability to the regulator upon request, ultimately through proactive inspections and other enforcement measures through which the regulator would ensure compliance with the law.

Using Data for Socially Beneficial and Legitimate Commercial Purposes

Personal information can be used in ways that clearly benefit society, even when collected during the course of a commercial activity. As discussed, AI can be a powerful tool for advancing those benefits. Its capability to generate forecasts and identify trends can reduce uncertainty or risk, and inform action to address societal problems. For example, it can be used to address food insecurity,Footnote 1 improve navigation,Footnote 2 and accelerate scientific discovery.Footnote 3

Consent, which forms the basis of PIPEDA and many other data protection laws globally, is not without its challenges. For individuals, long, legalistic and often incomprehensible policies and terms of use agreements make it nearly impossible to exert any real control over personal information or to make meaningful decisions about consent. For organizations, consent does not always work in the increasingly complex digital environment, such as where consumers do not have a relationship with the organization using their data, and where uses of personal information are not known at the time of collection, or are too complex to explain. These shortcomings are even more glaring in the context of AI, where processing is more complex and AI is designed to make discoveries from personal information, including inferences.

AI highlights the shortcomings of the consent principle both in protecting individuals’ privacy and in allowing the benefits of AI to be achieved. Consent can be used to legitimize uses that, objectively, are completely unreasonable and contrary to our rights and values. Additionally, refusal to provide consent can sometimes be a disservice to the public interest when there are potential societal benefits to be gained from the use of data.

In 2018, the OPC conducted work to enhance the role of consent through the publication of our meaningful consent guidelines. While these were important enhancements, it is essential to state that in 2020, privacy protection cannot hinge on consent alone.

This is why we are recommending a series of new exceptions to consent that would allow the benefits of AI to be better achieved, but within a rights-based framework. The intent is to allow for responsible, socially beneficial innovation, while ensuring individual rights are respected. We recommend exceptions to consent for the use of personal information for research and statistical purposes, for compatible purposes, and for legitimate commercial interests.

I. Research and Statistical Purposes

Under this exception, personal information that is de-identified would be exempt from consent, purpose specification, and data minimization requirements when used for research and statistical purposes internal to the organization. The objective of this exception is to facilitate the training of AI, which relies on statistical functions, in order to encourage AI development in Canada. Allowing organizations to re-use existing information in this fashion means AI algorithms can learn from data that are more representative, thereby increasing their problem-solving potential and accuracy.

Currently under PIPEDA, there are exceptions to consent that can allow personal information to be used for statistics or scholarly research; however, such exceptions are very limited and are not optimized for an AI environment. This is important because productive AI systems require large quantities of data on which to train.

II. Compatible Purposes

Compatibility with the original purpose of collection is a well-known privacy principle, and is manifested in many international privacy laws, as well as in Canada’s Privacy Act and Quebec’s Bill 64 (referred to in these laws as “consistent purpose”). This exception would allow for the use of personal information without consent when the new purpose is compatible with the original purpose. This measure acknowledges business interests by providing greater flexibility to use personal information, including for the training of AI.

Many laws do not define “compatible purposes” with sufficient precision, which can leave it open to broad interpretation and result in potential abuse. We have observed this to be the case under the Privacy Act on occasion. To address this issue, the exception should be subject to clear limitations as to what may be considered compatible, such as the threshold contained in Bill 64, which states the purpose must have a “direct and relevant” connection with the purposes for which the information was collected.Footnote 4

III. Legitimate Commercial Interests

The two specific exceptions to consent we propose, in addition to those that already exist in PIPEDA, are not sufficient to accommodate new, unforeseen but responsible uses of information in society’s interest or for legitimate commercial interests. This is one of the most difficult challenges in developing a modern privacy law: How to define permissible uses of data in a rapidly evolving digital economy so as to both enable responsible innovation and protect the rights and values of citizens?

Because it is impossible to predict all future uses of technology, even in the relatively short term, we believe the law should address this challenge by defining permissible uses and applicable rights in broad terms, leading to appropriate interpretation in context at the relevant time.

With these considerations in mind, we believe that in the context of PIPEDA, a statute that regulates commercial activities, the most appropriate way to define permissible uses is through a legitimate commercial interests exception to consent. Consent would remain the rule, but this generally framed exception, similar to the provision in the GDPR and therefore enhancing the interoperability of laws, would provide the desired flexibility to authorize unforeseen reasonable purposes. This is preferable to stretching or distorting the concept of implied consent to the point where it becomes meaningless.

As explained in guidance from the U.K. Information Commissioner’s Office: “The legitimate interests of the public in general may also play a part when deciding whether the legitimate interests in the processing override the individual’s interests and rights. If the processing has a wider public interest for society at large, then this may add weight to (the organization’s) interests when balancing these against those of the individual.”Footnote 5 In this way, a legitimate commercial interest clause would acknowledge that commercial and societal interests may overlap.

It would be imperative that such an exception be accompanied by enhanced rights and the proposed safeguarding measures outlined below, and that it apply only within a legal framework that requires demonstrable accountability on the part of organizations and provides for enhanced enforcement on the part of the regulator.

Safeguards

The proposed exceptions to consent must be accompanied by a number of safeguards to ensure their appropriate use. This includes a requirement to complete a privacy impact assessment (PIA), and a balancing test to ensure the protection of fundamental rights. The use of de-identified information would be required in all cases for the research and statistical purposes exception, and to the extent possible for the legitimate commercial interests exception.

Privacy Impact Assessment

Each of the recommended exceptions to consent should first require that a PIA be conducted to ensure that legal compliance is assessed, and that risks to privacy are identified and mitigated. PIAs are widely recognized privacy protection tools, and are required for federal government institutions.

PIAs are necessary given the highly contextual nature of the recommended exceptions. For example, a proposed research objective must be carefully assessed against the information being used to ensure proper risk mitigation. If the risks cannot be sufficiently mitigated, the activity should not proceed. PIAs also support the need for demonstrable accountability, as discussed later in this document.

To provide clarity as to what a PIA should entail, specific requirements should be detailed in a regulation or OPC guidance.

Balancing Test

A balancing test similar to that found under the GDPR’s legitimate interests basis for processing should also be required when organizations invoke the recommended exceptions to consent. This test is three-fold, requiring an assessment of the purpose, the necessity and proportionality of the measure, and consideration of the interests and fundamental rights and freedoms of the individual to determine whether they override the measure. This test can be assessed and documented within the PIA.

De-identification

There should be a requirement that personal information first be de-identified before it is used without consent for research or statistical purposes, and to the extent possible for legitimate commercial interest purposes.

As part of this requirement, de-identification should be defined in PIPEDA. The Ontario Personal Health Information Protection Act (PHIPA) contains a definition of “de-identify”: “to remove, in accordance with such requirements as may be prescribed, any information that identifies the individual or for which it is reasonably foreseeable in the circumstances that it could be utilized, either alone or with other information, to identify the individual”. PIPEDA could incorporate a similar approach. In so doing, consideration should be given to the concept of “identifiability” as established in Federal Court jurisprudence (Gordon v. Canada (Health), supra, at para 34).
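
For illustration only, and not as a standard prescribed by PIPEDA or PHIPA, the sketch below shows the most basic step such a definition contemplates: removing direct identifiers from a record. A real de-identification process would also have to address indirect identifiers and the reasonably foreseeable risk of re-identification described above; the field names are hypothetical.

```python
# Highly simplified illustration of de-identification: strip direct identifiers.
# Real de-identification must also treat indirect identifiers (e.g., postal code,
# date of birth) and assess re-identification risk; field names are hypothetical.

DIRECT_IDENTIFIERS = {"name", "email", "phone_number", "home_address"}

def de_identify(record: dict) -> dict:
    """Return a copy of the record with direct identifiers removed."""
    return {field: value for field, value in record.items()
            if field not in DIRECT_IDENTIFIERS}

patient = {
    "name": "Jane Doe",
    "email": "jane@example.com",
    "postal_code": "K1A 0A9",   # indirect identifier: left in place by this sketch
    "diagnosis": "hypertension",
}
print(de_identify(patient))
```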

The law should prohibit re-identification when personal information is de-identified pursuant to one of PIPEDA’s exceptions, and the practice should be subject to financial penalties, similar to the approach in Bill 64. These measures are in recognition of the fact that de-identification, even properly implemented, does not negate all risk.Footnote 6

Aside from the flexibility measures proposed in this paper, de-identified information should otherwise fall within the scope of the law.

Recognizing Privacy as a Human Right

While the law should allow for more flexible uses of personal information, it should only do so within a rights-based regime that recognizes privacy in its proper breadth and scope. Privacy is a fundamental right, and is necessary for the exercise of other human rights. This is particularly relevant in the context of AI, where risks to fundamental rights, such as the right to be free from discrimination, are heightened.

A rights-based regime would not stand in the way of responsible innovation. In fact, it would help support responsible innovation and foster trust in the marketplace, giving individuals the confidence to fully participate in the digital age. In our 2018-2019 Annual Report to Parliament, our Office outlined a blueprint for what a rights-based approach to protecting privacy should entail. This rights-based approach runs through all of the recommendations in this paper.

While we propose that the law should allow for uses of AI for a number of new purposes as outlined, we have seen examples of unfair, discriminatory, and biased practices being facilitated by AI that are far removed from what is socially beneficial. Given the risks associated with AI, a rights-based framework would help to ensure that it is used in a manner that upholds rights. Privacy law should prohibit using personal information in ways that are incompatible with our rights and values.

Another important measure related to this human rights-based approach would be for the definition of personal information in PIPEDA to be amended to clarify that it includes inferences drawn about an individual.Footnote 7 This is important, particularly in the age of AI, where individuals’ personal information can be used by organizations to create profiles and make predictions intended to influence their behaviour. Capturing inferred information clearly within the law is key for protecting human rights because inferences can often be drawn about an individual without their knowledge, and can be used to make decisions about them.

Specific Provisions for Automated Decision-Making

One of PIPEDA’s key characteristics is its technological neutrality. This aspect should remain intact in regulating AI, which continues to evolve. Instead of regulating the technology itself, PIPEDA should aim to address AI’s impact on privacy rights by providing protections in relation to automated decision-making.

Automated decision-making powered by AI systems introduces unique risks that warrant distinct treatment in the law. As Professor Cofone notes in his paper:

Under automated decision-making, discriminatory results can occur even when decision-makers are not motivated to discriminate.Footnote 8 Automated decision-making processes reflect and reinforce biases found in the data they are fed (trained with) into the decisions they yield. They reproduce and amplify the inevitably biased scenarios they were trained with.Footnote 9 Protected categories that decision-makers are prohibited from considering, such as gender or race, are often statistically associated with seemingly inoffensive characteristics, such as height or postal code. Algorithmic decision-making can easily lead to indirect discrimination on the basis of gender or race by relying on these characteristics as proxies for the prohibited traits.Footnote 10

The algorithms used to reach a decision concerning an individual can operate as a black box, leaving the individual in the dark as to how the decision was determined. It is also recognized that data are not inherently objective, but tell only a specific story based on factors such as how they are collected.Footnote 11 Automated decisions therefore run the risk of being unfair, biased, and discriminatory.
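
To make the proxy problem concrete, the following minimal sketch, with entirely hypothetical data and feature names, shows how a model that is never given a protected attribute can still produce outcomes that diverge sharply by group, because a correlated feature carries the same signal.

```python
# Illustrative sketch of indirect discrimination through a proxy feature.
# The model never sees the protected attribute, but a correlated feature
# (standing in for something like postal code) carries the same signal.
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)
n = 1000
protected = rng.integers(0, 2, n)             # protected attribute: withheld from the model
proxy = protected + rng.normal(0, 0.1, n)     # "postal code"-like feature correlated with it
# historical outcomes are biased: group 1 was approved far less often
outcome = (rng.random(n) < np.where(protected == 1, 0.3, 0.7)).astype(int)

model = LogisticRegression().fit(proxy.reshape(-1, 1), outcome)
pred = model.predict(proxy.reshape(-1, 1))

# approval rates diverge by group even though "protected" was never an input
print("approval rate, group 0:", round(pred[protected == 0].mean(), 2))
print("approval rate, group 1:", round(pred[protected == 1].mean(), 2))
```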

To respond to the risks to privacy rights presented by automated decision-making, PIPEDA will need to define automated decision-making so that specific protections can be created to apply to it. Unlike the GDPR or Quebec’s Bill 64, the definition should drop any qualifier such as “solely” or “exclusively”, which scopes the applicability of the specific protections very narrowly. Such qualifiers also make the provisions susceptible to subversion, where a nominal human role is added to the process merely to evade the additional obligations.

In addition, we recommend that individuals be provided with two explicit rights in relation to automated decision-making. Specifically, they should have a right to a meaningful explanation of, and a right to contest, automated decision-making under PIPEDA. These rights would be exercised by individuals upon request to an organization. Organizations should be required to inform individuals of these rights through enhanced transparency practices to ensure individual awareness of the specific use of automated decision-making, as well as of their associated rights. This could include requiring notice to be provided separate from other legal terms.

I. Right to Meaningful Explanation

The right to a meaningful explanation relates to the existing PIPEDA principles of accuracy, openness, and individual access. This right would allow individuals to understand decisions made about them and would facilitate the exercise of other rights, such as the right to correct erroneous personal information, including inferences. The right would be similar to what is found in Article 15(1)(h) of the GDPR, which requires data controllers to provide individuals with “meaningful information about the logic involved” in decisions.

The law should be explicit as to what constitutes a “meaningful explanation” to provide certainty as to what the right encompasses. Consideration should be given to language similar to that presented in Professor Cofone’s paper, which defines the right as “an explanation that allows individuals to understand the nature and elements of the decision to which they are being subject or the rules that define the processing and the decision’s principal characteristics”.

An objective of this right is to address potential scenarios where black box algorithms and unknown personal information are used to automatically determine an individual’s fate. It provides an avenue of recourse and respects basic human dignity by ensuring that the organization is able to explain the reasoning for the particular decision in understandable terms. While trade secrets may require organizations to be careful with the explanations they provide, some form of meaningful explanation should always be possible without compromising intellectual property. An individual should not be denied their right to an explanation on the grounds of proprietary information or trade secrets.
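
As one possible illustration of what a meaningful explanation could look like in practice, and not a method endorsed by the OPC or required by the GDPR, the sketch below produces a plain-language summary of the principal factors behind a simple model’s decision. The model, data and feature names are hypothetical; more complex models would require other explanation techniques.

```python
# Illustrative sketch only: a plain-language explanation of an automated decision
# built from a simple model's feature contributions. Data and feature names are
# hypothetical; complex models would need other explanation techniques.
import numpy as np
from sklearn.linear_model import LogisticRegression

features = ["income (thousands)", "years at address", "missed payments"]
X = np.array([[50, 3, 2], [80, 10, 0], [30, 1, 5], [65, 7, 1], [40, 2, 4], [90, 12, 0]])
y = np.array([0, 1, 0, 1, 0, 1])  # hypothetical historical loan decisions

model = LogisticRegression().fit(X, y)

def explain(applicant: np.ndarray) -> str:
    """Return a short, human-readable account of the decision's principal factors."""
    decision = "approved" if model.predict(applicant.reshape(1, -1))[0] == 1 else "declined"
    contributions = model.coef_[0] * applicant
    ranked = sorted(zip(features, contributions), key=lambda pair: -abs(pair[1]))
    top = " and ".join(name for name, _ in ranked[:2])
    return f"The application was {decision}; the factors that weighed most were {top}."

print(explain(np.array([45, 2, 3])))
```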

II. Right to Contest

In addition, individuals should be provided with a right to contest automated decisions. This would apply both to those scenarios where an individual has provided consent for the processing of their personal information as well as those where an exception to consent was used by the organization. It serves as a complement to the right to explanation.

This right would take a similar approach to Article 22(3) under the GDPR, where in certain circumstances an individual can express their point of view to a human intervener, and contest the decision, except it would not be limited to decisions based “solely” on automated processing as in the GDPR. This ability would reduce the risk of algorithmic discrimination or other unfair treatment. The right to contest would be in addition to the ability to withdraw consent currently provided for in PIPEDA, or a “right to object” which functions in a similar manner. However, it is necessary to have both rights, as withdrawal of consent/right to object is an all-or-nothing decision, whereas contestation provides individuals with recourse even when they choose to continue to participate in the activity for which automated decision-making was employed.

Demonstrable Accountability

Privacy laws are hollow if they lack the necessary mechanisms to incentivize and enforce compliance. The role of the regulator in upholding the privacy rights of individuals in the marketplace is of utmost importance in light of the increasing complexity of information flows. Individuals should have a privacy regulator with effective enforcement powers to ensure they are able to enjoy the benefits of AI safely.

The business community has emphasized the inadequacy of the consent model and has advocated for transparency and accountability to play a larger role in PIPEDA instead. However, a shift to greater reliance on accountability means greater latitude or freedom for organizations to use personal information, sometimes in dubious ways. Therefore, this approach should be accompanied by a greater role for the regulator, to ensure accountability is demonstrated and ultimately protects the rights of individuals.

Individuals cannot simply rely on organizations to handle their valuable personal information appropriately, especially when automated decisions can disrupt their lives and an organization may not always be transparent about its practices. This is compounded by the stark power imbalance between individuals and organizations that use AI, including an asymmetry of knowledge and resources.

PIPEDA should incorporate a right to demonstrable accountability for individuals, which would mandate demonstrable accountability for all processing of personal information. In addition to the measures detailed below, this should be underpinned by a record keeping requirement similar to that in Article 30 of the GDPR. This record keeping requirement would be necessary to facilitate the OPC’s ability to conduct proactive inspections under PIPEDA, and for individuals to exercise their rights under the Act.

I. Designing for Privacy and Human Rights

Integrating privacy and human rights into the design of AI algorithms and models is a powerful way to prevent negative downstream impacts on individuals. It is also consistent with modern legislation, such as the GDPR and Bill 64.Footnote 12 PIPEDA should require organizations to design for privacy and human rights by implementing “appropriate technical and organizational measures” that give effect to PIPEDA requirements prior to and during all phases of collection and processing.

A requirement to design for privacy and human rights can be complemented by a regulation or guidance from the OPC as to what this should entail.

PIAs are useful tools when designing for privacy and human rights in AI. They help organizations meet legislative requirements and identify and mitigate negative impacts that programs and activities may have on individuals’ privacy. They support demonstrable accountability by integrating privacy and human rights into the design of the activity, and allow the privacy regulator to review the documented assessments, which can show the due diligence the organization took before implementing the AI activity.

While PIAs are recommended in this paper as a required safeguarding measure accompanying the use of the exceptions to consent, they should otherwise be promoted in PIPEDA as a means through which an organization can design for privacy and human rights, as well as demonstrate accountability.

II. Traceability

In light of the new proposed rights to explanation and contestation, organizations should be required to log and trace the collection and use of personal information in order to adequately fulfill these rights for the complex processing involved in AI. Tracing supports demonstrable accountability as it provides documentation that the regulator could consult through the course of an inspection or investigation, to determine the personal information fed into the AI system, as well as broader compliance.

Within Canada, Ontario’s recent PHIPA amendments require the maintenance of an electronic audit log in the context of electronic personal health information, which must be provided to the Ontario Information and Privacy Commissioner on request.Footnote 13 Bill 64 also includes traceability rights related to automated decision-making for individuals on request, including the right to know the personal information used to render the decision, the reasons or factors that led to the decision, as well as the right to have the personal information used to render the decision corrected.Footnote 14
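
A minimal sketch of what such a traceability record might contain appears below, assuming a simple append-only log maintained by the organization. The structure and field names are hypothetical and are not prescribed by PIPEDA, PHIPA or Bill 64.

```python
# Minimal illustrative sketch of an append-only traceability log for uses of
# personal information in an AI system. Field names are hypothetical, not
# prescribed by PIPEDA, PHIPA or Bill 64.
from __future__ import annotations
import json
from dataclasses import dataclass, asdict
from datetime import datetime, timezone

@dataclass
class ProcessingRecord:
    timestamp: str
    data_subject_id: str            # pseudonymous identifier for the individual
    data_elements: list[str]        # the personal information fields that were used
    purpose: str                    # stated purpose of the processing
    decision_id: str | None = None  # links the record to an automated decision, if any

def log_processing(record: ProcessingRecord, path: str = "audit_log.jsonl") -> None:
    """Append one record so an individual or the regulator can later trace the use."""
    with open(path, "a", encoding="utf-8") as f:
        f.write(json.dumps(asdict(record)) + "\n")

log_processing(ProcessingRecord(
    timestamp=datetime.now(timezone.utc).isoformat(),
    data_subject_id="subject-0042",
    data_elements=["postal_code", "credit_history"],
    purpose="credit eligibility scoring",
    decision_id="decision-2020-11-17",
))
```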

III. Proactive Inspection

Demonstrable accountability must include a model of assured accountability pursuant to which the regulator has the ability to proactively inspect an organization’s privacy compliance. In today’s world where business models are often opaque and information flows are increasingly complex, individuals are unlikely to file a complaint when they are unaware of a practice that might cause them harm. This challenge will only become more pronounced as information flows gain complexity with the continued development of AI.

It is critical for the privacy regulator to have the authority to proactively inspect the practices of organizations, in the absence of a complaint or open investigation. The OPC currently has this ability under the federal Privacy Act, and it is a common practice in numerous other fields including employment standards, food safety, and health, among others. There are numerous international examples of privacy authorities having such inspection powers, including but not limited to the United KingdomFootnote 15 and AustraliaFootnote 16.

IV. Order Making and Penalties

The significant risks posed to privacy and human rights by AI systems require a proportionally strong regulatory regime. To incentivize compliance with the law, PIPEDA must provide for meaningful enforcement with real consequences for organizations found to be non-compliant. To guarantee compliance and protect human rights, PIPEDA should empower the OPC to issue binding orders and financial penalties.

In other jurisdictions within Canada and abroad, privacy and data protection regulators have the authority to issue binding orders and impose financial penalties. Such legislation does not seek to punish offenders or to prevent them from innovating. Rather, it seeks to ensure greater compliance, an essential condition of trust and respect for rights.

Penalties must be proportional to the financial gains that organizations may generate by disregarding privacy, as marginal fines will be viewed as a cost of doing business. That said, the process should be fair and transparent, which may entail statutory criteria or parameters to be considered in the OPC’s decision-making for imposing financial penalties.

Ultimately, enforcement mechanisms should result in quick and effective remedies for individuals, and broad and ongoing compliance by organizations and institutions. Without effective enforcement, rights become hollow and trust dissipates.

Conclusion

AI presents great promise in increasing efficiency, improving productivity, and tackling some of today’s most prominent societal challenges. However, its rapid development stands to disrupt many different areas of society, and privacy is no exception. It challenges how Canada’s federal private-sector privacy law, enacted in 2000, addresses privacy in an age where computers are able to learn from personal information, run complex processes with little human involvement, and even predict future outcomes.

Canada has uniquely positioned itself as a leader in AI development. With the rapid development and adoption of this technology, Canadians should be provided with the enhanced privacy rights enjoyed by many of our global trading partners, in order to ensure the safe use and responsible development of AI. Unfortunately, our current federal laws do not provide a level of protection suited to today’s digital environment. Too often, we have seen rights violated in the pursuit of interests far removed from what could be considered beneficial to society.

A rights-based law that includes rights to explanation, contestation, and demonstrable accountability, while introducing consent exceptions to allow for more innovative and socially beneficial uses of information, promotes a realistic and effective approach to privacy in AI. Privacy is a fundamental right, and, as AI is increasingly showing us, is necessary for the exercise of other human rights. Automated decision-making and AI inferences based on personal information raise serious concerns about the fairness, accuracy, and bias of AI algorithms, and the discrimination they can enable. Economic and social development through technology cannot be viable or sustainable unless rights are protected.

We are facing, as Canadians have indicated, a deficit of trust in how organizations handle personal information.Footnote 17 Trust in Canadian businesses is essential to their economic recovery, long-term growth, and ultimate success in the Canadian marketplace. Canadians want to – and need to – enjoy the benefits of digital technologies, but they want to do it safely. We hope our recommendations contribute to the formation of an updated privacy regime that serves to both foster innovation and protect rights.
