Policy Proposals for PIPEDA Reform to Address Artificial Intelligence Report

Prepared by Professor Ignacio Cofone, Assistant Professor at McGill University's Faculty of Law

November 2020

Disclaimer: The opinions expressed in this document are those of the author(s) and do not necessarily reflect those of the Office of the Privacy Commissioner of Canada.


1. Introduction

On January 28, 2020, the Office of the Privacy Commissioner (OPC) issued an open consultation for reforming the Personal Information Protection and Electronic Documents Act (PIPEDA), to ensure the appropriate regulation of artificial intelligence (AI).Footnote 1 The open consultation explained that, as part of the OPC’s policy analysis on legislative reform, the office was seeking to consult with experts as to how privacy principles can and should apply to the responsible development of AI.

The consultation presented eleven proposals and invited views on them, receiving over eighty-five individual and institutional submissions from a wide array of stakeholders, including the non-profit sector, the private sector, academia, and others such as hospitals and bar associations. The feedback provided helpful considerations about implementation, support, and possible concerns for Parliament to weigh when amending the statute.

Enacted in 2001, PIPEDA is the cornerstone of Canada’s federal private-sector privacy regime, and the OPC has long indicated the need for its modernization.Footnote 2 Indeed, PIPEDA, the main statute that protects and regulates Canadians’ privacy in the private sector, is in dire need of a modernizing reform: much has changed since its enactment from a technological, social, and legal perspective.

From a technological perspective, as the OPC explains in the consultation proposal, PIPEDA must be adapted to address the novel risks posed by AI. The development and uses of AI can be highly socially valuable and Canada must have responsive regulations that allow the nation to extract its social benefits while protecting Canadians’ human rights.

From a social perspective, much has changed about how people use technology. Privacy is no longer an ancillary value.Footnote 3 We have a better understanding of its connection with other human rights, such as equality and non-discrimination.Footnote 4 Sharing personal information is much more embedded in our daily lives than it was in 2001, including through tools and services that people cannot opt out of if they wish to continue functioning normally in our society, such as email and cellphones. Aggregated and inferred data about us has become more valuable, and poses greater risks, than almost any type of personal information knowingly shared. Aggregation and inferences are particularly strengthened by AI.

From a legal perspective, this proposal for PIPEDA reform follows a wave of data protection and privacy reforms around the world. In the past two years alone, thirty states enacted new privacy and data protection statutes or amended existing ones,Footnote 5 and many more have done so since PIPEDA was enacted. Many of Canada’s trading partners have spent the last decade building robust privacy and data protection regimes.Footnote 6 Not least, a reform is necessary to maintain adequacy status with regard to the General Data Protection Regulation (GDPR) so that Canada can continue trading with the European Union.Footnote 7

This report, prepared by invitation of the OPC, builds on the OPC consultation proposals, the feedback contained in the numerous responses to the consultation, years of work by the OPC on PIPEDA reform, and research from privacy law scholars and subject-matter experts. It does so in dialogue with provincial and international legislative precedent, with attention to maintaining Canada’s adequacy status, and in view of PIPEDA’s constitutional constraints. It develops recommendations for achieving a modern PIPEDA that protects privacy and human rights and acknowledges businesses’ legitimate interests in view of AI. The recommendations are grouped into four modules: a rights-based approach, flexibility measures, automated decision-making, and accountability.

2. A Rights-based Approach

The recommended modifications in this section build on consultation proposals 2 (Adopt a rights-based approach in the law, whereby data protection principles are implemented as a means to protect a broader right to privacy—recognized as a fundamental human right and as foundational to the exercise of other human rights) and 11 (Empower the OPC to issue binding orders and financial penalties to organizations for non-compliance with the law), and underpins all subsequent recommendations.

a. A human rights approach

PIPEDA should take an explicit rights-based approach that grounds data protection on Canadians’ human right to privacy, which is essential to protecting other human rights in the context of commercial activity. This recommendation underpins all subsequent recommendations in this report.

This recommendation will concretize fundamental guarantees of Canadian law in PIPEDA. The Charter guarantees privacy through its protection against unreasonable search and seizure;Footnote 8 Canada has signed and ratified international instruments that recognize privacy as a fundamental human right;Footnote 9 and, as established by the Supreme Court, individuals’ privacy is inherently linked to other recognized rights.Footnote 10 Consequently, and as supported by OPC positions, PIPEDA has been recognized as having quasi-constitutional status.Footnote 11

Making a rights-based approach explicit in PIPEDA is particularly important in the AI context, where risks to fundamental rights (such as the right to be free from discrimination) are heightened.Footnote 12 This recommendation has been supported by the OPC,Footnote 13 received majority and cross-sectoral support in the OPC's consultation, and concretizes Canadian law’s pre-existing guarantees in PIPEDA,Footnote 14 which the Supreme Court has already recognized as having quasi-constitutional status. It is also consistent with modern international precedent, such as the E.U. General Data Protection Regulation (GDPR),Footnote 15 the California Consumer Privacy Act (CCPA),Footnote 16 the recently proposed Personal Data Protection Bill in India,Footnote 17 and statutes in Brazil and Peru.Footnote 18 Because of the recommended human rights approach, and to acknowledge legitimate business interests, the principles of necessity, proportionality, and minimal intrusiveness (core to Canadian rights-based balancing tests) run through the recommended modifications to PIPEDA.Footnote 19

b. Preamble and purpose clause

PIPEDA’s rights-based approach should be made explicit twice in the statute: in a preamble and in an amended purpose statement.

A preamble for PIPEDA would serve a communicative function that may promote trust in Canada’s privacy law. Preambles can help communicate core ideas and principles because they can provide a narrative and highlight the legislation’s aims more readily than statutory provisions can.Footnote 20 A preamble can express a human rights-based approach better than a purpose statement alone.Footnote 21 Clearly articulating the meaning, scope, and importance of privacy rights in PIPEDA is key for emphasizing that the right to privacy in s. 3 of PIPEDA is more than a simple interest,Footnote 22 and is therefore not on equal footing with commercial interests.Footnote 23 A preamble can also clarify how a rights-based approach is sensitive to PIPEDA’s constitutional constraints.

Furthermore, there are extensive introductory sections similar to a preamble in Quebec’s Bill 64Footnote 24 and Alberta’s Freedom of Information and Protection of Privacy Act.Footnote 25 Adding a preamble, moreover, is consistent with the Canadian trend to have preambles in high-profile public policy legislationFootnote 26 and finds precedent in foreign legislation (e.g. the GDPR has 173 recitals).Footnote 27

In terms of statutory language, the preamble proposed in the 2018-2019 OPC Annual Report to Parliament satisfies these purposes.Footnote 28 In particular, this preamble communicates that privacy and technological innovation are not a zero-sum game. In other words, technology that protects privacy and human rights is essential for preserving and promoting public trust and advancing businesses’ interests, as the OPC noted in that report.Footnote 29 The preamble further recognizes PIPEDA’s quasi-constitutional statusFootnote 30 and makes clear that, like other human rights, privacy is not absolute.Footnote 31

PIPEDA’s rights-based approach should also be reflected in an amended purpose statement. An amended purpose statement facilitates purposive statutory interpretation by clarifying legislative intent and helping weigh privacy and human rights appropriately in balancing exercises.Footnote 32 Purposive interpretation is particularly relevant for PIPEDA given that legislative intent plays an enhanced role in quasi-constitutional statutory interpretation.Footnote 33

The purpose statement serves a key role even with the addition of a preamble. First, a refined purpose statement would clarify how a rights-based approach works in practice, as requested in feedback to the consultation. Second, it provides direct interpretative authority because, unlike a preamble, it is binding.Footnote 34 Preambles are sometimes excluded from subsequent legislative compilations, which reinforces the importance of a purpose statement.Footnote 35 Lastly, having a purpose statement is consistent with PIPEDA’s current approach and with various international examples.Footnote 36

c. Clear rights and obligations over personal information

Most importantly, to implement this rights-based approach, PIPEDA provisions should be written as establishing rights and obligations, rather than recommendations, by (i) revising PIPEDA’s Part 1 provisions to change recommendations to obligations where appropriate (particularly for the new rights and obligations discussed herein) and (ii) repealing Schedule 1 and replacing the principles it outlines with rights (for example, amending principle 4 from ‘limiting collection’ to a ‘right to data minimization’). This approach is reflected in the recommendations contained herein; for example, a right to report and file a complaint with the OPC, a right to justified data collection and processing, a right to an explanation, a right to demonstrable accountability, a right to data traceability, and a right to a process designed for privacy and human rights. The statute should be updated accordingly, replacing suggestions with clear indications of individual rights.Footnote 37

Having PIPEDA establish clear rights and obligations is key for implementing a human rights-based approach, as supported by prior OPC positions.Footnote 38 It also provides greater clarity for compliance than PIPEDA’s current list of best practices and recommendations. As the OPC has stated, “Although praised for being principles-based and technology-neutral, PIPEDA has been criticized for being difficult to understand. Having rights and obligations contained in Schedule 1, instead of in the body of the law, and cast in non-legal language, mixing obligations with best practices (shall v. should) have posed challenges for individuals and organizations, as well as the courts, to understand.”Footnote 39

Related to this human rights-based approach, the definition of personal information in PIPEDA should be amended to clarify that personal information includes both collected personal information and inferences about individuals.Footnote 40 This definition is consistent with the OPC’s position on credit scores (that credit scores fall within the meaning of personal information and trigger accuracy and access requirements).Footnote 41 The OPC, similarly, held that inferences amount to personal information in a complaint made under the Privacy Act.Footnote 42 Clarifying that inferences are included in PIPEDA’s definition of personal information is consistent with the Supreme Court’s understanding of informational privacy, which includes inferences and assumptions drawn from information.Footnote 43 Finally, it accords with modern legislation such as the CCPA,Footnote 44 which is lauded for explicitly including inferences in its definition of personal information.Footnote 45

Protecting inferred information is key for protecting human rights because inferences can be as harmful to individuals and groups as collected information,Footnote 46 as recognized by the Supreme Court.Footnote 47 As the European Article 29 Data Protection Working Party (A29WP) has also stated, “[m]ore often than not, it is not the information collected in itself that is sensitive, but rather, the inferences that are drawn from it and the way in which those inferences are drawn, that could give cause for concern.”Footnote 48 This is particularly so when dealing with AI, which can produce inferences about groups and individuals that are difficult to anticipate and therefore present significant human rights risks if not covered under PIPEDA.

Related to this recommendation, PIPEDA should include an explicit right to correct inaccurate personal information, including inferences (PIPEDA s. 4.9). This addition would implement the right to accuracy, which already exists in PIPEDA (principles 4.6, 4.9.5), in the AI context, reflecting the increased importance of inferred information. PIPEDA should establish that the accuracy obligation is ongoing and the responsibility to maintain accurate records cannot be shifted to individuals, as determined by case law.Footnote 49

d. Public enforcement

To guarantee compliance and protect human rights, PIPEDA should empower the OPC to issue binding orders and financial penalties. The OPC has called for this authority,Footnote 50 which received wide support in responses to the consultation (74%). These enforcement powers would work in concert with the principle of demonstrable accountability to provide PIPEDA with a deterrent effect and to provide incentives for compliance.Footnote 51 They would, for example, help avoid situations where organizations refuse to comply with Canadian law, as Facebook recently did.Footnote 52

Order-making powers backed by penalties find abundant international precedent, such as the GDPR,Footnote 53 the UK Data Protection Act,Footnote 54 the CCPA,Footnote 55 and the Singapore Personal Data Protection Act.Footnote 56 They are also consistent with the order-making powers of provincial authorities in Alberta,Footnote 57 British Columbia,Footnote 58 and Quebec under Bill 64.Footnote 59 The Information Commissioner, likewise, has the power to issue enforceable orders under the Access to Information Act as per Bill C-58.Footnote 60 Moreover, other Canadian regulatory agencies already have such powers, such as the Competition Bureau of Canada,Footnote 61 the Canadian Radio-television and Telecommunications Commission,Footnote 62 the Canadian Food Inspection Agency,Footnote 63 the Canada Energy Regulator,Footnote 64 and the Canadian Nuclear Safety Commission.Footnote 65 These powers are essential for managing the inherent risks of AI, as identified in other proposals for law reform.Footnote 66 They are also instrumental to maintaining adequacy status.Footnote 67 At a more abstract level, order-making powers and the power of proactive inspections, which is detailed below, are necessarily complementary, as these inspections are needed to detect instances of non-compliance.

The OPC’s binding orders and financial penalties would have to respect procedural fairness. Crucially, they should be subject to judicial review by the Federal Court of Appeal. But procedural guarantees should go further. PIPEDA should require fines to be (1) effective, (2) proportionate, and (3) dissuasive in each individual case, similar to the procedural safeguards found in the GDPR.Footnote 68 A statutory catalogue of criteria to consider in rendering a decision (such as intentional infringement, failure to take measures to mitigate the damage that occurred, or lack of collaboration with authorities) may also be useful for these purposes.Footnote 69 Lastly, the OPC could create internal review mechanisms to avoid bias, similar to the mechanism introduced by Canada’s Anti-Spam legislation.Footnote 70

PIPEDA should incorporate penalties of up to $10,000,000 or 2% of an organization’s worldwide turnover, whichever is greater, as does Bill 64.Footnote 71 They would provide the OPC with adequate tools, similar to those provided to data protection authorities by the GDPR,Footnote 72 but appropriate for the Canadian context. They take into consideration small and medium enterprises’ (SME) needs by providing fining discretion, no minimums, and maximums according to turnover. All fines should be administrative monetary penalties, rather than criminal sanctions. Administrative sanctions avoid adding criminal procedure to the fining process and provide the OPC with needed discretion in enforcement. More generally, they avoid the limitations of other federal statutes, which, unlike PIPEDA, have required a criminal component to stay within federal legislative power, but see their sanctions rarely enforced.
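
To illustrate how the recommended cap would operate, the following minimal Python sketch (with illustrative turnover figures) computes the ceiling as the greater of $10,000,000 or 2% of worldwide turnover; the actual penalty would remain at the OPC’s discretion below that ceiling, with no statutory minimum.

    def penalty_ceiling(worldwide_turnover):
        # Greater of $10,000,000 or 2% of worldwide turnover; no statutory minimum,
        # and the actual penalty stays at the OPC's discretion below this ceiling.
        return max(10_000_000, 0.02 * worldwide_turnover)

    print(penalty_ceiling(200_000_000))    # SME-scale turnover: ceiling is 10,000,000
    print(penalty_ceiling(5_000_000_000))  # large organization: ceiling is 100,000,000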

These powers would require shifting the OPC model from ombudsperson to regulator. This shift received wide support in responses to the consultation (74%) and, as argued by the Privacy Commissioner, is part of a modernized regulatory approach that is important for restoring citizen trust.Footnote 73 It would involve two primary changes. First, it would involve providing the OPC with discretion to decide which complaints to pursue, not pursue, or abandon, as a complement to its ability to start investigations absent a complaint, as suggested in the OPC report to Parliament.Footnote 74 Second, it would involve empowering the OPC to issue binding guidelines or regulations so that it can provide a clear and concrete understanding of what the law requires and provide further certainty to individuals and organizations as to established rights and obligations, as advocated in the 2018-2019 Report to Parliament.Footnote 75 It would promote legal predictability and complement the law’s technological neutrality by providing the OPC the opportunity to concretize “a principles-based law written at a high level of generality”, as suggested by the OPC.Footnote 76 This second power would be consistent with the suggested fining powers and with existing OPC investigatory powers. Both powers would be consistent with those of other Canadian regulatory agencies.Footnote 77 Regulating bodies should have “powers commensurate with the increasing risks that new disruptive technologies pose for privacy,” as the OPC has held.Footnote 78 These risks are much higher than they were in 2001, particularly with uses of AI.

e. Private enforcement

The recommended human rights-based approach, together with recognition that the OPC’s limited resources constrain its powers, also leads to the need to incorporate private rights of action for breaches of PIPEDA’s obligations.Footnote 79 This can be done by expanding PIPEDA s. 16(c). Private rights of action flow from the rights-based approach.Footnote 80 Enforcement would become more effective because the OPC’s enforcement powers and resources alone are insufficient for achieving optimal compliance.Footnote 81 Such rights would also make enforcement more efficient, as they would multiply enforcement resources and thereby lighten the burden placed on government budgets; as regulation scholars often observe, budgetary limitations are a recurring constraint on public enforcement capacity.Footnote 82

Concerns that private rights of action would increase business costs can be tempered by curtailing the right. Costs can be moderated with measures such as giving businesses thirty days to redress an alleged violation before the right is triggered or requiring plaintiffs to pay businesses’ legal fees for frivolous claims, both of which are found in international precedent.Footnote 83 Costs can also be moderated by providing a statutory range of damages for each violation. One possible range is $100 to $10,000 per statutory breach,Footnote 84 the upper end of which is the maximum under Ontario’s Personal Health Information Protection Act (PHIPA).Footnote 85 While the cap on damages will often be too low to justify claims being brought forward, such damages may prove useful in cases of widespread violations, particularly if courts decide to allow class certification.Footnote 86

Private rights of action go hand in hand with giving the OPC more flexibility to decide what claims to pursue. They are a complement to public enforcement mechanisms for a meaningful rights-based approach as recommended in this report. Private rights of action were supported in the 2016-2017 OPC report to Parliament as a mechanism for individuals to have alternatives to the current complaint model and as a reasonable evolution of Canadian private-sector privacy protection.Footnote 87 They were also supported in the 2017-2018 report to Parliament because the OPC’s limited resources constrain its ability to pursue all claims,Footnote 88 and in the 2018-2019 report to Parliament as a way of ensuring that individuals are not left without a remedy.Footnote 89 Given the recommended change in the OPC’s role from ombudsperson to regulator and its concomitant enforcement discretion, individuals should not be required to have an OPC investigation before they can go to court, although, because discovery will be expensive relative to the suggested statutory maximum in damages, most individuals will still request an OPC investigation before doing so.

Lastly, these private rights of action would be consistent with related Canadian legislation that combines public enforcement by a regulator with individuals’ right to sue for violations of the act. Examples include the Telecommunications Act,Footnote 90 the Competition Act,Footnote 91 and a number of provincial acts.Footnote 92 They likewise fit provincial statutory privacy torts and Canadian common law.Footnote 93 They would also follow modern legislation, such as Quebec’s Bill 64,Footnote 94 the CCPA for cybersecurity breaches,Footnote 95 and Singapore’s Personal Data Protection Act.Footnote 96 The GDPR also includes a right of private action,Footnote 97 but its provisions are narrower than those contained in more modern legislation.Footnote 98 PIPEDA should also make explicit that it does not pre-empt provincial tort law, particularly due to federalism considerations, as established by case law.Footnote 99

3. Flexibility Measures

The flexibility measures recommended in this section involve consent exceptions, de-identified information, purpose specification, and appropriate safeguards. The recommended modifications build on consultation proposals 6 (Make compliance with purpose specification and data minimization principles in the AI context both realistic and effective), 7 (Include in the law alternative grounds for processing and solutions to protect privacy when obtaining meaningful consent is not practicable), and 8 (Establish rules that allow for flexibility in using information that has been rendered non-identifiable, while ensuring there are enhanced measures to protect against re-identification).

a. Consent and public good

While consent is and should remain a key part of protecting privacy, its capacity to address privacy risks has been repeatedly questioned by privacy law experts.Footnote 100 It is neither realistic nor reasonable to expect individuals to make informed choices in the modern information economy. Assessing the costs and benefits of sharing their personal information is close to an impossible task, and the power asymmetry between organizations and individuals is enormous.Footnote 101 Privacy harm is difficult to assess because inferred personal data, which is enhanced by AI processing, is unpredictable.Footnote 102 As a result, most people rarely read privacy noticesFootnote 103 or change default privacy settings.Footnote 104 Moreover, even if individuals could make optimal privacy decisions, relying on consent does not account for the group and social impacts of individual privacy decisions.Footnote 105 The consent paradigm is thus widely recognized as insufficient to confront new privacy issues such as those posed by AI. Consent can be used to legitimize activities that are unreasonable or contrary to Canadian rights and values. Privacy protections cannot hinge on consent alone.

PIPEDA should clarify that, when not subject to an exception, consent must be meaningful according to the applicable OPC guidelines.Footnote 106 In connection with the recommendation to clarify that inferred data is personal data, PIPEDA should incorporate a right to object to the collection, use, or disclosure of personal information, subject to reasonable notice. Currently, PIPEDA contains a right to withdraw consent at any time, subject to legal or contractual restrictions and reasonable notice.Footnote 107 The recommended addition would cover the gaps in the current right to withdraw consent, particularly for inferred information. All exceptions and flexibility measures to the consent requirement should also apply to this right to object. This can also be achieved by extending the right to withdraw consent to inferred information.

To facilitate organizations’ development of AI, PIPEDA should incorporate new exceptions to consent that add flexibility when obtaining meaningful consent is not practicable. This recommendation received support in the responses to the consultation and accords with the rights-based framework recommended.

PIPEDA should incorporate an exception to consent requirements when collecting or processing personal data is needed for the public good, with de-identification as a condition. This was suggested in the OPC consent report.Footnote 108 Requiring de-identification for this exception is consistent with the GDPR's approach to processing for research where de-identification is a condition.Footnote 109

The use of this exception should be subject to a privacy impact assessment and balancing in the form of a necessity and proportionality test, as detailed below. This recommendation follows a similar approach to the GDPR’s legitimate interest basis, which is subject to balancing,Footnote 110 while maintaining consent as the main avenue for collection and processing in PIPEDA.Footnote 111 Additionally, the use of the public good exception must be subject to spot checks, in accordance with the recommendation to strengthen OPC enforcement powers, to assess whether organizations are applying the exception correctly. This condition relates to demonstrable accountability, as it will be critical for guaranteeing that balancing tests are not abused.

PIPEDA should add a definition of public good under this exception. The reason is that “amorphous ‘public good’ arguments without meaningful transparency, consent provisions, and mechanisms of redress are not likely to withstand potential Charter challenges or international legal scrutiny.”Footnote 112 This risk is particularly acute for PIPEDA because most definitions of public good or public interest in comparable legislation (such as those in the GDPR and the UK Data Protection Act) refer to activities that, in Canada, fall under the Privacy Act and not PIPEDA, so PIPEDA would be incorporating an uncommon consent exception. The exception is nevertheless desirable because the private sector, too, can develop initiatives that work towards the public good, such as partnerships with the public sector on COVID-19 contact tracing apps. PIPEDA should implement a definition similar to Mexico’s,Footnote 113 which covers well-defined activities that fall under PIPEDA. Possible language could be: ‘information is collected or processed for the public good when the activity is beneficial to society and not simply of individual or commercial interest or profit.’

b. Research exceptions

PIPEDA should add flexibility to the collection, use, and dissemination of de-identified information for research and statistics while protecting privacy and human rights. To do so, it should exempt sufficiently de-identified data (subject to the safeguards detailed below) for the purposes of research or the production of statistics from: (i) purpose specification; (ii) data minimization; and (iii) consent requirements for collection, use, or dissemination.

The majority of respondents to the consultation favoured such added flexibility. Views differed by sector, however, with strong support from the private sector and some opposition from civil society, the non-profit sector, and academia. Ensuring that de-identified data remains within the scope of PIPEDA while providing flexibility accounts for the arguments in favour of the proposal and the concerns raised against it.

The exceptions refer to statistical purposes so as to clearly encompass the training of AI, independent of whether the collection, use, or dissemination of data is for a public or a commercial purpose, while remaining technologically neutral. The exceptions thus accommodate feedback to the consultation proposals, as they allow for the training of AI, and account for minority feedback in favour of making requirements less onerous at the training stage of AI. As a result, flexible requirements for processing de-identified data for statistical purposes will serve to encourage AI innovation in Canada. This recommendation provides greater flexibility for low-risk situations, as de-identified information can encourage low-risk activity when coupled with safeguards such as those recommended in this module.

The first exception (purpose specification) provides important flexibility because organizations often find a useful purpose only after the data has been collected and processed, as expressed in consultation feedback. This issue is particularly salient with the greater processing capacity that came with AI. For training AI, data that are collected for one purpose often turn out to be useful or meaningful when paired with other data.Footnote 114 Similar provisions exist in comparable legislation, including Japan’s Act on the Protection of Personal Information,Footnote 115 health privacy statutes,Footnote 116 and Bill 64.Footnote 117

The second exception (data minimization) reacts to a concern in consultation feedback suggesting that, if organizations cannot collect large amounts of data, AI will be trained with smaller data samples. The concern is that training AI with smaller samples to comply with data minimization may stifle AI development and innovation and, eventually, reduce AI’s accuracy. The recommended data minimization exception addresses this concern.

Data minimization is realistic in the AI context, for example by reducing identifiability.Footnote 118 As the UK ICO has pointed out in its auditing framework, there are “[a] number of techniques which enable both data minimisation and effective AI development and deployment”.Footnote 119 Thus, data minimization does not warrant modification to allow for AI, particularly when complemented with the flexibility measures for low-risk situations recommended in this report to accelerate AI development in Canada. The idea that data minimization can co-exist with AI received support in the responses to the consultation, and is consistent with international legislative precedent and other data protection authorities’ positions, including those of Australia, Ireland, Norway, South Korea, and the UK.Footnote 120 Data minimization, moreover, prioritizes proportionality, which underpins rights-based balancing, as noted by the Norwegian Data Protection Authority.Footnote 121 The right protects individuals’ rights, as it mitigates the risk of harm by preventing organizations from collecting and processing more than necessary. Data minimization can also serve a public communication role by conveying to individuals how their data must be collected and processed while respecting their human rights.
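
As a minimal, hypothetical sketch of what such techniques can look like in practice (the field names and bands are illustrative assumptions, not drawn from the ICO framework), the following Python example keeps only the fields needed for a stated purpose and generalizes quasi-identifiers before data reaches an AI training pipeline:

    RAW_RECORD = {
        "name": "A. Person",             # direct identifier: not needed for training
        "email": "a.person@example.com", # direct identifier: not needed for training
        "postal_code": "H3A 1X1",
        "birth_year": 1987,
        "purchase_category": "books",    # the only field the model directly needs
    }

    FIELDS_NEEDED = {"postal_code", "birth_year", "purchase_category"}

    def minimize(record, current_year=2020):
        # Keep only purpose-relevant fields.
        kept = {k: v for k, v in record.items() if k in FIELDS_NEEDED}
        # Generalize quasi-identifiers: forward sortation area instead of the
        # full postal code, and a ten-year age band instead of the birth year.
        kept["postal_area"] = kept.pop("postal_code")[:3]
        age = current_year - kept.pop("birth_year")
        kept["age_band"] = f"{(age // 10) * 10}-{(age // 10) * 10 + 9}"
        return kept

    print(minimize(RAW_RECORD))
    # {'purchase_category': 'books', 'postal_area': 'H3A', 'age_band': '30-39'}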

The formulation of the third exception (consent) applies to the collection, use, and dissemination of de-identified information, while avoiding the requirements of the existing research exception for personally identifiable information in PIPEDA s. 7(2)(c). It should thus work in conjunction with PIPEDA s. 7 exceptions for identifiable information. This recommended exception, together with the flexibility added to purpose specification, indirectly addresses consultation feedback that PIPEDA should incorporate greater flexibility for further processing of information not initially identified in the original purpose. It is also consistent with international legislative precedent, particularly the GDPR.Footnote 122 This third exception should encompass sharing that is internal to the organization and external sharing subject to data sharing agreements that prevent re-identification.

c. De-identified information

PIPEDA should replace the term pseudonymization with de-identification. The latter has advantages over alternative terms because it makes clear that re-identification is possible.Footnote 123 The term de-identification also incorporates feedback to the consultation and fits with modern international legislation.Footnote 124

PIPEDA should prohibit re-identifying de-identified data when data is de-identified pursuant to one of PIPEDA’s exceptions, and should adopt OPC-issued administrative monetary penalties for breach of such prohibition. This recommendation is motivated by recognizing that de-identification, even properly implemented, does not negate all risk. This prohibition accords with equivalent prohibitions in Bill 64,Footnote 125 the CCPA,Footnote 126 and the approach in the UK,Footnote 127 Australia,Footnote 128 New Zealand,Footnote 129 Japan,Footnote 130 and South Korea,Footnote 131 as well as with feedback to the consultation. PIPEDA should clarify that fines for re-identification must be proportionate to the risks produced by the re-identification. PIPEDA should specify the range of penalties to provide procedural safeguards to organizations.Footnote 132 It should follow the approach of the UK Data Protection Act by providing some exceptions, such as for counter-security measures,Footnote 133 responding to consultation feedback on such measures.

PIPEDA should further require organizations to implement technical safeguards designed to prevent third parties from re-identifying de-identified data. This requirement would help ensure that organizations benefit from this exception in a way that safeguards Canadians’ privacy and human rights. It would follow the approach taken by the CCPA,Footnote 134 South Korea’s Personal Information Protection Act,Footnote 135 and by several data protection authorities where safeguards are not specified in the statute itself.Footnote 136

PIPEDA should clarify that, because any de-identification is imperfect, de-identified information is still identifiable information.Footnote 137 As per case law and prior OPC positions, personal information is information “about” an identifiable individual. Such “about” means that the information relates to or concerns the subject.Footnote 138 De-identified data is always “about” an identifiable individual because de-identified information can be retraced by combining different pieces of de-identified data with public information—data is never fully de-identified and re-identification is always possible.Footnote 139 That means that de-identified information falls under PIPEDA’s scope and is exempted from some, but not all, the rights and obligations that PIPEDA establishes.Footnote 140 Recognizing de-identified information as identifiable personal information provides human rights protection while allowing for flexibility.Footnote 141

De-identified data, moreover, still reveals information and trends about groups of people. There is, thus, potential for data that remains de-identified to harm members of those groups by, for example, being used in a discriminatory manner, even unintentionally, when processed by AI.Footnote 142 Harms in group privacy do not rely on re-identifying individuals: decisions can affect de-identified individuals on the basis of group attributes (such as gender, sexual orientation, political preference).Footnote 143 Breaches in group privacy can amount not only to discrimination, but also infringements of other Charter rights such as freedom of opinion and expression, and freedom of association.Footnote 144 Regulating de-identified data is crucial for preventing AI bias.Footnote 145 Including de-identified data in PIPEDA is thus consistent with the recommendation above regarding the importance of recognizing group rights in the preamble and purpose statement. Such an inclusion is also suggested in feedback to the consultation and supported by ISEDFootnote 146 and the Standing Committee on Access to Information, Privacy and Ethics.Footnote 147

De-identification should be defined in PIPEDA. PHIPA presents a desirable definition that PIPEDA could incorporate: ‘to remove, in accordance with such requirements as may be prescribed, any information that identifies the individual or for which it is reasonably foreseeable in the circumstances that it could be utilized, either alone or with other information, to identify the individual’.Footnote 148 This definition is consistent with the Gordon v Canada standard,Footnote 149 as it is formulated for a statute (in the style of the GDPR and CCPAFootnote 150) but provides a clearer standard than Gordon. It has the added benefit of making explicit that an OPC regulation will provide further details.

This definition should be complemented with an OPC guideline on meaningful means of de-identification. This guideline would provide guidance to organizations while keeping PIPEDA technologically neutral. The suggestion is responsive to feedback from the Toronto and Montreal roundtable consultation, which suggested that primary legislation should not be too prescriptive regarding methods of de-identification.
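
As a minimal sketch of one meaningful means of de-identification that such a guideline might describe (the field names and the threshold k are illustrative assumptions, not a prescribed method), the following Python example suppresses direct identifiers and then releases only records whose quasi-identifier combination is shared by at least k individuals, a k-anonymity-style test:

    from collections import Counter

    DIRECT_IDENTIFIERS = {"name", "email", "health_card_no"}
    QUASI_IDENTIFIERS = ("postal_area", "age_band", "sex")
    K = 5  # minimum group size; the appropriate value is a policy and risk decision

    def de_identify(records, k=K):
        # Step 1: suppress fields that identify the individual on their own.
        stripped = [{f: v for f, v in r.items() if f not in DIRECT_IDENTIFIERS}
                    for r in records]
        # Step 2: count how many records share each quasi-identifier combination.
        counts = Counter(tuple(r.get(q) for q in QUASI_IDENTIFIERS) for r in stripped)
        # Step 3: release only records whose combination is shared by at least k
        # individuals, suppressing those that could more readily be re-identified
        # by combining the remaining fields with other information.
        return [r for r in stripped
                if counts[tuple(r.get(q) for q in QUASI_IDENTIFIERS)] >= k]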

d. Safeguards to recommended exceptions

PIPEDA should mandate safeguards when the new exceptions for public good and research are invoked to ensure that human rights are protected when providing flexibility. De-identified information needs safeguards because: (i) it can be re-identified; (ii) aggregated de-identified data reveals information about groups that creates risks for members of those groups; and (iii) there are social risks in processing personal information beyond individual risks, connected to the idea of privacy as a human right.Footnote 151 Think, for example, of the risks to democracy revealed by the Cambridge Analytica scandal.

The first recommended safeguard is that PIPEDA should include a balancing test inspired by and compatible with the Oakes analysis under s. 1 of the Charter,Footnote 152 and require the application of such a balancing test when organizations invoke the recommended exceptions. Balancing will mitigate the risk of over-collection of personal information based on new exceptions, such as overly broad interpretations of public good. Balancing must not presume a privacy violation. It must rather ensure the necessity and proportionality of the use of exceptions so that human rights are protected while providing flexibility.

This incorporation of balancing follows from the human rights approach recommended and provides clarity on how to implement it while recognizing legitimate business interests, consistent with OPC positions and stakeholder feedback.Footnote 153 To implement balancing, and to stay consistent with the right to accountability below, clarity as to how an exception was invoked or applied will be crucial. For example, the “public good” invoked must be clearly articulated (so as to permit the balancing analysis) and pressing and substantial, and the means deployed to pursue it must be rationally connected to it, minimally impairing of privacy and human rights, and proportionate in their effects.

The second safeguard is that PIPEDA should mandate privacy impact assessments when consent or de-identified information exceptions are used. These will allow both the OPC and organizations to identify real risks of significant harm in collection, processing, and dissemination when usual protections such as consent, data minimization, and purpose specification are absent. The requirement will be useful to the OPC for verifying balancing without creating significant enforcement costs. It will similarly allow organizations to perform balancing obligations with a clearer picture of risks and benefits, for example for determining proportionality. Privacy impact assessments become more relevant when designing and deploying AI because privacy risks become more unpredictable. This fact was recognized, for example, by the UK ICO when guiding data controllers to consider the trade-offs they make with AI, particularly in view of fairness and proxies for protected categories.Footnote 154

e. Purpose specification and compatibility

PIPEDA should clarify that purpose specification, when applicable, constitutes a right to consent per purpose that prohibits bundling consent. This has been recognized in OPC reports.Footnote 155 It finds international precedent, such as in the European Union as per GDPR provisionsFootnote 156 and comments from the A29WP guidelines on consent,Footnote 157 Australia,Footnote 158 Azerbaijan under some interpretations,Footnote 159 Hong Kong,Footnote 160 and, nationally, Bill 64.Footnote 161

As an additional flexibility measure in view of AI, PIPEDA should add flexibility to purpose specification. In particular, PIPEDA should allow for greater flexibility for further processing of information not initially identified in the original purpose when the new purpose is compatible with the original purpose. In other words, if a new purpose emerges that is compatible with the purpose for which the personal information was initially collected, the requirement of re-obtaining consent should be waived. Compatibility should include a direct and relevant connection.

Implementing this recommendation will require the OPC to issue guidance regarding what constitutes a compatible purpose and a direct and relevant connection, helping organizations assess compatibility. Guidance could be formulated in line with guidance provided for a similar provision by the A29WP.Footnote 162 It is crucial that this exception not be interpreted so broadly that it allows organizations to avoid consent for any kind of use.

This recommendation implements consultation proposals in a manner that is consistent with the Privacy Act, which establishes that consent need not be re-obtained where it was obtained “for a use consistent with that [initial] purpose.”Footnote 163 It also finds international precedent. Under the GDPR, for example, data can be further processed in a way that is “not incompatible” with initial purpose.Footnote 164 The CCPA refers to purposes that are “compatible within the context in which the personal information was collected.”Footnote 165 Bill 64, in turn, establishes that data can be used for purposes “consistent with the purposes for which it was collected” (secondary purpose exception).Footnote 166

This idea relaxes the purpose specification requirement to acknowledge business interests. It received some support in the responses to the consultation from private industry, civil society, and academia, even though it was not one of the OPC’s initial questions in the consultation. In particular, purpose compatibility would be tremendously beneficial for SMEs, which may not have the capability to continuously collect new data for new purposes. This recommendation connects to accountability because verifying what organizations claim to be compatible purposes requires spot-checking or oversight with data traceability; for example, an organization’s documented analysis verifying compliance may be required. It should thus be clarified that the specified purpose is included in record-keeping obligations. These purpose specification requirements, set with sensitivity to business needs, account for minority consultation feedback asking to make purpose specification more flexible during the training stages of AI, while still maintaining technological neutrality.

Purpose specification, like data minimization, is realistic in the AI context, especially when complemented with the recommended flexibility measures for research, public good, and compatible purposes. Therefore, purpose specification should be maintained. This idea received consultation respondent support. Enhancing predictability and reducing uncertainty, which purpose specification does, are important for consent to be meaningful. Maintaining purpose specification thus ensures meaningful consent when individuals have a right to consent, as is the case with Bill 64 and GDPR. Purpose specification, moreover, enhances accountability by allowing individuals to gain clarity on organizations’ data practices.

4. Provisions Specific to Automated Decision-making

The recommended modifications in this section build on consultation proposals 1 (Incorporate a definition of AI within the law that would serve to clarify which legal rules would apply only to it, while other rules would apply to all processing, including AI), 3 (Create a right in the law to object to automated decision-making and not to be subject to decisions based solely on automated processing, subject to certain exceptions), and 4 (Provide individuals with a right to explanation and increased transparency when they interact with, or are subject to, automated processing).

a. AI and technological neutrality

PIPEDA should not define AI. Defining AI would compromise PIPEDA’s technological neutrality, which the OPC explained to be a desirable principle.Footnote 167 This PIPEDA reform would be the first major one in 20 years and, as such, it should contain provisions that are likely to last for decades. The majority of feedback from the consultation was opposed to defining AI for this reason. Interpreting PIPEDA should not be contingent on technologies’ specificities, which will continue to evolve. Instead, the statute should focus on the risks to human rights posed by technology.Footnote 168 Similarly, AI is not defined in the GDPR, the UK Data Protection Act, or Bill 64, all of which nevertheless introduce useful provisions to address the new risks posed by AI.

Maintaining technological neutrality would not preclude using language such as ‘automated decision-making’ to refer to the type of activity that poses risks (rather than referring to specific technologies such as AI). This language is employed, for example, in the GDPR and Bill 64.Footnote 169

Automated decision-making introduces unique risks that warrant distinct treatment from other types of data processing in a human rights statute, such as PIPEDA.Footnote 170 “Under automated decision-making, discriminatory results can occur even when decision-makers are not motivated to discriminate.”Footnote 171 Automated decision-making processes carry the biases found in the data they are fed (trained with) into the decisions they yield, reproducing and amplifying the inevitably biased scenarios they were trained on.Footnote 172 Protected categories that decision-makers are prohibited from considering, such as gender or race, are often statistically associated with seemingly inoffensive characteristics, such as height or postal code. Algorithmic decision-making can easily lead to indirect discrimination on the basis of gender or race by relying on these characteristics as proxies for the prohibited traits.Footnote 173 Legal rules aimed at AI systems must also recognize the potential risks to human rights, particularly privacy and non-discrimination, in the way that those systems are designed. For example, there is “growing evidence that on-demand platforms’ algorithms aggregate users’ conscious or unconscious biases, leading to discrimination against groups including female and ethnic minority workers.”Footnote 174 The two recommended rights that follow respond to these concerns.
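
To make the proxy mechanism concrete, the following minimal Python sketch (entirely synthetic data; the postal-code proxy and the rates are illustrative assumptions) shows how a decision rule that never sees a protected attribute can still produce starkly different approval rates across groups by relying on a correlated proxy learned from biased historical labels:

    import random

    random.seed(0)

    def make_person():
        protected = random.random() < 0.5  # e.g., membership in a protected group
        # Proxy (e.g., postal code region) correlated with the protected attribute.
        proxy = 1 if ((protected and random.random() < 0.8)
                      or (not protected and random.random() < 0.2)) else 0
        # Historical label reflects past decisions biased against the protected group.
        label = 1 if random.random() < (0.3 if protected else 0.7) else 0
        return protected, proxy, label

    train = [make_person() for _ in range(10_000)]

    def rate(rows):
        return sum(label for _, _, label in rows) / len(rows)

    # "Training": approve a proxy value when its historical approval rate is high.
    approve_if_proxy = {v: rate([r for r in train if r[1] == v]) >= 0.5 for v in (0, 1)}

    # Deployment: decisions use only the proxy, never the protected attribute.
    test = [make_person() for _ in range(10_000)]
    for group in (False, True):
        rows = [r for r in test if r[0] == group]
        selected = sum(approve_if_proxy[proxy] for _, proxy, _ in rows)
        print(f"protected={group}: approval rate {selected / len(rows):.2f}")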

b. The right to an explanation

PIPEDA should incorporate a right to obtain a meaningful explanation when an individual is subject to automated decision-making that uses their personal information. This recommendation promotes accountability rights by helping individuals to understand decisions about them. It protects other rights in PIPEDA (including rights recommended in this report), such as the right to contest, to informed consent, and to find and correct erroneous personal information including inferences.Footnote 175 The right to explanation also protects human rights, such as the right not to be subject to discriminatory decisions.Footnote 176

Moreover, a right to explanation stems from applying PIPEDA’s existing openness and transparency principle to the AI context. It also stems from the principle of individual access, as it ensures that accountability is available at an individual level. While individuals should have a right to transparency independent of the type of information processing, automated decision-making warrants additional provisions. As reflected in the consultation feedback, the right to explanation is connected to the principles of privacy, accountability, fairness, non-discrimination, safety, security, and transparency. The effort to guarantee these principles supports the need for a right to explanation.

There are good reasons for limiting the scope of the right to explanation to automated decision-making. This formulation maintains technological neutrality and avoids the difficulty of defining AI in the statute. This scope covers situations in which human rights are at risk while avoiding the increased business costs entailed by wider formulations (such as a right to explanation for all profiling). This scope is particularly favourable for SMEs, since fewer resources would be required to comply. Additionally, it avoids interfering with other areas of law, such as employment law, which set duties of explanation for decisions. It thus ensures that PIPEDA stays within its constitutional scope.

PIPEDA should provide a brief definition of ‘automated decision-making’ that makes clear that the term includes any decision that involves automated means; it is not limited to decisions made exclusively by automated means. This recommended formulation reflects the mixed consultation feedback, which asked to maintain technological neutrality, but also to define terms clearly, and to recognize that AI introduces unique risks. Furthermore, this definition would present an improvement over the GDPR and Bill 64 formulations.Footnote 177 It would do so because it affords substantial protective value by not limiting the scope of the right to processing done solely by automated means.Footnote 178 That is, it avoids the heavily criticized human in the loop exception, noted as a concern in consultation feedback. Since there is no such thing as a purely automated decision, it is important to provide a functional definition that is in accordance with the current state of technology and can change over time.

The term ‘automated decision-making’ should be complemented by OPC guidelines that clarify how the term applies to the current technological context. These guidelines can add precision to rights and obligations that are technology-specific, which will ensure that PIPEDA remains relevant in the long term, because what is included in an appropriate explanation will often depend on the type of system.Footnote 179 Appropriate language for an OPC guideline may be, for example: ‘automated decision-making is a decision that involves the use of a machine learning algorithm to suggest an outcome or narrow down the options for outcomes, even with human involvement.’ This approach, which combines the statutory term with a guideline that clarifies it, approaches the issue similarly to the GDPR and Bill 64 (and the TBS directive on automated decision-makingFootnote 180), and follows suggestions in consultation feedback.

PIPEDA should also define ‘meaningful explanation’. The OPC should issue a guideline or regulation that clarifies the breadth and depth of explanation requirements in a way that specifies relevant explanations for different technologies. This combination maintains PIPEDA’s technological neutrality (as identifying the appropriate explanation will largely depend on the technology) while creating maximal certainty for organizations and individuals. It can address, for example, what happens when organizations cannot provide common types of explanations, thus avoiding separate legislative definitions of meaningful explanation for black box algorithms. For such a definition, the following language is recommended: ‘an explanation that allows individuals to understand the nature and elements of the decision to which they are being subject or the rules that define the processing and the decision’s principal characteristics.’ This definition accounts for both sides of the academic debate on whether explanations should pertain to the decision or to the technology.Footnote 181 It follows the example of France’s Digital Republic Act and presents an improvement over the GDPR.Footnote 182

In response to feedback from the consultation, the right to explanation should take into account that some types of explanations are unfeasible for some automated systems. That said, there is no system for which no explanation at all is possible. For example, even with the most inscrutable deep learning model, an organization can present the input data, training data, model, and, if not protected by IP, the code.Footnote 183 An organization can likewise provide a minimal explanation of the nature of the decision and the information processed (independent of the technology). It is worth clarifying that only an extreme and inflexible right to explanation (one that prevents developing less explainable technologies) would reduce accuracy. That is not the version recommended here. However, the complexity of a system or a decision-making process should not be an excuse for failing to provide any form of sufficient explanation. Certain components of an explanation should always be required from organizations, regardless of the type of system (e.g. legible and understandable information).Footnote 184 Explanations should provide sufficient detail to enable individuals to contest the decision.

Consultation feedback raised similar concerns about the impact of explanation obligations on trade secrets. Providing some explanation is fully compatible with protecting trade secrets. However, trade secrets should not deprive individuals of access to essential information about a decision. This recognition follows from the Supreme Court decision in Ewert v Canada.Footnote 185 Determining explanations’ compatibility with protecting trade secrets follows the same considerations as the cases of technical impossibility discussed earlier. In particular, decision-based (pedagogical) explanations, as opposed to model-based explanations (made by decomposition), can avoid disclosing trade secrets.Footnote 186 Measures can be taken to protect intellectual property even while providing explanations over protected information, such as protective orders or filing under seal.Footnote 187 However, in line with consultation feedback and trade secrecy concerns, PIPEDA should not require public filings of algorithms or algorithmic impact assessments, as such filings can present issues for organizations’ legitimate interests due to their costs while not being needed to protect individuals’ rights.
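
As an illustration of how a decision-based (pedagogical) explanation can coexist with trade secrecy, the following minimal Python sketch (the decision function and feature names are hypothetical stand-ins, not any organization’s actual model) probes a confidential decision function with single-feature variations and reports which inputs were decisive, without disclosing the model or its code:

    def explain_decision(decide, applicant, alternatives):
        # List the input features that, if changed, would flip the decision.
        original = decide(applicant)
        decisive = []
        for feature, other_value in alternatives.items():
            probe = dict(applicant, **{feature: other_value})
            if decide(probe) != original:
                decisive.append(feature)
        return decisive

    # Stand-in black-box model; the real model and code stay confidential.
    def decide(a):
        return a["income"] > 50_000 and a["years_at_address"] >= 2

    applicant = {"income": 60_000, "years_at_address": 1, "postal_area": "H3A"}
    print(decide(applicant))                      # False: application refused
    print(explain_decision(decide, applicant,
                           {"income": 40_000, "years_at_address": 3,
                            "postal_area": "M5V"}))  # ['years_at_address']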

PIPEDA should require organizations to inform individuals of their right to an explanation when applicable. Otherwise, individuals cannot exercise the right. The right to an explanation still places a heavy burden on individuals, as it requires them to seek an explanation and follow with a challenge to the decision if desired.Footnote 188 Being informed about their right, as well as receiving a clear explanation, lightens this load.

c. The right to contest

In addition, PIPEDA should create a right to contest decisions made about an individual through automated decision-making that uses his or her personal information.Footnote 189

As Mulligan points out, “the ability to contest decisions is at the heart of legal rights that afford individuals access to personal data and insight into the decision-making processes used to classify them”.Footnote 190 This ability is consistent with the rights-based approach recommended in this report. The ability for individuals to contest decisions mitigates the risk of algorithmic discrimination in automated decisions, as mentioned in consultation feedback. The right to contest is thus a useful complement to the right to explanation. However, this right is less burdensome than the right to object to decisions contained in the consultation proposal. Consultation feedback suggested that the right to object is not always technically possible or contextually appropriate without large compliance costs. The narrower right to contest proposed in this report avoids this problem: this recommendation simply requires a human to take a second look at the decision to check for systemic issues of algorithmic decision-making.

The reasons supporting formulating the right to explanation for ‘automated decision-making’ also apply to formulating the right to contest for automated decision-making. As with the right to explanation, this formulation of the right to contest avoids the heavily criticized human-in-the-loop exception in the GDPR,Footnote 191 which was also mentioned as a negative element in consultation feedback.Footnote 192 Additionally, the right to contest is closely linked to automated decision-making, as contesting a decision before a human decision-maker would not be necessary if the original decision was already made by a human. This approach is consistent with the scope of the right to contest in Bill 64,Footnote 193 the GDPR,Footnote 194 and the UK Data Protection Act,Footnote 195 which are all limited to automated decisions.Footnote 196 A broader right to contest any decision, not only automated ones, would intrude on federal and provincial legislation, as it would alter rights to appeal decisions from private entities (and obligations to provide an appeal) in private law, such as employment law.Footnote 197 Moreover, limiting the right to automated decision-making tempers possible concerns over increased costs.

5. Accountability and Design

The recommended modifications in this section build on consultation proposals 5 (Require the application of Privacy by Design and Human Rights by Design in all phases of processing, including data collection), 9 (Require organizations to ensure data and algorithmic traceability, including in relation to datasets, processes and decisions made during the AI system lifecycle), and 10 (Mandate demonstrable accountability for the development and implementation of AI processing).

a. Accountability

PIPEDA should incorporate a right to demonstrable accountability for individuals, which would mandate demonstrable accountability for all processing of personal information. This is consistent with the recommended human rights-based approach and with mandating clear rights and obligations, as demonstrable accountability is integral to compliance and enforcement.Footnote 198 Demonstrable accountability moves privacy from “theory to practice,” as the A29WP has put it.Footnote 199 The right is justified by the amount of personal data collected, processed, and disseminated in Canada on a daily basis; by such data’s value; and by the potentially serious negative effects of personal information breaches across sectors.Footnote 200

On this point, PIPEDA should take an approach similar to that of the GDPR, which contains a general provision that articulates the right and makes organizations responsible for demonstrating compliance,Footnote 201 and which requires specific accountability measures in subsequent provisions.Footnote 202 Other legislation with provisions on specific accountability measures includes Bill 64,Footnote 203 Ireland’s Data Protection Act,Footnote 204 and the UK Data Protection Act.Footnote 205

There is precedent for an explicit right to accountability even within PIPEDA, as the OPC requires organizations to demonstrate relevant portions of their privacy management programs over the course of its investigations.Footnote 206 While accountability is important for compliance, its purpose extends beyond that. It also encourages organizations to consider the totality of their practices. It boosts public trust by signaling to stakeholders (employees, customers, suppliers, regulators, etc.) that privacy-related obligations are taken seriously. It promotes a cultural shift towards transparency and the development of up-to-date data-related policies and procedures, such as employee training.Footnote 207

PIPEDA should incorporate a right to data traceability as a measure of accountability. Specifically, PIPEDA should require that all information processing systems log and trace data collection and use, and that organizations provide related documentation to the OPC upon request to demonstrate compliance. Traceability is necessary for organizations to demonstrate compliance as well as to fulfill new and existing legal obligations. This point was shared among respondents to the consultation, in which traceability received wide support (67% of respondents). Some feedback to the consultation called data traceability “The most important and feasible [guarantee] in practice.” Traceability is part of a transparent data processing architecture.Footnote 208 It relates closely to accountability and explainability.
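
As a purely illustrative sketch, not a requirement drawn from this report, the kind of logging that supports traceability can be minimal: each collection or use of personal information is recorded with the actor, the purpose, the data categories involved, and a timestamp, so that the record can later be produced to the regulator on request. The field names below are assumptions.

```python
# Minimal, illustrative traceability log: append-only records of each
# collection or use of personal information, producible on request.
import json, time
from dataclasses import dataclass, asdict

@dataclass
class ProcessingEvent:
    actor: str             # system or employee performing the processing
    action: str            # e.g. "collect", "use", "disclose"
    data_categories: list  # e.g. ["email", "purchase_history"]
    purpose: str           # purpose identified to the individual
    timestamp: float

def log_event(event: ProcessingEvent, path: str = "processing_log.jsonl") -> None:
    """Append one traceability record; the log is never rewritten in place."""
    with open(path, "a") as f:
        f.write(json.dumps(asdict(event)) + "\n")

log_event(ProcessingEvent("recommendation-service", "use",
                          ["purchase_history"], "product recommendations",
                          time.time()))
```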

Incorporating traceability in this way follows national and international precedent. Within Canada, Bill 64 includes traceability rights related to automated decision-making for individuals on request, including the right to know the personal information and the reasons or factors that contributed to automated decision-making processes, as well as the right to request the source of the information.Footnote 209 Other related provisions in Bill 64 include those that require mandatory privacy impact assessments and establish the right to data portability.Footnote 210 Similarly, Ontario’s recently amended PHIPA requires auditing in the context of electronic personal health information.Footnote 211 To perform these audits, organizations must maintain a certain level of data traceability. The addition is also consistent with practice in a variety of industries that rely on detailed traceability (e.g. medicine, law, accounting, and mechanical engineering).Footnote 212 Internationally, the right is implied, for example, throughout the GDPR,Footnote 213 in the proposed US Algorithmic Accountability Act (which incorporates traceability for automated decision-making),Footnote 214 and in regulatory positions in Singapore. In the GDPR, while there is no explicit traceability requirement, aspects of the statute require traceability. For instance, the GDPR requires organizations to implement a wide range of measures related to data governance and accountability, such as data protection impact assessmentsFootnote 215 and records of processing activity.Footnote 216 Under the proposed US Algorithmic Accountability Act, the Federal Trade Commission would be granted powers to mandate that private sector organizations assess or audit systems for bias and discrimination.Footnote 217 Similarly, the Singapore Personal Data Protection Commission recommends (but has not yet officially implemented) ‘data lineage’ and ‘data provenance records’, which would act as data- and algorithm-tracing tools.Footnote 218

The same considerations regarding how the right to an explanation works with trade secrets apply to the compatibility of trade secrets and the right to data traceability, thereby addressing possible concerns about the confidentiality of commercial information. Similarly, the considerations about technical limitations to the right to explanation, which make the right compatible with AI, also apply to data traceability.Footnote 219 Furthermore, traceability may bolster the completeness of explanations given to individuals when the right to explanation is invoked but trade secrecy prevents other disclosures.

In keeping with the right to data traceability, accountability should be implemented through active record-keeping, including logging requirements. The obligation should stipulate that relevant data must be stored appropriately to avoid reasonable risks of degradation or alteration, and be kept for a reasonable amount of time. This recommendation takes a similar approach to the GDPRFootnote 220 and Bill 64.Footnote 221 It promotes more robust OPC investigations to ensure compliance. It helps individuals have access to and understand the personal data that leads to decisions made by AI algorithms so that they can identify and challenge wrongful or discriminatory decisions.Footnote 222
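
One hedged way to read the storage and retention element of this obligation in code form: each record could carry a hash of its predecessor, so later alteration is detectable, and a simple check could flag records kept beyond a chosen period. The chaining scheme and the retention period below are illustrative assumptions, not requirements of this report or of PIPEDA.

```python
# Illustrative only: tamper-evident chaining and a retention check for
# accountability records. The ~24-month window is an assumed example.
import hashlib, json, time

RETENTION_SECONDS = 24 * 30 * 24 * 3600  # assumed retention window (~24 months)

def chain_record(record: dict, previous_hash: str) -> dict:
    """Link a record to its predecessor so later alteration is detectable."""
    record = dict(record, previous_hash=previous_hash)
    record["hash"] = hashlib.sha256(
        json.dumps(record, sort_keys=True).encode()
    ).hexdigest()
    return record

def is_expired(record: dict, now: float | None = None) -> bool:
    """Flag records kept beyond the assumed retention period."""
    now = time.time() if now is None else now
    return now - record["timestamp"] > RETENTION_SECONDS
```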

As another accountability measure, PIPEDA should allow the OPC to conduct proactive inspections, that is, reviews initiated at the OPC’s discretion to examine accountability and compliance with PIPEDA. This recommendation implements consultation proposals 10 and 11. It is closely connected to shifting the OPC model from ombudsperson to regulator and to empowering the OPC to issue binding orders and financial penalties, given that inspections are necessary to determine instances of non-compliance and their gravity. This power is essential to ensuring the OPC’s ability to better enforce individual rights in PIPEDA. It may otherwise be difficult to obtain enough information to ensure compliance: relying only on complaints makes it difficult to see where deficiencies exist.Footnote 223 It also connects to the recommended discretion about which complaints to pursue, suggested above, as proactive inspections may be costly. The power to conduct proactive inspections may be particularly relevant for maintaining adequacy status, as it enables enforcement.Footnote 224

This approach is consistent with other Canadian regulatory regimes mentioned in section 2 (such as employment standards, food and safety, health, and securities) and with the OPC’s powers under the Privacy Act.Footnote 225 The OPC deemed proactive inspections to be (i) “not extraordinary” and (ii) “far more likely to achieve compliance with PIPEDA” than the current ombudsperson model when coupled with demonstrable accountability.Footnote 226 Moreover, it is also consistent with the approach taken by provinces and other jurisdictions.Footnote 227 Other privacy authorities with powers to review and give direction include, within Canada, the OIPC in AlbertaFootnote 228 and British Columbia;Footnote 229 and, internationally, Singapore’s Personal Data Protection Commission,Footnote 230 Ireland’s Data Protection Commission,Footnote 231 and Korea’s Data Protection Commission.Footnote 232 It also accords with Bill 64.Footnote 233

b. Design

A crucial element of accountability, as well as of the human rights-based framework, is design. PIPEDA should require organizations to design for privacy and human rights by adopting appropriate technical and organizational measures that give effect to PIPEDA principles prior to and during all phases of collection and processing. These measures should be implemented with a view to ensuring respect for privacy and other fundamental rights. This recommendation relates to accountability, given that design is crucial for individuals to assert their privacy rights.Footnote 234

This approach has been recommended by the House of Commons’ Standing Committee on Access to Information, Privacy and Ethics. The Committee recommends that designing for privacy be based on a user-centric approach that prioritizes individuals’ interests and frames the approach as positive, rather than zero-sum (meaning that “there should be no trade-offs with other features to achieve this goal”).Footnote 235 This addition is consistent with modern legislation. For example, the right is contained in the GDPR,Footnote 236 the UK Data Protection Act,Footnote 237 and Bill 64.Footnote 238 This idea received support in the consultation.

A unique term for PIPEDA, such as ‘privacy and human rights by design’, is the best option for this right. The reference to human rights supports the recommended human rights-based approach. Using a unique term makes it clearer that PIPEDA would incorporate its own standard consistent with that approach, and that its requirements are not necessarily identical to those associated with the term ‘privacy by design’, which identifies specific practices that are sometimes more demanding. Employing a unique term is also consistent with international best practices, such as the GDPR and the UK Data Protection Act: both tailored the term to reflect their respective purposes by titling it ‘data protection by design and by default.’Footnote 239 A number of consultation respondents from private industry similarly supported using a unique term.

PIPEDA should phrase the right as mandating organizations to “implement appropriate technical and organizational measures.” To provide certainty to organizations, PIPEDA should clarify what constitutes appropriate organizational and technical measures to design for privacy and human rights. Phrasing the right in this way establishes a standard that maintains technological neutrality, since technological change would eventually render any design-based rule outdated. Such a standard would be consistent with the approach in the GDPR,Footnote 240 the UK Data Protection Act,Footnote 241 and, within Canada, Bill 64.Footnote 242

PIPEDA should incorporate a risk-utility standard. A risk-utility standard requires manufacturers to adopt design precautions proportionately to the magnitude of expected risk.Footnote 243 A product, such as an AI system, fails this standard when the foreseeable risk of harm posed could have been reduced or avoided by a reasonable alternative design, and the omission of the alternative design renders the product unreasonably risky. Such a standard is in line with the human rights-based framework recommended in this report and addresses the private industry preference for a less prescriptive legislative approach, expressed in the OPC consultation feedback. This standard provides flexibility with a risk-based approach while ensuring human-rights protection. In this way, it strikes a similar balance to other branches of private law under provincial jurisdiction, such as consumer protection and tort law.Footnote 244

To provide greater clarity, PIPEDA should provide non-exhaustive examples of design precautions that offer different means to comply with the risk-utility framework, such as de-identification, encryption, confidentiality by default, and opt-in consent.Footnote 245 To further refine the standard and its interpretation, this clarification should be complemented by rules in an OPC binding instrument. This idea emerged from responses to the consultation.Footnote 246 Greater clarity would facilitate compliance and limit possible abuses of discretion, which can arise when guidelines are too broad. The rules provided in such an OPC instrument should be tailored to context and specific technologies. For example, what constitutes appropriate design measures may differ between a desktop website and a website integrated into wearable technology.
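
The design precautions listed above translate naturally into defaults. The following sketch is only an illustration of what ‘confidentiality by default’ and opt-in consent might look like in an application’s configuration; the field names, values, and pseudonymization choice are assumptions, not prescriptions from this report or from PIPEDA.

```python
# Illustrative defaults only: confidentiality by default, opt-in consent,
# and pseudonymization as one possible de-identification precaution.
import hashlib
from dataclasses import dataclass

@dataclass
class PrivacyDefaults:
    analytics_consent: bool = False      # opt-in: off until the individual agrees
    marketing_consent: bool = False
    profile_visibility: str = "private"  # confidentiality by default
    retention_days: int = 90             # assumed short default retention

def pseudonymize(identifier: str, secret_salt: str) -> str:
    """Replace a direct identifier with a keyed hash before storage or analysis."""
    return hashlib.sha256((secret_salt + identifier).encode()).hexdigest()

settings = PrivacyDefaults()             # every new account starts from these defaults
token = pseudonymize("user@example.com", secret_salt="rotate-me")
```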

PIPEDA should clarify that mandating privacy and human rights by design implies prohibiting deceptive design. This implication of privacy and human rights by design mitigates the failures of privacy policies acknowledged by privacy law scholarship,Footnote 247 as empirical literature shows that design affects people’s privacy choices.Footnote 248 It thus protects meaningful consent (when not subject to an exception),Footnote 249 which is a key aspect of PIPEDA. This aspect of designing for privacy and human rights might also improve public trust in private-sector privacy practices, as promises embedded in design are more visible than privacy policies, and people may react to them more than they do to privacy policies.Footnote 250 Deceptive design mechanisms, such as dark patterns, are also increasingly being discussed by civil society.Footnote 251 This interpretation makes privacy enforcement in Canada more consistent with Canadian consumer protection and with U.S. practices, where the Federal Trade Commission’s most effective and frequently used regulatory tool in privacy law is its authority to protect against deceptive practices.Footnote 252 Other jurisdictions are similarly taking steps to regulate deceptive design.Footnote 253 Established theories of preventing deception already used by regulators and courts in Canada, such as fraud and negligent misrepresentation torts, can serve as a basis for applying this idea.Footnote 254

PIPEDA should clarify that mandating privacy and human rights by design implies recognizing individuals’ reasonable privacy-related expectations generated by design choices. Design can often be perceived as a promise to follow a certain information practice.Footnote 255 Recognizing privacy and human rights by design entails holding organizations accountable to these promises. As with the prohibition against deceptive design, the recognition of promises embedded in design mitigates the limits of privacy policies acknowledged in privacy law scholarship. By protecting individuals’ reasonable expectations, it promotes meaningful consent (when not subject to an exception) and supports autonomy by avoiding deception and abuse. It likewise improves public trust, as promises embedded in design are more visible than privacy policies, and people react to them more than they do to privacy policies.Footnote 256 Lastly, promissory design is consistent with principles of Canadian contract law (for example, when there is no contract, promissory estoppel allows for enforcement of promises that were detrimentally relied upon).Footnote 257

c. Privacy Impact Assessments

Privacy impact assessments offer a complementary way to implement accountability and designing for privacy and human rights. However, PIPEDA should avoid a blanket obligation to conduct privacy impact assessments, requiring them only when the exceptions recommended above are invoked. This is because privacy impact assessments can present high administrative costs with uncertain benefits in terms of human rights. Instead, PIPEDA should mention that privacy impact assessments are one compelling means to demonstrate accountability, as specified below. Similar considerations are warranted for implementing accountability and designing for privacy and human rights through third-party audits. Third-party audits should not be mandated for all cases. Such a measure would be highly costly for organizations and was consequently opposed in responses to the consultation. It also risks becoming ineffective at protecting human rights if it devolves into third-party auditors “rubber-stamping” practices. Instead, PIPEDA should include third-party audits as one way in which organizations can demonstrate accountability, as specified below.

Instead of mandating administrative measures such as privacy impact assessments and third-party audits under threat of sanction, PIPEDA should implement the privacy impact assessment and third-party audit framework through a standard, in view of accountability and designing for privacy and human rights. PIPEDA should establish that organizations will be held in breach and sanctioned, not if they fail to conduct a privacy impact assessment or third-party audit, but rather if they fail to take preventive measures and harm to individuals occurs. Privacy impact assessments or third-party audits would thus be preventive measures that organizations could choose to adopt. PIPEDA can implement this obligation by including language such as: ‘Organizations must protect the interests of individuals when processing their personal information by implementing measures that reduce risk of harm and by processing data in a way that is proportionate to the risks involved, to balance the benefits of activities with risks to individuals’ human rights. Ways to ensure this are privacy impact assessments, third-party audits, enforceable codes of conduct, voluntary certification mechanisms, and demonstrable design for privacy and human rights. If organizations fail to take measures, and material or non-material harm occurs, they are liable for such harm subject to private rights of action from individuals.’ This wording ensures that organizations can choose to save the costs of such measures for activities that they believe are unlikely to produce harm.

The idea above connects with the recommendations on enforcement while incorporating feedback to the consultation, as it is significantly less invasive than mandating privacy impact assessments or third-party audits. Crucially, it allows for more flexibility in compliance while protecting human rights by explicitly framing the two measures as tools for accountability and human rights. This approach would be significantly less costly for organizations than the alternative, contained in some statutes, of mandating privacy impact assessments or third-party audits for all processing.Footnote 258 At the same time, it protects individuals in a way that is consistent with a human rights-based approach. It articulates privacy impact assessments in a way that is commensurate with the level of privacy risk identified.

This approach borrows from a well-functioning model in the Canada Business Corporations Act.Footnote 259 That model establishes fiduciary obligations and includes different, non-mandated ways in which a board of directors can satisfy them. Such a voluntary third-party audit system, coupled with private rights of action, would be sufficient to supplement the OPC’s investigatory powers, as the need for mandatory third-party audits is moderated by incorporating proactive inspections. Moreover, modern statutes support this approach. For example, the UK guidance on AI states that the principle of accountability requires organizations to consider the “risks to the rights and freedoms of individuals” arising from their processing of personal data.Footnote 260 In terms of comparable legislation, this approach sits between statutes that contain no third-party audit or privacy impact assessment obligations, such as British Columbia’s Personal Information Protection Act and the CCPA, and those that mandate one, such as Bill 64, which mandates privacy impact assessments.Footnote 261 The GDPR, like the recommended approach, suggests third-party audits as a possible measure to demonstrate accountability.Footnote 262

d. Small and Medium Enterprises

PIPEDA should lighten the administrative costs of the recommended measures for SMEs by providing them with flexibility while keeping key human rights protections, as the OPC proposed.Footnote 263 This aim is achievable by adopting scalability measures for SMEs that lighten recordkeeping obligations under the accountability principle, unless they are engaged in activities that carry significant privacy risks for individuals. In particular, two measures should be taken. First, SMEs should be exempt from active recordkeeping. “SMEs typically have simple planning and control systems with informal rules and procedures. They also tend to have less standardization of work processes.”Footnote 264 This type of flexibility measure is compatible with modern statutes, particularly the GDPR.Footnote 265 Second, SMEs should be exempt from any privacy impact assessment requirements, such as those arising when de-identified information is handled. Privacy impact assessment requirements pose an administrative burden, so exempting SMEs from them is proportionate to the reduced risk that SMEs generally present.Footnote 266

These two exceptions are fitting for, and should be conditional on, a modified definition of SMEs that accounts for their risk to human rights instead of only their number of employees. The modified definition is necessary to prevent this added flexibility for SMEs from turning into a back door to compliance, where organizations could engage in large amounts of data trading and risky practices with reduced oversight by keeping the number of employees nominally below 250. For this reason, PIPEDA should move from the definition of SMEs as having a maximum of 250 employeesFootnote 267 to a modern, risk-based definition. One example of such an approach is contained in the CCPA.Footnote 268 Unless “SME” is carefully defined, some organizations (with few employees but high revenues and high-risk processing) would be erroneously included in this category, while others (with many employees but low processing, revenue, and risk) would face disproportionate compliance costs. PIPEDA should consider an organization to be an SME if it: (i) has an annual gross revenue of less than $25 million; (ii) buys, receives, sells, or shares personal information of fewer than 50,000 individuals or devices annually for commercial purposes; (iii) derives less than half of its annual revenue from processing or selling personal information; and (iv) neither controls nor is controlled by a business that fails to meet the above criteria and shares its branding. This approach can account for a risk-based framework and human rights approach while providing added flexibility to small businesses.
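
As a hedged illustration only, the four criteria above could be checked mechanically as follows; the thresholds mirror the report’s example figures, while the field names and the encoding of the affiliate criterion are assumptions.

```python
# Illustrative check of the report's example SME criteria; field names and
# the affiliate rule's encoding are assumptions, not statutory text.
from dataclasses import dataclass

@dataclass
class Organization:
    annual_gross_revenue: float               # in dollars
    individuals_or_devices_traded: int        # personal info bought/received/sold/shared per year
    revenue_share_from_personal_info: float   # 0.0 to 1.0
    affiliated_with_large_business: bool      # controls / is controlled by a non-SME sharing branding

def qualifies_as_sme(org: Organization) -> bool:
    return (
        org.annual_gross_revenue < 25_000_000              # (i)
        and org.individuals_or_devices_traded < 50_000     # (ii)
        and org.revenue_share_from_personal_info < 0.5     # (iii)
        and not org.affiliated_with_large_business         # (iv)
    )

print(qualifies_as_sme(Organization(3_000_000, 12_000, 0.1, False)))  # True
```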

Beyond these two exceptions, flexibility is afforded to SMEs in the proposals above. For example, SMEs can consider the cost of implementing technical and organizational measures when deciding how to ensure privacy and human rights by design and traceability. This approach is similar to the GDPR, which gives controllers the ability to consider costs of implementation, the scope and purposes of processing, risks, and more when determining which measures to implement.Footnote 269 Other benefits for SMEs are included in different flexibility measures recommended in this report, such as those pertaining to consent, de-identified information, and purpose specification.

6. Conclusion

AI is the most promising technology of our time. It has immense potential to improve well-being, efficiency, pandemic responses, and environmental sustainability. The changes in technology seen in the last 20 years, and particularly novel uses of AI, are of enormous value to Canada. But AI also poses serious risks to Canadians’ privacy and human rights. In particular, it affects people’s daily lives in fields as critical as healthcare, finance, housing, hiring, and incarceration.

This impact on rights creates an urgent need for law reform in Canada. Canadian law must provide a fertile environment for AI innovation while reducing the risks to Canadians’ rights. The rules developed twenty years ago are simply not equipped to do this. If regulated responsibly, AI can greatly benefit society, as contact tracing shows. Canada should neither ban technologies that can be beneficial nor allow their implementation without oversight. Instead, it must develop regulations that curb their dangers while reaping their benefits.

These recommended modifications, which reflect years of work by the OPC, feedback from dozens of stakeholders, and work from dozens of academic experts on the topic, would lead Canada towards a modern private-sector privacy act. They would protect individuals’ human rights while setting the stage for a thriving AI industry in Canada.

The recommended modifications also increase interoperability between PIPEDA and the laws of Canada’s key trading partners, such as the US and the EU. Interoperability facilitates and helps regulate cross-border data flows with these partners and benefits organizations by reducing compliance costs. Such interoperability, for example, ensures that Canada maintains its adequacy status with respect to the EU, which is fundamental for promoting commerce.

The recommendations outlined above complement each other. Implementing the above recommendations in view of a rights-based approach is essential to protecting privacy and human rights while effectively adapting to and encouraging AI innovation.

Annex

List of recommendations

  1. PIPEDA should take an explicit rights-based approach reflected in a preamble and in an amended purpose clause.
  2. PIPEDA should establish rights and obligations rather than provide recommendations.
  3. The definition of personal information in PIPEDA should be amended to make clear that personal information includes collected personal information and inferences about individuals.
  4. PIPEDA should empower the OPC to issue binding orders and financial penalties with enforcement discretion. It should shift the OPC model from an ombudsperson to a regulator with the ability to issue binding guidelines.
  5. PIPEDA should incorporate private rights of action for breaches of statutory obligations.
  6. PIPEDA should incorporate a consent exception when collection or processing serves the public good.
  7. PIPEDA should add flexibility to the collection, use, and dissemination of de-identified information for research purposes while protecting privacy and human rights.
  8. PIPEDA should allow for greater flexibility for further processing of information for a purpose not initially identified, when the new purpose is compatible with the original purpose.
  9. PIPEDA should mandate safeguards when exceptions or flexibility measures to consent are invoked: (i) balancing tests and (ii) privacy impact assessments.
  10. PIPEDA should not define AI and should maintain technological neutrality.
  11. PIPEDA should incorporate a right for individuals to obtain a meaningful explanation when subject to automated decision-making that uses their personal information.
  12. PIPEDA should create a right to contest decisions that are made about an individual using his or her personal information based on automated decision-making.
  13. PIPEDA should incorporate a right to demonstrable accountability for individuals, including a right to data traceability.
  14. PIPEDA should require organizations to implement appropriate technical and organizational measures for designing for privacy and human rights prior to and in all phases of collection and processing, including the prohibition of deceptive design.
  15. PIPEDA should implement the privacy impact assessment and third-party audit framework by establishing that organizations will be held in breach of PIPEDA and sanctioned if they fail to take preventive measures and harm to individuals occurs.
  16. PIPEDA should exempt small and medium enterprises from active recordkeeping and from any third-party audits and privacy impact assessment requirements.