Consultation on the OPC’s Proposals for ensuring appropriate regulation of artificial intelligence

Seeking views on the OPC’s recommendations to Government/Parliament

Deadline: March 13, 2020


The Office of the Privacy Commissioner of Canada (OPC) is currently engaged in policy analysis on legislative reform of both federal privacy laws. We are examining artificial intelligence (AI) as a subset of this work as it relates specifically to the Personal Information Protection and Electronic Documents Act (PIPEDA). We are of the view that PIPEDA falls short in its application to AI systems, and we have identified several areas where it could be enhanced. We are seeking to consult with experts in the field to validate our understanding of how privacy principles should apply and whether our proposals would be consistent with the responsible development and deployment of these systems.

We are paying specific attention to AI systems given their rapid adoption for processing and analysing large amounts of personal information. Their use for making predictions and decisions affecting individuals may introduce privacy risks as well as unlawful bias and discrimination.

It is clear that AI offers many beneficial uses. For example, AI has great potential to improve public and private services, and has helped spur new advances in the medical and energy sectors, among others. However, the impacts on privacy, data protection and, by extension, human rights will be immense if clear rules protecting these rights against the possible negative outcomes of AI and machine learning processes are not enshrined in legislation.

The June 2019 G20 Ministerial Statement on Trade and Digital Economy committed to a human-centered approach to AI, recognizing the need to continue to promote the protection of privacy and personal data consistent with applicable frameworks.Footnote 1 As well, a 2019 report by Deloitte cautions that “business and government may not have much time to act to address the perceived risks of AI before Canadians definitively turn against the new technology.”Footnote 2

Based on our own assessment, AI presents fundamental challenges to all foundational privacy principles as formulated in PIPEDA. For instance, the data protection principle of limiting collection may be incompatible with the basic functioning of AI systems. Some have pointed out that AI systems generally rely on large amounts of personal data to train and test algorithms, arguing that limiting collection of that data could reduce the quality and utility of the output.Footnote 3

As another example, some have observed that organizations relying on AI for advanced data analytics or consequential decisions may not necessarily know ahead of time how the information processed by AI systems will be used or what insights they will discover.Footnote 4 This has led some to call into question the practicality of the purpose specification principle, which requires, on the one hand, “specifying purposes” to individuals at the time of collecting their information and, on the other, “limiting use and disclosure” of personal information to the purpose for which it was first collected.Footnote 5

To echo the words of the late Ian Kerr, former Canada Research Chair in Ethics, Law, and Technology, and former member of Canada’s Advisory Council on Artificial Intelligence, “we stand on the precipice of a society that increasingly interacts with machines, many of which will be more akin to agents than mere mechanical devices. If so, our laws need to reflect this stunning new reality.”Footnote 6

To this end, we have developed what we believe to be key proposals for how PIPEDA could be reformed in order to bolster privacy protection and achieve responsible innovation in a digital era involving AI systems. In our view, responsible innovation involving AI systems must take place in a regulatory environment that respects fundamental rights and creates the conditions for trust in the digital economy to flourish.

We view our proposals as being interconnected and meant to be adopted as a suite within the law. To facilitate a robust discussion with experts on these matters, we pose a number of questions to elicit feedback on our suggested enhancements to PIPEDA. We welcome any additional feedback experts would like to share to help shape our work in this regard.

Proposals for Consideration

Proposal 1: Incorporate a definition of AI within the law that would serve to clarify which legal rules would apply only to it, while other rules would apply to all processing, including AI

PIPEDA is technologically neutral and is a law of general application. As such, it does not include definitions relating to AI, automated decision-making or automated processing. However, as suggested in other proposals found in this document, there may be a need for specific rules to cover certain uses of AI, which would support defining it within the act to clarify when such rules would apply.

The OECD Principles on Artificial Intelligence, adopted in May 2019 by forty-two countries, including Canada, define an AI system as “a machine-based system that can, for a given set of human-defined objectives, make predictions, recommendations, or decisions influencing real or virtual environments. AI systems are designed to operate with varying levels of autonomy.”Footnote 7 However, the Institute of Electrical and Electronics Engineers’ (IEEE) Global Initiative on Ethics of Autonomous and Intelligent Systems takes the view that the term AI is too vague and uses instead “autonomous and intelligent systems.”Footnote 8

The EU’s General Data Protection Regulation (GDPR) explicitly addresses AI by referring to automated decision-making and profiling in Article 22. In its 2017 guidance, Big data, artificial intelligence, machine learning and data protection, the UK Information Commissioner’s Office (ICO) distinguishes between the key terms of AI, machine learning and big data analytics, noting they are often used interchangeably but have subtle differences.Footnote 9 For example, the ICO refers to AI as a key to unlocking the value of big data, machine learning as one of the technical mechanisms that facilitates AI, and big data analytics as the sum of both AI and machine learning processes.

Discussion questions:

  1. Should AI be governed by the same rules as other forms of processing, potentially enhanced as recommended in this paper (which means there would be no need for a definition and the principles of technological neutrality would be preserved) or should certain rules be limited to AI due to its specific risks to privacy and, consequently, to other human rights?
  2. If certain rules should apply to AI only, how should AI be defined in the law to help clarify the application of such rules?

Proposal 2: Adopt a rights-based approach in the law, whereby data protection principles are implemented as a means to protect a broader right to privacy—recognized as a fundamental human right and as foundational to the exercise of other human rights

Paul Nemitz, Principal Adviser on Justice Policy at the EU Commission, aptly captures why AI requires special legal attention and the significance of checking against human rights in the rule of law:

AI will in many areas of life decide or prepare decisions or choices which previously were made by humans, according to certain rules. If thus AI now incorporates the rules according to which we live and executes them, we will need to get used to the fact that AI must always be treated like the law itself. And for a law, it is normal to be checked against higher law, and against the basic tenets of constitutional democracy. The test every law must go through is whether it is in line with fundamental rights, whether it is not in contradiction with the principle of democracy, thus in particular whether it has been adopted in a legitimate procedure, and whether it complies with the principle of the rule of law, thus is not in contradiction to other pre-existing law, sufficiently clear and proportional to the purpose pursued.Footnote 10

The purpose of the law ought to be to protect privacy in the broadest sense, understood as a human right in and of itself, and as foundational to the exercise of other human rights. This human rights-based approach is consistent with the 2019 Resolution of Canada’s Federal, Provincial and Territorial Information and Privacy Commissioners, which notes that AI and machine learning technologies must be “designed, developed and used in respect of fundamental human rights, by ensuring protection of privacy principles such as transparency, accountability, and fairness.”Footnote 11

Likewise, the 40th International Conference of Data Protection and Privacy Commissioners (ICDPPC) resolution on AI (2018) affirms that “any creation, development and use of artificial intelligence systems shall fully respect human rights, particularly the rights to the protection of personal data and to privacy, as well as human dignity, non-discrimination and fundamental values.”Footnote 12 The ICDPPC’s recent Resolution on Privacy as a Fundamental Human Right and Precondition for Exercising Other Fundamental Rights (2019) reaffirms a strong commitment to privacy as a right and value in itself, and calls for appropriate legal protections to prevent privacy breaches and impacts to human rights given advancements of new technologies like AI.Footnote 13

In order to ensure the protection of rights, we are of the view that PIPEDA should be given a rights-based foundation that recognizes privacy in its proper breadth and scope, and provides direction on how the rest of the Act’s provisions should be interpreted. Such an approach would be consistent with many international instruments, including the GDPR, which has incorporated a human rights-based approach to privacy within the EU’s data protection legislation. Through recitals, the GDPR makes repeated references to fundamental rights of individuals in relation to data processing.

The need to firmly embed and clarify rights in PIPEDA is ever more pressing in a digital context where computers may make decisions for and about us with little to no human involvement.

Discussion question:

  1. What challenges, if any, would be created for organizations if the law were amended to more clearly require that any development of AI systems must first be checked against privacy, human rights and the basic tenets of constitutional democracy?

Proposal 3: Create a right in the law to object to automated decision-making and not to be subject to decisions based solely on automated processing, subject to certain exceptions

If we are to meaningfully protect privacy as a human right in a digital context involving AI systems, one such right that needs to be considered is the ability to object to decisions made by computers and to request human intervention. A number of jurisdictions around the world include in their laws a right to be free from automated decision-making, or an analogous right to contest automated processing of personal data, as well as a right not to be subject to decisions based solely on automation.

For example, Article 22 of the GDPR grants individuals the right not to be subject to automated decision-making, including profiling, except when an automated decision is necessary for a contract; authorized by law; or made with the individual’s explicit consent. Article 22 also contains the caveat that where significant automated decisions are taken on legitimate grounds for processing, the data subject still has the right to obtain human intervention, to contest the decision, and to express his or her point of view.

Note that Article 21 of the GDPR gives individuals the right to object to any profiling or other processing that is carried out on the basis of legitimate interests or on the basis of a task carried out in the public interest or official authority.Footnote 14 If this right to object is exercised, processing may continue only if it can be shown that there is a compelling reason to continue that overrides the individual’s interests, rights and freedoms, or that the processing is needed for the establishment, exercise or defence of legal claims.

Article 21 also gives individuals the right to object to having their personal information processed for direct marketing purposes, and any related profiling and processing must stop as soon as the objection has been received.Footnote 15 There are no exemptions or grounds to refuse an individual’s objection to direct marketing.

We support incorporating a circumscribed right to object in PIPEDA, similar to that found in the GDPR.

Currently, Principle 4.3.8 of PIPEDA provides that an individual may withdraw consent at any time, subject to legal or contractual restrictions and reasonable notice. We view integrating a right to object and to be free from automated decisions as analogous to the right to withdraw consent.

Discussion questions:

  1. Should PIPEDA include a right to object as framed in this proposal?
  2. If so, what should be the relevant parameters and conditions for its application?

Proposal 4: Provide individuals with a right to explanation and increased transparency when they interact with, or are subject to, automated processing

Transparency is a foundational element of PIPEDA’s openness principle and a precondition to trust. However, as currently framed, the principle lacks the specificity required to properly address the transparency challenges posed by AI systems, as it does not explicitly grant individuals a right to an explanation when they interact with, or are subjected to, automated processing operations.

The Council of Europe’s consultative committee suggests in their Guidelines on Artificial Intelligence and Data Protection that: “Data subjects should be informed if they interact with an AI application and have a right to obtain information on the reasoning underlying AI data processing operations applied to them. This should include the consequences of such reasoning.”Footnote 16

In Europe, there is debate about the interpretation of the GDPR with respect to whether the law requires explanation of system functionality or the rationale for the logic, significance and consequences of specific decisions.Footnote 17 France and Hungary are among the EU Member States that guarantee a right to legibility/explanation about algorithmic decisions in their national data protection legislation.Footnote 18 For instance, the law in France provides that data subjects have the right to obtain from the controller information about the logic involved in algorithm-based processing.Footnote 19

The Government of Canada has expressed its support for algorithmic transparency.Footnote 20 In its PIPEDA white paper, the federal department of Innovation, Science and Economic Development Canada (ISED) proposes amending the law to provide for more meaningful controls and increased transparency to individuals as it relates to AI. They suggest that a reformed PIPEDA should include “informing individuals about the use of automated decision-making, the factors involved in the decision, and where the decision is impactful, information about the logic upon which the decision is based.”Footnote 21

We believe the openness principle of PIPEDA should include a right to explanation that would provide individuals interacting with AI systems with the reasoning underlying any automated processing of their data, and the consequences of such reasoning for their rights and interests. This would also help to satisfy PIPEDA’s existing obligations of providing individuals with rights to access and correct their information held by organizations.

In addition to this, we would possibly support enhancing transparency requirements under the law to mandate:

  • The conduct and publishing of Privacy Impact Assessments (PIAs), including assessments relating to the impacts of AI processing on privacy and human rights. The published content would be based on a minimum set of requirements that would be developed in consultation with the OPC.
  • Public filings for algorithms, similar to U.S. Securities and Exchange Commission filings, with penalties for non-disclosure and non-compliance. Member of Parliament, Nathaniel Erskine-Smith, raised the issue of mandating filings for algorithms at the Standing Committee on Access to Information, Privacy and Ethics (ETHI). He noted: “if we are serious about that level of transparency and explainability, it could mean a requirement for algorithmic impact assessments in the private sector akin to an SEC filing where non-compliance would come with some sanctions if information is not included.”Footnote 22

Discussion questions:

  1. What should the right to an explanation entail?
  2. Would enhanced transparency measures significantly improve privacy protection, or would more traditional measures suffice, such as audits and other enforcement actions of regulators?

Proposal 5: Require the application of Privacy by Design and Human Rights by Design in all phases of processing, including data collection

Internationally, there are a number of legal and non-binding instruments that instruct organizations to design their products, systems or programs in a manner that avoids possible adverse consequences on privacy, human rights and fundamental freedoms. In its Guidelines on Artificial Intelligence and Data Protection, the Council of Europe’s Consultative Committee of the Convention for the Protection of Individuals with Regard to Automatic Processing of Personal Data states that:

In all phases of the processing, including data collection, AI developers, manufacturers and service providers should adopt a human rights by-design approach and avoid any potential biases, including unintentional or hidden, and the risk of discrimination or other adverse impacts on the human rights and fundamental freedoms of data subjects.Footnote 23

“Data Protection by Design and by Default” is the meaningful title of Article 25 of the GDPR, which applies more broadly than only to AI systems. Article 25 discusses a number of elements of this obligation, including putting in place appropriate technical and organizational measures designed to implement the data protection principles and safeguard individual rights and freedoms. Article 25 further indicates that “an approved certification mechanism” may be used to demonstrate compliance.Footnote 24

The Treasury Board of Canada Secretariat’s Directive on Automated Decision-Making requires the Government of Canada, before launching into production, to develop processes to test for unintended data biases and other factors that may unfairly impact outcomes.Footnote 25 The Directive also requires the completion of an algorithmic impact assessment prior to the production of any automated decision system. The assessment must be updated when the functionality or scope of the automated decision system changes, in order to continuously monitor for and prevent such negative impacts.

We find each of these texts instructive and their respective requirements worthy of incorporation into PIPEDA.

Discussion questions:

  1. Should Privacy by Design be a legal requirement under PIPEDA?
  2. Would it be feasible or desirable to create an obligation for manufacturers to test AI products and procedures for privacy and human rights impacts as a precondition of access to the market?

Proposal 6: Make compliance with purpose specification and data minimization principles in the AI context both realistic and effective

The Information Technology Association of Canada has conveyed to the ETHI Committee that “having access to broad and vast amounts of data is the key to advancing our artificial intelligence capabilities in Canada.”Footnote 26 This objective is in tension with the important legal principles of purpose specification and data minimization, which apply to the development and implementation of AI systems under the current PIPEDA.

It may be difficult to specify purposes that only become apparent after a machine has identified linkages. For example, the Information Accountability Foundation argues that since “the insights data hold are not revealed until the data are analyzed, consent to processing cannot be obtained based on an accurately described purpose.”Footnote 27 Without being able to identify purposes at the outset, limiting collection to only that which is needed for the purposes identified by the organization, as required by PIPEDA, is made equally challenging.

Some data protection authorities argue that purpose specification and data minimization are still applicable in the AI context. For example, in discussing data minimization techniques in AI systems, the UK Information Commissioner’s Office (ICO) notes that “the fact that some data might later in the process be found to be useful for making predictions is not enough to establish its necessity for the purpose in question, nor does it retroactively justify its collection, use or retention.”Footnote 28 The UK ICO further notes that data can also be minimized during the training phase based on the assumption that “not all features included in a dataset will necessarily be relevant to the task.”Footnote 29 The Norwegian Data Protection Authority suggests that proactively considering data minimization supports the desirable goal of proportionality, which requires consideration of how to achieve the objective of the AI processing in a way that is the least invasive for the individual.Footnote 30
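The ICO’s observation that not every feature in a training dataset is necessary for the task can be illustrated with a minimal sketch. The data, the correlation test and the 0.1 threshold below are all illustrative assumptions, not requirements drawn from PIPEDA or ICO guidance; real systems would use more robust relevance measures.

```python
# Minimal sketch of data minimization during training: drop features whose
# correlation with the training target falls below a threshold, so only data
# demonstrably useful for the stated purpose is retained.
# All names and the 0.1 threshold are illustrative assumptions.

def pearson(xs, ys):
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    sx = sum((x - mx) ** 2 for x in xs) ** 0.5
    sy = sum((y - my) ** 2 for y in ys) ** 0.5
    return cov / (sx * sy) if sx and sy else 0.0

def minimize_features(features, target, threshold=0.1):
    """Keep only the feature columns relevant to the target."""
    return {name: values for name, values in features.items()
            if abs(pearson(values, target)) >= threshold}

# Toy training set: "postal_code" carries no signal for the target,
# so it is excluded before the model ever sees it.
features = {
    "income":      [30, 55, 80, 20, 60],
    "postal_code": [10, 10, 10, 10, 10],  # constant: no predictive value
}
target = [0, 1, 1, 0, 1]
print(sorted(minimize_features(features, target)))  # only "income" survives
```

A sketch like this makes the ICO’s point concrete: relevance can be assessed during the training phase, so irrelevant personal data need never be retained.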

Canadian Parliamentary Committee reporting validates the merits of the principle of data minimization in the context of ethics and AI. Specifically, in June 2019, the ETHI Committee recommended that the government modernize Canada’s privacy laws and commit “to uphold data minimization, de-identification of all personal information at source when collected for research or similar purpose and clarify the rules of consent regarding the exchange of personal information between government departments and agencies.”Footnote 31

Purpose specification and data minimization remain complex issues, and the potential challenges of adhering to these legal principles in an AI context merit discussion of whether there is reason to explore alternative grounds for processing.

Discussion questions:

  1. Can the legal principles of purpose specification and data minimization work in an AI context and be designed for at the outset?
  2. If yes, would doing so limit potential societal benefits to be gained from use of AI?
  3. If no, what are the alternatives or safeguards to consider?

Proposal 7: Include in the law alternative grounds for processing and solutions to protect privacy when obtaining meaningful consent is not practicable

The concept of consent is a central pillar in several data protection laws, including the current PIPEDA. However, there is evidence that the current consent model may not be viable in all situations, including for certain uses of AI. This is in part due to the inability to obtain meaningful consent when organizations are unable to inform individuals of the purposes for which their information is being collected, used or disclosed in sufficient detail so as to ensure they understand what they are being invited to consent to. As noted in our Guidelines on Obtaining Meaningful Consent, clear purpose specification is one of the key elements organizations must emphasize in order to obtain meaningful consent.

In other laws, such as the GDPR, consent is only one legal ground for processing among many.Footnote 32 Alternative grounds for processing under the GDPR include when processing is necessary for the performance of a task carried out in the public interest, and when the processing is necessary for the purposes of the “legitimate interests” pursued by the controller or by a third party, except where such interests are overridden by the interests or fundamental rights and freedoms of the data subject (in particular where the data subject is a child).

We believe there is a continued role for consent in the use of AI when it can be meaningful, and, to that extent, we would support efforts by the federal government to explore incentivizing new business models that promote innovative consent models. For example, emerging consent technologies and personal information management systemsFootnote 33 offer important opportunities to preserve human agency and meaningfully inform individuals about the development and deployment of AI systems. These approaches should be maximized to facilitate consent whenever possible.

That said, and as outlined in our Report on Consent,Footnote 34 we acknowledge that alternate grounds to consent may be acceptable in certain circumstances, specifically when obtaining meaningful consent is not practicable and certain preconditions are met. In our Report we proposed that Parliament consider amending PIPEDA to introduce new exceptions to consent to allow for socially beneficial activities that the original PIPEDA drafters did not envisage. Such alternative grounds would not be intended to relax privacy rules but rather to recognize that consent may not be effective in all circumstances and that more effective measures must be adopted to better protect privacy.

In assessing how a future PIPEDA should appropriately deal with consent, particularly in the AI context, we propose that meaningful consent should be required in the first instance for transparency and to preserve human agency. Alternative grounds for processing such as those found in the GDPR and outlined in our Report on Consent should be available in instances where obtaining meaningful consent is not possible and prescribed conditions, such as demonstrating that obtaining consent was considered and impracticable and that a PIA was conducted and published in advance, are first met.

The use of non-identifiable data, such as through the application of de-identification methods, could also be a factor in determining whether certain other grounds for processing such as legitimate or public interest should be authorized under the Act.

A new consent exception of this nature would necessarily have to be contingent on stronger enforcement powers that would authorize the privacy regulator, where warranted, to assess whether the use of personal information was indeed for broader societal purposes and met the prescribed legal conditions.

Discussion questions:

  1. If a new law were to add grounds for processing beyond consent, with privacy protective conditions, should it require organizations to seek to obtain consent in the first place, including through innovative models, before turning to other grounds?
  2. Is it fair to consumers to create a system where, through the consent model, they would share the burden of authorizing AI versus one where the law would accept that consent is often not practical and other forms of protection must be found?
  3. Requiring consent implies organizations are able to define purposes for which they intend to use data with sufficient precision for the consent to be meaningful. Are the various purposes inherent in AI processing sufficiently knowable so that they can be clearly explained to an individual at the time of collection in order for meaningful consent to be obtained?
  4. Should consent be reserved for situations where purposes are clear and directly relevant to a service, leaving certain situations to be governed by other grounds? In your view, what are the situations that should be governed by other grounds?
  5. How should any new grounds for processing in PIPEDA be framed: as socially beneficial purposes (where the public interest clearly outweighs privacy incursions) or more broadly, such as the GDPR’s legitimate interests (which includes legitimate commercial interests)?
  6. What are your views on adopting incentives that would encourage meaningful consent models for use of personal information for business innovation?

Proposal 8: Establish rules that allow for flexibility in using information that has been rendered non-identifiable, while ensuring there are enhanced measures to protect against re-identification

De-identification is achieved through processes that remove information that can identify individuals from a data set, so that the risks of re-identification and disclosure are reduced to low levels. Importantly, however, there always remains a risk, even if remote, that re-identification may be possible.
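The kind of process described above can be sketched in a few lines. The field names, the set of direct identifiers and the generalization scheme below are illustrative assumptions only; and, as the text notes, transformations like these reduce but never eliminate re-identification risk.

```python
# Minimal sketch of de-identification: suppress direct identifiers and
# generalize quasi-identifiers so records are harder to link to a person.
# Field names and the generalization rules are illustrative assumptions.

DIRECT_IDENTIFIERS = {"name", "email", "sin"}

def deidentify(record):
    out = {}
    for field, value in record.items():
        if field in DIRECT_IDENTIFIERS:
            continue                          # suppress direct identifiers
        if field == "age":                    # generalize age to a 10-year band
            low = value // 10 * 10
            out["age_band"] = f"{low}-{low + 9}"
        elif field == "postal_code":          # coarsen to the first three chars
            out["postal_area"] = value[:3]
        else:
            out[field] = value
    return out

record = {"name": "A. Person", "email": "a@example.com",
          "age": 34, "postal_code": "K1A0B1", "diagnosis": "flu"}
print(deidentify(record))
# → {'age_band': '30-39', 'postal_area': 'K1A', 'diagnosis': 'flu'}
```

Even after such processing, the remaining quasi-identifiers (age band, postal area) can sometimes be combined with outside data to re-identify someone, which is the residual risk the proposal addresses.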

There are divergent approaches internationally on whether de-identified information falls within the scope of data protection laws. Many jurisdictions view de-identified or anonymized data as non-personal information falling outside the purview of the law. For example, Australia’s Privacy Act 1988 does not apply to information that has undergone de-identification, so long as there is no reasonable likelihood of re-identification occurring.Footnote 35 Similarly, Hong Kong’s privacy law does not treat anonymized data as personal so long as the individuals concerned cannot be directly or indirectly identified.Footnote 36

Japan’s regime differs substantially in that its Act on the Protection of Personal Information applies to the category of “anonymously processed information,” and sets out obligations for organizations that anonymize data and/or use anonymized data (including notice).Footnote 37 Under this Act, consent is not required for use or disclosure of anonymously processed data.

Given there always remains a risk of re-identification, we believe that PIPEDA should continue to apply, but that there could be flexibility to use de-identified information (or information rendered non-identifiable) under a new Act. With this flexibility, certain PIPEDA principles (such as consent) could either not apply or their application could be relaxed. As mentioned, de-identification could be a factor in deciding whether alternative grounds for processing, such as legitimate interests, should be authorized.

We would also support including in the law penalties for negligence or malicious actions resulting in re-identification of personal information from de-identified datasets. This approach to financial consequences for re-identification is in line with other jurisdictions. For example, Japan’s data protection legislation specifically forbids the re-identification of de-identified data with a potential penalty of imprisonment or a fineFootnote 38 and Australia’s proposed Privacy Amendment (Re-identification Offence) Bill 2016 includes criminal offences and civil penalty provisions for the re-identification of de-identified personal information or the disclosure of such information.Footnote 39

Discussion questions:

  1. What could be the role of de-identification or other comparable state of the art techniques (synthetic data, differential privacy, etc.) in achieving both legitimate commercial interests and protection of privacy?
  2. Which PIPEDA principles would be subject to exceptions or relaxation?
  3. What could be enhanced measures under a reformed Act to prevent re-identification?

Proposal 9: Require organizations to ensure data and algorithmic traceability, including in relation to datasets, processes and decisions made during the AI system lifecycle

A requirement for algorithmic traceability would facilitate the application of several principles, including accountability, accuracy, transparency, data minimization as well as access and correction. Indeed, several international organizations take the position that being able to trace the source of AI system data is both possible and highly desirable. For example, the OECD Principles on Artificial Intelligence state that “AI actors should ensure traceability, including in relation to datasets, processes and decisions made during the AI system lifecycle, to enable analysis of the AI system’s outcomes and responses to inquiry, appropriate to the context and consistent with the state of art.”Footnote 40

The Institute of Electrical and Electronics Engineers (IEEE) notes that:

Technologists and corporations must do their ethical due diligence before deploying A/IS [Artificial Intelligence Systems] technology… Similar to a flight data recorder in the field of aviation, algorithmic traceability can provide insights on what computations led to questionable or dangerous behaviors. Even where such processes remain somewhat opaque, technologists should seek indirect means of validating results and detecting harms.Footnote 41

Several data protection authorities have addressed this issue. For example, the Personal Data Protection Commission of Singapore has recommended implementing both “data lineage” and “data provenance records” in A Proposed Model Artificial Intelligence Governance Framework.Footnote 42 It explains “data lineage” as “knowing where the data originally came from, how it was collected, curated and moved within the organisation, and how its accuracy is maintained over time. Data lineage can be represented visually to trace how the data moves from its source to its destination, how the data gets transformed along the way, where it interacts with other data, and how the representations change.” It explains a “data provenance” record as allowing “an organisation to ascertain the quality of the data based on its origin and subsequent transformation, trace potential sources of errors, update data, and attribute data to their sources.”
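
As a minimal illustration of what such a data lineage record might look like in practice (the class and field names below are hypothetical, not taken from the Singapore framework), an organization could keep an append-only log of a dataset's origin and every transformation applied to it:

```python
from dataclasses import dataclass, field
from datetime import datetime, timezone

@dataclass
class LineageRecord:
    """Hypothetical data-lineage record: where a dataset came from,
    how it was collected, and each transformation applied over time."""
    source: str                 # original provenance of the data
    collected: str              # how and when it was collected
    transformations: list = field(default_factory=list)

    def record_step(self, description: str) -> None:
        # Timestamp each transformation so the full history is auditable.
        stamp = datetime.now(timezone.utc).isoformat()
        self.transformations.append((stamp, description))

lineage = LineageRecord(source="customer_signup_form",
                        collected="web form, collected 2019")
lineage.record_step("removed direct identifiers (name, email)")
lineage.record_step("joined with purchase history on account_id")
```

A visual lineage graph of the kind the Singapore framework describes could be generated directly from records like these, tracing the data from source to destination.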

France’s data protection authority, the Commission nationale de l'informatique et des libertés (CNIL), has recommended the development of a “national platform” for algorithmic auditing.Footnote 43 This proposal is in line with the proposed US Algorithmic Accountability Act (AAA), which would give the Federal Trade Commission (FTC) new powers to require companies to assess their machine learning systems for bias and discrimination.Footnote 44 Regulations to be adopted by the FTC within two years of the law coming into force would require organizations to conduct automated decision impact assessments and data protection impact assessments, “if reasonably possible,” in consultation with third parties, including independent auditors and independent technology experts.

Private sector consultancy PwC Australia has also made recommendations about AI governance, taking the position that “AI plans should (…) start with a clear picture of where data has come from, how reliable it is, and any regulatory sensitivities that might apply to its use, before being approved. Data preparation and data ‘labelling’ processes should be traceable. That is, it should be possible to show an audit trail of everything that has happened to the data over time, in the event that there is a later audit or investigation.”Footnote 45

Legal experts Danielle Citron and Frank Pasquale argue that “aggrieved consumers could be guaranteed reasonable notice if scoring systems included audit trails recording the correlations and inferences made algorithmically in the prediction process. With audit trails, individuals would have the means to understand their scores. They could challenge mischaracterizations and erroneous inferences that led to their scores.”Footnote 46
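
A minimal sketch of the kind of audit trail Citron and Pasquale describe, assuming a deliberately simple hypothetical linear scoring model (the features and weights below are invented for illustration only), could record each inference as it contributes to the final score:

```python
def score_with_audit(applicant: dict, weights: dict):
    """Return a score plus an audit trail recording every inference
    (feature, value, weight, contribution) made in the prediction."""
    trail = []
    total = 0.0
    for feature, weight in weights.items():
        value = applicant.get(feature, 0)
        contribution = weight * value
        trail.append((feature, value, weight, contribution))
        total += contribution
    return total, trail

# Illustrative weights only; a real model would be far more complex.
weights = {"income": 0.002, "late_payments": -15.0}
score, trail = score_with_audit(
    {"income": 50_000, "late_payments": 2}, weights)
```

With such a trail, an individual could see that, in this invented example, income contributed +100 and late payments -30 to a score of 70, and could challenge an erroneous input (say, a late payment wrongly attributed to them) with reference to the specific inference it affected.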

As well, ISED’s PIPEDA reform paper recommends “Ensuring the accuracy and integrity of information about an individual throughout the chain of custody by requiring organizations to communicate changes or deletion of information to any other organization to whom it has been disclosed.”Footnote 47

Considering these expert views, and given the importance of being able to trace, analyze and validate AI system outcomes so that individuals can exercise existing access and correction rights as well as improved human rights protections under a reformed PIPEDA, we recommend including an algorithmic traceability requirement for AI systems.

Discussion question:

  1. Is data traceability necessary, in an AI context, to ensure compliance with the principles of data accuracy, transparency, access and correction, and accountability, or are there other effective ways to achieve meaningful compliance with these principles?

Proposal 10: Mandate demonstrable accountability for the development and implementation of AI processing

Shortcomings in the current framing of the accountability principle in PIPEDA have led us to conclude that a more robust conception of accountability should be included in a modernized Act. While Principle 4.1 of PIPEDA requires organizations to be accountable for the personal information under their control, we propose that the principle be reframed to require “demonstrable” accountability on the part of organizations. Demonstrable accountability would require organizations to be able to provide evidence of compliance with legal requirements on request. The ability of an organization to demonstrate accountability becomes even more important in cases where consent is not required and organizations are expected to close the resulting protective gap through accountability.

There are a variety of methods by which demonstrable accountability could be achieved, such as requiring traceability, explanation rights, and privacy and human rights impact assessments, as previously discussed.Footnote 48 A record-keeping requirement would also be necessary to facilitate the OPC’s ability to conduct proactive inspections. Such inspection powers currently exist in the UK and several other countries and are an essential mechanism for effective enforcement, protecting rights and preventing harms. As the International Technology Law Association’s Responsible AI: a Global Policy Framework explains, “beneficial AI demands human accountability. General principles, even if well intended, are useless without enforceable accountability regimes and without efficient governance models.”Footnote 49

The Privacy Commissioner has the authority under section 37 of the Privacy Act to carry out investigations at his discretion in order to ensure a government institution is compliant with specific sections of the Act. The addition of such a provision in PIPEDA, where the OPC could proactively inspect the practices of organizations, would move the law towards a model of demonstrable accountability.

We propose that the law also require independent third-party auditing throughout the lifecycle of the AI system. Auditors could be subject to financial penalties if they act negligently by signing off on practices that are in fact not compliant.

We would also favour introducing into PIPEDA incentives for organizations that adopt demonstrable accountability measures, such as treating these measures as mitigating factors during an investigation or when imposing financial penalties for non-compliance.

Finally, we are of the view that true accountability should lead to liability for humans, not machines. As such, demonstrable accountability should be strongly linked with fault-finding and liability for design failures that lead to privacy incursions. The International Technology Law Association’s Responsible AI: a Global Policy Framework aptly captures why humans must remain responsible:

even if AI might force us to reconsider the accountability of certain actors, it should be done in a way that shifts liability to other human actors and not to the AI systems themselves (…) Holding AI systems directly liable runs the risk of shielding human actors from responsibility and reducing the incentives to develop and use AI responsibly.Footnote 50

Discussion questions:

  1. Would enhanced measures such as those we propose (record-keeping, third-party audits, proactive inspections by the OPC) be effective means of ensuring demonstrable accountability on the part of organizations?
  2. What are the implementation considerations for the various measures identified?
  3. What additional measures should be put in place to ensure that humans remain accountable for AI decisions?

Proposal 11: Empower the OPC to issue binding orders and financial penalties to organizations for non-compliance with the law

The significant risks posed to privacy and human rights by AI systems demand a proportionally strong regulatory regime. To incentivize compliance with the law, PIPEDA must provide for meaningful enforcement with real consequences for organizations found to be non-compliant.

The need to legislate for stronger enforcement as a privacy protective measure in the digital age was echoed in the Council of Europe’s 2017 Study on the Human Rights Dimensions of Automated Data Processing Techniques (In Particular Algorithms) and Possible Regulatory Implications, which advised that “privacy, as the exercise of other human rights, requires effective enforcement.”Footnote 51

Canada’s privacy laws have unfortunately fallen significantly behind those of our trading partners in terms of enforcement. At the same time, most Canadians believe their privacy rights are not respected by organizations. Such a sentiment does not build consumer trust and is undesirable from both an individual and an organizational perspective. The law should provide for enforcement mechanisms that ensure individuals have access to a quick and effective remedy for the protection of their privacy rights, and that create incentives for broad compliance by commercial organizations.

Among the improvements required to PIPEDA is empowering the Privacy Commissioner of Canada to make binding orders and impose consequential penalties for non-compliance with the law. Giving these powers to a first-level authority, rather than requiring individuals to wait several years after an alleged violation for a court to uphold a complaint, is a much more effective way to ensure the timely enjoyment of rights.

In other jurisdictions, within Canada and abroad, privacy and data protection regulators have the authority to issue binding orders and impose financial penalties. The range of order-making powers includes the ability to require an organization to stop collecting, using or disclosing personal information, to destroy personal information collected in contravention of the legislation, and more generally to order the application of such remedial measures as are appropriate to ensure the protection of the personal information, among others. Regarding financial penalties, in Europe, for example, the GDPR allows for the issuance of “administrative fines”: organizations in breach of the GDPR can be fined up to 4% of annual global turnover or €20 million, whichever is greater.
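
The GDPR fine ceiling reduces to a simple calculation, the greater of 4% of annual global turnover and EUR 20 million, sketched here purely for illustration (the function name and example turnover figures are our own):

```python
def gdpr_fine_ceiling(annual_global_turnover_eur: float) -> float:
    """Maximum GDPR administrative fine: the greater of 4% of annual
    global turnover and EUR 20 million."""
    return max(0.04 * annual_global_turnover_eur, 20_000_000.0)

# A firm with EUR 1 billion turnover faces a ceiling of EUR 40 million,
# while a smaller firm remains exposed to the EUR 20 million floor.
large_firm_ceiling = gdpr_fine_ceiling(1_000_000_000)
small_firm_ceiling = gdpr_fine_ceiling(100_000_000)
```

The "whichever is greater" structure ensures the penalty scales with large organizations while still being consequential for smaller ones.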

True order-making powers and financial penalties would lead to quicker resolutions for Canadians and give them the confidence to participate in the digital marketplace. Ultimately, enforcement mechanisms should result in quick and effective remedies for individuals, and broad, ongoing compliance by organizations and institutions. Without effective enforcement, rights become hollow and trust dissipates.

Discussion questions:

  1. Do you agree that, for AI to be implemented in a manner that respects privacy and human rights, organizations need to be subject to enforceable penalties for non-compliance with the law?
  2. Are there additional or alternative measures that could achieve the same objectives?