Privacy Protection in the Era of “Big Data”: Response to Office of Privacy Commissioner’s Discussion Paper on “Consent and Privacy”

Colin J. Bennett (University of Victoria)

October 2016

Note: This submission was contributed by the author to the Office of the Privacy Commissioner of Canada’s Consultation on Consent under PIPEDA.

Disclaimer: The opinions expressed in this document are those of the author(s) and do not necessarily reflect those of the Office of the Privacy Commissioner of Canada.


Summary

The discussion paper “Consent and Privacy” raises a number of issues about the “consent model” upon which PIPEDA is based. Pressure on this model is more acute than it has ever been, as a result of what some have termed the “Big Data” revolution. I would like to offer some reflections on the Big Data revolution in response to the analysis in the paper (pp. 6-7). We need to separate the reality from the hype in order to properly understand the nature of the challenge to the traditional consent model.

My overall position on the issues raised in the paper is that: 1) consent is, and should remain, the cornerstone of any privacy-protection policy; and 2) the system requires the implementation of all potential privacy instruments in the toolbox (regulatory, self-regulatory and technological). That was the central message of my 2006 co-authored book, “The Governance of Privacy” (Bennett and Raab, 2006). It is obvious that Canadian privacy protection needs: better and more transparent privacy policies that work across services; privacy by default and by design; better de-identification and standards for de-identification; the encouragement of privacy management frameworks and accountability; and better use of codes of practice, technical standards and privacy trustmarks. All are necessary, and none is sufficient. I am also persuaded that the Commissioner needs stronger enforcement powers.

For this response, however, I would like to focus on the questions addressed under the section on Ethical Assessments (pp. 22-24), and consider in what ways we might enhance and broaden privacy impact assessments to embrace the wider range of risks produced by Big Data analytics. The response is adapted from a recent chapter produced for a report in the Netherlands (Bennett and Bayley, 2016).

A considerable literature already exists on PIAs and on their development and implementation in different countries (Wright and de Hert, 2012). They are now institutionalized under many data protection regimes and will become, in some contexts, mandatory under the new EU General Data Protection Regulation (GDPR) (EU, 2016). In the main, however, these methodologies were developed before the challenges posed by big data analytics, and tend not to incorporate assessments of the broader discriminatory impacts of these practices.

Do existing PIA methodologies need to be revised to enable the evaluation of risk in the context of big data? What tools might be developed to assess the broader social risks of excessive surveillance and categorical discrimination? There are proposals for Surveillance Impact Assessments (Wright and Raab, 2012), and for more unified ethical frameworks developed to guide data scientists (IAF, 2014; 2015). This response questions whether the integration of existing PIA methodologies into a broader ethical framework is a necessary condition for mitigating individual and social risks in this new era of big data analytics.

Finally, what other regulatory solutions have been proposed, past and present, that could realize the promise of big data analytics while at the same time protecting individual privacy rights? Is it really necessary to abandon the central tenet of privacy protection law and philosophy in order to permit big data analytics to realize their potential? I do not think so. On the contrary, I would argue that the current debate tends to rest on a false dichotomy and a fundamental misunderstanding of the theory of information privacy that developed 40 years ago, and of the data protection policies that it generated (Bennett, 1992).

The full submission is available in the following language(s):

English (PDF document)

Note: As this submission was provided by an entity not subject to the Official Languages Act, the full document is only available in the language provided.
