Consent and Personal Information Protection

Avner Levin (Professor and Director, Privacy and Cyber Crime Institute, Ryerson University)

October 2016

Note: This submission was contributed by the author to the Office of the Privacy Commissioner of Canada’s Consultation on Consent under PIPEDA.

Disclaimer: The opinions expressed in this document are those of the author(s) and do not necessarily reflect those of the Office of the Privacy Commissioner of Canada.


Summary

The principle of consent has devolved from its role as a lynchpin of the privacy-protective regulatory system a generation ago into a façade, one that today offers us no more than the illusion of control over our personal information while in reality enabling widespread corporate commercial data processing. Hastened toward its demise by rapid technological development and by new social and political paradigms of information sharing, the idea of consent, and the overarching principles of individual choice and control over personal information that it serves, can still be salvaged through a new regulatory approach. That approach should focus on retaining consent in meaningful instances with significant implications for individuals, such as the health-care, employment, and education contexts. The OPC must be equipped with enforcement and order-making powers comparable to those of other jurisdictions, and further regulation must protect privacy by leveraging the power of technology to produce hybrid regulatory/technological solutions along the lines of the “Google Spain” decision and the DMCA. If we could find a way to protect intellectual property for strong commercial interests despite technological developments, surely we can find a way to do the same for privacy.

Full submission:

Note: As this submission was provided by an entity not subject to the Official Languages Act, the full document is only available in the language provided.

Introduction

I am grateful for the opportunity to submit another short paper in response to another excellent OPC discussion paper, this time on the topic of consent. The paper addresses several of the topics of the discussion paper and is based in part on a talk I delivered recently to McGill University’s graduate law student conference. It focuses on what I call the façade of consent and on the steps the OPC should take if we are to retain meaningful protection of personal information in Canada.

The Façade of Consent

In contemporary Canada, and in the commercial context to which PIPEDA applies, there is no escape from the conclusion that the consent regime has failed. It has failed because it does not offer individual Canadians the means of achieving the overarching goal of PIPEDA and similar data protection statutes, which is to give individuals control over the processing of their personal information and, through that control, the protection of that information. Control is not possible because the idea of consent, and the manner in which it is obtained, are stuck in time. They are the product of a “Golden Era” of privacy that was at its heyday (such as it was) in the 1970s and 1980s.

This Golden Era of privacy saw governments establish fundamental principles, pass legislation, and create regulatory frameworks to oversee, first, government, and subsequently, private sector, processing of personal information. These fundamental principles were known in the US as FIPPs, for Fair Information Practice Principles. The Americans identified five original principles (in 1973): Notice, Choice, Access, Security, and Enforcement. The Europeans and the OECD expanded these five principles into eight in the 1980s, and since 2001 ten principles have been enshrined in Canada’s private sector statute, PIPEDA. Equally important to the Golden Era was the regulatory framework that developed around the enforcement of the fundamental principles, in the form of independent Commissions or Authorities to which government departments, agencies and ministries, and subsequently (in Europe and Canada) the private sector, would answer.

The Golden Era came to be not only because of all of these actions, but because these actions had meaning and significance at the time. The privacy principles had real meaning – following them literally changed information management and provided individuals with real control over their information and over who else had it. Control became the essence of personal information protection, and in FIPPs language it was captured by the principles of choice, consent and, to a lesser degree, notice. The Germans developed the idea of “informational self-determination” – the idea that control over your information allows you to determine and shape your identity and your sense of self, and that this should be an individual right, rather than a government dictate. (It is sadly easy to see how the Germans, learning the lessons of the Second World War, would want to wrest control over the identification of individuals out of the hands of government for good.)

Technologically, what allowed members of society to exercise control over their personal information was the feeble (in modern terms) processing and storage power of computers at the time. For example, the first hard drive to offer a gigabyte of storage was sold only in 1980; manufactured by IBM, it cost about $40,000. Furthermore, personal information was not collected continuously by government or by the private sector. Instead, it was collected in a series of discrete interactions. We were able to make separate decisions about whether we wanted to provide information and what information we would provide, on a variety of occasions, such as when we filled out our income tax returns or the census, applied for a passport or a driver’s licence, or decided, when we shopped, whether to give the store our postal code or telephone number. All these interactions depended almost exclusively on us as the source of information about ourselves, and so we were able to decide whether we wanted to interact, what information we would share in the interaction, and under what conditions.

We had, in the language of modern-day principles, a meaningful opportunity both to consent to the collection of our information and to understand the purposes (as in the examples just mentioned) to which it would be put. Moreover, the information collected and processed about us was stored in discrete, stand-alone proprietary databases, in both the private sector and government. Special effort was required to share and transfer (i.e., disclose) information about us between government departments.

The combination of all of these created the Golden Age of privacy. We felt, largely correctly, that we were in control. We felt that if we decided not to provide information about ourselves to the government or to a business, then that government or business would not know or have access to that personal information. We felt that the purposes for which our information was used were well-defined and limited, and that we knew, or could find out if we wanted to, what information was stored about us. We felt, in other words, that privacy principles genuinely offered us protection. Then slowly, gradually, incrementally, everything changed. And today, instead of real protection, we have the façade of consent.

Why is contemporary consent a façade? According to estimates, in 2016 every minute of the day sees close to a million Tinder swipes, 2.5 million Instagram posts ‘liked’ and seven million Snapchat videos watched. That is a lot of personal information processed every minute of the day, and an exponential leap in our technological capacity to handle data from that first hard drive. Yet we maintain “control” over it through the same legal tools and principles that were devised forty and fifty years ago. Further, our control over information has loosened not only because of the increase in our technological capabilities, but just as much because of the change we have undergone in our socializing. Social media is here to stay, and while we may be interested in controlling our information, we are just as interested in gossip and in information sharing. We are human beings and we do as humans do, whether offline or online. Among the many implications for privacy is this: personal information about us no longer originates exclusively with us. Others can be a rich source of information about us through their activities, and both government and the private sector can deduce, generate if you will, personal information about us through the analysis of so-called metadata and by other means. We have no control over, and no ability to meaningfully consent to, such personal information processing.

Now add to these developments the advocacy aimed at bringing about normative change, at diminishing the value of privacy, whether in the name of national security or in the name of profit. These pushes for societal change are accompanied by technological changes, not all of them social-media related. As noted by the discussion paper, the Internet of Things is based upon a proliferation of sensors that capture our information in the public domain, such as cameras, cell-phone cameras, police body cameras and drone cameras. All of these erode privacy, erode our control over our personal information, and erode our ability to decide what happens with it.

The privacy principles of yesteryear are no longer powerful, meaningful or relevant. They are no longer up to the task of ensuring individual control over personal information. Instead, we have the illusion of control, created through contracts of adhesion known colloquially as “terms of use” or “privacy policies”, to which we fictitiously indicate our acceptance by clicking a virtual button or ticking a virtual box. In fact, these contracts serve only the commercial interests of the corporations that drafted them, and they have transformed the principle of consent from an idea of individual information protection into a licence for unfettered processing for commercial purposes. It should be noted that these are neither uniquely Canadian problems nor uniquely failures of personal information protection. Data protection regimes face similar challenges internationally, as the discussion paper notes, and contracts of adhesion create numerous consumer and legal problems, not only information-related ones. Yet neither fact consoles those seeking the protection of privacy in Canada and the resurrection of consent as a meaningful idea, or its rejection in favour of other substantive protection.

A New Hope?

So what, if anything, can be done? The legal/regulatory answer is clear: we need a new set of privacy principles, and of course there have been quite a few proposals for such principles. What is perhaps not as clear is that we also need new ways of enforcing them, since the regulatory framework of agencies and commissions that has served us may need to evolve and take on new roles.

First, a short discussion of the shape that such new principles could take. There have been many initiatives in recent years that could be characterized as either conservative or radical, from the revised OECD principles, to the new EU General Data Protection Regulation (GDPR) with its intriguing inclusion of new principles such as Privacy by Design, Privacy by Default and the Right to be Forgotten.

This short discussion will skip the above, however, all of which are worthy of their own devoted essays, to focus on a rogue group of academic and industry leaders that came together a few years ago, mainly through a collaboration between Microsoft and the Oxford Internet Institute. Their radical proposal was that it is perhaps time to abandon the principles of notice and consent and to move towards principles that restrict and limit the use and processing of information.

Recall the 2016 data processing estimates above to understand why the Oxford-Microsoft group believes that notifying people and asking them to agree to the processing of their data—in the manner it is currently done—does not offer individuals meaningful protection and control over their information. Instead, it offers corporations a fig leaf of legality, by way of privacy policies and terms of use, to legitimize their extensive and continuous data processing. Put differently, the act of agreeing is a discrete, singular act, whereas the processing of data is continuous.

What the Oxford-Microsoft group suggested was that meaningful protection in our era for our privacy and our personal information will only be found by tightening the constraints over the uses and purposes for which information can be processed, and by focusing on processing that has significant implications for individuals, such as an admission decision to a university, or insurance coverage, or employment, hiring and disciplinary decisions or health-care provision. Many, many other commercial purposes and processing—for example for advertising and marketing—would not be restricted at all according to this proposal.

It is easy to see why the Oxford-Microsoft proposal is both attractive and horrifying simultaneously. Does it offer us a brave new hope? Or does it simply surrender the battle over privacy? It may offer us some hope, but only if we change the way in which we currently enforce our principles of privacy protection, which brings us to the second point of this short discussion.

So, second, how do we provide meaningful privacy protection in this day and age, and perhaps even for the foreseeable future? Canada and other countries need to find a way to continuously offer individuals control and choice, and to infuse new meaning into the other Golden Era principles. It is no surprise that we will need technology to do that. In fact, we will need to combine regulatory and technological responses, and we will need regulatory and legal decisions to directly determine and dictate technological privacy-protective measures. We will need, in other words, more decisions along the lines of “Google Spain”, or perhaps, if we cast our net a bit more broadly and earlier in time, more DMCAs (the US Digital Millennium Copyright Act).

What the DMCA did legally was establish liability for corporations that could be seen to facilitate intellectual property infringement, unless they could demonstrate their IP-protective actions. What it achieved technologically was the creation of a largely automated interface through which IP rights could be pursued and protected. As a result, when a person searches for the latest episode of a popular show such as Mr. Robot through popular search engines or on popular content providers, such as YouTube or Google, the protected intellectual property cannot easily be found.

Note that it is not the case that it cannot be found at all – it may be available for streaming or downloading or torrenting on some other, more obscure online location. But it is a fair assumption that most members of society, with their average technological skills (or lack thereof), would conclude that if they cannot find it easily on YouTube or Google then it is nowhere to be found on the internet. And that conclusion, of course, is of vital importance to privacy, and is the beauty of the Google Spain decision as well. For the significance of that decision is not only in its confirmation of a right to be forgotten, but also in Google’s decision in its aftermath to create a technological interface, similar to the IP interface, that would allow individuals to submit privacy requests easily and efficiently. It is not a perfect process, and there is much to improve, but it is a start.

There are many ways in which technology could leverage a regulatory decision. To note a couple: police body camera video feeds could be encrypted by default, with judicial approval required to decrypt the images, and drone manufacturers could be required by law to geo-fence their devices so that they could only be flown in open spaces, or face legal liability. We need many more “Google Spain” legal and regulatory decisions, and we need to provide the private sector and the public sector with the right incentives, both positive and punitive, that would encourage them, nudge them and, if necessary, force them to come up with more such solutions. Inescapably, in the Canadian context, this leads to the continued call for greater enforcement and order-making powers for the Privacy Commissioner of Canada, powers that would place the OPC on a level playing field with other data protection and privacy enforcement authorities worldwide and ensure that the private sector views the OPC as a significant regulator.

Summary and Conclusion

The principle of consent has devolved from its role as a lynchpin of the privacy-protective regulatory system a generation ago into a façade, one that today offers us no more than the illusion of control over our personal information while in reality enabling widespread corporate commercial data processing. Hastened toward its demise by rapid technological development and by new social and political paradigms of information sharing, the idea of consent, and the overarching principles of individual choice and control over personal information that it serves, can still be salvaged through a new regulatory approach. That approach should focus on retaining consent in meaningful instances with significant implications for individuals, such as the health-care, employment, and education contexts. The OPC must be equipped with enforcement and order-making powers comparable to those of other jurisdictions, and further regulation must protect privacy by leveraging the power of technology to produce hybrid regulatory/technological solutions along the lines of the “Google Spain” decision and the DMCA. If we could find a way to protect intellectual property for strong commercial interests despite technological developments, surely we can find a way to do the same for privacy.
