A Return to First Principles for Privacy at the Cutting-Edge: But Which Principles?

Remarks at the Pathways to Privacy Research Symposium

Ottawa, Ontario
February 26, 2015

Address by Patricia Kosseim
Senior General Counsel and Director General, Legal Services, Policy, Research and Technology Analysis Branch

(Check against delivery)


Thank you for your invitation to be here today.  We are delighted that you responded to our call for proposals to host the Pathways to Privacy (P2P) Symposium this year and we congratulate you on putting together such a stellar program on such a foundational theme.

As you know, the OPC Contribution Program celebrated its 10-year anniversary this year, and as required, we recently completed the second 5-year review of the program.  The evaluation report is up on our website for anyone interested in reading it, but in short, I am happy to relay a few key findings:

The evaluation concluded that the Program continues to address a demonstrated need, given its unique niche in promoting privacy issues and the universal support of those consulted; it is aligned with the OPC's role, responsibilities and priorities, and meets the intent of PIPEDA.  The Program is enhancing both the production and sharing of privacy knowledge and best practices, and since the last evaluation there has been a greater diversity of projects funded across priority areas and recipient groups, as well as more knowledge translation activities.  The Program is seen to be efficient, with a high level of project output for the funds invested, producing good value for money.  Among the recommendations going forward are to continue to translate research into results and to encourage new, innovative partnerships that extend the reach of the Program.

It’s precisely in that spirit that we initiated the P2P Symposium three years ago, with the following objectives in mind.  To:

  1. Showcase the results of research funded by the OPC and other relevant funders;
  2. Facilitate dialogue between those who do the research and those who apply it; and
  3. Provide an opportunity for researchers to network and build future partnerships.

After having organized and hosted it the first year, we thought: who better to carry forward this initiative and improve upon the model than the research community itself?  Since then, we have dedicated a special portion of our research funding envelope to this annual research symposium.  Last year, the successful applicant was the University of Toronto, and this year, here we are, at the University of Ottawa.

Going forward, we are considering expanding the call for P2P proposals to invite not only original ideas for future symposia, but also other creative modes of research translation and dissemination.  We certainly welcome your thoughts on that.

A Return to First Principles for Privacy at the Cutting Edge

As for today’s theme, “A Return to First Principles for Privacy at the Cutting Edge”, I could not have imagined a more timely, compelling and apropos topic of discussion.  You are absolutely right to encourage this kind of thoughtful reflection at a time of such rapid, game-changing technological developments.  For some time now, we too have been thinking about the conversation that needs to happen, and my OPC colleagues and I are delighted to be able to take part in your discussions today.

To get things started, let me share with you a few musings of my own.   When we think of first principles, a good starting point is always the Supreme Court of Canada decision in R. v. Dyment, [1988] 2 SCR 417.  The Court there adopted a categorization of privacy claims from the 1972 Department of Justice Task Force Report on Privacy and Computers which continues to be a helpful analytical tool in parsing out the underlying principles and interests at stake.  The Court spoke of three realms or zones of privacy: territorial privacy, personal privacy and informational privacy.

This third realm, “informational privacy”, was the subject of a more recent Supreme Court of Canada decision in R. v. Spencer, [2014] 2 S.C.R. 212.    The Court recognized three conceptually distinct — though overlapping — understandings of informational privacy as they’ve evolved over the past two and a half decades of Supreme Court jurisprudence.

The first conception of informational privacy is concerned with confidentiality or secrecy which inheres in the nature of a relationship between parties, such as the doctor-patient relationship.  Such was the concept of informational privacy at the heart of McInerney v MacDonald, for example.

The second is Alan Westin’s concept of control underlying the individual’s claim to determine for themselves when, how and to what extent information about them is communicated to others.  This claim is based on “the assumption that all information about a person is in a fundamental way his own, for him to communicate or retain for himself as he sees fit.”  The Court cited here as examples its earlier decisions in R. v Dyment and R. v. Duarte.

The Court then went on to articulate a third notion of informational privacy as anonymity, where information is willingly communicated to others, but on the basis that it will not be identified with the person communicating it. Though not a novel concept, anonymity was described as particularly important in the age of the Internet: “Anonymity permits individuals to act in public places but to preserve freedom from identification and surveillance.”  The Court referred to its earlier decision in R. v. Wise on the ubiquitous monitoring of vehicles on public highways, although it had also planted the seed for this analysis more recently in United Food and Commercial Workers, when it recognized that individuals do not automatically forfeit their privacy interests when they step out in public.

R. v. Spencer was a high watermark for privacy in Canada, tying together years of past jurisprudence into the most coherent and nuanced understanding of informational privacy yet — moving the yardstick significantly forward in the modern age of the Internet.

However, given the theme of today’s symposium urging us to “Return to First Principles for Privacy at the Cutting Edge”, I’d like to posit that even this most recent expression of informational privacy may not yet be complete and that looming ahead is a further conception of informational privacy just waiting to be explored and articulated.

To make my point, let me take you there intuitively through the use of several examples:

  1. Online reputation
  2. Big Data
  3. Wearable Computing

Online Reputation

The longevity of the personal information we ourselves post online, or that others post about us, for anyone to see, for any purpose and forever, is driving us to see and think about privacy in different ways.  The implications of this new phenomenon are starting to settle in, with jurisdictions struggling to grapple with the emergence of new claims for the right to be forgotten.  One could argue that underlying the right to be forgotten is still the concept of informational privacy as control: essentially, an individual’s claim to determine for themselves when, how, and to what extent information about them is communicated to others.  But is it really?

The recent Google Spain decision of the European Court of Justice is instructive in this regard.  The case involved a complaint made by a Spanish national to the Spanish Data Protection Authority against a Spanish newspaper, La Vanguardia, as well as Google.

The individual complained that when he searched his name using Google’s search engine, he obtained links to two newspaper articles published in 1998 concerning a forced sale of property he owned in order to recover unpaid social security debts. He maintained that the issue of his unpaid debts had now been fully resolved and the information contained in the articles, while accurate, had become irrelevant with the passage of time. The complainant requested that the newspaper remove the articles and that Google be required to prevent the articles from being returned in its search results.

In a decision rendered May 13, 2014, the European Court of Justice did not order the newspaper to remove the original articles from its website, but did order Google and other search engines to remove links from search results upon request, where the information, even if originally accurate, had become inadequate, irrelevant or excessive.   Such requests could be refused however depending on the sensitivity of the information and the public interest in having access to it. 

When viewed from our own legal construct of informational privacy, what privacy claim lies at the root of this decision?  Certainly not confidentiality, since the forced property sale had been originally publicized pursuant to an order of the Ministry of Labour and Social Affairs.  Nor anonymity, it would seem, since there was no request for a publication ban.  Privacy as control does not seem to be fully operative either, given that there was never any dispute over the accuracy of the personal information at issue.  It was what it was, and the Court did not accept the complainant’s claim to complete control over the destiny of the information by deleting the original articles per se.  What, then, was the operating principle that led the European Court of Justice to find that it was the search engine’s relentless recall and dissemination of the personal information, as opposed to its original publication, which offended the complainant’s privacy?

Big Data

Probably the biggest test of first privacy principles has come with the growth of big data: the massive aggregation of data, combined with powerful algorithms and computing power, to identify patterns, draw correlations and make inferences that can predict future behavior.  Big data challenges our known conceptions of informational privacy in fundamental ways.  Proponents of big data claim that informational control, in the form of consent, is not meaningful or even possible given the sheer volume of data that is amassed, and that having to specify or limit future purposes to obtain informed consent is inimical to the very benefits big data are intended to realize.  They also argue that the data are drawn from public sources or online activities which are non-confidential in nature and often non-identifiable.

An earlier precursor to some of these big data questions came to our Office a few years ago in the form of a complaint against an organization that produced and sold customized consumer lists to third party businesses for direct marketing purposes, based on ‘select’ criteria.  Their business model at the time was to combine publicly available, white pages information about individuals (last name, sometimes first name or initial, address and telephone number, with full postal codes) with aggregate geo-demographic data about the surrounding neighborhood (or census dissemination area) obtained from Statistics Canada (average age, type of home (single or multi-dwelling), likelihood of home ownership, estimated home value, estimated income).  Moreover, using name tables and proprietary models based on genealogical regression analyses, the company would impute gender, ethnicity and religion to individuals based on their last names.

After quite an in-depth investigation of the technology and methodology used at the time, the then Assistant Commissioner found that the white pages data, even when combined with other geo-demographic data about the surrounding neighborhood’s general characteristics which may or may not be accurately attributed to the individual, did not convert the publicly available information into personal information subject to consent requirements.

In this case, the data clearly were not confidential or anonymous.  Moreover, given the publicly available nature of the information, the complainant’s claim of privacy as control, which would have required consent for use and disclosure of the individuals’ personal information, could not hold, and the complaint was not well-founded.  Notwithstanding this finding, there remained a strong intuitive unease with the company’s practice of imputing gender, religion and ethnicity to individuals based on their names, and all the assumptions and inferences associated therewith, which we simply could not get at using existing legal constructs.

A more recent example helps press the point further.  According to a 2013 study led by Harvard University Professor Latanya Sweeney, “Discrimination in Online Ad Delivery”, racial bias was found in ads connected with certain search terms on Google and Reuters.  When searching black-identifying first names (such as DeShawn, Darnell, and Jermaine), a higher percentage of ads offering services for criminal record checks appeared than was the case when searching white-identifying names like Brad, Jill, or Emma.  Even conceding momentarily that control, confidentiality and anonymity are not operative privacy interests in this context because the practice does not involve personal information per se, what other privacy interest or claim — if any — is engaged by the intuitive feeling we get about its morally reprehensible nature?  Why are we offended not only by the inferences being drawn from the racial profiles of others, but also by the assumptions being attributed to our own interests and intentions based on our search queries?

Wearable Computing

The advent of wearable computing and sensors has opened up vast opportunities to extract and transmit personal information about our bodies on an ongoing basis, and to link it with other online or offline information about us.  Early adopters of this technology have enthusiastically outfitted themselves with such devices for medical, recreational, convenience or personal safety purposes.  But these devices also represent highly lucrative opportunities to commercialize and sell our bodily data to third parties for marketing or even insurance purposes.  Even assuming individuals’ consent is truly informed and meaningful, at what point do our bodies become so commodified and objectified that control over the information begins to give way to another, more operative privacy interest?

There are analogous instances in other domains where we have established legal and ethical limits to consent and we have readily accepted such outer limits.  For instance, we have, as a society, agreed to ban commercial surrogacy, prohibiting any form of payment other than for reimbursement of expenses.  Ethical frameworks we have constructed to regulate research activities in Canada likewise prohibit payment of research participants, other than for reimbursement of expenses.

Perhaps these examples have been rightly justified in the interest of preserving bodily integrity.  But even if we assume that wearable computing and body sensors do not physically interfere with our bodily integrity per se, does the constant emission of our body-related data to others for their commercial exploitation not begin, intuitively, to offend some other underlying principle?


What I think these and several other examples at the cutting edge of technology drive us to consider is a possible fourth conception of informational privacy looming in the background.  Increasingly, the concept of “dignity” is entering the lexicon of privacy debates, but has yet to be recognized and fully articulated as a principle of informational privacy.

With the advent of the Internet, big data and wearable computing, our traditional conceptions of informational privacy are being stretched, and it may be time to consider, perhaps intuitively at first and then more systematically, what other underlying claims or interests are engaged in this new age of information technology.

I am conscious of the legitimate concerns raised by some American authors who guard against privacy policy becoming muddled with peripheral or even conflicting interests that, even if theoretically distinct, are less important than core privacy interests.

I wonder, though, whether these concerns are really, at their source, a matter of preserving conceptual integrity, or whether they are rooted in a longstanding cultural perspective that has led to a particular way of seeing and understanding privacy.  Comparative law scholars like James Whitman have gone to great lengths to contrast the American understanding of privacy, based on values of liberty and autonomy, with the European experience, which has tended to approach privacy as a matter of dignity, honor and respect.  As typical Canadians, with both common law and civil law traditions, we tend to straddle somewhere in between, though we will have important legal and policy choices to make at the crossroads in the coming years.

The emergence of new technologies is pushing our traditional conceptions of privacy more than ever, and you are very wise to have conceived today’s theme in such a way as to encourage us to return to first principles for privacy at the cutting edge. 

And with these few opening remarks, I look forward to hearing your thoughts and taking part in discussions over the course of the day, and perhaps we can look forward to seeing some research proposals on this fundamental question next year!

Thank you.
