
Privacy In The Digital Age: Three Reasons For Concern, And Three Reasons For Hope


Alessandro Acquisti
Carnegie Mellon University

This paper was commissioned by the Office of the Privacy Commissioner of Canada as part of the Insights on Privacy Speaker Series.

April 2011

Disclaimer: The opinions expressed in this document are those of the author(s) and do not necessarily reflect those of the Office of the Privacy Commissioner of Canada.

Related video: Insights on Privacy: Christena Nippert-Eng and Alessandro Acquisti


After my visit to Ottawa in March 2011, and the exciting discussions I had there with the staff of the Office of the Privacy Commissioner and with Christena Nippert-Eng, I kept wondering about one of the themes that had emerged during the Speaker Series event: what does the future hold for privacy?

I must admit that, since I started doing research on privacy a few years ago, I have become more, not less, concerned about our individual and collective abilities to maintain privacy in a world where most of our personal and professional lives unfold leaving trails of electronic data, and where powerful economic interests favor information availability over information protection. This is unfortunate: my basic premise, reflecting the results of previous work on the economics of privacy, is that a balance between information gathering/sharing and information protection may be in the common, long-term interest of both data subjects and data holders; in fact, much more so than either unfettered access to individual data or a complete blockage of any flow of personal information.

However, an unprecedented amount of personal information is now in the hands of third parties, outside the control (and knowledge) of the individuals to whom that information refers. If information is power, the vast amounts of personal data accumulated by third parties (and the latter's ability to mine that data for significant patterns) inevitably tilt the balance of power between data subjects and data holders. The long-term social and economic implications of this trend are not necessarily always benign.

Perhaps to soothe my own concerns, I then started looking for reasons for hope and optimism, too. This short note documents some of the concerns that have emerged from what I have learned about privacy in recent years, but also some of the reasons for hope.

Some reasons for concern

A first reason for concern I was alluding to resides in the unprecedented access that third parties (firms, but also governments) have to aspects of our lives that, until not long ago, used to be private – but which are now either more or less surreptitiously monitored by data holders, or even publicly advertised by the individuals themselves. Firms and governmental bodies have always gathered personal information about customers and citizens, no doubt. What I find remarkable, today, is the amount and quality of that information, how pervasive its collection has become, how often that collection is invisible to the data subject, and what remarkably precise (and sometimes sensitive) inferences can be drawn from that data. For instance, a few pieces of personal (but not necessarily identifiable) information can uniquely identify an individual, or allow the inference of more sensitive information about her. In a paper published in 2009, we showed how we could predict individuals' Social Security numbers (in the US, highly sensitive information) from information gained from publicly available Internet sources (Footnote 1). As we explained in the article, we extracted birth information from Facebook profiles of students at a North American university. Then, we used simple statistical tools (such as regression analysis) to interpolate the information coming from the student sample with information coming from the so-called Death Master File (a database of deceased individuals' Social Security numbers). Using this method, we were able to accurately predict, with a single attempt, the first five digits of the Social Security numbers for 6.3% of our sample. This result is merely one example among many of the increasing ability to predict highly sensitive data by combining disparate databases, each of them not particularly sensitive on its own.
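To make the general idea of such statistical interpolation concrete, here is a minimal sketch in Python. It is only an illustration, not the method from the 2009 paper: the dataset is synthetic, the column names and the assumed relationship between birth data and the first digits are hypothetical, and a real analysis would be far more involved.

```python
# Illustrative sketch only (synthetic data, hypothetical relationships), not the
# method from the 2009 paper: fit a simple regression that maps birth state and
# birth date to the first five SSN digits, using records styled after a public
# death registry, then apply it to birth data scraped from a public profile.
import numpy as np
from sklearn.linear_model import LinearRegression

rng = np.random.default_rng(0)

# Toy "death registry"-style records: state code, birth date (days since an
# epoch), and the first five SSN digits. The linear drift below is assumed
# purely for illustration.
n = 5000
state = rng.integers(0, 50, n)
birth_day = rng.integers(0, 12000, n)
first5 = (state * 1500 + birth_day // 8 + rng.normal(0, 200, n)).clip(0, 99999)

X = np.column_stack([state, birth_day])
model = LinearRegression().fit(X, first5)

# Predict the first five digits for a new individual whose (hypothetical)
# birth state and date were obtained from a public online profile.
pred = model.predict([[13, 9500]])
print(f"predicted first five digits: {int(pred[0]):05d}")
```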

Some argue that giving users more control over their data is a way to address the above (and similar) concerns over the gathering and analysis of personal information. I am, unfortunately, skeptical that more user control, alone, can be of help. First of all, users are often unaware of the extent to which information about them is gathered and sensitive inferences are possible. More importantly, while control is a normatively appropriate concept for privacy (that is, in terms of how we would like the world to be), the implications of control in positive terms (that is, in terms of how the world actually is) may be less benign. In a recent manuscript (Footnote 2), we investigated how control over the publication of personal information can affect individuals' propensity to reveal sensitive details to strangers. Our conjecture was that control over the publication of private information may decrease individuals' privacy concerns, and therefore increase their propensity to disclose sensitive information – even when the objective risks associated with such disclosures become more significant. To test this hypothesis, we designed a series of experiments in which we asked subjects to answer sensitive and non-sensitive questions in a survey. Across the experimental conditions, we manipulated the participants' control over information publication, but left constant (or manipulated in the opposite direction) their level of control over the actual access to and use by others of the published information – arguably, the actual source of privacy harm. We found, paradoxically, that more control could lead to "less privacy," in the sense that higher perceived control over information publication increased our subjects' propensity to disclose sensitive information, even when the probability that strangers would access and use that information increased. These results show how technologies that make us feel more in control over our personal information may, in fact, promote more sensitive disclosures. They therefore cast doubt on the hope that merely giving users more control will help them achieve the desired balance between information sharing and information protection.

A third reason for concern that our research has highlighted relates to the impact of information about us on others' judgments and behaviors. In a series of experiments, we tested the hypothesis that the impact of personal information with negative valence about an individual tends to fade away more slowly than the impact of information with positive valence. This would happen not just because the immediate impact of negative information may be stronger (something already shown in the literature), but also because negative and positive information are actually discounted differently (Footnote 3). In our experiments, we manipulated the type of information about an individual that our subjects were exposed to (namely, either positive or negative information, such as the individual engaging in a good or in a bad deed); we also manipulated the time to which such information supposedly referred (that is, the time at which the event associated with the information ostensibly occurred: for instance, having engaged in a good/bad deed either in the recent, or in the distant, past). Then, we measured how subjects reacted to such information. Specifically, we measured how subjects judged the individual, as a function of whether the information reported about the individual had positive or negative valence, and whether it was presented as recent or old. Our results confirmed that the negative effects on other people's opinion of a person, based on personal information about that person with negative valence, faded away more slowly than the positive effects of information with positive valence. In other words: good deeds positively affected our subjects' judgment of the individual only if they were reported as having happened recently, not in the distant past; bad deeds, instead, negatively affected our subjects' judgment regardless of whether they were reported as recent or distant. The implication of these results for contemporary privacy is straightforward, and rather gloomy: Web 2.0 applications allow Internet users to share all sorts of information about themselves, both positive and negative (for instance, information that may be embarrassing or inappropriate when taken out of context); not only does the Internet not allow that information to be "forgotten," it also seems that our innate reactions often do not allow us to "forgive" negative information about others even when it is old.
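One way to picture what "discounted differently" could mean is the following sketch, in which the weight an observer places on a piece of information decays with the information's age, with a slower decay rate for negative than for positive information. The functional form and the decay rates are my own illustrative assumptions, not the model or the estimates from the experiments.

```python
# Illustrative sketch (assumed exponential decay and hypothetical rates, not the
# paper's model): the impact of a reported deed on an observer's judgment fades
# with the age of the information, but negative information fades more slowly.
import math

def impact(valence: float, years_old: float, decay_rate: float) -> float:
    """Remaining impact of a piece of information after `years_old` years."""
    return valence * math.exp(-decay_rate * years_old)

DECAY_POSITIVE = 0.8   # hypothetical: good deeds fade quickly from judgment
DECAY_NEGATIVE = 0.1   # hypothetical: bad deeds are discounted far more slowly

for years in (0, 1, 5, 10):
    good = impact(+1.0, years, DECAY_POSITIVE)
    bad = impact(-1.0, years, DECAY_NEGATIVE)
    print(f"{years:>2} years old: good deed {good:+.2f}, bad deed {bad:+.2f}")
```

Under these assumed rates, a ten-year-old good deed contributes almost nothing to the observer's judgment, while a ten-year-old bad deed still carries most of its original negative weight, which is the asymmetry the experiments point to.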

Some reasons for hope

It turns out, however, that the very cognitive and behavioral biases that my research on privacy decision making investigates (and which raise concerns over our ability to optimally navigate issues of privacy in the digital age) also offer grounds for optimism.

A first reason resides in the observation that, although modern information technology seems to privilege disclosure over privacy, both the need for publicity and the need for privacy seem to be innate: they appear to be part of human desires and drives across diverse times and cultures. Not only is there historical and ethnographic evidence of the quest for privacy across different societies, but there is also experimental evidence suggesting that the desire to disclose and the desire to protect can, in fact, be activated through subtle manipulations. In a recent set of studies (Footnote 4), we manipulated the salience of information revelation and the activation of the drive to disclose versus the drive to protect one's privacy, with profoundly different effects on disclosure. Our preliminary results suggest that individuals face competing forces when deciding how to balance information protection and disclosure (the desire to divulge, and the desire for privacy). To understand variation in information revelation across situations, we must understand how both motives operate. This, in turn, suggests that the act of disclosing plenty of personal information online does not prove, per se, a lack of privacy concerns.

A second reason for hope is that research on the hurdles of privacy decision making can actually be used to develop policies and technologies that anticipate and counter the very cognitive and behavioral biases that hamper users' privacy decision making. Such an approach is inspired by the behavioral economics literature on soft, or asymmetric, paternalism. As discussed in a recent paper (Footnote 5), research on soft paternalism suggests that lessons learnt about the psychological processes underlying behavior can be used to aid that behavior. Systems or laws can then be designed to enhance, or even influence, choice, without restricting it. The goal of these "nudging" interventions, in the privacy space, would be to increase individual and societal welfare, helping users make privacy (as well as security) decisions that they do not later regret. In doing so, this effort goes beyond privacy usability, and actually attempts to counter or anticipate the biases that lead individuals to make decisions that reduce their overall welfare or satisfaction. Under grants from NSF and from Google, we have been investigating how to develop nudging tools for online social networks, mobile applications, and location services, to achieve exactly that goal.
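As a purely hypothetical sketch of what such a nudge might look like in software (it is not a description of any specific tool from the work mentioned above, and the list of sensitive terms and the prompt wording are assumptions), consider a prompt that makes the audience of a post salient at the moment of disclosure rather than after the fact:

```python
# Hypothetical privacy "nudge" sketch: before a post is published, make the
# audience salient and ask for confirmation, so the drive to protect is
# activated at the moment of disclosure. Terms and wording are assumptions.
SENSITIVE_TERMS = {"ssn", "salary", "diagnosis", "home address"}

def nudge_before_posting(post_text: str, audience_size: int) -> bool:
    """Return True if the user still wants to publish after the nudge."""
    flagged = [t for t in SENSITIVE_TERMS if t in post_text.lower()]
    if not flagged:
        return True  # nothing to nudge about; publish immediately
    print(f"This post mentions {', '.join(flagged)} and will be visible "
          f"to about {audience_size} people.")
    answer = input("Publish anyway? [y/N] ")
    return answer.strip().lower() == "y"
```

The design choice here is characteristic of soft paternalism: the choice set is unchanged (the user can still publish anything), but the framing counters the tendency to overlook the audience and the sensitivity of a disclosure.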

An additional reason for hope resides in the development of privacy enhancing technologies (PETs). As noted in a recent white paper I wrote on the economics of privacy (Footnote 6), PETs — at least in principle — could produce a non-zero-sum economic game between the interests of data subjects and data holders. Information technologies are used to track, analyze and link vast amounts of data about individuals, but they can also be used to aggregate, anonymize, and ultimately protect those data in ways that are both effective (in the sense that re-identifying individual information becomes too costly, and therefore unprofitable) and efficient (in the sense that the desired transaction – such as an online payment, or even targeted advertising – can still be completed, even though a class of individual data remains unavailable to the data holder, the merchant, or the third party). Indeed, much cryptographic research (in areas such as homomorphic encryption, secure multi-party computation, or blind signatures) could, hopefully soon, be leveraged to satisfy both the need for data sharing and the need for data privacy. Protocols to allow privacy-preserving transactions of all types (payments, browsing, communications, advertising, and so forth) have been developed. My hope is that research in this area will not stop but in fact accelerate, so that those protocols will progress to the point where they can be cost-effectively deployed and resiliently operated at a massive scale: a future with privacy by design and by default, so to speak.
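To illustrate the flavor of the secure multi-party computation idea mentioned above, here is a toy additive secret-sharing example: several parties learn an aggregate (say, a statistic that a data holder actually needs) without any of them seeing the individual values. This is a teaching sketch under simplified assumptions (honest participants, a single aggregate), not a deployable protocol.

```python
# Toy additive secret sharing over a prime field: each party splits its private
# value into random-looking shares; only the recombined totals reveal the sum,
# never an individual input. A teaching sketch, not a production protocol.
import secrets

PRIME = 2**61 - 1  # field modulus (any sufficiently large prime works)

def share(value: int, n_parties: int) -> list[int]:
    """Split `value` into n additive shares that individually reveal nothing."""
    shares = [secrets.randbelow(PRIME) for _ in range(n_parties - 1)]
    shares.append((value - sum(shares)) % PRIME)
    return shares

def secure_sum(private_values: list[int]) -> int:
    n = len(private_values)
    all_shares = [share(v, n) for v in private_values]
    # Each party j sums the j-th share it receives from every participant...
    partial_sums = [sum(row[j] for row in all_shares) % PRIME for j in range(n)]
    # ...and only the combination of the partial sums reveals the aggregate.
    return sum(partial_sums) % PRIME

print(secure_sum([12, 30, 7]))  # prints 49; no party ever sees 12, 30, or 7 together
```

The point of the illustration is the economic one made above: the transaction (here, computing an aggregate) still completes, while the individual-level data stay unavailable to the data holder.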

Achieving that goal will require more than self-regulation and technological ingenuity, however. It will require direct policy intervention, and it will rely on our society's collective call for a future in which the balance of power between data subjects and data holders is not as dramatically skewed as current technological and economic trends suggest it may become.
