Connected things, privacy and public space: Approach to a taxonomy

Adam Greenfield
Urbanscale

The paper was commissioned by the Office of the Privacy Commissioner of Canada as part of the Insights on Privacy Speaker Series

July 2011

Disclaimer: The opinions expressed in this document are those of the author(s) and do not necessarily reflect those of the Office of the Privacy Commissioner of Canada.

Related video: Insights on Privacy: Adam Greenfield and Aza Raskin


My name is Adam Greenfield; I’m the founder and managing director of a New York City-based practice called Urbanscale, which is concerned with the design of systems, services and interfaces for networked cities and citizens.

Our work is driven by a deep belief that everyday urban life can be improved, for all residents and other users of a city, by the conscious and careful deployment of networked information-gathering/processing/transmission technologies, and the equally thoughtful design of interfaces to same. Since we’re obviously pretty heavily invested in the appearance of these more-than-occasionally opaque technologies in public space, we think it’s incumbent upon us to be particularly clear about:

  • what benefits we think these technologies offer;
  • what costs we believe they may impose, both in the short term and over the longer stretches of collective experience;

and particularly,

  • when we feel a given deployment of informatic technology either does not benefit the public, or, worse, actively undermines the quality of urban life, the autonomy of citizens, or their ability to move and act freely in urban space.

As a result, I want to get a better handle on how particular ensembles of sensing, display and actuation technologies condition that quality the sociologist Henri Lefebvre thought of as “the right to the city,” which I understand as the citizen’s agency and, equally importantly, sense of their own agency, over the place they live. My feeling is that the complexity of this terrain is such that abstract principles and so-called “best practices” aren’t particularly likely to be useful in guiding us toward better decisions unless they’re firmly grounded in a concrete consideration of some present-day actualities. In my own attempt to think more clearly about these issues, therefore, I started by constructing a taxonomy of data-gathering objects in public space, and that’s what I’d like to share with you here.

To further clarify the issues at stake, I’ve arrayed these systems along a spectrum from a sensor I think of as self-evidently non-threatening to something I think the privacy community particularly — and advocates of high-quality public space in general — ought to be deeply concerned by.

1. Prima facie unobjectionable

The very first system I want to consider here is a traffic sensor called Välkky, developed by the Finnish company Havainne. Though it’s potentially deployable anywhere on earth, Välkky clearly responds to the particular circumstances of the Finnish night and the traffic conditions that tend to obtain in the long dark. Designed to be mounted on existing traffic-sign posts, the system couples a grid of locally-networked motion detectors to an array of high-intensity blue and white LEDs; when properly positioned at a crosswalk or intersection, it alerts oncoming traffic to the presence of otherwise invisible pedestrians and bicyclists.

Let’s break down what’s going on here (a minimal code sketch of this logic follows the list):

  • Välkky’s motion sensor captures physical activity in the immediate vicinity. A pedestrian appears at a crosswalk, and the corresponding LED array is illuminated.
  • The source of the activating stimulus is neither measured by any other sensor, nor characterized by the system in any other way. Välkky makes no attempt (and indeed, is not equipped) to quantify, identify or otherwise flesh out its understanding of the person or other entity activating it. It could be triggered by a reindeer, and nobody would be any the wiser for it.
  • The information gathered is not stored, and does not persist. Välkky’s responses to both activation and de-activation are immediate; as far as I can tell from Havainne’s technical documentation, no record of system activations over time is maintained, although it would certainly be easy enough to do so. In effect, any awareness of a triggering event disappears from the world as soon as the value returned from a sensor drops below the threshold at which it has been programmed to register the presence of a pedestrian.
  • The information gathered is not transmitted remotely. While all of the Välkky sensors in a given intersection are linked via a short-range radio network, to ensure that a single activation triggers the entire local array of LEDs, none are connected to the global IP (Internet Protocol) network. Among other implications, this means that no other system has access to a Välkky unit or the patterns of fact it’s capable of registering.
  • No analytics are applied to sensor activations. Välkky doesn’t attempt to build any more elaborate model of the world than the extremely simple one reflected in the equation pedestrian = flashing light. No higher-order inference, whether one with political and economic implications or otherwise, is enabled by the presence and operation of the sensor.
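To make the stateless character of this design concrete, here is a minimal sketch of the loop just enumerated. I should stress that every name in it (the threshold value, LedArray, read_motion_level) is my own invention; Havainne has not published its firmware, and this models only the behavior described above, not their implementation.

```python
import random
import time

THRESHOLD = 0.5  # reading above which a pedestrian is presumed present


class LedArray:
    """Stand-in for the locally networked high-intensity LEDs."""

    def __init__(self) -> None:
        self.active = False

    def set_active(self, active: bool) -> None:
        if active != self.active:
            self.active = active
            print("LEDs on" if active else "LEDs off")


def read_motion_level() -> float:
    """Simulated motion detector; a real unit would read hardware here."""
    return random.random()


def run(leds: LedArray, cycles: int = 20) -> None:
    # Note what is absent: no logging, no identification, no connection
    # beyond the local LED link. Each reading is compared to a threshold
    # and immediately discarded; no trace of the event persists.
    for _ in range(cycles):
        leds.set_active(read_motion_level() > THRESHOLD)
        time.sleep(0.1)


if __name__ == "__main__":
    run(LedArray())
```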

While manufacturer Havainne has expressed a clear intent to enhance the (perceived) value of their product — and perhaps muddy our case — by reversing some of the above measures, these are the facts at present, as they pertain to a basic Välkky installation. And there is one final, highly salient fact about Välkky that helps us place it at one pole of our spectrum of concern:

  • There is a clear and inarguable public good directly served by the presence of the data-collecting object.

That is to say that, while nothing exists to prevent some sufficiently skeptical party from lodging an ROI-based argument against a deployment of Välkky sensors, it can clearly be argued that such a deployment would benefit public health and safety. Havainne claims a measured reduction in driving speeds of 4-5% at intersections where a Välkky array has been installed, the upper value of which has been correlated in previous tests with a 20% reduction in fatalities and a 10% reduction in the rate of vehicular accidents resulting in serious injury.

These facts help establish a few basic parameters around the performance of information-gathering systems in public space, and begin to furnish us with a set of desiderata for their ethically responsible design. I would argue that a system whose collection of information has strictly local and immediate effect, serves some public good, and in the operation of which any trace of the information collected expires and leaves the world immediately, need not be of terrible concern to us here. This doesn’t mean that all such systems are necessarily innocuous — it would be relatively easy to devise a system that met all of these criteria, yet which nevertheless affected the environment in undesirable ways — but they can at least be said to conserve privacy.

From Välkky, however, it gets significantly harder to defend the right of systems to collect information from public space, at least as classes rather than particular instances.

2. Innocuous but with some underlying capacity for collection

A transitional case is presented by a 2009 interactive advertisement for Amnesty International, designed by Hamburg’s Jung von Matt agency, and apparently only ever deployed at a single bus shelter in Berlin. The advertisement consisted of an image of a young couple — to all appearances loving and happy — displayed on a video screen, which was in turn all but imperceptibly equipped with an eyetracking camera. The latter was trained on the people waiting in the shelter; as long as anybody happened to be watching the screen, the couple maintained their placid pretense, but when the camera failed to report anyone paying active attention, the onscreen image slowly morphed to one in which the husband was beating his cowering wife. (This, of course, was immediately replaced with the happier image the moment the camera registered that anybody had noticed.)
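The attention-contingent behavior just described amounts to a very small state machine. Here is a hypothetical sketch of it; the gaze_detected stand-in and the morph parameters are assumptions of mine, since Jung von Matt’s actual implementation was never published.

```python
import random
import time

MORPH_STEP = 0.05  # how quickly the scene shifts once attention lapses


def gaze_detected() -> bool:
    """Stand-in for the eyetracking camera: True if anyone is watching."""
    return random.random() < 0.5


def run(cycles: int = 40) -> None:
    blend = 0.0  # 0.0 = happy couple; 1.0 = abuse scene fully revealed
    for _ in range(cycles):
        if gaze_detected():
            blend = 0.0  # snap back the instant someone looks
        else:
            blend = min(1.0, blend + MORPH_STEP)  # slow morph while unwatched
        print(f"blend: {blend:.2f}")
        time.sleep(0.05)


if __name__ == "__main__":
    run()
```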

This ad was no doubt effective in reinforcing the message that domestic abuse is something that happens when no one is watching, and indeed the ad won a Silver Lion at Cannes that year. So it’s tempting to conclude that this collection of data, too, is untarnished by any cause for concern. Even when we consider the Amnesty ad through the lens of the criteria we’d established with Välkky, at first blush the two systems seem to have a great deal in common: both rely on a local collection of data to accomplish a local effect, no record of which is kept, transmitted to any other system, or used to build a potentially problematic inferential model of behavior.

But there is — to my mind, anyway — a meaningful distinction between the two. Unlike Välkky, there is no public benefit directly associated with the ad’s data collection. The cause espoused may well enjoy an unimpeachable claim to moral correctness, but the context nevertheless remains that of advertisement. Replace the Amnesty content with something less capable of furnishing a fig leaf of respectability through a socially beneficial message, and we can more clearly see the true outlines of the situation.

3. Mildly disruptive and disrespectful

Here’s where we start to get into murkier territory. The next information-gathering artifact on our spectrum of concern is just such a thing: an advertisement for Nikon that appeared in the corridors of the Seoul Metro’s Sindorim Station in 2010. This ad, developed by the Cheil Worldwide agency, coupled a physical red carpet to an adjacent lightbox image featuring a pack of paparazzi (wielding Nikon cameras, naturally). The carpet disguised the presence of a motion detector which, when tripped, “alerted” the paparazzi, all of whose cameras flashed at once, in a silent flare of light.

The parameters of information collection are identical with the Amnesty ad: a sensor registers the presence of a passer-by and activates a local response, with no record maintained, stored or transmitted. But in the absence of even mildly justificatory content, the ad’s erosive effect on the quality of public space becomes clearer. While it ostensibly, in the words of the advertising agency, “made people feel as though they became superstars, walking down a red carpet for a luxurious award,” contemporary documentation makes it clear that the actual reaction of pedestrians exposed to the ad was one of puzzlement, verging on dismay. As common sense would lead us to suspect, the experience of having one’s thoughts interrupted by a fusillade of camera flashes is somewhat less than enjoyable.

To be honest, though, that’s about the worst that can be said for this ad. While I, personally, may not appreciate its impact on my experience, that impact may not breach community norms to the degree that regulation is an appropriate response. And as we’ve seen with the earlier examples, any potential hazard to privacy is contained by the ad’s strictly local domain of effect and inability to store or leverage the information it collects.

4. Problematically predictive and normative

The same things cannot, however, be said of our next example, the Acure vending machine, designed by industrial designer Fumie Shibata for JR East. On its surface, the Acure simply appears to be a clever updating of the conventional beverage-vending machine for a digital age; fronted by a bright and attractive 47-inch touchscreen, it displays high-resolution imagery of the products it offers, as well as a variety of “attract mode” animations appropriate to the season, ambient temperature and time of day.

More problematically, however, the cleverness runs deeper: as with the Amnesty ad, as you’re watching Acure, it is watching you back. But where the Amnesty ad used its camera to ascertain focus of attention, unmoored from any identificatory attribute, Acure is interested in learning certain specific things about you, namely, your age and sex. It uses these facts not merely to predict the kinds of beverage a demographic profile suggests you will be interested in — which are then displayed preferentially — but to tailor certain aspects of the display and to refine the demographic model maintained on the central Acure server.

If you happen to be a statistical outlier, you will be very much less likely to find your tastes reflected in the selection of beverages presented to you. And because demographic data collected from every Acure machine in the network is used to determine stocking levels for subsequent orders — and potentially, in the fullness of time, even manufacturers’ choices of what to produce and bring to market — that distinction is progressively reified and made ever more concrete. The consequence of this system design is that one’s actions, and the degree to which they conform with the pre-existing demographic model, constrain the variety of choices made available to oneself and others, here and elsewhere, now and in the future.
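To see how this feedback loop does its reifying work, consider the following toy model. It is my own construction, not JR East’s system: a central counter of purchases per demographic segment orders the on-screen display, and the same counter drives the next stocking order.

```python
from collections import Counter

# Central "server-side" purchase counts, keyed by (age_band, sex, beverage).
central_model: Counter = Counter()


def ranked_display(age_band: str, sex: str, catalogue: list[str]) -> list[str]:
    """Order the on-screen catalogue by what the demographic model predicts."""
    return sorted(catalogue,
                  key=lambda b: central_model[(age_band, sex, b)],
                  reverse=True)


def record_purchase(age_band: str, sex: str, beverage: str) -> None:
    """Every sale refines the central demographic model."""
    central_model[(age_band, sex, beverage)] += 1


def restock_order(catalogue: list[str], slots: int) -> list[str]:
    """Stock only the overall best sellers: the step that reifies the model."""
    totals: Counter = Counter()
    for (_age, _sex, beverage), n in central_model.items():
        totals[beverage] += n
    return [b for b, _ in totals.most_common(slots)]


catalogue = ["green tea", "coffee", "beer", "barley water"]
for _ in range(30):
    record_purchase("35-49", "M", "beer")
for _ in range(10):
    record_purchase("18-34", "F", "green tea")
record_purchase("35-49", "M", "barley water")   # the statistical outlier

print(ranked_display("35-49", "M", catalogue))  # beer listed first
print(restock_order(catalogue, slots=2))        # barley water vanishes
```

Run forward, the majority taste monopolizes both the display ordering and the restocking, and the outlier’s beverage simply drops out of the catalogue.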

What strikes me as troublesome about this may simply be a consequence of my not being Japanese. Having lived in Tokyo for three years, I recognize that the assertion that one’s personal preferences follow from one’s membership in a class or category, or ought to, is straightforwardly less problematic in the context of Japanese culture than it might be in North America. Being North American, though, I recoil at the notion that as a 43-year-old man, I “should” desire a beer at five in the afternoon...and I despair at the thought that, eventually, the only choices that might be made available to me are those known to appeal to a statistically median person of my age and gender. (Anyone familiar with the autistically reductionist logic of technological development — or the ruthless pressure toward systemic optimization inherent in late capitalism — understands this is not nearly as ad absurdum a scenario as it might sound.)

5. Surreptitious collection with inherently predictive and normative potential

Acure’s troublingly predictive, normative and reifying capabilities are a direct consequence of its ability to gather information from users, in the absence of either notice or consent. Not coincidentally, this same dynamic is responsible for much of the apprehension I have about the service that currently stands at the far pole of our spectrum of concern, the VidiReports video-analytics software package offered by the French concern Quividi. (Appropriately enough, this is Latin for “he who watches.”)

But where the vending machine confines itself to the collection of information from people who have at least made an active choice to use it, VidiReports gathers information from both those engaged with it and from people who have made no such choice. It’s software designed to be installed at the back end of so-called “bi-directional” video billboards mounted in public space — billboards, that is, that are equipped with outward-facing cameras. Since such cameras are, again, intentionally designed to be hard to see, the imagery they gather (whether of active viewers or of unwitting passers-by) is of people who are almost certainly unaware that they are being captured on video.

And if most of those whose image is captured in this way would be surprised to learn of it, they would likely be astounded by the degree of analysis to which such imagery is subjected. VidiReports isolates individuals in its field of vision and, via a real-time consideration of some 80 indexical facial features, determines their sex, age (within four bands) and ethnicity. A “glance counter” included in some versions of the software — working off the index of light reflected from watchers’ eyeballs — develops an attention metric that is closely correlated to both this underlying demographic profiling and a frame-by-frame tracking of the video content being displayed.

That is to say, in plain language, that VidiReports claims to be able to tell its users specifically who watches what, and for how long. By extension, it also helps its users develop a reasonably sophisticated understanding of which content does not engage attention.
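Translated into data terms, the claim looks roughly like this. The field names are guesses of mine rather than Quividi’s actual schema, but they convey how much structure gets attached to each unwitting passer-by.

```python
from dataclasses import dataclass


@dataclass
class ViewerRecord:
    """One isolated face in the camera's field of vision."""
    viewer_id: int   # per-session track, not (yet) a name
    sex: str         # classifier output, e.g. "F"
    age_band: str    # one of four bands, e.g. "18-34"
    ethnicity: str   # classifier output
    content_id: str  # the frame-tracked creative on screen
    glance_ms: int   # dwell time from the "glance counter"


def attention_by_segment(records: list[ViewerRecord]) -> dict:
    """Aggregate 'who watches what, and for how long' per
    (demographic, content) pair. A segment that never registers
    a glance is itself a finding: the null response."""
    totals: dict = {}
    for r in records:
        key = (r.sex, r.age_band, r.content_id)
        totals[key] = totals.get(key, 0) + r.glance_ms
    return totals


records = [
    ViewerRecord(1, "M", "18-34", "unknown", "spot-041", 2300),
    ViewerRecord(2, "F", "35-49", "unknown", "spot-041", 150),
]
print(attention_by_segment(records))
```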

In a perfect world, this would be immaterial. But in our world, it’s cause for concern along several axes:

Firstly, there are the reasonably foreseeable implications of this fine-grained awareness for the texture of public space.

We know that, to the marketer’s mind, not all potential viewers are equal; people belonging to certain broad demographic categories are perceived to enjoy and dispose of a greater share of income, making their attention differentially more desirable and more worth pursuing.

Coupled with the power VidiReports offers to determine precisely what imagery reliably entrains the attention of individuals of known age and sex, the implications are clear. If, for example, the greatest value is perceived to reside in “Men 18-34,” then an ever-greater percentage of imagery presented on screens in public space will be calibrated to appeal to that demographic — and, as the capability for precision eyetracking is incorporated into the software, that imagery will cover an ever-greater percentage of the screen itself.

This raises the odd prospect of entire districts of a city whose performance in the visual register is calibrated to appeal to one and only one constituency, understood as a notional demographic bloc with no precise real-world correlate. If you want a vision of the urban future, imagine one enormous Girls Gone Wild commercial, projected at a human face — forever. The consequences for those who, for whatever reason, do not find such content appealing (or simply don’t find it so at this very moment) will be predictably unpleasant.

Secondly, while data collections such as those made by VidiReports are invariably characterized by their promoters as safely and reliably “anonymous,” subsequent analysis by more sophisticated (and less interested) parties indicates that under certain circumstances, more than enough information is retrieved by such systems to permit the reasonably ready identification of a particular individual.

Latanya Sweeney, director of the Laboratory for International Data Privacy at Carnegie Mellon University, points out that this is the predictable consequence of an age of rapid online relational analysis, when correlations may trivially be made between information harvested from a point collection and pre-existing, widely available networked data sets. Cross-indexing a single VidiReports collection set against a sufficiently powerful analysis of public Flickr imagery, for example, would result in a surprisingly good chance of determining an individual’s name and social graph.
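A toy version of the linkage Sweeney describes makes the mechanism plain. Both datasets below are wholly invented, and the billboard-side records carry no names; the join across shared quasi-identifiers does all the identifying.

```python
# "Anonymous" billboard-side records: no names, only classifier outputs
# plus place and time.
anonymized_glances = [
    {"sex": "F", "age_band": "18-34", "place": "Alexanderplatz", "hour": 17},
]

# Stand-in for public, networked data (e.g. geotagged, timestamped photos
# whose uploaders are identifiable).
public_photos = [
    {"name": "A. Example", "sex": "F", "age_band": "18-34",
     "place": "Alexanderplatz", "hour": 17},
    {"name": "B. Example", "sex": "M", "age_band": "35-49",
     "place": "Alexanderplatz", "hour": 17},
]

QUASI_IDENTIFIERS = ("sex", "age_band", "place", "hour")


def reidentify(glance: dict, public: list[dict]) -> list[str]:
    """Names of everyone in the public set matching the 'anonymous'
    record on all quasi-identifiers. A unique match is a de facto
    identification."""
    return [p["name"] for p in public
            if all(p[k] == glance[k] for k in QUASI_IDENTIFIERS)]


for g in anonymized_glances:
    print(reidentify(g, public_photos))  # -> ['A. Example']: identified
```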

Sweeney argues strongly, therefore, that nominal de-identification is no longer a particularly viable privacy-conservation strategy. But this message either has yet to reach the marketers, or, what is as likely, is being deliberately discounted and downplayed. Nevertheless, the inherent potential for abuse — especially when such identification is coupled to real-time location, to say nothing of millisecond-resolution data about an individual’s habitual patterns of attention — is obvious.

Thirdly, there are of course information-security considerations in any such collection. The thought that potentially personally identificatory data, garnered without my notification or consent, may be vested in the stewardship of a single private concern is troubling enough; worse still is the prospect that Quividi (or any organization in its position) may in time lose control over its assets.

Without being hyperbolic about matters, we can fairly regard the escape of a database correlating unique (and hard to conceal or alter) biometric signatures with the known presence of the person bearing those signatures at a given place and time as a master key capable of unlocking the sensitive information enfolded in many another ostensibly anonymized dataset.

Finally, it should be clear that any system with a business model like the one on which the commercial viability of VidiReports is predicated represents a unidirectional and involuntary transfer of value from individuals moving in public space to private concerns unknown to them.

Since a null response is still a meaningful finding so far as the creators and sponsors of content are concerned, every person who walks by a VidiReports-equipped billboard and does not react to the imagery presented thereupon furnishes clients with a valuable insight. This has the effect of leaching economic advantage off of the streetscape — again, advantage derived from the unconscious, everyday actions and behaviors of citizens entirely unaware that they are generating it, none of whom have been asked to contribute, and none of whom stand to benefit in even an indirect way from the analysis performed on their activity.

Frankly, I have a hard time imagining a set of circumstances in which billboard owners/operators might effectively and in real time (a) notify passers-by of their intention to use VidiReports (or any bi-directional display technology) to collect information from members of the public; (b) seek consent to that collection; and (c) differentially disable sensor acquisition if consent is denied — if for no other reason than that the product’s entire value proposition relies crucially on the notion that the subject of analysis is unaware that they are any such thing.

Taken together, these issues make VidiReports (and the many similar products, with names like Immersive Media, intuVision, Cognovision and AIM) something that ought to concern any public agency charged with the protection of citizen privacy. We’re a long way from Välkky now, and have completely inverted its proposition of taking little and giving much in return:

  • Where Välkky has a strictly local area of effect, VidiReports causes changes to occur in the world that are both arbitrarily remote physically, and arbitrarily displaced in time, from the site of collection;
  • Where Välkky only registers the presence or absence of a moving object that may not even be human, VidiReports brings multiple regimes of sensing and analysis to bear on individuals, in an attempt to characterize them robustly and derive economic value from that characterization;
  • Where the “knowledge” generated by Välkky expires immediately, the information gathered by VidiReports is stored indefinitely, and persists on the network for as long as any single copy of a record containing it is extant;
  • Where some clear public good is served by Välkky, VidiReports not merely offers no such justification, but actively undercuts an individual’s reasonable expectation of anonymity in public, and otherwise damages the common spatial domain.

6. Conclusion and recommendations

At the zenith of the aspirations inscribed in something like VidiReports is the ambition to personally identify pedestrians in motion and serve them with advertising tailored not merely to a demographic, nor even to them individually, but to their observed reactions in real time — as their gaze is seen to travel and drift, as their desire for now this experience, now that one, registers as one or another discernible biometric trace.

Thankfully, a system with such capabilities exists nowhere yet but in fictions like Minority Report and the dreams of marketers. I don’t, however, think it’s safe to be complacent on the question; as UCSD researcher Kelly Gates makes clear in her 2011 book Our Biometric Future, technologies of remote facial detection, pattern-matching and recognition are already well advanced, with automated expression analysis not far behind. (In the United States, particularly rapid progress in the domain is a likely consequence of continuing infusions of funding from DARPA and the Department of Homeland Security. As has perennially been the case, since at least the First World War, technologies first perfected on overseas battlefields and the bodies of the safely Other are sooner or later brought to bear on the domestic environment and the people living in it, in a dynamic known as blowback.)

Nor should we underestimate the difficulty of effectively conveying the privacy risks of these and similar technologies to the people exposed to them. In truth, the power/knowledge relations in all the above scenarios are artificially easy to prise apart and understand, associated as they are with discrete macroscopic artifacts that can be seen, touched, experimented with, blocked, or regulated. When these qualities obtain, countermeasures are relatively simple to devise and put into effect. But most scenarios where privacy is at risk from embedded, networked information-gathering technologies are not likely to be so straightforward. It is much harder to understand what is at stake when invidious power/knowledge relations are simply immanent in a given location — whether because these are primarily instantiated in and by software, or because they exist only as an emergent phenomenon of a group of objects which are innocuous in and of themselves, and only constitute a threat to privacy when yoked into a functional ensemble.

It’s for all of these reasons that I believe the regulatory community has such an important role to play in shaping our collective encounters with such technologies. It’s not for me as a New Yorker or a citizen of the United States to tell you what I think it most appropriate to permit, and what to limit sharply, in your cities. But I do strongly believe that anyone who understands these technologies and their implications now has the obligation to explain them to the ordinary, everyday, nonspecialist people exposed to them: to delineate how technical systems achieve their effects, to lay out what is at stake, and to help individuals and communities formulate appropriate responses to them. My hope is that I’ve been able to do so effectively here, and that this paper will be of use to you in your effort to devise the right fit between emergent technical potentials and Canadian values. I thank you for your time and attention.

REFERENCES

Havainne Välkky traffic sensor

Amnesty International eyetracking advertisement

Nikon “Sensory Lightbox” advertisement

Acure vending machine

Quividi VidiReports software

General resource on so-called “bi-directional” displays
