Body Tattoos: The Indelible Ink of Personal Data Associated with the Human Body


Address given to the Access and Privacy Conference 2016

Edmonton, Alberta
June 17, 2016

Address by Patricia Kosseim
Senior General Counsel and Director General, Legal Services, Policy, Research and Technology Analysis Branch

(Check against delivery)


Introduction

Good morning and thank you very much to my friend and colleague Wayne, to the Faculty of Extension, and to all of you for inviting me to speak on the fascinating theme of "The Body as Information".

I’ve chosen to name this presentation "Body Tattoos: The Indelible Ink of Personal Data Associated with the Human Body" because much like physical tattoos, our personal information increasingly exposes us to others — inside and out — and is very difficult, if not impossible, to remove.

As technology advances in a range of areas — from wearable computing, biometrics, genomics, artificial intelligence and the Internet of just about (Every)Thing — the personal data associated with, and emanating from our human bodies has become more permanent, more revealing, more sensitive, and — dare I say — more intimate, than ever.

From Body Tattoos to Digital Tattoos and Back Again

The tattoo metaphor is not new. Others have coined the phrase "digital tattoos" in reference to our permanent online image that remains associated with us forever as information technologies track our Internet activities, search queries, opinions, interests, social circles, purchases, location, etc.

For example, in a short TED Talk called "Your online life, permanent as a tattoo" that I would recommend to you, Juan Enriquez borrows from Greek mythology to draw lessons for managing our personal information online.

In using the tattoo metaphor today, I want to turn back from the digital image to the actual physical body in order to focus more specifically on the convergence of inward-looking technologies — namely, nano, bio, information and cognitive technologies — otherwise known as NBIC technologies — and, as Dr. Rinie van Est puts it, the "explosion of privacy issues, putting the integrity of the body and the soul at stake."

So what are some of these new technologies, and their "explosive" privacy implications? Let me introduce a few examples.

A face is worth a thousand words

A painting called "Boy with a Puppet" by Giovanni Francesco Caroto inspired Dr. Harry Angelman to name the neurodevelopmental disorder he identified in 1965 the "Happy Puppet Syndrome".

The pejorative term — since replaced by "Angelman Syndrome" — describes a genetic condition which results in intellectual and developmental disability, and is outwardly characterized by an inability to communicate, jerky movements, frequent laughter, and a happy demeanour (hence the original name).

The condition is further manifested by a number of characteristic facial features, some of which are physically apparent to the naked eye, namely: pale skin and eyes, a prominent mandible, a wide mouth, wide-spaced teeth, and a flat back of the head.

Imagine a world where many more human developmental disorders are detectable through technology capable of recognizing facial features beyond even the trained human eye. Researchers at Oxford University are taking up this very challenge.

In October 2015, The Economist [Footnote 1] reported the development of facial recognition software capable of identifying medical conditions expressed through facial characteristics. The researchers achieved a 93% detection accuracy rate for eight common disorders. As testing continues, the software could eventually be used to diagnose thousands of other conditions not perceivable by the human eye.

From a medical perspective this is quite fascinating — a computer equipped with a camera and a software algorithm could diagnose illnesses earlier than a physician could, leading to the possibility of earlier treatment and support systems.

However, links to phrenology — the 19th-century pseudoscience that claimed to be able to judge someone’s character based on cranial features and measurements — are more than a bit creepy. While phrenology has long been discredited, its main premise is strikingly similar to this new software’s: various data points from the head are collected, analysed, and compared against what we believe to be true at the time — not unlike what programmers tell software algorithms to look for today.

Limiting this new facial recognition software to the health context, for diagnostic purposes, subject to rigorous, well-proven scientific standards, is one thing. Deploying it in the criminal, employment or commercial contexts on widely available photographs tagged through social media or captured through video surveillance cameras, and using it to draw right (or wrong) inferences about individuals’ ability, competence, insurability, employability, and (yikes!) even criminal predisposition, would be quite another.

The mannequins may be faceless, but shoppers are not

Next we move from the technology labs of the University of Oxford to the new Saks Fifth Avenue location in downtown Toronto, where the mannequins may be faceless, but shoppers certainly aren’t. [Footnote 2] The store uses high-resolution networked cameras with sophisticated facial recognition technology to identify potential thieves. They do this by capturing individuals’ faces on camera, converting them into biometric templates, and comparing those against a database of past shoplifters for a possible match.
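
To make that capture-template-match pipeline concrete, here is a minimal sketch in Python. The fixed random projection stands in for a trained face-embedding model, and the names and threshold are illustrative assumptions, not the store’s actual system.

    import numpy as np

    EMBED_DIM = 128
    rng = np.random.default_rng(42)
    # Stand-in for a trained face-embedding model: a fixed random projection.
    # A real system would use a deep network trained on face images.
    PROJECTION = rng.standard_normal((EMBED_DIM, 64 * 64))

    def face_to_template(face_pixels: np.ndarray) -> np.ndarray:
        """Convert a 64x64 grayscale face crop into a unit-length biometric template."""
        v = PROJECTION @ face_pixels.astype(float).ravel()
        return v / np.linalg.norm(v)

    def match_against_watchlist(template: np.ndarray,
                                watchlist: dict[str, np.ndarray],
                                threshold: float = 0.8) -> str | None:
        """Return the closest watchlist identity if it is similar enough;
        None means no match, and the shopper stays anonymous."""
        best_id, best_score = None, threshold
        for person_id, stored in watchlist.items():
            score = float(template @ stored)  # cosine similarity of unit vectors
            if score > best_score:
                best_id, best_score = person_id, score
        return best_id

A production system would tune the threshold to trade false matches against misses; that single number largely determines how many innocent shoppers get flagged.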

But even more, such cameras (as in the EyeSee mannequins) come equipped with complex biometric analysis software that can also track customers inside the store, detecting age range, gender, ethnicity, frequency of visits, and dwell time in order to develop more targeted marketing strategies.

When combined with geolocation data enabled by "free" in-store wifi, a highly detailed portrait of customers begins to emerge. Add facial recognition software to this, and identifiable customers’ whereabouts and purchases on any given day can be further combined with individualized preferences and past purchases available through a store’s loyalty reward programs, as well as the driver’s licences, email addresses, home addresses, and phone numbers collected at the point of sale.

Now if you looked these mannequins straight in the eye, would you even know you were being watched? Would you know that your facial image and other biometric data about you were being collected and analyzed to offer more targeted marketing?

Based on the terms of settlement between the FTC and Nomi Technologies in the US — a case involving in-store tracking without facial recognition — informing customers in a privacy policy is not enough. The retailer must provide an in-store mechanism for informing customers about the tracking and provide them with a meaningful opportunity to opt out.

Now, if facial recognition capability were enabled so as to identify individual customers and combine in-store data with other personal information — both on and offline — I imagine the discussion here in Canada would turn quickly towards the need for opt-in consent and maybe even call into question the very appropriateness of such pervasive data collection.

Our Genomes, Spaceships and Hammers

Speaking of the body as information, we’ve all heard, of course, of spit kits and online direct-to-consumer genetic testing companies. Well, here’s a new twist on an old theme.

In December 2015, a California-based start-up called Guardiome launched Helixa Genome Storage. For slightly over $3,000, customers receive their genome in a secure desktop device, called Helixa, which resembles a four-legged spaceship and has a touchscreen on top so users can see their data. Users also receive a USB stick that plugs into Helixa and contains a variety of apps to help them explore, for example, their ancestry or cancer risk.

Unlike with other direct-to-consumer genetic testing companies, the results are never connected to the Internet and, partly to demonstrate its commitment to privacy (and partly as a joke, I suppose), the company ships the device through secure mail along with a hammer, in case users need to destroy the device and delete their data in an emergency.

At first blush, this arrangement perhaps seems more privacy-protective than some other online genetic testing options, although exactly what the company does with your data and samples at its end would need to be further examined.

We are What We Wear

How many of you are wearing a Fitbit or other fitness tracker right now? How many have activated Apple Health or Google Fit on your smartphone to monitor your health and fitness level? How many are wearing a smart vest or a watch that can do similar things?

While many of us use these for fun or curiosity, for general weight management, or just because we received one as a birthday gift, wearable devices are being increasingly used to help manage serious health conditions.

To cite an example, EpiWatch, an app that runs on the iPhone and Apple Watch, is being used by Johns Hopkins University to conduct a study on predictors of the onset of epileptic seizures. Through the EpiWatch app, the study collects heart rate data, while an accelerometer monitors movement and a gyroscope monitors orientation in space, measuring and recording movements and falls during seizures. It also collects metrics and dynamic user feedback from the individual or a caregiver, tracking real-time biometric measurements and cognitive data before, during, and after a seizure.
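
Purely as an illustration of how raw wrist-sensor readings can be turned into a movement signal, here is a minimal sketch. The sampling rate, window, and threshold are invented for the example; this is not the study’s actual detection algorithm.

    import numpy as np

    def movement_intensity(accel: np.ndarray) -> np.ndarray:
        """accel: (n_samples, 3) accelerometer readings in g. Returns the
        magnitude of acceleration with gravity (about 1 g) subtracted out."""
        return np.abs(np.linalg.norm(accel, axis=1) - 1.0)

    def flag_possible_seizure(accel: np.ndarray, fs: float = 50.0,
                              window_s: float = 5.0,
                              threshold: float = 0.5) -> bool:
        """Flag sustained, high-variance movement over a sliding window,
        the kind of jerky motion a wrist-worn sensor might see in a seizure."""
        win = int(fs * window_s)
        intensity = movement_intensity(accel)
        for start in range(0, len(intensity) - win + 1, win // 2):
            if intensity[start:start + win].std() > threshold:
                return True
        return False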

And of course, we all remember the contact lenses developed by Google that can read glucose levels from tear fluid in order to help users manage diabetes.

Wearables and other health trackers can help motivate us to adopt healthy behaviours, manage our health treatment plans, provide our health care providers with real-time data about our health status, and maybe even reduce costs to our health care system. However, we also know wearable technologies are not always as transparent as they should be. They have the potential to aggregate more information than is necessary to fulfil their stated purposes, and to use it for other ends. They also sometimes lack the mechanisms needed for individuals to access and challenge the accuracy of their personal information, particularly where it may have important ramifications for them.

Technology Can Cut Both Ways

While many of these new technologies intimately linked to our bodies can seem privacy-invasive, some of them are actually intended to be privacy-enhancing. In other words, technology can cut both ways.

Wearing Your Heart on Your Sleeve

For example, a company called Nymi offers a wearable band that can be used for authentication purposes. Our heart sends out unique electrical impulses each time it beats. This pattern is captured by an electrocardiogram (ECG), which is used, along with other information, to authenticate users. From its website, the company seems mostly focused on the workplace, but it is open to technology partnerships and is working towards authenticating payments as well.

Sorry, not without your Genome

Even our genetic information can serve as a means of authentication.

One third-party developer with the handle "Offensive Computing" wrote a program called Genetic Access Control and uploaded the code to GitHub, a popular code-sharing site. He designed Genetic Access Control to use gender, ancestry, and disease susceptibility, among other potentially sensitive and unique genetic traits, as a means of authenticating users before granting them access to a given website. [Footnote 3] Users must consent to having their genetic data on the 23andMe platform accessed for this purpose.
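
As a rough sketch of the gating logic such a program implies, suppose a hypothetical GeneticProfile has already been obtained with the user’s consent (23andMe’s actual API responses are not reproduced here). The access decision could then look something like this:

    from dataclasses import dataclass

    @dataclass
    class GeneticProfile:
        # Hypothetical fields; a real profile would come back from a
        # genetic-data API, with the user's consent, not be typed in here.
        sex: str
        ancestry: dict[str, float]  # population -> estimated fraction

    def grant_access(profile: GeneticProfile,
                     required_sex: str | None = None,
                     required_ancestry: tuple[str, float] | None = None) -> bool:
        """Gate access to a resource on genetic traits."""
        if required_sex is not None and profile.sex != required_sex:
            return False
        if required_ancestry is not None:
            population, minimum = required_ancestry
            if profile.ancestry.get(population, 0.0) < minimum:
                return False
        return True

    # Example: a site restricted to users whose results report at least
    # 50% estimated ancestry from a given population.
    user = GeneticProfile(sex="F", ancestry={"european": 0.62})
    print(grant_access(user, required_ancestry=("european", 0.5)))  # True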

The code was relatively easy to develop and took only a few hours to write. It was used just three times before 23andMe blocked access to its data on the grounds that the developer had violated its Application Programming Interface (API) policy. While Genetic Access Control now lacks the genetic data it needs to work, it has proven that it can be done. The code remains on GitHub and could be revived in the future if the developer regains access to 23andMe’s data, or to some other source of genetic data.

I suppose we might be able to imagine situations where this type of application may be socially and ethically appropriate, subject to the informed consent of users of course. The developer suggested that the code could be used to create safe spaces online for women only. But one can just as quickly see potentially inappropriate uses by groups who wish to purposely exclude others based on race or other grounds (e.g., white supremacist groups), opening up a Pandora’s box of potential online discrimination and segregation.

Think Quick and Click!

Brainwave scanners may be another unique means of identifying and authenticating potential users. In 2015, researchers from Binghamton University in New York recorded the brain activity of 45 people wearing an electroencephalogram (EEG) headset. After exposing individuals to acronyms, they found that the participants’ brains reacted differently enough that a computer system was able to identify each volunteer’s "brain print" with 94% accuracy.
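
The paper’s actual classifier is not described here, but a simple nearest-template scheme gives the flavour of how a "brain print" system could work; the extraction of response feature vectors from the raw EEG is assumed to have already happened.

    import numpy as np

    def enroll(recordings: dict[str, np.ndarray]) -> dict[str, np.ndarray]:
        """Average each volunteer's EEG responses into a 'brain print' template.
        recordings maps person -> (n_trials, n_features) array of features."""
        return {person: trials.mean(axis=0)
                for person, trials in recordings.items()}

    def identify(sample: np.ndarray, templates: dict[str, np.ndarray]) -> str:
        """Attribute a new response to the person with the closest template."""
        return min(templates,
                   key=lambda p: float(np.linalg.norm(sample - templates[p])))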

Pig Pen was Right All Along

The Economist (Oct 3, 2015) reported recent research findings demonstrating that even bacterial traces can be used to identify persons. Though the researchers mused that this could offer new ways of identifying criminals at crime scenes, the study itself was done in a tightly controlled and sterilized environment.

Are our bacteria our personal information? I won’t even venture there. We’re probably far away from developing and applying en masse the type of technology that could constantly monitor the trail of bacteria we leave behind in order to identify and authenticate us with speed and accuracy…

But each of us already carries something else around with us, capable of capturing other bodily information that can be used to identify and authenticate us — our smartphones…

Who’s Trustin’ Who?

Just a few weeks ago, Google announced a new project to move away from passwords on all Android devices. Passwords would be replaced by a "Trust API" that uses a number of sensors in the device itself to collect a user’s unique body metrics, such as "facial image, the way a user types, swipes, and even walks".

Based on these metrics, the device builds a "trust score" and will not authenticate a user unless his or her score passes a certain threshold, failing which the user must provide further information. [Footnote 4]
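
Google has not published how the Trust API combines its signals, so the following is only a guess at the shape of the logic: per-sensor confidence scores are blended into a single trust score, and different actions demand different thresholds. All names, weights, and thresholds here are made up for illustration.

    def trust_score(signals: dict[str, float],
                    weights: dict[str, float]) -> float:
        """Blend per-sensor confidence scores (each in [0, 1]) into one
        weighted trust score. Assumes weights is non-empty."""
        total = sum(weights.values())
        return sum(weights[k] * signals.get(k, 0.0) for k in weights) / total

    # Illustrative thresholds: riskier actions plausibly demand more trust.
    THRESHOLDS = {"open_game": 0.4, "read_email": 0.7, "send_payment": 0.9}

    def authorize(action: str, signals: dict[str, float],
                  weights: dict[str, float]) -> bool:
        """Allow the action only if the score clears that action's threshold;
        otherwise the user would be asked for further credentials."""
        return trust_score(signals, weights) >= THRESHOLDS[action]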

Now all of this seems quite paradoxical doesn’t it?

First, in order to protect our privacy, we seem to be giving up more and more of it. While I suppose this has the appealing advantage of doing away with frustrating passwords, we are moving far away from bland authentication questions such as "What’s your favourite colour?" or "Where did you go to high school?" This can certainly be positive if it enhances fraud detection and prevention. However, we’d be remiss if we didn’t stop to notice that these new biometric technologies are collecting more and more of our most sensitive information directly and automatically from our bodies to unlock access. And with increased collection comes increased risk of breach, and increased risk of other, unrelated uses we may not always know about or fully understand.

Second, it is interesting to note that some of these technologies seem to flip the human-machine trust relationship on its head. While we once spent a great deal of time testing machines in tech labs to make sure they were reliable and met specifications before they could be deployed and used in the marketplace, our machines must now trust us before we can use them! Humans are untrustworthy until proven otherwise, while our machines are trusted by default. And the information from our bodies becomes the key to unlocking the very machines we have become so dependent on.

It’s hard to predict which of these technologies will fail and never see the light of day, and which will become so integrated into our day-to-day lives that we’ll come to forget, or at least ignore, the downstream costs to our privacy. Lest we forget — or ignore — we must remain vigilant, inquisitive, and informed. We must focus not only on the appropriateness of allowing our bodies to be used as information for different purposes, but also on the "how", in order to maintain a careful balance between the social, health and economic benefits these innovative technologies have to offer, and the potential risks of hastily adopting them and giving up too much of our bodily integrity, our intimacy, and our selves in the process.

OPC: Body as Information

So with that bigger picture in mind, let me now turn to what the OPC is doing in order to advance understanding and privacy protection in this area.

As many of you may know, in 2015, after extensive consultation with stakeholders and focus groups across the country, our Office identified four strategic priorities to help focus our efforts and our resources over the next five years or so.

One of the priority areas that emerged is the Body as Information, and we established as our long term goal "to promote respect for the privacy and integrity of the human body as the vessel of our most intimate personal information".

Leading up to our priority-setting exercise, and as a way of better understanding emerging technologies connected with the body and the privacy issues at play, we conducted and published research reports on Automated Facial Recognition in 2013 and on Wearable Technologies in 2014.

Continuing on our research track this past year, we published a related research paper on the Internet of Things, with a particular focus on the home and retail environments.

To further advance our priority work in the area of Body as Information, we recently completed an Environmental Scan of Digital Health Technologies, including wearables and fitness trackers, targeted at consumers. Using publicly available materials, we looked at 78 smartphone applications, wearables, and other consumer health devices. Of the 78 products reviewed, only 51 (or 65%) had privacy policies, and among those that did, very generic policies were not uncommon.

We also noted that over 50% of the products had to be connected to something else in order to work. Most wearables had to connect to apps, for example, increasing the possibility of bringing together bits and pieces of information from different places, via a single interface.

We also observed that digital health technologies collect a wide range of information to fulfill their product claims. Depending on the product, this could range anywhere from location, height, weight, date of birth, heart rate, food consumed, when users are active, movements or sound made during sleep, whether your baby was nursed on the left or right breast (and when), temperature, symptoms, or journal notes that users may choose to enter.

We noted that transparency was lacking in several respects. For example, the fact that location data might be used to calculate distance walked was not always apparent to users. We also found it difficult to identify whether companies were using personal information for marketing purposes. While there may have been a reference to marketing in privacy policies, it was not always clear what information was being used, and whether it was being used for internal marketing purposes or being shared with, and/or pulled by, external third parties, and if so, which ones.

The results of this environmental scan helped inform the types of consumer fitness and health devices we selected to sweep during our participation in this year’s GPEN Sweep that took place April 13, 2016, on the theme of Internet of Things and Accountability.

During the sweep, we replicated the consumer experience by assessing the nature and extent of information available about the devices, and we actually used the devices to better see things from the consumer’s perspective. We looked at whether:

  • privacy communications adequately explained how personal information is collected, used, and disclosed
  • users were fully informed about how personal information collected by the device is stored, and whether the company implemented safeguards to prevent loss of data, and
  • privacy communications included contact details for individuals wanting to follow up with the company about a privacy-related matter.

The complete results of this year’s Sweep are being compiled and analyzed and we expect to make them public by the fall. I can say that there were some positive things we found, but also lots of room for improvement.

Our next step, following the Environmental Scan and GPEN Sweep results, is to conduct a network analysis of a select number of consumer products in our technology lab, including health and fitness related devices. By looking "under the hood", so to speak, we hope to gain a deeper and even more sophisticated understanding of what technically happens to our personal information and where it goes when we connect our bodies to these devices.

Parallel to this, we are also conducting an in-depth technological analysis of privacy enhancing technologies, including biometrics.

Drawing from the sum of all these initiatives, we intend to eventually develop guidance, particularly for SMEs and app developers, on how to build privacy protections into their new wearable products and services, while avoiding certain "no-go" zones altogether. Through education and outreach efforts, we will help inform Canadians about the privacy risks associated with wearable devices and offer options for protecting themselves.

Further related to our strategic priority on Body as Information is our work on direct-to-consumer genetic testing. As direct-to-consumer genetic tests are being sold over the Internet, for a range of purposes, we committed to raising awareness of their potential privacy risks. In collaboration with the privacy oversight offices in BC, Alberta, and Quebec, we will soon be releasing a direct-to-consumer genetic testing fact sheet that will suggest a list of questions for individuals to ask when considering ordering an online test in order to be more fully informed of the related privacy implications.

Some of you will be happy to know that we have recently issued an updated version of our identification and authentication guidelines, first released in 2006. In light of new technological advancements, particularly in the area of biometrics, we felt it was time for a refresh. While biometrics can be strong identifiers, they are not a foolproof authentication method and, as we discussed a little earlier, automated recognition systems can introduce a few privacy and security risks of their own.

On the public sector side, we continue our active role as ex officio member of the National DNA Databank Advisory Committee. Much of this past year was devoted to mitigating the privacy risks associated with the planned implementation of the new indices that will be added to the national forensic databank pursuant to legislation passed in 2014 — namely, DNA profiles of missing persons, relatives of missing persons, volunteers, victims, and human remains — and to the privacy implications that arise when humanitarian purposes start mixing with law enforcement purposes.

Lastly, I would be remiss if I left you all with the impression that we do this alone. We certainly do not. Much of our work is done with provincial and territorial colleagues, many of whom are here today. As well, we fund some of the best researchers in the country, including stellar scholars from the University of Alberta, right here in Edmonton, to conduct arm’s length research in our strategic priority areas, such as direct-to-consumer genetic testing, wearable computing, etc.

Thank you very much for allowing me the great honour and privilege of speaking with you today.
