Share this with a Friend: Ethical and Privacy Implications of Online Recruitment Strategies for Research

Keynote address at the CAREB National Conference 2012

April 27, 2012
Toronto, Ontario

Opening remarks by Patricia Kosseim
Senior General Counsel and Director General, Legal Services, Policy and Research Branch

(Check against delivery)


(slide 1) It’s an enormous privilege for me to be here today, and on behalf of the Privacy Commissioner of Canada, I thank you sincerely for your invitation. My speech will be in English today, but I would be pleased to answer questions in both official languages.

(slide 2) I’d like to begin with a few remarks about the bigger context. We are entering a global era of “Big Data”. This is a recently coined term to describe the emergence of astronomical amounts of digital information never before seen or imagined in history. Joe Hellerstein, Computer Science Professor at UC Berkeley, has called it “the industrial revolution of data”. According to a Special Report of The Economist called “Data, Data Everywhere”, “data are becoming the new raw material of business: an economic input almost on par with capital and labour”. Given these vast amounts of raw data, combined with increasing computing power and more sophisticated algorithms, there is potential as never before to gain new insights, derive patterns and connect dots in ways that would not have been possible in the past.

The technological ability to aggregate, store and analyze data, along with plummeting costs, is certainly a key driver. So is the vast increase in “digital exhaust” created by consumers themselves, i.e., data created as a by-product of other online activities such as search terms, queries, purchases, chats, blogs, inquiries, photos, videos, location data, contacts, etc. In a May 2011 Report called “Big data: The next frontier for innovation, competition and productivity”, the McKinsey Global Institute suggests that “we are on the cusp of a tremendous wave of innovation, productivity and growth” and “the scale and scope of changes that big data are bringing about are at an inflection point, set to expand greatly, as a series of technology trends accelerate and converge”.

The report estimates that enterprises and consumers together stored more than 13 exabytes of new data just in 2010 – that’s more than 52,000 times all the information stored in the US Library of Congress, or put another way, more than twice the amount of human language ever spoken throughout history! It predicts that the most successful companies and governments will be those that move early to ride this big data wave and be first among competitors to capture and leverage its economic value.

More and more businesses and governments are tapping into big data. For example, they aggregate detailed performance data on people and processes to improve quality and efficiency; they create highly specific segmentations according to inferences about peoples’ status, interests and needs, to better customize their products and services aimed at target populations; they collect product and location data from built-in sensors in real time (e.g. GPS data on where people go, facial recognition data on who stops how long to view which ads, smart navigation data on how people drive their cars) -- all in an effort to improve advertising strategies, price insurance or enhance the next generation of products or services.

Almost every sector -- both public and private -- is on to this, and the pharmaceutical industry is no exception. Larry Risen, President of Patient Recruiters International Inc., recently bemoaned the fact that patient enrollment per clinical site has been decreasing by 10% annually for the past decade, and that more than 80% of all clinical trials experience significant delays due to patient recruitment, costing companies more than $35K per day, per trial. He and others are looking to new sources, including social media, to help expand the pool of available research subjects. While still in its infancy, social media is nonetheless seen as having great untapped potential. This is particularly so given the sharp increase in smart phones and portable devices predicted over the next few years, when more and more people will be directly reachable, anytime and anywhere.

(slide 3) Tim Benjamin, founder of a UK patient recruitment firm called Treatment Trials, agrees. He blogs about how companies can create online recruitment strategies that work. He describes how awareness can be enhanced through more targeted Facebook ads that can go live in minutes, get directed to relevant users by matching interest and geo-location, and yield concrete indicators like click-through rates to measure ad efficacy -- all for relatively low cost. He also expounds on the benefits of YouTube. Companies can post web-based videos on YouTube, explaining details of clinical trials to potential participants. Videos can be loaded onto mobile devices that potential participants can take into quiet areas and view and re-view as many times as they wish, cutting down significantly on the time and effort of research staff.

(slide 4) If social media has all this promising potential for patient recruitment, Canada is well-poised to exploit it. According to eMarketer estimates, Canada ranked first globally in social media penetration in 2011.

(slide 5) Already in 2010, comScore found Canadian internet users watched more YouTube than anyone else in the world (71%, well above the worldwide average), out-tweeted Americans (13.7% compared to 11.3%) and visited Facebook significantly more than U.S. online users (83.1% versus 71.5%).

Given this vivid interest in enhancing patient recruitment rates online, what are some strategies we see emerging in the field? For illustrative purposes, I will go through a series of examples, in what I think is increasing order of privacy intrusiveness.

Example 1: Online ads for research studies

(slide 6) In this first example, we see an ad on KIJIJI looking for children between the ages of three and five interested in participating in a “fun activity” to study the relationship between self-awareness and memory. Researchers will provide free parking and a small gift, and interested parents are asked to contact them. We are also assured that the research was approved by the Carleton University Psychology Research Ethics Board.

In essence, there is nothing qualitatively different between this ad and a traditional ad in a newspaper or magazine. Essentially it is the same recruitment method many REB members are familiar with, simply disseminated through a new medium. Basic ethics principles would need to be respected: valid parental consent with minimal risk to the child; absence of undue influence or inducement going to voluntariness, etc.

Example 2: Online research surveys

(slide 7) In this second example, we see a Facebook ad being served up for a research company that’s in the business of conducting consumer surveys online. For a modest sum, Facebook directs the company’s ads to Facebook users matching their profiles with enrollment criteria, thereby reaching more potentially eligible respondents for each survey. A further benefit of online research of course is the cost containment of administering questionnaires through digital form rather than traditional mail or phone methods.

Some ethical issues to address may include: degree of invasiveness of the questions being asked; potential sample bias given lack of certainty as to who actually sees and responds to the ads; capacity issues that cannot be readily assessed in a virtual world; whether users’ profiles actually match their identity (respondents may not be who they say they are).

Example 3: Targeted ads for research

(slide 8) In this third example, we see a more targeted ad approach. This particular ad campaign for Vicks Behind-the-Ear Thermometers used a combination of Google flu trends data and user information collected through mobile apps. Let me explain.

Aggregate patterns of Google search queries for flu-related information have been found to correlate remarkably with actual influenza trends documented by public health surveillance data (slide 9 for the U.S., slide 10 for Canada). Here, Google flu trends data were used to identify regions likely to be experiencing influenza, hence needing a thermometer. Within that geographic area, certain potential customers were further targeted according to basic demographic data gathered through their mobile apps, in this case, the Pandora music service. Users of Pandora provide basic demographics, including age, gender, ZIP code, and parental status, and through their mobile device, can reveal their approximate location at any given time. Combining all of this information, Vicks was able to target its ads for Behind-the-Ear Thermometers to only those Pandora listeners who were mothers, living in a high-risk flu area, within two miles of retailers selling the product (including Walmart, Target, and Babies “R” Us).
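As an illustrative aside, the layered filtering just described can be sketched in a few lines of code. Everything below is hypothetical -- the ZIP codes, store coordinates, user records and the two-mile threshold are invented for the example, not drawn from the actual campaign:

```python
from math import radians, sin, cos, asin, sqrt

def miles_between(lat1, lon1, lat2, lon2):
    """Great-circle distance in miles (haversine formula)."""
    lat1, lon1, lat2, lon2 = map(radians, (lat1, lon1, lat2, lon2))
    h = sin((lat2 - lat1) / 2) ** 2 + cos(lat1) * cos(lat2) * sin((lon2 - lon1) / 2) ** 2
    return 3959 * 2 * asin(sqrt(h))

# Hypothetical inputs: flu-trend "hot" ZIP codes, retailer coordinates,
# and app-user profiles (gender, parental status, ZIP, last known location).
high_flu_zips = {"10001", "60601"}
retailers = [(40.7506, -73.9935), (41.8839, -87.6270)]  # made-up store locations

users = [
    {"id": "u1", "gender": "F", "parent": True,  "zip": "10001", "lat": 40.7484, "lon": -73.9857},
    {"id": "u2", "gender": "F", "parent": False, "zip": "10001", "lat": 40.7484, "lon": -73.9857},
    {"id": "u3", "gender": "F", "parent": True,  "zip": "94105", "lat": 37.7897, "lon": -122.3942},
]

def eligible(user, max_miles=2.0):
    """A mother, in a high-flu ZIP, within max_miles of a participating retailer."""
    return (user["gender"] == "F" and user["parent"]
            and user["zip"] in high_flu_zips
            and any(miles_between(user["lat"], user["lon"], rlat, rlon) <= max_miles
                    for rlat, rlon in retailers))

targeted = [u["id"] for u in users if eligible(u)]
print(targeted)  # only u1 satisfies all three layers of the filter
```

The point of the sketch is how little code it takes: once demographic, geographic and behavioural data streams are combined, narrowing millions of users down to a precisely defined segment is a trivial filtering exercise.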

While this is a documented example of how a targeted ad was used to sell a product, one can easily extrapolate how pharmaceutical or other companies might use similar “smart” methods to target potential research participants matching eligibility criteria for study enrolment. This example gives rise to yet another range of ethical issues having to do with the privacy of app users in this case -- whether they were aware that their personal information would be used for targeted ads by third parties and whether they were given meaningful opportunity to opt out of it.

Example 4: Secondary use of online data

(slide 11) A fourth example of how recruitment strategies could be enhanced is through secondary use of information posted or sought online for other purposes. Direct-to-consumer genetic testing sites, health information websites or patient discussion fora offer rich opportunity for identifying potential research participants matching specific eligibility criteria and contacting them for enrolment into a given study. Many of these sites contain highly sensitive personal information.

Whether personally identifiable information about clients or members can be used or disclosed for research purposes depends on the terms of use. What were users told were the conditions for joining or signing on to the site in the first place? The ethical inquiry then necessarily turns to whether the terms of use or privacy policies of these websites clearly laid out research as a potential use of personal information, in what form (identifiable or not), and by whom (website owners or third parties); whether the information was sufficiently clear to serve as a valid basis for inferring informed consent; and, whether the proposed research actually aligns with users’ reasonable expectations.

Example 5: Listening to patient self-reported data online

(slide 12) An interesting example of how online listening can be used for research purposes is described in this study published in Nature Biotechnology in 2011. The researchers collected data from PatientsLikeMe.com, a patients’ online forum that allows individuals suffering from serious diseases to share their “real-world health experiences.” In this particular case, the study examined patient self-reported data describing the effects of using lithium carbonate to treat ALS. Ultimately, the study found that the use of the treatment in question had no effect on disease progression over a 12-month period, an outcome which effectively mirrored results of parallel randomized trials.

While the authors (employees and shareholders of PatientsLikeMe) acknowledged that observational studies of this nature can never be a true substitute for the gold standard of double-blinded randomized trials, the study nonetheless shows the potential of online patient forums to provide an observational environment for monitoring disease progression and treatment efficacy. Provided, of course, people know and understand this and consent to be a part of it -- and provided that respectful and sensitive consideration is paid to the particular vulnerability of online patient groups in these types of situations.

Example 6: Online observational studies

(slide 13) In 2005, the online game World of Warcraft provided critical insights into how humans might behave during a real-life epidemic. As a result of a programming error, the game’s developer, Blizzard Entertainment, inadvertently created an environment which closely resembled that of an uncontrolled plague. Players took the game very seriously and demonstrated a wide range of likely human responses to a real threat of infectious outbreak. Some chose to help others; some fled in a selfish effort to save themselves; some broke quarantine measures, deliberately setting out to infect others.

Researchers at the U.S. Centers for Disease Control and Prevention became very interested in this online game as a model for simulating human behaviour during times of outbreak and studying the impact on disease spread. They contacted Blizzard to use their data for study purposes. Epidemiologists are excited about the opportunity of using online gaming models such as this one to mimic real-life environments and study human behaviour in ways that cannot ethically be done otherwise. Clearly, you can’t set out to deliberately infect people in real life and then sit back to see what happens. But even in a virtual play-world, legal and ethical considerations still compel us to heed players’ reasonable expectations, respect the context in which information was created in the first place and seek consent before using potentially identifiable information for a different purpose.

What do Canadians think?

(slide 14) Given all these new and exciting online strategies for enhancing recruitment in research studies, one has to ask: how do Canadians feel about this new digital age?

In a 2011 public opinion poll commissioned by our office, Harris/Decima found that Canadians do worry about their privacy and security online.

  • 65% of Canadians said the protection of personal information will be among the most pressing issues confronting Canadians in the next decade.
  • 40% of Canadians polled had general privacy concerns about the Internet or computers (up from 26% in 2009).
  • 55% expressed privacy concerns related to social networking sites.

    (slide 15)

  • Only 21% said they either always (7%) or often (14%) read the privacy policies for Internet sites they visit.
  • 28% said they sometimes read privacy policies, while half either rarely (25%) or never (25%) do.
  • More than half found online privacy policies to be either somewhat vague (35%) or very vague (18%).

Underlying privacy concerns

What then are some of the privacy issues at play?

At the root of much of the privacy debate is the malleable concept of personal information. Many of the online practices we see rely on the assurance that the information aggregated, used and/or disclosed to third parties is non-identifiable. However, given the scope and scale of the information collected, the powerful means now available to combine and analyze disparate pieces of data, and the increasingly personalized nature of targeting strategies, there will often be a serious possibility that information could be linked to an individual. AOL Research provided a famous example of this when it posted what it thought were anonymized, individual-level search queries; as it turned out, once queries were linked together within user accounts and matched against public phone listings, they became personally identifiable. In light of rapidly evolving technology, it is increasingly important to take a broad and contextual approach to interpreting personal information.
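To see just how mechanical this kind of re-identification can be, consider the following miniature sketch. The records are fabricated for illustration (loosely modeled on the queries reported in press coverage of the AOL incident): search logs keyed only by a numeric pseudonym still leak quasi-identifiers, such as a surname and a town, that can be joined against a public directory:

```python
from collections import defaultdict

# Fabricated example: "anonymized" search logs keyed by a numeric user ID.
search_log = [
    (4417749, "landscapers in lilburn ga"),
    (4417749, "arnold family reunion"),
    (4417749, "dog that urinates on everything"),
    (99999,   "cheap flights"),
]

# Fabricated public directory: (surname, town) -> person.
directory = {("arnold", "lilburn"): "T. Arnold"}

# Aggregate each pseudonymous user's query history.
queries_by_user = defaultdict(list)
for uid, q in search_log:
    queries_by_user[uid].append(q)

def reidentify(queries):
    """Naive linkage: if a directory entry's surname and town both appear
    somewhere in a user's query history, the pseudonym is resolved."""
    text = " ".join(queries)
    for (surname, town), person in directory.items():
        if surname in text and town in text:
            return person
    return None

for uid, qs in queries_by_user.items():
    who = reidentify(qs)
    if who:
        print(f"user {uid} re-identified as {who}")
```

A few lines of naive string matching suffice; real linkage attacks use far richer auxiliary data, which is precisely why "we removed the names" is such a fragile assurance.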

Another commonly held misconception is that information posted by individuals online is necessarily publicly available. Many data brokers take the position that if information is available online, it’s “fair game” for anyone to use, no matter how personal and for whatever purpose. But that is not the case. Let me give you an example:

In May 2010, it was discovered that someone from Nielsen Co. had infiltrated PatientsLikeMe, an online patient discussion forum. Nielsen, a U.S. privately-held media-research firm, harvests Web information to learn what consumers are saying about specific products or topics and provides this information to its clients -- a practice otherwise known as “listening services”. Using sophisticated software, Nielsen scraped all messages from the online discussion forum, messages containing highly sensitive personal information of members. This activity was clearly in contravention of the terms of use of the online forum and should not have happened. The intention, and indeed reasonable expectation, of members who joined was to find a common place where they could share deeply personal accounts with others suffering a similar disease. To their credit, Nielsen stopped the practice of its own volition, but only after hundreds of “creeped-out” members quit the PatientsLikeMe website out of anger and loss of trust.

This leads to another privacy issue Canadians worry about online, and rightly so: the lack of transparency about the behind-the-scenes practices of many websites. Terms of use and privacy policies are notoriously vague -- Canadians said as much in the public opinion survey I showed you earlier. It’s often difficult, if not impossible, to discern from them what will be done with personal information posted online. What other purposes will it be used for? Will it be shared with or sold to third parties? In what form? Identifiable or not? And for what purposes? What is stated (or not stated) up front will of course affect the validity of users’ consent.

In an online world of interconnected companies, website developers and advertisers, it is growing increasingly difficult to understand the dataflows and speak of informed consent in any true sense of the word -- for adults, let alone for youth or children. This is particularly the case where privacy settings are not available or, if they do exist, are rarely the default and are generally very difficult to find. A related challenge is how to ensure continuing consent in an online world where it is difficult, if not impossible, to withdraw consent and permanently delete personal information from the internet. As you’ve no doubt heard, “the net never forgets”.

Role of REBs

Where does all this leave you as REB members? Well, in the unenviable position of having to contend with all this complexity, for one. But also in a very strategic position, at the front end of research proposals, when many of these ethical and privacy issues can be raised and there is still time to reflect upon them before projects begin. Nobody expects you to have all the answers, but we can only hope you ask the right questions.

Annual CAREB conferences like this, and year-round tutorials offered by the Inter-Agency Secretariat, are important opportunities to gain insight into the emerging ethical and privacy issues, and offer you support to help address them. It may seriously be time to strengthen the composition of your boards with the addition of members having the requisite expertise and capacity in the area of information technology who could help you navigate these questions. It may also be useful to draw on the wealth of resources, guidelines and fact sheets available through the information and privacy commissioners’ offices across the country. In fact, maybe it’s time for the REB and data protection communities to work a little more closely together.

(slide 16) As you begin to see research proposals riding the Big Data Wave, experimenting with these and other online recruitment strategies, remember this: “The Computer does not impose on us the ways in which it should be used; it simply has certain technological capacities; it is up to us to use those capacities for decent human purposes.” -- George Grant, Lament for a Nation.

(slide 17) Thank you, merci.
