Contributions Program projects underway
On August 28, 2023, the Office of the Privacy Commissioner of Canada (OPC) announced funding for a new round of independent research and knowledge translation projects under its Contributions Program. These projects will be completed by March 31, 2024. The OPC will post a summary of each project, along with links to its outcomes, once the projects have been completed and reviewed by the OPC.
2023-24 Contributions Program funding recipients
Organization: York University (Ontario)
Amount awarded: $ 50,000
Project leader: Jonathan Obar
Organization: Samuelson-Glushko Canadian Internet Policy and Public Interest Clinic – CIPPIC (Ontario)
Project title: Making Privacy More than a Virtual Reality: The Challenges of Extending Canadian Privacy Law to Extended Reality
Amount awarded: $ 49,450
Project leader: Vivek Krishnamurthy
Extended reality (“XR”) technologies – which include augmented, mixed, and virtual reality – promise to revolutionize many aspects of our lives. Yet the nature and scale of the data these technologies collect have profound new implications for personal privacy. This project will begin with a survey of current XR technologies and anticipated developments to contextualize the privacy concerns surrounding XR hardware. The researchers will then undertake a comprehensive study evaluating whether PIPEDA and the proposed CPPA are up to the task of protecting Canadians’ privacy in XR environments. The researchers will explore whether alternative approaches may be required to protect the privacy of Canadians in immersive environments, offering suggestions for how Canadian privacy laws could address the challenges posed by XR technologies.
Organization: University of Ottawa (Ontario)
Project title: Benchmarking Differential Privacy and Existing Anonymization or Deidentification Guidance
Amount awarded: $ 47,370
Project leader: Rafal Kulik
Government and private industry, including official statistics organizations and health institutions, collect information from individuals and publish aggregate data to serve the public interest. Organizations have long collected information under a promise of confidentiality, on the understanding that the information provided will be used for statistical purposes only and that no release or sharing of the information will allow it to be traced back to a specific individual. Differential privacy provides a means of limiting the information that is released so that an individual’s contribution remains hidden in a statistical release of a single query (or a small number of queries).
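As an illustrative sketch (not drawn from the funded project), the core idea described above can be shown with the classic Laplace mechanism for a single counting query; the function names here are hypothetical, and a real deployment would also need to track a privacy budget across queries:

```python
import random

def laplace_noise(scale):
    # The difference of two i.i.d. exponential variates is
    # Laplace-distributed with the given scale.
    return random.expovariate(1.0 / scale) - random.expovariate(1.0 / scale)

def dp_count(values, predicate, epsilon):
    """Answer a counting query with epsilon-differential privacy.

    A count has sensitivity 1 (adding or removing one person changes
    it by at most 1), so Laplace noise with scale 1/epsilon hides any
    single individual's contribution to this one query.
    """
    true_count = sum(1 for v in values if predicate(v))
    return true_count + laplace_noise(1.0 / epsilon)

# Example: a noisy count of records below a threshold. Smaller epsilon
# means more noise and stronger privacy for each individual.
noisy = dp_count(range(100), lambda v: v < 50, epsilon=0.5)
```

The key point, as the paragraph notes, is that the released answer is perturbed just enough that whether any one individual contributed to the dataset cannot be reliably inferred from the query result.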
Recently, there has been a significant push to establish differential privacy as a standard in emerging AI technologies. Though the technique is beginning to be widely used by tech companies and government agencies, challenges must be overcome before it can be fully adopted for deidentification and anonymization. This project will help develop the framework necessary to implement differential privacy in practice, as well as a decision-making protocol for weighing it against other privacy technologies and current guidance.
Organization: Queen's University (Ontario)
Project title: Large Language Models and the Disappearing Private Sphere
Amount awarded: $ 50,000
Project leader: Catherine Stinson
This project will examine possible futures for large language models (LLMs) and privacy in the private sector in the age of immersive and embeddable technologies. The project will produce a report, guidelines for institutional review boards (IRBs), and a public website to increase knowledge and understanding about the actual and potential future implications of LLMs and the collection of data used to train them. The project will also examine the differential effects of LLMs on the privacy of marginalized Canadians and members of minority language groups.
The researchers will focus on five questions: 1. What is the de facto status of web scraping in Canada, according to IRBs? 2. How much data about individuals can be retrieved from LLMs? 3. Are marginalized groups and minority language groups more susceptible to privacy leakage from LLMs? 4. Are Canada’s privacy laws and regulations capable of dealing with these models and their privacy implications? 5. What changes to law and regulation might be needed?
Organization: University of Western Ontario (Ontario)
Project title: Identifying and Responding to Privacy Dark Patterns
Amount awarded: $ 49,717
Project leader: Jacquelyn Burkell
The aim of this project is to help minimize the impact of privacy dark patterns on Canadian youth by informing the development of effective regulatory frameworks and educational materials that will help users resist these tactics. Privacy dark patterns are interface design strategies intended to “nudge” users into revealing personal information, either directly or by enabling (or failing to disable) privacy-invasive platform or profile settings. Teens are especially vulnerable to the effects of dark patterns on privacy choices, both as avid users of the Internet and social media and because of their limited awareness of commercial surveillance online.
The researchers will conduct focus groups with teen users of social media sites to determine whether they are able to identify privacy dark patterns and how they respond to these strategies. The researchers will also review current regulatory responses to privacy dark patterns, identifying both the variety of approaches and challenges to effectiveness. The results of this research will inform the development of educational materials in collaboration with MediaSmarts that teach teens how to resist privacy dark patterns on social media.
Organization: Option consommateurs (Quebec)
Project title: In the Matrix – Consumer Privacy in the Metaverse
Amount awarded: $ 49,758
Project leader: Alexandre Plourde
The advent of the metaverse—a three-dimensional virtual reality where users can interact with others—poses privacy risks for consumers. The metaverse’s immersive capabilities allow for the unprecedented collection of personal information. It is conceivable that the analysis of data generated in the metaverse could infer thoughts, emotions or other sensitive information about consumers. As a result, the scope of the metaverse’s data-collection abilities raises multiple issues related to the legal framework for personal information protection.
In this research project, Option consommateurs will outline the various metaverse models Canadians have access to, as well as those that could emerge in the coming years. Option consommateurs will also analyze the privacy policies, user agreements and informational content of a representative sample of three types of companies in the metaverse environment: metaverse developers, companies that are doing business in the metaverse and companies that create the devices used to access the metaverse. Lastly, Option consommateurs will look at what legislation applies to personal information protection in these new environments—in Canada and abroad.
Organization: The Centre for International Governance Innovation – CIGI (Ontario)
Project title: Hacking the Human Mind: Lessons for Canada’s Democracy
Amount awarded: $ 30,000
Project leader: Aaron Shull
Companies are now deploying technological, psychological, and sociological methods to get inside the minds of users, collecting data from millions of people, many of whom may not be aware of it. This approach embodies a form of behavioural manipulation that threatens the right to freedom of thought and opinion and invades mental privacy. Given this new reality, there is an urgent need to implement strategies that protect Canadians’ autonomy.
This research project will draw on diverse subject matter perspectives to prepare an explainer video and policy brief for the Canadian public, exploring questions such as: How does privacy protect our inner freedom? In our data-driven world, where do we draw the line between legitimate influence and unlawful manipulation of thought? How are challenges to the freedom of thought present in the Canadian information ecosystem context? What are the risks to privacy and cognitive freedom posed by advances in neurotechnology? How can immersive and embeddable technologies enable individuals to thrive while protecting their privacy? How can we chart a path to effective protection? What should privacy legislation and policies address in response to the risks and challenges that arise from technologies?
Organization: Law Commission of Ontario (Ontario)
Project title: Privacy and Human Rights Impact Assessment in Canadian AI Systems
Amount awarded: $ 39,600
Project leader: Nye Thomas
Notwithstanding AI’s potential, private sector use of AI is often very controversial. There are many examples of private-sector AI systems that have violated privacy protections or proven to be biased or discriminatory. Privacy and human rights compliance are the foundations of trustworthy AI: Canadian AI systems must comply with both to ensure Canadians can trust them and to unlock the extraordinary economic potential of this technology. To date, however, initiatives that promote privacy and human rights in Canadian AI systems appear to have developed on distinct tracks.
This project will produce a comprehensive report on the relationship between privacy and human rights in AI governance and regulation, including law and policy reform recommendations. It will also consider whether an integrated “Trustworthy AI” impact assessment tool addressing both privacy and human rights is desirable, achievable, and practical.
Organization: University of Waterloo (Ontario)
Project title: A Pan-Canadian Data Governance Framework for Health Synthetic Data
Amount awarded: $ 48,875
Project leader: Anindya Sen
Health data, especially electronic medical records, are often stored in disparate systems and formats, rendering integration and standardization difficult. Researchers and developers often depend on de-identified or aggregated data to test theories, data models, algorithms, or prototype innovations, but it takes a substantial amount of time and resources to retrieve, aggregate, and de-identify relevant data before it can be used. One approach proposed to solve these challenges is the creation of realistic, high-quality synthetic health datasets that capture as many of the complexities of the original datasets as possible but do not include any real patient data.
However, unlike in some other countries, health synthetic data have received no explicit treatment in Canada’s regulations. In addition, there is no universal framework to govern health synthetic data and assess their impacts. This project aims to develop a data governance framework for health synthetic data, ethical guidelines for research involving such data, recommended policy changes, and a cost impacts framework.
Organization: University of Calgary (Alberta)
Project title: Mitigating Race, Gender and Privacy Impacts of AI Facial Recognition Technology
Amount awarded: $ 49,772
Project leader: Gideon Christian
The increasing use of Artificial Intelligence Facial Recognition Technology (AI-FRT) in the private and public sectors has been plagued by issues of privacy as well as racial and gender bias, as the technology regularly misidentifies or fails to identify individuals of a particular gender or race.
This research project seeks to examine and identify the race, gender and privacy issues related mainly to the development of AI-FRT by the private sector in Canada, and to its use by both the private and public sectors. Other objectives of the research include developing a framework and guidelines to address race, gender and privacy impacts arising from the development and deployment of AI-FRT by private sector developers in Canada; identifying possible reforms to the Personal Information Protection and Electronic Documents Act (PIPEDA) to legislatively address those impacts; collaborating with the Alberta Civil Liberties Research Centre (ACLRC) to increase public understanding and awareness of the race, gender and privacy impacts of AI-FRT through webinars, workshops and the publication of research papers; and strengthening research capacity in academia by training graduate students to conduct research on race, gender and privacy issues related to AI-FRT.
Organization: Concordia University (Quebec)
Project title: Privacy Analysis of Virtual Reality/Augmented Reality Online Shopping Applications
Amount awarded: $ 35,454.50
Project leaders: Mohammad Mannan (principal investigator) and Amr Youssef
This project will investigate the ecosystem of virtual reality/augmented reality (VR/AR) e-commerce, retail apps and websites, including virtual try-on, virtual makeup and beauty apps or websites, and the technological tools that are used by developers of these systems. The researchers will design and implement a privacy and security analysis framework to find and analyze these apps and websites as well as a selected set of their software development tools and libraries.
The researchers will produce a public report summarizing the findings of their investigation and presenting recommendations for improving the security and privacy of VR/AR systems. The report will also include some easy-to-follow guidelines for Canadian shoppers who use these apps. The findings of this report will be provided online for free. The researchers will also produce a technical paper detailing their full methodology and results, as well as their technical recommendations.