Contributions Program projects underway
On August 28, 2025, the Office of the Privacy Commissioner of Canada (OPC) announced a new round of independent research and knowledge translation projects funded under its Contributions Program. These projects will be completed by March 31, 2026. The OPC will post summaries of the projects, along with links to their outcomes, once they have been completed and reviewed by the OPC.
2025-26 Contributions Program funding recipients
Organization: University of Windsor
Project title: Driving Privacy Forward: Homomorphic Encryption and Oblivious AI in Smart Mobility
Amount requested: $80,012
Project leader: Mitra Mirhassani
Project summary:
Modern connected and autonomous vehicles (CAVs) continuously collect and transmit extensive personal information, including location history, driving patterns, biometric data, and infotainment choices, to enable AI-driven services such as predictive maintenance, route optimization, and driver risk assessment. However, this rich data stream also exposes sensitive user information to potential misuse, unauthorized access, and breaches.
In Canada, regulatory protections lag behind the technology. PIPEDA provides only general guidance, underscoring a pressing need for privacy-by-design solutions that both uphold consumer rights and anticipate upcoming legislation, such as the proposed Consumer Privacy Protection Act (CPPA). The project aims to address these challenges by presenting a privacy-preserving architecture utilizing homomorphic encryption (HE) and oblivious AI to enable encrypted data analytics in connected vehicles.
In this approach, vehicle-generated data remains encrypted at all times, so cloud servers and external AI services can perform computations on the data without ever decrypting it. Even if a cloud platform is compromised, the data will remain unintelligible to attackers, thereby minimizing the risk of privacy breaches. By integrating HE and oblivious AI, our framework allows connected vehicles to benefit from cloud-based machine learning and big data analytics while keeping all personal information fully encrypted. This represents a paradigm shift in automotive data handling: for example, a car can receive personalized insurance pricing or predictive maintenance alerts from cloud AI systems without revealing the driver’s raw data to the insurer or service provider. The system will be developed and evaluated for critical automotive applications and tested on a typical use case: privacy-preserving usage-based insurance.
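The project’s actual architecture is not described here, but the core homomorphic property the summary relies on (computing on data that never leaves encrypted form) can be illustrated with a toy additively homomorphic Paillier cryptosystem in Python. The tiny primes and the telemetry values below are purely illustrative assumptions; real deployments use vetted cryptographic libraries and keys of 2048 bits or more.

```python
import math
import random

# Toy Paillier cryptosystem (additively homomorphic) -- illustration only.
p, q = 61, 53
n = p * q                      # public modulus
n2 = n * n
g = n + 1                      # standard generator choice
lam = math.lcm(p - 1, q - 1)   # private key (Carmichael function of n)

def L(x):
    return (x - 1) // n

mu = pow(L(pow(g, lam, n2)), -1, n)  # private decryption helper

def encrypt(m):
    # Pick a random r coprime to n, then blind the message.
    r = random.randrange(2, n)
    while math.gcd(r, n) != 1:
        r = random.randrange(2, n)
    return (pow(g, m, n2) * pow(r, n, n2)) % n2

def decrypt(c):
    return (L(pow(c, lam, n2)) * mu) % n

# Homomorphic property: multiplying ciphertexts adds the plaintexts,
# so a server can aggregate readings without ever seeing them.
reading_a, reading_b = 42, 58        # hypothetical telemetry values
total = (encrypt(reading_a) * encrypt(reading_b)) % n2
assert decrypt(total) == reading_a + reading_b
```

In a usage-based insurance setting, this is the kind of primitive that would let a cloud service sum or average encrypted driving metrics while only the key holder can decrypt the aggregate.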
Organization: Toronto Metropolitan University
Project title: Losing your voice to AI: Privacy risks of health-related machine listening
Amount requested: $77,295
Project leader: Greg Elmer
Project team: Stephen Neville, Alexandra Borkowski
Project summary:
Smart speakers are the most popular smart home device in Canada, and millions of Canadians use voice assistants like Alexa or Siri daily. However, smart speakers – and other voice-activated devices such as phones, wearables, and cars – do not only record conversations; they can also automatically produce biometric voice profiles of adults and children.
There is good reason to characterize voice data as sensitive health information, as Big Tech corporations are planning to use voice data for health-related applications. When voice profiles are tracked over time or correlated with statistical norms, they can be used to make powerful inferences about the physical health, mental health, and wellness of Canadians. As voice data is increasingly processed by generative AI systems, there is a real risk of significant harm that could lead to unfair, unethical, or discriminatory treatment that contravenes human rights law – representing what the OPC labels “no-go” zones.
The primary objective of this project is to map the array of applications for health-related machine listening to identify privacy risks and protect the privacy rights of Canadians. This will be achieved through multi-method research and an impactful digital literacy campaign. The first study maps the industry of health-related machine listening to identify information risks that fall under the scope of PIPEDA. The second study investigates the privacy policies that cover health-related voice data to assess whether the basic principle of consent is being fulfilled. The final study identifies risks of significant harm related to health-related machine listening. Knowledge translation activities will target four segments of the public: citizens/consumers, digital literacy stakeholder organizations, privacy regulators and policy makers, and the private sector.
Organization: Automobile Protection Association
Project title: Evaluating Privacy Permissions and Consent Requested to Use a New Motor Vehicle
Amount requested: $43,005
Project leader: George Iny
Project team: Debbie Roberts Ph.D., Gilles Pilon, Ron Corbett
Project summary:
With the rise of the smart or connected car, the auto industry has become a leader in connected technology. Current vehicle models feature the internet of things (IoT); they incorporate complex systems of devices that communicate data about the driver and vehicle performance, experience and safety. These features include Global Positioning Systems (GPS), vehicle trackers, cameras and sensors, all of which can be connected to the cloud and are meant to enhance the driver experience as well as improve safety. Current vehicle technology benefits customers with hands-free calling and texting, navigation, infotainment, voice assistance, sensors that detect drowsiness and alert drivers who are falling asleep, sensors that detect obstacles around the vehicle, and warning sensors that indicate when vehicle service is needed or recommended. For many years, onboard data recorders have registered details of a vehicle’s operation in the seconds before a collision; recently, the depth of information collected has increased considerably and can include video of the vehicle in the minutes before a collision.
The purpose of this project is to collect and analyze the privacy permissions and releases automakers require of their Canadian customers in exchange for access to the onboard features and connected applications in their vehicles. Researchers will compile information using actual current-model-year vehicles, vehicle owner’s manuals, automaker websites, and information from auto dealers when it is available.
To the APA’s knowledge, this will be the first inventory undertaken in Canada to determine and understand the commitments that vehicle owners are making to fully use the connected capabilities of the vehicles they drive.
Organization: Vancouver Island University
Project title: Empowering Young Canadians in the Smart Device Era: A Privacy-by-Design Research and Public Engagement Initiative
Amount requested: $80,000
Project leader: Ajay Kumar Shrestha
Project team: Molly Campbell, Yulia Bobkova, Mohamad Sheikho Al Jasem, Trevor De Clark
Project summary:
This project aims to explore the unique privacy challenges and opportunities faced by older high school students and post-secondary learners (Ages 16–24) in Canada.
As smart devices become more embedded in everyday life, young Canadians are among the earliest adopters of these technologies. From AI-enabled learning platforms and gaming consoles to wearable health trackers and virtual assistants, young adults encounter a host of devices that collect, analyze, and sometimes share personal information, often with minimal transparency or user control. Recognizing the varied and sometimes hidden privacy risks these users face, this project adopts a comprehensive research approach.
First, a review of existing studies and best practices will map current knowledge on smart AI-embedded device privacy. Next, a mixed-method research design, combining surveys, focus groups, and technical audits of popular smart devices, will capture a diverse set of perspectives regarding privacy self-efficacy, perceived privacy risk, perceived privacy benefits, algorithmic transparency and trust, and privacy-protective behaviours. By focusing on the lived experiences of youth in both high school and post-secondary contexts, the research will shed light on the interplay between digital literacy levels, socio-economic backgrounds, and personal preferences around data ownership.
The insights gleaned from this multi-method study will inform the development of a privacy-by-design toolkit, tailored to the realities of young Canadians. This toolkit will offer practical guidelines and actionable recommendations for device manufacturers, educational institutions, policymakers, and families, fostering more transparent data practices and safeguarding personal information by default. In addition, interactive workshops, webinars, and a dedicated website will broaden community engagement, ensuring that the project’s findings reach not only researchers and policymakers but also young Canadians themselves.
Organization: Université de Sherbrooke
Project title: Analysis of the management of sensitive data by smartwatches: Privacy issues and recommendations for stakeholders
Amount requested: $80,000
Project leader: Pierre-Martin Tardif
Project team: Manon Ghislaine Guillemette, Aref Meddeb, Arthur Oulaï
Project summary:
The proposed project will examine the management of sensitive data collected by a wearable device, stored on a smartphone and transmitted to a cloud platform, taking a representative ecosystem as a use case: an Apple Watch connected to an iPhone via Apple’s HealthKit data aggregation platform.
The aim is to assess the technical mechanisms in place to ensure the confidentiality of sensitive information.
The results of this analysis will help identify the risks and best practices involved in managing sensitive data in these technological ecosystems.
Organization: Centre for Addiction and Mental Health
Project title: Smarter Privacy—A Service Design Approach to Public Engagement for AI Literacy of Smart Devices
Amount requested: $49,988
Project leader: Nelson Shen
Project summary:
As artificial intelligence (AI) systems become increasingly embedded in smart devices and daily life, individuals must make more and more decisions that affect their privacy, autonomy, and rights. Despite this, public understanding of AI’s role in data collection, processing, and decision-making remains limited. Improving AI literacy, specifically digital citizenship, may empower individuals to make informed privacy decisions responsibly and effectively exercise their rights under the Personal Information Protection and Electronic Documents Act (PIPEDA).
While AI literacy programs currently exist, they are largely limited to formal education or professional settings. This creates a potential inequity as many may not have the ability, capacity, or motivation to access AI literacy courses. This project takes a human-centred service design approach to bridge this AI literacy gap by engaging the public in co-designing strategies to reach a broader population.
Organization: University of Ottawa
Project title: Connecting young women, but at what price? FemTech and privacy
Amount requested: $89,700
Project leader: Céline Castets-Renard
Project summary:
This research project aims to analyze the Personal Information Protection and Electronic Documents Act (PIPEDA) in the context of FemTech (female technology) mobile applications dedicated to wellness and the body, such as applications for tracking menstruation (Flo Health, Clue, Eve, Natural Cycle, etc.), pregnancy (e.g., Flutter Care) or breastfeeding (LactApp, Mylee, etc.). It will focus on Canadian women of childbearing age, particularly the youngest among these women, including minors, and aims to shed light on the negative impacts on their privacy. These applications, most of which are from U.S. companies, encourage women to provide a significant amount of sensitive and intimate data on their health. However, these companies’ terms of use and privacy policies are often vague when it comes to the measures being taken to protect personal information and comply with PIPEDA. This project will improve protection of the intimate personal information shared by women who use these applications. It will lead to recommendations to lawmakers with a view to reforming PIPEDA following two unsuccessful attempts (Bill C‑11 in 2020 and Bill C‑27 in 2022).
Education and awareness‑raising initiatives specifically targeting young women will also be launched to raise awareness of their rights and the risks associated with FemTech applications to help these women better protect their personal data in a digital environment. Lastly, this research will fill a major knowledge gap, as while the extensive collection of personal information by FemTech applications has been criticized, there have been no systematic studies of the situation for women in Canada.
2024-25 Contributions Program funding recipients
Organization: Concordia University
Project title: Privacy concerns in social login ecosystems
Amount requested: $50,000
Project leaders: Mohammad Mannan and Amr Youssef
Project summary:
Social login, which is a form of single sign-on, has become a ubiquitous feature on websites and mobile applications. It allows users to log in or sign up to these platforms using their existing social media credentials, such as Facebook, Google, LinkedIn, X/Twitter and Apple. While social login has advantages, such as simplifying login and reducing password fatigue, it also raises privacy and security concerns.
The aim of the project is to investigate, through a comprehensive and systematic technical/experimental measurement study, the ecosystem of social login on websites and in Android apps. Researchers plan to design and implement a privacy and security analysis framework to find and analyze these websites and apps. Using this framework, the researchers will compare the data sharing practices of real-world social login implementations to gain insights into the privacy implications for users. Based on the above, the researchers will produce a public report summarizing the findings of their investigation. The report will present recommendations for improving the security and privacy of these social login solutions. The report will include easy-to-follow guidelines for Canadians who use social logins on websites and in apps. The researchers will also produce a technical paper detailing the full methodology, results and technical recommendations.
Organization: Vancouver Island University
Project title: Safeguarding tomorrow’s data landscape: Young digital citizens’ perspectives on privacy within AI systems
Amount requested: $86,601.90
Project leader: Ajay Shrestha
Project team: Ankur Barthwal, Molly Campbell, Austin Shouli, Saad Syed
Project summary:
In the ever-expanding digital landscape where artificial intelligence (AI) plays a central role, it is crucial to address the privacy impacts of these emerging technologies. This research project aims to explore the complexities of AI’s privacy impacts with a focus on understanding the concerns of young digital users and protecting children’s privacy rights.
Surveys, interviews and focus groups will be used to gather insights from young users, educators, parents, AI developers and researchers to explore their perspectives on data control and factors influencing perceptions of privacy in AI applications.
By understanding how young users perceive and expect privacy in AI applications, the project strives to contribute to the responsible integration of AI technologies into the lives of young users, championing ethical AI use and ensuring privacy protection in the digital age. The research will also examine digital literacy levels and prior interactions with AI technologies. This will help develop guidelines aimed at addressing young users’ specific concerns.
The project will engage young digital citizens through workshops and participatory activities, which aim to empower young users and give them a voice in shaping the narrative around privacy in AI systems.
Organization: Internet of Things Privacy Forum
Project title: The machine-readable child: Governance of emotional AI used with Canadian children
Amount requested: $81,464.10
Project leader: Gilad Rosner
Project team: Andrew McStay
Project summary:
The research project will evaluate PIPEDA for its fitness to govern the use of emotional AI with children, highlighting gaps and offering suggestions where appropriate. The research will delve into the privacy challenges posed by using these technologies. The research project will also yield practical assistance for the makers, sellers, and assessors of child-focused emotional AI technologies. It will do so by developing modules for privacy impact assessments, creating Canada-focused guidelines for the commercial development, deployment and use of these products and services. The project will also develop best practices for fairness, accountability, and transparency of emotional AI systems that collect the data of Canadian children.
Emotional artificial intelligence (AI) is a subset of AI that measures, understands, simulates, and reacts to human emotions; such systems purport to determine an individual’s emotional state by analyzing a facial image or other characteristics. Emotional AI is increasingly being used to understand and respond to psycho-physiological emotional reactions. Emotion sensing and emotional AI refer to technologies that use affective computing, AI and machine learning techniques to sense, learn about and interact with human emotional life. However, these technologies are relatively new and often not well understood by parents, children, school administrators, regulators and legislators. While there will certainly be benefits to these technologies, when used with children, emotion and mood sensing technologies become deeply problematic ethically and may not be in the best interests of the child.
Organization: Université du Québec à Montréal
Project title: Dangerous games: protecting the privacy of children under 13 in mobile games
Amount requested: $89,906
Project leader: Maude Bonenfant
Project team: Sara Grimes, Thomas Burelli, Hafedh Mili, Alexandra Dumont, Cédric Duchaineau
Project summary:
Mobile gaming is on the rise among young Canadians, even among toddlers. At the same time, the global mobile gaming industry is growing exponentially. In this mobile industry, several business models exist, but one of the most profitable is the collection of personal data for targeted advertising.
While data collection in mobile games is governed by general terms and conditions of use, these are long, tedious to read and complex to understand – if not impossible to find when it comes to third parties. As a result, the terms are difficult for young people and parents to understand, yet they must be accepted in order to gain access to the game. This means that the youngest children are not as well protected as they should be.
This research will focus on analyzing mobile game applications and comparing them with the compliance criteria of the Children’s Online Privacy Protection Act (COPPA), in order to identify good and bad practices in protecting children’s privacy in the world of gaming.
Organization: University of Ottawa
Project title: Benchmarking large language models and privacy protection
Amount requested: $83,680
Project leader: Rafal Kulik
Project summary:
In the current digital age, the accelerated growth of data generated by individuals has fuelled advances in artificial intelligence (AI), particularly the development and deployment of large language models (LLMs). These sophisticated AI systems, capable of understanding, generating and interacting with human language in ways that mimic human thought processes, are becoming integral to applications ranging from personalized content creation to drug discovery. As these models become more deeply embedded in the daily functions of society, the need to protect individual privacy within these systems is crucial.
The rapid development of LLMs and the pace at which these tools are evolving present a significant challenge in defining current and practical guidelines that can effectively address the use and deployment of these systems. Given the unique capabilities and risks associated with LLMs, there is a growing need to establish robust privacy standards specifically tailored to these technologies.
This project will provide a practical introduction to LLMs and will explore privacy challenges for legal and policy experts and the role of privacy-enhancing technologies. Researchers will survey legal, policy and technical experts, as well as civil society groups to explore the benefits and opportunities of these technologies. They will also provide recommendations and public education materials.
Organization: University of Waterloo
Project title: Mitigating privacy harms from deceptive design in virtual reality
Amount requested: $58,708
Project leaders: Leah Zhang-Kennedy and Lennart Nacke
Project team: Hilda Hadan
Project summary:
Deceptive design in virtual reality (VR) is a rapidly evolving privacy concern. This research will explore the effects of deceptive design on user information privacy in commercial VR applications. By identifying and classifying deceptive design patterns in VR that undermine users’ privacy, the researchers seek to develop countermeasures and guidelines to counteract their negative impact. The researchers also seek to increase awareness and provide design and policy guidelines and recommendations to VR developers, policymakers and government.
Researchers plan to evaluate VR application design to identify manipulative strategies and conduct a large-scale analysis of user perceptions and experiences with respect to privacy and deceptive design. They also plan to systematically document different deceptive practices and patterns in VR, note consequences for privacy and suggest mitigation strategies.
The researchers anticipate that their project will lead to opportunities to improve the design of VR applications and will lead to recommendations for privacy regulations to better protect Canadians. They plan to create a public repository, design guidelines and educational resources for the public, among other deliverables.
Organization: Toronto Metropolitan University
Project title: Generative AI, Privacy Policy and Young Canadians
Amount requested: $49,640
Project leader: Karim Bardeesy
Project team: Sam Andrey, André Côté, Tiffany Kwok, Christelle Tessono
Project summary:
In this project, researchers seek to understand the privacy implications of generative AI technologies in order to inform the application of current and proposed Canadian privacy legislation and privacy-preserving administrative policies and practices, with an emphasis on impacts on minors.
The project will consist of three core components. First, researchers will conduct interviews with privacy and artificial intelligence experts to help shape an understanding of the privacy consequences of AI in general and specifically for minors, and how best to mitigate them. Second, researchers will undertake legal and policy analysis to evaluate both current and proposed privacy laws with respect to their capacity to effectively address the specific risks posed by generative AI. Third, researchers will conduct a comparative analysis of privacy and data protection laws and technical interventions in other jurisdictions (e.g., age gating, youth data collection bans, school board policies/bans). The comparative approach will allow the researchers to draw insights from other jurisdictions’ efforts to manage and mitigate AI-specific risks to privacy.