OPC Submission to Innovation, Science and Economic Development Canada consultation on a renewed AI strategy

October 31, 2025

The Honourable Evan Solomon
Minister of Artificial Intelligence and Digital Innovation
and Minister responsible for the Federal Economic
Development Agency for Southern Ontario
Innovation, Science and Economic Development Canada
235 Queen Street
Ottawa, Ontario K1A 0H5

Dear Minister Solomon:

Subject: OPC Submission to ISED consultation on a renewed AI strategy

I would like to express my thanks for the opportunity to participate in Innovation, Science and Economic Development Canada’s (ISED’s) consultation on a renewed Canadian artificial intelligence (AI) strategy. As Privacy Commissioner of Canada, I am keenly aware of the importance of developing a national strategy to respond to the opportunities and challenges brought about by AI, and to do so in a manner that fosters public trust, promotes innovation and economic sovereignty, and respects fundamental rights such as privacy.

I was struck by a point you made in a recent keynote at the ALL IN conference in Montreal: “Technology moves at the speed of innovation. Adoption moves at the speed of trust.” I fully agree. Without trust, even the most advanced technologies will suffer from scepticism and low adoption. But where trust exists, innovation is embraced.

The key question, of course, is how to build such trust. The enclosed submission of the Office of the Privacy Commissioner of Canada (OPC) focuses on a theme that I believe is essential to ensuring that advanced data-driven technologies such as AI are developed and deployed in a secure, responsible and trustworthy manner. That theme is prioritizing privacy.

Please permit me to make a few general remarks on this topic in advance of my office’s submission.

Privacy is not a barrier to innovation—it is a driver of innovation. When privacy considerations are built into the design and use of AI technologies from the outset, not only can AI systems and their outputs become more robust, accurate and interpretable, but individuals’ confidence and trust in the AI ecosystem grows, thereby creating a virtuous cycle of economic and ethical success.

For example, the appropriate use of privacy-enhancing technologies (PETs) can enable greater access to high-quality training datasets and model weights to support the development of AI tools, while increasing public trust by protecting the confidentiality of individuals’ personal information. In addition, exceptions to consent for the purposes of internal AI development may be appropriate, so long as they are accompanied by proportionate measures to safeguard individuals’ rights and freedoms.

These examples bring me to a final point.

Canada needs modernized privacy laws to fully realize privacy’s potential to be a driver of innovation and public trust. Our federal privacy laws predate the modern digital economy, while technologies and the demands of society—including our economy—continue to evolve rapidly.

I am encouraged by your recent remarks about the government’s plans to reintroduce privacy legislation modelled after parts of the former Bill C-27, with strengthened privacy protections. I support these efforts and look forward to the next steps.

Modernizing Canada’s privacy laws is necessary to fully meet the challenges of today’s increasingly AI-driven world—enabling Canadians to confidently reap the benefits of a digital society, while future-proofing businesses for success.

I look forward to continuing to work with you and your officials in the development of a renewed Canadian AI strategy.

Sincerely,

(Original signed by)

Philippe Dufresne
Commissioner

c.c.: Mark Schaan, Associate Deputy Minister, ISED
Alexandra Dostal, Senior Assistant Deputy Minister, ISED
Samir Chhabra, Director General, ISED

Encl: OPC Submission to ISED consultation on a renewed AI strategy

OPC Submission to ISED consultation on a renewed AI strategy

  1. The OPC welcomes ISED’s consultation on a renewed Canadian AI strategy. We commend the timely exploration of this issue and the Government of Canada’s efforts to transform Canada into a world leader in responsible and secure AI.
  2. The consultation consists of an online survey, with the option of providing comments via email. Our comments consist of responses to select questions from the survey that directly relate to the OPC’s mandate. In addition, consistent with ISED’s portfolio, our responses focus on ideas and issues relevant to private sector organizations.

Theme: Research and talent

How can Canada strengthen coordination across academia, industry, government and defence to accelerate impactful AI research?

  1. A promising mechanism that enables collaboration across sectors is regulatory sandboxes. Regulatory sandboxes are controlled environments where regulators provide support to organizations in the development, testing and validation of innovative products or services that may challenge existing legal frameworks. Regulatory sandboxes are particularly promising for the development of technologies with short innovation cycles, such as AI. When sufficiently resourced and integrated properly into existing legal frameworks, sandboxes can help advance innovation in socially responsible ways while ensuring ethical oversight.
  2. It is worth noting that the European Union’s (EU’s) Artificial Intelligence Act specifically authorizes regulatory sandboxes as a tool to promote more agile and participatory regulation, requiring member states to “ensure that their competent authorities establish at least one AI regulatory sandbox at national level.”

Theme: Accelerating AI adoption by industry and government

What are the key barriers to AI adoption, and how can government and industry work together to accelerate responsible uptake?

  1. Recent public opinion research by the OPC found that concerns about privacy play a major role in shaping Canadians’ attitudes towards AI. According to our research:
    1. 83% of Canadians are at least somewhat concerned about their privacy when using AI tools, including approximately one third (34%) who are extremely concerned.
    2. Almost nine in 10 (88%) Canadians have some level of concern about their personal information being used to train AI systems, including 42% who are extremely concerned. Just 11% are not concerned about their personal information being used to train AI systems.
  2. These results suggest that trust in how data is handled is becoming a deciding factor in how individuals interact with businesses that use AI technologies. When Canadians have confidence that their data is protected and used responsibly, it fosters an environment where businesses can thrive, innovate responsibly and earn public trust.

Theme: Building safe AI systems and strengthening public trust in AI

How can Canada build public trust in AI technologies while addressing the risks they present? What are the most important things to do to build confidence?

  1. Of the 15 key recommendations the OPC made to strengthen the government’s most recent attempt to modernize private sector privacy legislation in Canada, two in particular stand out as important measures to address the risks posed by AI tools and services. These are: (1) Recognize privacy as a fundamental right; and (2) Create a culture of privacy by requiring organizations to build privacy into the design of products and services and to conduct privacy impact assessments (PIAs) for high-risk initiatives.
  2. Privacy is a fundamental right that reinforces the freedoms and trust that underpin our democracy and that unite us as Canadians. It reinforces democratic values by protecting freedom, trust, dignity and autonomy—the very things that make democratic and economic participation possible.
  3. A stronger recognition in the law of the importance of the fundamental right to privacy is therefore necessary to foster greater confidence in the digital economy and encourage responsible use of personal information by organizations in a way that supports innovation and economic growth. The OPC believes that the law can achieve both commercial objectives and privacy protection in the pursuit of responsible innovation.
  4. In addition, implementing privacy by design and conducting PIAs can help organizations demonstrate that they are accountable for personal information under their control, ensure that they are in compliance with the law and limit the risk of privacy breaches and harms in the use of AI tools and services.
  5. Privacy by design refers to proactively integrating privacy-protective measures into the very design of a product, service or initiative from the initial phases of development. A PIA is a risk management process which should be undertaken at the beginning of a new or modified initiative involving personal information. It can help organizations proactively comply with privacy law, identify the impacts on personal information and mitigate privacy risks, especially for higher risk activities, such as those involving sensitive information or high-impact AI systems.

What frameworks, standards, regulations and norms are needed to ensure AI products in Canada are trustworthy and responsibly deployed?

  1. Modernizing Canada’s federal private sector privacy law, the Personal Information Protection and Electronic Documents Act, or PIPEDA, is necessary to fully meet the challenges and opportunities of today’s data-driven world, to enable Canadians and Canadian businesses to confidently reap the benefits of AI and other technologies with strong, secure data protection.
  2. Through modern, predictable laws that share common standards with like-minded countries, it is possible to create a regulatory environment that makes it easier for Canadian businesses to engage and succeed in a global digitized world. Consistency in regulatory environments will benefit Canada’s trading relationships, Canadian businesses and Canadians by ensuring pragmatic and effective protections no matter where our citizens or their data may travel.

Theme: Education and skills

How can we enhance AI literacy in Canada, including awareness of AI’s limitations and biases?

  1. The OPC is committed to continuing to develop and publish educational materials on our website, especially those with a focus on AI’s limitations and biases. One example of this is our ongoing Privacy Tech-Know blog series, which has already provided discussions of several AI-related themes and technologies. These include:
    1. When worlds collide – The possibilities and limits of algorithmic fairness (Part 1), which discusses the three main definitions of algorithmic fairness, including an analysis of the advantages and disadvantages of each as well as the trade-offs between them;
    2. When worlds collide – The possibilities and limits of algorithmic fairness (Part 2), which discusses the practical side of algorithmic fairness, including an analysis of measures to help achieve algorithmic fairness and the limits of mathematical notions of fairness; and
    3. The reality of synthetic data, which analyzes the pros and cons of synthetic data as an AI-based privacy-enhancing technology.
  2. We have also published, as part of the Canadian Digital Regulators Forum, a paper entitled “Synthetic media in the digital landscape,” which discusses the risks to individuals when their personal information is used in the creation of synthetic media, such as deepfakes, among other topics.

What can Canada do to ensure equitable access to AI literacy across regions, demographics and socioeconomic groups?

  1. A key demographic in need of better AI literacy is children. The OPC’s Strategic Plan 2024–27 affirms a commitment to champion children’s privacy rights as a key strategic priority area. Through this commitment, the OPC intends to deepen its understanding and appreciation of privacy risks and issues that young people face, and effect positive change among organizations, parents/guardians and youth to uphold children’s privacy rights, including in relation to AI.
  2. Some activities the OPC has undertaken to improve education and literacy about children’s privacy rights in relation to AI include:
    1. The G7 Data Protection and Privacy Authorities’ Statement on AI and Children;
    2. The OPC’s launch of an exploratory consultation into the development of a children’s privacy code aimed at strengthening the protection of young people’s personal information in the digital world; and
    3. The OPC’s development of a Youth Council to create a space for young people to share their insights, experiences and ideas on the privacy challenges that matter the most to them.

Conclusion

  1. We appreciate the opportunity to participate in the consultation and would be pleased to engage further with ISED on any of the issues or ideas raised in this submission.