Building trust: Privacy and AI governance - Remarks by the Privacy Commissioner of Canada to the Victoria International Privacy and Security Summit

March 5, 2026
Victoria, British Columbia

Keynote address by Philippe Dufresne
Privacy Commissioner of Canada

(Check against delivery)


Good morning. I would like to thank the organizers for giving me the opportunity to leave winter behind in Ottawa, however briefly, for the warmth of the West Coast.

This is my third day at the Summit and I am impressed by the level of engagement on some of the most important issues facing us all – regulators, policy makers, industry, civil society, and privacy and security professionals.

The topics on the agenda – digital sovereignty, artificial intelligence (AI) governance, agentic AI, children’s privacy, cybersecurity, fighting disinformation and misinformation, responsible innovation, and law reform – are topics that are of high priority for me and my Office.

AI governance is a particularly timely subject. The technology is evolving and advancing quickly. The need for guardrails, and the integration of data protection principles, to enable its safe and responsible deployment is increasingly evident.

The use of AI to generate realistic content, such as audio, images, and video, including deepfakes, is an example of an emerging risk that this technology poses.

On February 3, 2026, the second International AI Safety Report, led by Professor Yoshua Bengio, was released. The report, authored by more than 100 AI experts, has been described as “the largest global collaboration on AI safety to date.”

The report highlights the rapid advancements in AI capabilities and associated emerging risks, including documented and potential risks to individuals, organizations, and global markets.

It points out that “accessible AI tools have substantially lowered the barrier to creating harmful synthetic content at scale.” Many AI tools are free or inexpensive, and they make it easier for users to create images or other synthetic materials, such as voice clones, anonymously.

In January, I expanded my investigation into the social media platform X to include its Grok chatbot, following growing concern about the platform being used to create and share explicit images of individuals, including children, without their consent. The use of personal information without consent to create deepfakes, including intimate images, is a growing phenomenon that poses serious risks to individuals’ fundamental right to privacy.

Last week, I signed a joint declaration on AI-generated content with more than 50 other global data protection authorities brought together by the Global Privacy Assembly’s enforcement collaboration working group. In the statement, we note that certain “fundamental principles should guide all organizations developing and using AI content generation systems.”

These fundamental principles include implementing robust safeguards to prevent the misuse of personal information and ensuring meaningful transparency about AI system capabilities, safeguards, acceptable uses and the consequences of misuse. We also said that organizations should take steps to address specific risks to children.

In the International AI Safety Report, the authors state that “(m)any aspects of how general-purpose AI will develop remain deeply uncertain. But decisions made today – by developers, governments, communities, and individuals – will shape its trajectory.”

I have always said that it is not a zero-sum game between privacy and the public and economic interests of Canada. On bills such as C-8, which addresses cybersecurity, we can and must protect Canada’s critical infrastructure from threat actors while including appropriate thresholds and safeguards to protect privacy.

This is also true in the context of discussions about whether platforms should be required to disclose information to prevent tragedies like Tumbler Ridge.

We need to ensure that Canadians are protected from imminent harm, but we must do so in a way that protects Canadians’ privacy and includes appropriate thresholds and safeguards.

It is therefore essential that privacy, security, and data protection authorities and experts also play a key role in shaping how AI is deployed, used, and governed.

Technologies such as AI can bring economic, social, and public interest benefits, but the full value of this innovation will only be maximized if it is accompanied by trust.

The theme of this summit – Trust, Transparency and Transformation – offers the perfect backdrop for my remarks today, which will focus on the work that my Office is doing to help shape the future of AI.

Trust

Trust in how data is handled is becoming an important factor in how Canadians interact with government, businesses, and technology.

Last fall, the Honourable Evan Solomon, Canada’s Minister of Artificial Intelligence and Digital Innovation, said that the motto of his office is that “adoption moves at the speed of trust.”

A survey of Canadians about privacy issues that was conducted by my Office last year supports this perspective.

The survey found that nine in 10 Canadians are concerned about the protection of their privacy. It also found that trust in how personal information is handled is impacting people’s behaviour.

For instance, Canadians are taking action to protect themselves by refusing to provide personal information, changing privacy settings, deleting accounts, and walking away from companies that experience a breach.

As privacy becomes increasingly important to consumers, organizations that prioritize privacy will find that they enjoy a competitive advantage.

Just as data can be used to fuel innovation – to improve and tailor services, generate efficiencies, and evaluate results – innovation must also be used to protect data.

When individuals trust that their rights will be protected, they can feel confident about participating freely in the digital economy. This is good for Canadians, good for business and good for innovation.

For innovation to truly flourish, it is also important that organizations trust new technologies.

A 2026 survey report by PwC Canada on Trust in AI says that 61 percent of businesses surveyed cite unclear or evolving legal and regulatory requirements as a major challenge for implementing AI. Businesses feel that they risk either being left behind if they wait for clarity, or risk building the wrong thing if they move forward.

As technologies continue to evolve rapidly and become increasingly integrated into our personal and professional lives, it is our collective role – as regulators and policy makers – to ensure that privacy is protected for current and future generations.

Law reform in the public and private sectors would bring much needed modernization and would help provide clarity for organizations and better protect Canadians.

With respect to AI, I have recommended that Canada’s federal privacy laws recognize privacy as a fundamental right, and establish requirements to implement privacy by design and to conduct privacy impact assessments for high-impact data processing.

Personal information is at the heart of artificial intelligence, and therefore privacy legislation should, in my view, be at the heart of AI regulation.

Transparency

If we think about trust as a destination, transparency is an important vehicle to get us there.

Canada’s federal private-sector privacy law is consent-based, and transparency is what makes consent meaningful. It allows individuals to know what is being done with their information, and why.

Transparency means designing websites and apps with privacy in mind, including providing privacy-friendly default settings and making privacy information easy to find.

In 2024, my Office participated in a global privacy sweep of more than 1,000 websites and mobile apps. It found that many of them used deceptive design patterns that made it more difficult for individuals to protect their privacy online.

Emphasizing privacy options, using neutral language, clearly presenting privacy choices, and reducing the number of clicks for a user to find privacy information, log out, or delete an account are all ways in which organizations can be more transparent and help their users better protect their privacy online.

My Office, along with our counterparts in British Columbia and Alberta, also looked specifically at 67 websites and apps targeted at children.

We found that websites and apps aimed at children, more often than those targeted at the general population, used emotive language or nagging to lead users into making less privacy-friendly choices.

My Office has recently participated in another global privacy sweep focusing on children’s apps, the results of which will be released later this month.

Failure to obtain meaningful consent was a central issue in my investigation into TikTok, which I conducted with my counterparts in Quebec, British Columbia, and Alberta.

The TikTok investigation found that the measures in place to keep children off the popular video-sharing platform and to prevent the collection and use of their sensitive personal information for profiling and targeting purposes were inadequate. We found that TikTok was using more sophisticated tools for commercial purposes than it was to keep underage children off its platform.

Even though the company has stated that its platform is not intended for individuals under the age of 13, we found that hundreds of thousands of Canadian children access TikTok’s platform each year – and TikTok has been collecting and using their personal information.

Although the joint investigation was focused on children, it also found that TikTok did not adequately explain its data practices to teen and adult users, nor did it obtain meaningful consent for the collection and use of vast amounts of user data, including sensitive data of younger users, as required under Canadian privacy laws.

The investigation underscores how transparency is especially important when dealing with vulnerable populations, such as children. All organizations need to think about putting children’s best interests at the forefront of their services and activities.

In the case of TikTok, the impact of our investigation into this widely used platform went far beyond a report; it also enabled the company to implement improvements to its privacy practices in the best interests of its users, especially children.

TikTok agreed to improve transparency by strengthening its privacy communications to ensure that all users understand how their data could be used, including for targeted advertising and content personalization. TikTok also agreed to enhance its age-assurance methods to keep underage users off the platform and thus better protect them from privacy harms.

On the broader subject of age assurance, last year my Office held a consultation to gather input on how and when online services should confirm the age of a user in order to restrict children and youth from accessing certain content. The OPC is now developing guidance based on the feedback that was received.

My Office is also developing a Children’s Privacy Code to create practical guidelines for organizations that handle children’s personal information. Established codes of practice and special protections contained in privacy legislation can empower children to exercise their privacy rights and protect them against potential harms as they navigate online spaces.

With respect to AI, it is key for developers, providers and implementers to embed privacy in the design, conception, operation, and management of their products and services and to consider the unique impact that these tools have on children as well as on groups that have historically experienced discrimination or bias.

Organizations that use AI should be transparent about this use, and accountable for any AI-generated decisions about individuals, such as whether to grant someone a loan or offer them a job.

Organizations should be able to explain, on request, all predictions, recommendations, decisions and profiling that are made using automated decision systems.

Transformation

This brings us to transformation. There is no question that technology has changed, and will continue to change, the world in which we live, work, and play.

Whether we are early adopters or reluctant joiners, we must all transform ourselves and our organizations to rise to these new challenges.

While I continue to advocate for, and am optimistic about, law reform, I also want to ensure that we are doing everything that we can within my organization to adapt to the growing complexity of the digital environment and to make our services to Canadians as efficient as possible.

To that end, a year ago, I introduced a transformation plan aimed at streamlining OPC operations, making them more integrated, agile, and strategic in order to maximize the impact of our work for Canadians.

The changes have allowed my Office to respond more rapidly and effectively to emerging issues by engaging with organizations, proactively as well as reactively, to promote compliance.

The change is a recognition that not every matter requires a full, resource-heavy investigation. Sometimes, there are more efficient ways of achieving the desired result, such as through early engagement and resolution.

Our response to the breach at PowerSchool last year is an example of this new approach. My Office engaged with the company to achieve a timely resolution, by focusing on its response to the incident and its implementation of measures aimed at strengthening protection for the personal information of students, parents, and educators across Canada. My Office is continuing to monitor to ensure that all of PowerSchool’s commitments are fully met.

I want to ensure that we are using all of the tools at our disposal – and this includes AI.

Like many federal organizations, my Office has been exploring potential ways to use AI in our work. In doing so, we are also seeking to lead by example in demonstrating how privacy can enable safe, secure, and responsible innovation.

Embracing privacy by design principles, our technology team has developed an in-house AI called PrivIA that we began piloting across the Office last fall. This is important for many reasons, including deepening our understanding of this technology that we regulate, while also helping to optimize our work.

I am excited about our internal AI solution and the opportunities that this will enable.

Another important tool is collaboration, which is a central component of my tenure as Privacy Commissioner of Canada. Privacy applies to all aspects of our lives, and personal data is flowing across borders at unprecedented speed and scale.

I believe that collaboration with stakeholders at all levels – nationally, internationally and across regulatory jurisdictions – is essential to better protect and promote privacy.

It is why I launched a consultation on OPC guidance processes. It is still open and I would encourage you to share your input to ensure that the guidance and advice that my Office produces for organizations is as useful as it can be.

It is also the impetus behind the new OPC Youth Council – a bright group of students that I had the pleasure of meeting in person last month. A space for youth to share their insights, experiences, and ideas on the privacy issues that matter the most to them, the Council will play an important role in helping the OPC understand the impact that privacy issues have on youth.

In October, I became co-chair of the Federal, Provincial and Territorial Information and Privacy Commissioners and Ombuds group alongside Information Commissioner of Canada Caroline Maynard.

At our annual meeting last fall, we adopted a joint resolution on protecting the privacy of children and youth through responsible educational technologies.

Through the Canadian Digital Regulators Forum, I work closely with my cross-regulatory counterparts at the Canadian Radio-television and Telecommunications Commission (CRTC), the Competition Bureau, and the Copyright Board of Canada to strengthen information-sharing and collaboration on matters related to digital markets and platforms.

Last fall, we published a joint paper on “Synthetic media in the digital landscape,” which provides an overview of the global regulatory landscape as it pertains to content that is produced using AI or other automated technologies, and key considerations for individuals and organizations as the technology develops.

The OPC’s contribution focused on the ways in which personal information may be used to create synthetic media, such as deepfakes, which often rely on personal information to replicate images accurately.

On an international level, last June, I hosted the G7 Data Protection and Privacy Authorities Roundtable in the National Capital Region, in the context of Canada’s G7 presidency. The Roundtable adopted a joint statement affirming that prioritizing privacy throughout the lifecycle of a technology, from design to development to deployment, can allow organizations to unleash innovation and seize market opportunities in a cost-effective way. It followed up on our 2024 statement affirming that data protection authorities should play a key role in fostering trustworthy AI technologies as a way of leveraging their collective expertise to uphold privacy and ethical standards.

In September, I was honoured to be elected Chair of the Global Privacy Assembly, an international forum that brings together more than 130 data protection and privacy authorities from around the world.

My election as Chair is a recognition of long-standing Canadian leadership on the global privacy stage. Having Canada at the table of leading international privacy forums such as the GPA contributes not just to protecting privacy throughout the world, but also helps to promote Canadian interests in the global economy.

Recognizing the global nature of data protection, strategic cooperation among privacy and other regulators, along with engagement with external stakeholders including civil society, industry, and public institutions, will allow members of the Global Privacy Assembly to maximize our collective voice, impact, and capacity in providing global leadership on data protection in support of individuals and organizations.

Conclusion

During a podcast recorded with Yoshua Bengio in Davos in January, Max Tegmark, a professor at the Massachusetts Institute of Technology and the President of the Future of Life Institute, said: “We can have almost everything that we’re excited about with AI... if we simply insist on having some basic safety standards before people can sell powerful AI systems.”

Privacy protection is a team sport. We must work together to build trust, to ensure transparency, and to transform our culture into one where the right to privacy is protected by design and by default.

We cannot lose sight of the fact that the right to privacy must be at the centre of everything that we do. When we protect the right to privacy, we protect individuals. Similarly, the approach to AI must be human-centric.

The question that we need to ask is the one that privacy authorities have always tried to answer: How do we give individuals control over how their personal information is collected, used, and shared?

I recognize the delegates here today as fellow privacy and security champions.

You play an important role, acting as advocates for individuals’ privacy and security, making sure that appropriate controls are available, and helping to build a culture of privacy where data protection is seen as a strategic benefit.

Creating a culture that prioritizes responsible innovation and data privacy and security from the outset will protect current and future generations, foster innovation and growth, and promote long-term success.

Thank you again for inviting me to speak to you today.