

Keynote Remarks at the Big Data & Analytics Montréal Summit 2023

October 3, 2023

Montréal, Québec

Address by Philippe Dufresne
Privacy Commissioner of Canada

(Check against delivery)

Good morning,

It is my pleasure to be here today in my hometown of Montréal to participate in this important conference on big data and analytics, and the expanding role that artificial intelligence and generative AI are playing in these areas, in both the public and private sectors.

I see from the agenda and the distinguished list of fellow speakers that I am standing among some of Canada’s great innovators. For that, I am both humbled and heartened.

As leaders, we know that there is tremendous opportunity in harnessing data but there are also serious risks that must be identified and mitigated. I am so pleased that the organizers chose to anchor the discussions that will take place over the next two days with a talk about privacy and the protection of personal information.

All of you play an important role in building a path toward a future that leverages data in a way that protects Canadians' fundamental right to privacy, while optimizing innovation.

As technology plays an increasingly central role in our world, our lives, and our economy, ensuring that we can benefit from the advances, innovations, and conveniences that it brings while protecting privacy will be critical to our success as a free and democratic society, and a key challenge for Canada’s institutions in the years ahead.

Since my appointment as Privacy Commissioner in June of last year, I have set my vision for privacy as one that reflects the reality that Canadians want to be active and informed digital citizens, able to fully participate in society and the economy, without having to choose between this participation and their fundamental privacy rights.

My vision has three pillars, which are:

  1. Privacy is a fundamental right, which I was happy to hear that Minister Champagne has agreed to make explicit in Bill C-27;
  2. Privacy supports the public interest and Canada’s innovation and competitiveness; and
  3. Privacy accelerates the trust that Canadians have in their institutions and in their participation as digital citizens.

We know that privacy matters to Canadians and that they are concerned about the impact of technology on their privacy. Our last survey of Canadians found that 93 per cent have some level of concern about protecting their personal privacy, and that half do not feel that they have enough information to understand the privacy implications of new technologies. Canadians want and need to trust that their privacy rights are being protected so that they can feel confident about participating freely in the digital economy, which in turn is good for businesses and governments alike.

We also know that organizations in both the public and private sectors are having to adapt to the scale and pace of technological advancements within a regulatory environment where there are many jurisdictions with different laws and standards, making compliance often seem complicated and costly. Nonetheless, they are working hard to operate and innovate in a manner that protects the personal information of Canadians, their customers, and their clients.

With that in mind, today I would like to discuss the importance of fostering a culture that prioritizes the protection of privacy in our data-driven world, and share some of the work that my Office is doing both domestically and internationally in this regard.

In particular, I would like to talk about our work in the areas of artificial intelligence, cross-border data flows, as well as what we can do to help enable, guide and support organizations in complying with applicable privacy laws now and in the future, and why that is not only necessary but also a smart investment to make.

Generative AI and Privacy

Addressing the privacy impacts of rapidly advancing technology is a key focus area for my Office. In 2023, that means keeping a close eye on developments in the world of generative AI, which advanced by leaps and bounds last year when ChatGPT brought the seemingly limitless possibilities of generative AI to the fingertips of anybody with an Internet connection. Just this week, we are seeing articles about Sam Altman and Jony Ive creating an AI device.

The use of massive information sets – which often include personal information – to generate content such as text, computer code, images, video, or audio in response to a user prompt is a game changer and generative AI holds incredible promise in advancing innovation, efficiency, and convenience.

But it also comes with important privacy concerns about the collection and use of personal information as training data, transparency and explainability of data sources and AI decision-making processes, consent mechanisms, accountability for system processes and outcomes, and the accuracy of decisions, including information generated through inferences and the risk of bias.

Some have likened generative AI to opening a Pandora’s box. Even Sam Altman, the CEO of the company behind ChatGPT, has urged caution and called for a coordinated global regulatory response to the technology.

Earlier this year, my Office announced that we had launched a joint investigation with several provinces into OpenAI, the company behind ChatGPT, to determine whether the organization’s practices comply with Canadian privacy law. This is a reminder that while our laws need to be modernized, they do currently apply in this space, as the Federal Court of Appeal confirmed in its Google decision of last Friday, where the Court agreed with my Office’s position that PIPEDA applies to Google’s search engine.

That investigation into OpenAI is ongoing, and we are continuing to monitor these and other new technologies so that we can anticipate how they may impact privacy, recommend best practices to ensure compliance with privacy laws, and promote the use of privacy-enhancing technologies.

For instance, I recently joined 11 other data protection authorities from around the world in issuing a joint statement that calls on social media companies to take steps to prevent unlawful data scraping. The statement urged companies like Facebook, TikTok and YouTube to implement controls to prevent, detect and respond when such activities are suspected.

The automated extraction of data from the web is a hallmark of many generative AI models and poses an important risk to privacy. Scraped personal information has been used for targeted cyberattacks, identity fraud, creating facial recognition databases, unauthorized police intelligence gathering, unwanted direct marketing and spam.

Indeed, in a recent OECD report on generative AI, threats to privacy were among the top three risks identified by G7 members to achieving national and regional goals.

This is why generative AI was also the focus of a joint statement by G7 data protection and privacy authorities that my colleagues and I issued in Tokyo this past June.

Together, we called on developers and providers of generative AI to embed privacy in the design, conception, operation, and management of their new products and services. Potential privacy issues must be considered and mitigated in the foundational stages of any initiative. We welcomed the G7 Digital and Technical Ministers Declaration of April 2023 which reinforced the position that AI-related laws, regulations, policies and standards should be human-centric and based on democratic values, including the protection of human rights and fundamental freedoms and the protection of privacy and personal data.

Fostering a culture of privacy, that advances privacy by design, and adopts privacy by default, will further support and enable responsible innovation.

We also urged companies to consider globally recognized privacy principles in the development and delivery of products and services, such as data minimization, data quality, purpose specification, use limitation, security safeguards, transparency, rights for data subjects including the right to be informed about the collection and the use of their personal data, and accountability.

We ultimately reminded companies that existing privacy laws apply to generative AI products and services, even as governments around the world seek to develop laws and policies specific to AI.

I was pleased to see this statement included in the voluntary AI code of conduct unveiled by Minister Champagne last week. My international colleagues and I are continuing to work in this important area and I will be hosting a meeting of the GPA Working Group on Emerging Technology alongside my colleague from the German DPA in Ottawa this coming December.

Law reform and the regulation of AI in Canada

On the legislation front, there have been a number of developments in Canada over the last month or so with respect to the regulation of AI.

Just last Thursday, I welcomed the opportunity to share my views on Bill C-27, the Digital Charter Implementation Act, before the House of Commons Standing Committee on Industry and Technology. While I was able to share my opening remarks before the Committee was seized of a motion last week, I look forward to returning to Committee as soon as I am invited back to share my views on this important Bill.

The Bill includes the Consumer Privacy Protection Act, or CPPA, which seeks to modernize Canada’s federal private sector privacy law and the new Artificial Intelligence and Data Act, or AIDA, which aims to bring a regulatory framework to artificial intelligence.

I was encouraged by the introduction of this Bill, and look forward to seeing it progress through the legislative process. The CPPA addresses a number of concerns that were previously raised by my Office and others. For example, it requires that information used to obtain consent be in understandable language; it provides my Office with order-making powers; and it includes an expanded list of contraventions to which administrative monetary penalties may apply in cases of violations.

The introduction of AIDA could make Canada one of the first countries to regulate AI, which is important given the technology’s potential risks. Although AIDA does not specifically address privacy risks, the CPPA would apply to the processing of personal information within AI systems and I have recommended ways to improve this.

Overall, the Bill is a step in the right direction but, as I have said, it can and must go further to protect fundamental privacy rights. I am encouraged by comments Minister Champagne made last week about his intent to introduce amendments that are in keeping with our recommendations, including language recognizing privacy as a fundamental right and more robust children’s privacy protections, among others. Privacy law reform is overdue and must be achieved.

I have presented parliamentarians with a submission setting out 15 key recommendations. This includes a recommendation that organizations be required to conduct Privacy Impact Assessments to ensure that privacy risks are identified and mitigated for high-risk activities. An important example would be using artificial intelligence to make life-changing decisions about individuals, such as whether they get a job offer, qualify for a loan, pay a higher insurance premium, or are flagged for suspicious or unlawful behaviour.

We also recommend that the definition of “de-identified information” be modified to include the risk of re-identification, that Canadians be given the right to request an explanation when an AI system makes decisions that affect them, and that my Office have more flexibility in negotiating and enforcing compliance agreements and in cooperating and communicating with other regulators. This is important in many areas but will be crucial when dealing with AI and generative AI.

While our recommendations focus on the CPPA, some of them would also apply to AIDA. For instance, AIDA provides significant authority to the government to define key aspects of the law by way of regulation. This would include, for example, determining what does and does not constitute justification for a discriminatory AI decision for the purposes of the definition of biased output.

The government could also establish criteria through regulation for the purposes of defining a high-impact system, or determining measures with respect to the way that data is anonymized, and how that data can then be used and managed.

Given that all of these could potentially have privacy implications, it will be important to ensure that there is a formal mechanism for my Office to be consulted in the drafting of these regulations.

Last Wednesday, the Department of Innovation, Science and Economic Development Canada launched a voluntary code of conduct on the responsible development and management of advanced generative AI systems. At the time, a dozen companies and organizations had already signed on to adhere to the voluntary code, including BlackBerry, OpenText, Telus and the Council of Canadian Innovators, which represents more than 100 start-up companies across Canada. It followed the publication of a scene-setter document in August to guide consultations with stakeholders and AI experts on a potential code of practice. The Code points to the G7 Declaration on AI, and states that it does not in any way change existing legal obligations that organizations may have, for example under PIPEDA.

My Office has an essential role to play to ensure the protection of privacy and fundamental rights and freedoms in the regulation of AI. That is why it is important that we are integrated into Canada’s AI regulatory framework.

Finally, it is important to note that generative AI is not strictly the domain of the private sector. Its use and development are also of great interest to governments and several federal departments have already deployed it. It is also cross-regulatory, touching competition, copyright, human rights and other fields. For this reason, I have recently announced the creation of the Canadian Digital Regulators Forum alongside my colleagues, the Competition Commissioner and the Chairperson of the CRTC.

Last month, the Treasury Board of Canada issued a guide for government departments and agencies on the use of AI. The guide encourages federal institutions to explore how they could use generative AI tools to support their operations and improve outcomes for Canadians, and outlines the parameters in which they should operate.

I believe that the guide is a good start. My Office is currently working with domestic and international privacy counterparts on research and policy initiatives in the area of generative AI and we hope to be in a position very soon to contribute even more to this important conversation on responsible AI.

We expect that the results of our ongoing joint investigation into ChatGPT will also help to inform our recommendations to both the public and private sectors with respect to the use of generative AI technology.

AI is a global issue that demands a global approach. The same is true for cross-border data flows.

Trans-border data flows

As the regulator of one of the world’s most advanced digital economies, I am working closely with my colleagues, in particular at the G7, but also the GPA, APPA, and AFAPDP in global discussions on digital issues, including the adoption of higher standards for data protection around the world.

It is important that Canada’s privacy laws be interoperable with other laws, both domestically and internationally, to facilitate and regulate exchanges that rely on personal data, and to reassure citizens that their personal information is subject to similar protections when they or their data crosses borders.

Indeed, it is essential if Canada is to continue doing business with Europe. As we saw with the United States, there is a real risk that countries may lose their adequacy status with Europe if the European Commission assesses that their data protection laws do not guarantee a level of protection equivalent to that of the GDPR.

A little over a year ago, I joined my fellow G7 Data Protection and Privacy Authorities in Germany to discuss regulatory and technology issues in the context of cross-border data transfers. The conversation centred on the concept of “Data Free Flow with Trust,” which aims to build consumer trust by ensuring high global data protection standards for information flowing across borders, with the right to privacy and data protection as a guiding principle.

We shared information about “international data spaces”, which can be seen as an emerging approach to trusted and voluntary data sharing within and across organizations and sectors, domestically or internationally, to support innovation in academia, industry and the public sector.

We also discussed international data transfer tools, such as certification mechanisms, privacy-enhancing technologies and de-identification standards, as well as important privacy principles including data minimization and purpose and use limitation, and the role of data protection and privacy authorities in AI governance.

I presented a discussion paper on the de-identification of data, which can be a privacy protective practice with potential benefits to the public good, for instance with respect to public health.

Last month, I had an opportunity to participate in a virtual panel discussion on Data Free Flow with Trust hosted by the Global Privacy Assembly in collaboration with the Organisation for Economic Co-operation and Development (OECD), which has released a report on the subject with a survey of organizations’ needs and challenges.

The OECD report provides a striking overview of the extent to which countries around the world are developing and implementing privacy laws.

The report rightly observed that the ubiquitous nature of privacy laws illustrates the diversity of compliance obligations that organizations face.

This highlights the importance of having mechanisms in place that put interoperability into practice, an example of which is the Global Cross-Border Privacy Rules system.

This international privacy certification system, which is based on a set of privacy rules commonly agreed upon amongst participating jurisdictions, can bridge any domestic differences in privacy approaches that may exist.

It can also provide assurances to consumers that their information will receive a consistent level of protection as it travels across international borders.

With the introduction of Bill C-27, the report is timely indeed from a Canadian perspective.

This discussion on certifications resonated with me personally given that Bill C-27 provides for a new scheme for certifying organizations’ privacy practices against the requirements of the law.

If adopted, the proposed new certification program could provide an effective means for organizations to gain a deeper understanding of their own privacy practices, and could offer helpful assurances to consumers, which in turn, generate trust.

Under the proposed scheme, I would be responsible for reviewing and approving such certification programs. Since certified organizations would remain subject to the law, I would also retain the ability to exercise my full range of enforcement powers in cases of violation.

An interesting feature of this and other certification programs is that they allow other actors – certification program operators in our case – to play a role and take on responsibilities to monitor and verify privacy practices and incentivize compliance.

In this respect, being able to leverage the actions of these actors could prove to be quite beneficial to a regulator and to parties.

If adopted, Bill C-27 would explicitly require organizations to make information available about whether or not they carry out certain international transfers or disclosures of personal information.

I believe that this information will help individuals weigh the associated risks and make more informed choices about their personal information and which companies they do business with.


Cross-jurisdictional discussions about trans-border data flows and the impact of new technologies like generative AI on privacy are ongoing both domestically and internationally.

I look forward to continuing to find ways that we can all work together as industry leaders, governments, regulators, consumers, and citizens so that Canadians can benefit from the many conveniences that technology affords without having to sacrifice their own personal privacy.

I believe that Canada can be an innovation hub and a model of good government while protecting the personal information of Canadians. It is not a zero-sum game.

We can have privacy and the public interest. We can have privacy and innovation. Doing so, we can accelerate the trust Canadians have in their institutions and in the opportunities of the digital economy.

I wish you all a wonderful rest of the conference.

Thank you.
