Canadian privacy regulators launch principles for responsible development and use of generative AI
Principles launched at opening of an international symposium on privacy and AI hosted by the Office of the Privacy Commissioner of Canada
OTTAWA, ON, December 7, 2023 – Federal, provincial and territorial privacy authorities have developed a set of principles to advance the responsible, trustworthy and privacy-protective development and use of generative artificial intelligence (AI) technologies in Canada.
Privacy Commissioner of Canada Philippe Dufresne announced the new principles document today at the beginning of the international Privacy and Generative AI Symposium that was organized by his Office.
The symposium was held in conjunction with the 72nd meeting of the International Working Group on Data Protection in Technology, co-hosted by the Office of the Privacy Commissioner of Canada and Germany’s Federal Commissioner for Data Protection and Freedom of Information.
“As technologies such as generative AI play an increasingly central role in our lives, ensuring that we can benefit from such innovations while protecting privacy will be critical to our success as a free and democratic society, and a key challenge in the years ahead,” said Commissioner Dufresne.
“This game-changing technology demands a collective approach. This is why we are working closely with our domestic and international counterparts to ensure that AI is made and used responsibly.”
The privacy regulators note in their joint document that while generative AI offers potential benefits across many domains and in everyday life, it also poses risks and potential harms to privacy, data protection, and other fundamental rights if these technologies are not properly developed and regulated.
Organizations have a responsibility to ensure that products and services that use AI comply with existing domestic and international privacy legislation and regulation.
The joint document lays out how key privacy principles apply when developing, providing, or using generative AI models, tools, products and services. These include:
- Establishing legal authority for collecting and using personal information, and when relying on consent, ensuring that it is valid and meaningful;
- Being open and transparent about the way information is used and the privacy risks involved;
- Making AI tools explainable to users;
- Developing safeguards for privacy rights; and
- Limiting the sharing of personal, sensitive or confidential information.
Developers are also urged to consider the unique impact that these tools could have on vulnerable groups, including children.
The document provides examples of best practices, including implementing “privacy by design” into the development of the tools, and labelling content created by generative AI.
The Privacy and Generative AI Symposium brings together domestic data privacy regulators as well as members of government, industry and civil society to discuss the opportunities and risks involved in generative AI, and how all sectors can best work together to prepare for them. The event includes a keynote address by Gary Marcus, Professor Emeritus at NYU and co-founder of the Centre for the Advancement of Trustworthy AI.
The International Working Group on Data Protection in Technology, also known as the Berlin Group, was founded in 1983 with members from various sectors including data protection authorities, government agencies, academia and civil society. The group meets twice a year to stay ahead of technological trends and to foster the exchange of expertise among its members.
The Ottawa meeting, which runs today and tomorrow, includes discussions on a number of topical papers, including one from the Office of the Privacy Commissioner of Canada on generative AI and large language models.
For more information: Office of the Privacy Commissioner of Canada