Consultation on the Development of a Children’s Privacy Code – What We Heard

About the process

From May to August 2025, the Office of the Privacy Commissioner of Canada (OPC) ran an exploratory consultation on the development of a children’s privacy code. In that consultation, we put forward questions about how the OPC’s positions could be operationalized to ensure that private sector organizations implement strong safeguards and transparent practices for handling children’s personal information, and provide effective tools for children to meaningfully exercise their privacy rights. We invited interested parties to provide feedback that:

  • built upon the OPC’s established principles and positions and their applicability;
  • identified potential issues that the OPC should address in a children’s privacy code; and
  • informed the OPC of real “on-the-ground” challenges and/or potential solutions to protect children’s privacy.

We also outlined that stakeholder responses would be used to inform our work related to championing children’s privacy rights under the OPC’s 2024-27 Strategic Plan.

The OPC received 37 submissions, with one joint response representing the views of over 40 individuals and organizations. These responses came from a wide variety of stakeholder groups, including industry, civil society, academia, policy think tanks, legal professionals, and interested individuals. As part of the consultation, the OPC also hosted four roundtable discussions, including a roundtable with youth from across Canada.

Since the launch of the consultation, developments aimed at better protecting children online have continued, including in Indonesia, Brazil and Australia. We will continue to monitor these developments as we strive to align our children’s privacy code with other international standards.

We want to take this opportunity to thank those who contributed to this exploratory consultation.

What we heard

Respondents advised that they would welcome a code of practice clarifying obligations for handling children’s personal information. Although PIPEDA as currently drafted would not allow for it, respondents recommended that the code be enforceable. More generally, they also recommended that the code:

  • Be technology neutral. Respondents were concerned about the code keeping pace with evolving technologies and cautioned against including references to specific technologies or design methods that could “lock [them] in” and stifle the development of more efficient solutions.
  • Be non-prescriptive. Respondents believed that the code should be principles-based, with practical examples and guidance, to ensure flexibility and support responsible innovation.
  • Contain robust privacy-by-design requirements. Respondents supported proactively embedding privacy protection into design and default settings.

In general, respondents strongly believed that children’s personal information should be recognized as sensitive. As such, special protections should be required, based on developmental psychology and respect for children’s evolving capacity.

Many respondents wanted the code to also inform practices in the public sector, including institutions that regularly handle children’s personal information to provide education, healthcare, legal, and child welfare services.

Respondents highlighted the importance of different regulatory mechanisms, including the value of self-regulation. Some respondents were concerned that the code’s requirements would duplicate existing requirements in certain heavily regulated sectors. As such, respondents recommended that the code accommodate and complement existing requirements.

Respondents supported the code aligning with similar codes and guidance released in other jurisdictions. They also recommended that the code align with the protections in the United Nations Convention on the Rights of the Child (UNCRC) and its clarifying general comments (Footnote 1). Respondents felt that this alignment would assist organizations, and particularly small businesses, by providing consistency and regulatory certainty.

Many respondents noted that children could benefit greatly from legislative amendments to the Personal Information Protection and Electronic Documents Act (PIPEDA) that provide them with special protections.

Comments were identified across five themes. Generally:

1 - Application of a children’s privacy code: Respondents were generally supportive of a code that applies broadly to products and services that are used by, directed at, intended for, or likely to be accessed by children. They also thought that the code should follow a risk-based approach to ensure that it is applied with flexibility, proportionality, and respect for the best interests and evolving capacity of the child.

2 - Enabling the exercise of children’s privacy rights: Respondents believed that the code should recognize children’s agency and vulnerabilities. They believed that consent mechanisms should include consideration of capacity, while respecting any applicable legal thresholds. Respondents also supported requirements regarding deletion.

3 - Designing to address privacy impacts and the best interests of the child: Respondents believed that organizations should be required to integrate Privacy by Design, and ensure that the best interests of the child are a primary consideration in the design and deployment of products and services accessed by children. Respondents believed that this could be achieved through a Privacy Impact Assessment (PIA) process that considers the impacts on the child, along with other tools such as a Children’s Rights Impact Assessment (CRIA).

4 - Ensuring child-appropriate transparency practices: Respondents thought that privacy notices should be understandable to both parents and children and provided examples of how this could be done, as well as what information should be communicated to users.

5 - Being privacy protective by default: Respondents felt that digital spaces occupied by children should be built in a privacy-protective manner, which could be achieved by establishing high default settings, limitations on what data can be shared, and prohibitions on certain inappropriate practices, such as using deceptive design. Respondents proposed many potential no-go zones, with strong support for limiting collection and disclosure of location information and biometric data.

Further details on each of these themes are included below:

Theme one: Application of a children’s privacy code

  1. When the Code Should Apply

Respondents were generally supportive of a code that applies broadly to products and services that are used by, directed at, intended for, or likely to be accessed by children, in alignment with international best practices.

Some responses indicated that the code should also apply to products and services that are likely to impact children even if no children actually access the product or service. For example, this could include an app that a daycare service provider uses to transmit information about a child to their parent or guardian. This would recognize that children’s rights can be impacted even when their personal information is being provided by someone else. Respondents also suggested that the code should apply to personal information about a child even after that person is no longer a child.

Some respondents believed that the code should explicitly exclude services not likely to be accessed by children, such as business-to-business services. Respondents also believed that where organizations separate adults and children into different online environments, the code should not apply to the environment intended only for adults. Similarly, respondents thought that devices owned by adults or accounts managed by adults should be excluded from the code.

Some respondents thought that there should be a “tiered approach” with different privacy requirements for products and services for “mixed audiences” versus those intended only for children. However, others, including youth, believed that children’s rights should be protected whenever they interact with a product or service, even if they are not the intended audience. In this vein, some respondents believed that certain obligations, such as the right to delete a child’s personal information, should always apply.

Youth Perspectives

Youth respondents pointed out that they frequently access sites that are not intended for them.

People often fake their age, but I understand why lots of people my age do. A lot of times it’s about social pressure and trying to gain access to things their friends are using. Trying to be part of the world their peers are in.

Youth respondents wanted all sites that they visit to have transparent data practices, have high default privacy settings, and to operate with their best interests in mind.

Despite the differences in approaches, respondents were generally supportive of the code covering all products and services that are “likely to be accessed” by children, with the OPC developing guidance on how to assess this likelihood. Respondents suggested that the OPC develop a non-exhaustive list of factors to consider, in alignment with the UK Information Commissioner’s Office’s (ICO) guidance (Footnote 2) and the US Federal Trade Commission’s (FTC) approach, which considers the “totality of the circumstances” (Footnote 3). For instance, respondents recommended that organizations assess factors such as:

  • Whether the content, language, or features of the site are directed at or likely to appeal to children;
  • The nature of the product or service;
  • How the product or service is presented and advertised;
  • Whether user demographics of the product/service or a similar product/service indicate that it is “likely to be accessed by children”; and
  • Whether effective age controls are in place.

Respondents noted both technical and privacy challenges associated with determining whether a product or service is likely to be accessed by children, such as when and how to collect personal information to make the determination without over-collecting. Some respondents suggested that determining if a product or service is likely to be accessed by children could be done through simple, privacy-protective methods such as the use of aggregate and anonymized data; voluntary declarations with parental onboarding; and other methods of age assurance (Footnote 4). Respondents believed that emerging technologies and business practices could help to improve the quantity and quality of evidence available.

Many respondents were not supportive of the code only applying when a “significant number” of children access the product or service. They argued that this approach could require organizations to collect more personal information than is necessary to make this determination, fail to consider the risks associated with using the product or service, and allow organizations to avoid obligations if they prohibit users under a certain age or introduce more adult-oriented content. Respondents also argued that assessing “significance” should not just be based on the number of children potentially affected but on how their rights, interests, and well-being are affected. Other respondents believed, however, that identifying the number of child users could assist organizations in determining if their product or service is intended for a “mixed audience”, which would have implications for how the code’s requirements apply.

  2. How the Code Should Apply

Respondents supported the code applying in a way that adapts requirements to the risks to children, to ensure that it is flexible, contextual and proportionate. For example, products and services that are clearly harmful to children would have to comply with stricter rules, while those with more innocuous content would have more general obligations. Mixed audience products and services could fall in the middle.

Generally, respondents felt that the approach should be:

  • Flexible: A risk-based approach could ensure that the code is scalable, allowing organizations to adapt as technologies and data practices evolve. However, others were concerned that flexibility could have unintended consequences, like allowing organizations to invest less in privacy protections and collect more data.
  • Contextual: A risk-based approach could consider specific contexts such as when a privacy-invasive practice is for the benefit of the child, for example, to detect or prevent fraud, or using geolocation data in the case of an emergency.
  • Proportionate: Respondents noted that a risk-based approach that considers children’s capacity could ensure that the code does not unfairly limit children’s access to beneficial services or undermine their best interests.

Some respondents raised concerns with implementing a risk-based approach, including that risk assessments may vary in quality between organizations, leading to inconsistent application of the code, and that not all risks may be apparent at the outset.

Theme two: Enabling the exercise of children’s privacy rights

Respondents believed that to enable children to exercise their rights, the code should:

  • Recognize children’s autonomy and agency, as well as their vulnerability and need for protection;
  • Move beyond rigid age-based models towards contextual assessments of capacity based on research and evidence, while respecting any applicable legal thresholds; and
  • Affirm children’s rights and encourage tools that help them learn, develop and participate in the global economy.

Respondents recommended that the code maintain a flexible approach regarding consent that respects children’s best interests, evolving capacity and autonomy as set out in the UNCRC. This way, children would be able to gradually engage more independently as they gain capacity. Respondents suggested that guidance developed by UNICEF could provide useful insights on how to assess capacity and respect the evolving capacities of children (Footnote 5).

Respondents believed that consent should always be active and explicit when organizations are obtaining consent from either children or their parents/guardians. Respondents also wanted the code to specify that consent could not be obtained through deceptive design or by exploiting children’s vulnerability, and that organizations should have to communicate information and design products with children’s best interests in mind. Respondents suggested that users be provided with options that group together purposes with similar risks or similar practices to avoid overwhelming children.

Aligning with international best practices, respondents supported having different processes for obtaining consent from different age groups, based on the risk of harm. Most respondents supported the OPC’s current position that in all but exceptional circumstances, children under 13 are unable to meaningfully consent to the collection, use and disclosure of personal information and that consent must, therefore, be obtained from a parent or guardian. Respondents believed that any age thresholds set out in the code should consider international approaches to the implementation of children’s rights in early childhood and adolescence (Footnote 6). Suggestions for consent approaches were provided for specific age groups.

When obtaining parental consent, respondents believed that the process should:

  • Enable parents and guardians to manage their children’s digital interactions and exposure, while respecting children’s evolving capacity and autonomy;
  • Not present a barrier to children obtaining additional privacy protections; and
  • Consider and reflect the diversity of children and familial relationships.

Respondents recommended that the code support contextual, easy-to-use methods of obtaining parental consent, such as using linked accounts and in-app confirmations, rather than more privacy-invasive methods like verifying birth certificates or proof of guardianship, unless necessary. However, some respondents pointed out that methods like self-declaration can be easily bypassed, while a single sign-in across providers and hard identifiers such as credit cards, debit cards or government ID were more reliable. Respondents supported the use of privacy dashboards and parental controls to make it easy for parents to enable age-appropriate experiences for children, manage personal information, and exercise their child’s privacy rights. Respondents also supported the development of a regulatory sandbox to test privacy-protective consent models.

Some respondents noted concerns with relying on parental consent, cautioning that the rights exercised by parents and guardians should not be absolute, and that their interests should be balanced with those of the child. As well, respondents pointed out that parents or guardians may not always be available to provide consent, delaying or restricting a child’s access to a product or service. This can create barriers, especially for at-risk and marginalized children. Respondents also pointed to situations where obtaining parental consent may not be practical, appropriate, or safe, such as when children are seeking help with abuse by a family member. Respondents were also concerned that verifying parental or guardian relationships might require parents or guardians to provide more personal information than necessary.

Respondents also believed that consent and transparency alone were not enough to protect children’s privacy, as they place the burden on the child to navigate dense, technical information about complex data practices. This is particularly concerning as children might not be able to understand privacy risks or long-term consequences. As well, children might be more susceptible to consent fatigue, lack of knowledge, or businesses’ opaque data practices. As such, respondents believed that proactive approaches, like building safer spaces for children by using high default settings, privacy by design and data minimization, could be more effective than simply relying on consent.

Respondents also supported the code clarifying that children have a right to de-indexing or de-listing, and that parents or guardians, in some circumstances, could exercise this right on their behalf. Respondents also believed that children should have a clear right to erasure, similar to the right in the European Union and Quebec, including a requirement for organizations to notify third parties of requests and valid exceptions for legal or security reasons. Respondents believed that the code should provide a right for children to challenge personal information about them provided to an organization by someone else, based on their ability to withdraw consent and challenge the accuracy of that information.

Youth Perspectives

Youth respondents indicated that children need to be able to understand the consequences of their privacy decisions to be able to provide meaningful consent, and when they have questions, they should be able to go to their parents or guardians for advice and guidance.

For me I’m a strong believer in whatever age you are, if you can’t understand the consequences of social media, you shouldn’t be able to use it without a parent’s consent. There should always be some kind of dialogue, it’ll be different between 12 year olds and 17 year olds. You should have an idea of what’s safe and what isn’t safe. You should go to your parents with issues.

Youth wanted parents and guardians to be involved, but in a way that reflects the youth’s capacity and the content involved. They felt that parents and guardians should be close by for advice with settings, but not necessarily always monitor their activities, especially as they get older. Youth wanted to be able to exercise decision-making over lower-risk activities like choosing usernames or avatars.

Theme three: Designing to address privacy impacts and the best interests of the child

Respondents believed that the code should ensure that the best interests of the child are a primary consideration in the design, development and deployment of products and services likely to be accessed by children, and that organizations should also have to integrate privacy by design in their products and services. Respondents agreed that this can be achieved through a Privacy Impact Assessment (PIA) process that considers how a product or service may impact children.

Respondents believed that PIAs should be required under the code for products and services targeting children and for those with mixed audiences where there is a higher risk of harm. They also provided feedback regarding how PIAs under the code could be implemented. For example, respondents suggested that organizations:

  • Complete PIAs that consider the level of risk, as well as the severity and scale of the potential harm, and that demonstrate a commitment to mitigating potential harms;
  • Document their mitigation strategies and implementation plans in a PIA under the code, which would help them be able to demonstrate accountability, while also respecting the need to protect commercially confidential information;
  • Not deploy inherently high-risk features if less harmful alternatives exist and be encouraged to assess and mitigate privacy risks early in the design process;
  • Have their PIA processes informed by research on child development and consultations with a diverse cross-section of stakeholders including youth, parents, teachers, advocates, pediatric experts, government and industry, with special regard for vulnerable populations; and,
  • Update PIAs regularly, and prior to any significant feature changes.

Respondents also felt that these PIAs could inform the design and default settings of products and services, as well as organizations’ consent and transparency frameworks. Respondents believed that, under the code, a PIA would help organizations to:

  • Demonstrate why and how the collection, use or disclosure of children’s personal information serves children’s wellbeing rather than commercial interests based on supporting evidence; and
  • Proactively integrate children’s best interests and developmental needs into the design process.

Further, respondents believed that a risk taxonomy, developed by the OPC in collaboration with relevant stakeholders, should clarify obligations for complex, high-risk sectors where risks are known, thereby helping organizations complete a PIA. Respondents believed that the risk taxonomy should balance the risk of harm with the best interests of the child and remain responsive to emerging practices and technologies.

Respondents provided many examples of features that can lead to risk of harm, including, among others:

  • Systems making behavioural recommendations trained on children’s data;
  • Algorithmic nudging toward over-disclosure (e.g., streaks and simulated affection);
  • AI agents interacting with minors;
  • Default settings that enable location tracking, biometric profiling, or emotion-inference models;
  • Addictive design features, such as infinite scroll or autoplay, designed to maximize engagement time;
  • Social pressure mechanics (e.g., public follower counts, “seen” indicators);
  • Gamification features that reward data disclosure;
  • Age-inappropriate marketing techniques;
  • Collection of unnecessary personal information for advertising purposes; and
  • Photo manipulation and deepfake technologies. We heard that these particular technologies can pose serious risks to Indigenous girls, who face disproportionate rates of violence, oversexualization, and trafficking.

To identify harms, respondents believed that organizations should be encouraged to implement the following practices:

  • Monitor products and services by, for example, using child user surveys and behavioural analytics tracking usage patterns;
  • Maintain awareness of new research on child development and digital harms; and
  • Partner with mental health organizations and youth-led organizations to identify harms.

Many respondents emphasized the importance of recognizing that children’s privacy rights are connected to other rights, and that the best interests of the child must be prioritized over commercial interests. To help operationalize this, respondents recommended that organizations complete a Children’s Rights Impact Assessment (CRIA) that would consider the impact on children’s rights broadly, while also considering contextual elements such as the risk of harm, age, and the nature of the product or service, and implement “child rights by design” principles.

Youth Perspectives

Youth respondents indicated an interest in being involved in the development and design of products and services, and felt that their participation could lead to better outcomes.

Talk to us and get our opinion. Doing things like beta testing your sites with young people. Making sure your sites work well for kids. Do surveys, consultations, having more open discussions could help us come to better solutions.

We heard that youth want organizations to have their best interests in mind when designing products and services, to educate children on risks, and to provide them with tools that offer protection.

Theme four: Ensuring child-appropriate transparency practices

Respondents pointed out that both children and adults are often unable to understand the opaque terms of service presented by organizations. They believed that efforts should be made to present information in a way that helps people, including children, to make informed choices. Respondents highlighted research indicating that children expect greater transparency: practices that clearly provide them, at appropriate moments, with relevant information about how their information will be used and what the impacts are. Respondents believed that organizations should help build children’s capacity to understand privacy, rather than expect them to make informed decisions about their privacy in high-pressure moments.

Respondents specifically suggested that privacy notices indicate whether artificial intelligence is used and how automated decisions are made. Additionally, we heard that there should be economic transparency around any monetization of the user’s personal information.

Respondents believed that the transparency requirements in the code should be scalable for organizations of different sizes, and align with international approaches and standards for age-appropriate design. Specifically, respondents believed that published terms of service should:

  • Language: Use clear, simple, plain language that the youngest likely user can understand. Organizations should ensure that the key elements of the terms of agreement have been understood.
  • Length: Be concise, complete, and divided into clear bite-sized sections with appropriate headings and bullet points so they can more easily be understood.
  • Format: Be presented in multiple formats for different age ranges. Formats could be made effective through interactive layering, quizzes and gamified modules. Audio/visual formats can be tailored to suit the medium, including icons, graphics, images, illustration, colour-coded visual aids, infographics, and videos. Any written notices should be provided in an easy-to-read font.
  • Navigability: Be prominent, easy to find and searchable, and structured to provide the information a child needs or wants to know. This can be done through simplified settings that emphasize important information, and hyperlink to further information and modules.
  • Timing: Be presented at relevant moments in the user journey and on an ongoing basis. However, just-in-time notifications should not be required each time that data is shared, especially when such sharing is necessary for core services.
  • Inclusivity: Be written and presented in an accessible way that considers the needs of diverse groups of children and not assume adult engagement. Terms should be readable by children of varying ages, maturity and literacy levels. Language related to concepts such as “ownership,” “custody,” and “control” should be respectful as these are not neutral terms for Indigenous Peoples.

Respondents felt that transparency-related information should be jointly presented to both children and their parents and guardians to encourage discussion about privacy risks. When information is provided separately, respondents believed that it should be consistent and complementary. Respondents suggested that parents and guardians be provided with tools that help them understand and manage their children’s privacy settings, such as dashboards for family settings and parental controls. Respondents believed that organizations should provide resources like parental guidelines when parents or guardians are registering a child for a product or service. This could be supplemented with the OPC’s parental resources or external age-appropriate guides that help explain and evaluate privacy risks.

Respondents recommended that organizations consult with a diverse cross-section of children and external research to ensure that their transparency practices are age appropriate. Respondents also recommended that staff be trained to effectively communicate with various audiences, including children.

Youth Perspectives

Youth respondents felt that privacy policies were too complex and lengthy.

They should be direct and concise. Get rid of the legal jargon. Young people don’t have a lot of experience, they might want symbols beside them. Make it interactive!

We heard that youth want information presented in a simple and visual way that highlights key information.

Theme five: Being privacy protective by default

Respondents felt that the spaces that children navigate should be built in a privacy-protective manner, and that this could be achieved by establishing high default privacy settings, setting limits on what data can be shared and for how long it should be retained, and prohibiting certain practices that are inappropriate, such as deceptive design.

  1. High Default Settings

Respondents supported the code restricting disclosures and requiring organizations to have high default settings, unless it is in the best interests of the child to do otherwise. Respondents believed that requiring privacy by design would: provide a robust baseline of protection; alleviate some challenges associated with obtaining and managing informed consent; shift the burden from children and parents to organizations; and ensure that children’s personal information is protected regardless of their understanding of the data practices.

However, respondents believed in a flexible approach. For instance, there was support for having high default settings that would limit the use of location and biometric data, with specific exceptions where appropriate. Respondents thought that default settings should take into consideration the best interests of the child, the necessity of the personal information to the “core functionality” of the product or service, and the risks of harm. Respondents also noted that monetization of children’s personal information should not be considered a core function of a product or service.

Respondents also felt that high default settings may not be sufficient if, for instance, deceptive design is used to incentivize users to turn them off or modify them. As well, respondents thought that users should not be able to opt out of uses where doing so would result in a significant negative impact (e.g., fraud detection and prevention). We also heard concerns that requiring high default settings could undermine a child’s right to access sites and services, such as when mature minors who have the capacity to decide are denied access based on their age alone. Therefore, the code should provide organizations with the ability to challenge limitations with appropriate evidence.

  2. Limiting Disclosures

Respondents believed that organizations should generally have to use contractual and technical measures to limit third parties’ use and disclosure of children’s personal information. Respondents believed that disclosure requirements must be appropriately contextualized, specific, and accompanied by tools that assist parents or guardians in making decisions to consent to disclosures.

To reduce the likelihood of a breach and limit the possibility that children’s data will be used beyond its initial purpose, respondents recommended that information only be retained as long as reasonably necessary. Respondents believed that retention obligations should not be overly prescriptive, and should consider the sensitivity of information and risks of organizations retaining it beyond what is necessary.

  3. No-Go Zones

Respondents thought that the code should include child-specific no-go zones. Respondents believed that purposes that do not align with the child’s best interests, undermine their rights, or are unlawful, including high-risk data processing whose risks cannot be mitigated through parental involvement or supervision or through technical measures, should be deemed inappropriate and considered no-go zones.

Respondents suggested that the following practices be considered no-go zones as they are incompatible with children’s rights and present a high risk of harm:

  • Collecting and using children’s data to conduct routine or indiscriminate surveillance;
  • Collecting and using children’s data without the child’s or, where appropriate, the parent or guardian’s knowledge;
  • Using children’s data for behavioural profiling or commercial exploitation rather than the child’s best interests;
  • Collecting children’s data in a way that undermines the child’s agency, safety, or dignity; and
  • Sharing children’s data beyond the original context without transparency, consent, and a clear benefit to the child.

  4. Deceptive Design

Respondents supported the code prohibiting deceptive design practices, as research shows that children are more vulnerable to deceptive design, putting them at risk of financial, psychological, and autonomy-related harms. Respondents asked that the OPC distinguish harmful deceptive design patterns from legitimate user engagement strategies. Respondents also supported the Government amending Canadian privacy laws to ensure that such a prohibition is enforceable and appropriately sanctioned.

Specifically, respondents recommended that organizations refrain from:

  • Using default settings and designing products or services that facilitate extensive data collection, guide user behaviour, influence decisions, and manipulate children into sharing unnecessary personal information;
  • “Nudging” children to share unnecessary personal data or sensitive information without a direct benefit to them; and
  • Including complex and opaque language in privacy notices.

Instead, respondents recommended that organizations build upon privacy by design principles by:

  • Ensuring that privacy settings and user interfaces are presented in age-appropriate, user-friendly ways that avoid pressure tactics or misleading design;
  • Testing interface elements for clarity and effectiveness, and presenting privacy choices using neutral language;
  • Promoting privacy-protective behaviours through opt-in settings for non-essential data uses, using default-on protections, and reminders about data-sharing choices;
  • Providing proactive and guided prompts and nudges (paired with parental involvement) before sharing, and interactive tutorials, role-based choices, or storytelling formats; and
  • Reflecting on how best to communicate and encourage privacy protective behaviour in the context of their product or service and using existing developmental research.

Youth Perspectives

Youth respondents felt that there should be high default settings that limit collection to what is necessary for the product or service.

Companies should set strong default privacy settings. Setting profiles private by default. If youth are the primary audience, you have to be clear instead of a pop up of legal jargon. You have to limit data collection to that which is necessary for using the service.

The youth we spoke to were concerned about the collection and use of certain types of personal information, such as location and biometric data, and wanted restrictions on features such as chat bots and livestreaming to ensure their safety.

Next steps

The OPC is currently preparing a children’s privacy code that builds upon the responses received during the exploratory consultation. We will also consider all recommendations that were put forward for guidance to assist organizations in applying the code to specific sectors and technologies.

We will continue our direct engagement with youth, including through the OPC’s new Youth Council, to better understand youth perspectives, and we intend to engage youth in preparing a youth-friendly version of the code. We will also continue to monitor Canadian and international developments, and we look forward to continued engagement with stakeholders on this topic.
