Telecommunications firm failed to obtain appropriate consent for voiceprint authentication program

PIPEDA Findings #2022-003

March 30, 2022

Complaint under the Personal Information Protection and Electronic Documents Act (the “Act”)

Report of findings


The complainant, a customer of Rogers Communications Inc. (“Rogers”), alleged that Rogers had improperly enrolled her in its voiceprint-based biometric authentication program, Voice ID. The complainant claimed that when asked by a customer service representative (“CSR”), she declined to consent but was nonetheless enrolled in the program. The complainant subsequently opted out of the program but later discovered that she had once again been enrolled without her knowledge or consent. She also challenged the appropriateness of Rogers’ Voice ID program.

Rogers explained that the Voice ID program was designed to be an authentication and anti-fraud solution for securing customer accounts. The Voice ID solution developed algorithmic voiceprints by passively listening to callers in the background, a process known as “tuning”. This print would then be assigned to a client account, ostensibly with express consent, a process referred to as “enrolment”. On subsequent calls, the voiceprint of the caller would be matched in the background to authenticate the caller.

Appropriate Purposes: First, our Office determined that Rogers was collecting and using personal information for a purpose that a reasonable person would consider appropriate in the context of its Voice ID program. In our view, the program represented an effective solution to address Rogers’ legitimate need for account authentication and security in the context of the high-threat environment facing telecommunication service providers. The program presented limited identification risks when compared to other biometrics solutions, and was designed with a number of limitations, safeguards and controls to mitigate privacy impacts. We therefore found this aspect of the complaint to be not well-founded.

Consent and Retention: With respect to the specific circumstances of this complaint, Rogers acknowledged, from review of a recorded call, that the complainant had been improperly enrolled on one occasion. It accepted it was possible that the complainant had been enrolled erroneously on the previous call but could not verify this as the recording had been deleted. Rogers explained that CSRs could mistakenly click through enrolment or select the wrong option, without having obtained the caller’s consent.

More generally, we determined that Rogers failed to obtain valid and meaningful consent for Voice ID. In our view, express consent was required in advance of tuning, as well as enrolment, since: (i) voiceprints represent sensitive biometric information; and (ii) an individual would not, when calling Rogers, reasonably expect their voice to be captured and used to create a biometric representation of their voice. While Rogers’ policy and protocols indicated that CSRs should obtain express consent before enrolment, the company: (i) undertook the “tuning” process, which involved biometric collection, without first obtaining valid consent; and in our view (ii) had not implemented adequate protocols and associated monitoring to ensure express consent was consistently obtained for enrolment.

We further determined that Rogers did not provide a clearly explained and easily accessible option for individuals to opt out of the collection and use of their voiceprint. Rogers had a process whereby individuals could opt out of the Voice ID program, but that process was only mentioned in a “Frequently Asked Questions” document on Rogers’ website. Individuals who specifically asked a CSR to opt out would be unenrolled from Voice ID, and told that their voiceprint would be retained for security purposes.

Ultimately, however, Rogers advised us that the retained voiceprints had never actually been used for security, or any other purpose. Rather, Rogers simply stored them in its database. We determined that as Rogers had no purpose for retaining these voiceprints after opt-out, it should have deleted them at that point.

Finally, our Office noted certain deficiencies in training materials and protocols employed by Rogers to ensure its staff obtained valid consent. These materials were general and high level, sometimes lacking critical details. Further, Rogers lacked a robust mechanism for implementation, instead relying on ad-hoc call reviews by supervisors, without tracking to ensure effectiveness of that monitoring.

In response to our recommendations, Rogers agreed to, by 30 September 2022, make a number of significant changes to its Voice ID program, including to: (i) obtain express consent from individuals before tuning going forward; (ii) more clearly inform customers of their ability to opt out, and delete voiceprints upon opt-out; (iii) delete voiceprints of individuals who previously opted out of Voice ID; (iv) implement significant changes to its process documents and training, as well as associated monitoring to ensure compliance; and (v) reconfirm consent for previously enrolled individuals as they call in.

Based on these commitments, we find the consent and retention aspects of the complaint to be well-founded and conditionally resolved.


  1. The Office of the Privacy Commissioner of Canada (“OPC”) received a complaint alleging that Rogers Communications Inc. (“Rogers”) created a biometric voiceprint of the complainant’s speech during a phone call, without her consent. The complainant further claimed that Rogers did not allow her to opt out of the collection and use of her voiceprint and that a request to delete the voiceprint was ignored. Finally, the complainant expressed doubts about whether Rogers was using her voiceprint for the purposes that it claimed, and wished to know whether any third parties had access to this information.
  2. Based on the complaint, the OPC investigated the following two issues:
    1. Appropriate Purpose: Was the collection and use of voiceprints for a purpose that a reasonable person would consider to be appropriate under the circumstances?
    2. Consent: Did Rogers obtain valid and meaningful consent for the collection of the complainant’s voiceprint; and does Rogers obtain valid and meaningful consent for collection of voiceprints in general? Did Rogers provide an adequate mechanism for the withdrawal of consent?

      In the course of considering Rogers’ practices with respect to withdrawal of consent, we identified an additional issue with respect to Rogers’ retention of voiceprints, which we also address in this report.


  1. Rogers is one of Canada’s largest federally regulated telecommunication companies, and provides a variety of services including internet, television, telephone and wireless to millions of Canadians.
  2. In 2018, Rogers implemented a biometric voiceprinting program (“Voice ID”) as a means of authenticating account holders. Rogers explained this to be a means of increasing the security of accounts, combatting fraud and improving the efficiency of operations for phone-based interactions with customers. Rogers’ policy set out that customers needed to provide express consent before being enrolled into the program and having a voiceprint associated to their account.
  3. The complainant was a customer of Rogers, and alleged that she was enrolled in the Voice ID program during a phone call with Rogers’ customer support (the “first call”), without her consent. She stated that when asked for her consent by the customer service representative (“CSR”) on the call, she declined. On a subsequent call, the next month (the “second call”), the complainant discovered that she had been enrolled in Voice ID despite her refusal, and stated her desire to opt out and have her voiceprint deleted.
  4. Several months later, after a “third call”, the complainant discovered that she was still enrolled in the Voice ID program. She therefore further alleged that Rogers failed to allow her to opt out, and did not delete her information following her request.
  5. Our Office determined that Rogers had in fact deleted the complainant’s voiceprint after the second call, per her request, but that she was re-enrolled during the third call, again without her consent. Subsequent to this second enrolment, Rogers once again deleted her voiceprint and removed her from the program for the final time.

Overview of Rogers’ Voice Printing Solution

  1. To understand Rogers’ use of voice printing technology, our Office sought and received a number of representations on the topic of its Voice ID solution and protocols.
  2. We note that there are two technologies generally used for voice authentication, Active and Passive voice, each of which has pros and cons, noting that they can be implemented in tandem:
    1. Active voice: This solution requires the individual to speak a specific phrase, known as a “passphrase”, which the software analyses to create a voiceprint. In an Active solution, biometric analysis and voiceprint development occur when the target individual creates the passphrase, and are tied to that specific phrase. This serves to combine both collection and enrolment into a single step. The individual must generally repeat the passphrase multiple times throughout the process. As a result, this solution supports a certain level of awareness and knowledge on the individual’s part regarding the collection of their personal information.

      Active voice solutions can use either a generic or customized passphrase. In the former solution, the passphrase is a generic phrase used for all enrolees (e.g., “My voice is my password”) whereas in the latter, the passphrase is set by the individual and known only to them, which creates a form of two-factor authentication (something the user is, their voice; and something they know, the passphrase).
    2. Passive voice: Rogers utilizes this technology for their solution. This form of recognition software runs in the background of a call and builds a general algorithmic pattern for the individual’s voice and speech. This solution does not require the target individual’s knowledge to function, such that it can operate covertly unless the individual is informed of its implementation. Passive voice can offer certain security benefits by virtue of running in the background throughout a customer interaction, assessing natural speech, rather than as a single-point check at the beginning of a call.
  3. Rogers’ Voice ID solution functions by mapping an algorithmic representation to customer speech. The core element of the solution is a licensed third-party software known as Nuance FreeSpeechFootnote 1, which runs in the background of the call and completes an analysis, referred to as “tuning”, to determine the physical traits of the individual’s vocal tract (length, shape and size) and behavioural characteristics such as accents, pronunciation, emphasis and speed. Using this combined information, the software creates a voiceprint, meant to serve as a unique identifier reflecting the features of the individual’s voice. This voiceprint is not a recording, but a numerical algorithmic pattern.
  4. Rogers implemented FreeSpeech through a software application, known as a “widget”, embedded in its interactive voice response (“IVR”) phone system. This widget presents CSRs with the option to enrol customers into the Voice ID program, and in the case a customer is already enrolled, returns information on whether or not their voice is a match. It also has the capability to flag fraud attempts as will be explained further below.
  5. Rogers explained that to collect and use voiceprints, its call management system engages the “tuning” process after the customer has passed through its IVR system (i.e., during which they answer questions including the purpose of their call) and identified an account. The individual is then transferred to a CSR and as they speak to the CSR, the system initiates “tuning” and builds a voiceprint. Once this voiceprint has been developed, the process proceeds in one of two ways. If a voiceprint is already associated to the account, the caller’s voiceprint is compared to the print on file. Our Office notes that biometric comparisons can take two forms: one-to-one and one-to-many. In a one-to-one comparison, the biometric input is compared strictly against a single biometric on file, generally for verification, while in the context of one-to-many, it is compared to a set of biometrics, and can more easily be used for identification. Rogers confirmed that the mechanism used during calls is generally one-to-one, and is not used for general identification.Footnote 2 Rogers did, however, advise that it conducts a limited one-to-many comparison, for security purposes, as explained in paragraph 16.
  6. If no existing print is already associated to the account, the system proceeds to enrolment. The CSR will be presented with an option, in Rogers’ call-management interface, to proceed with enrolment after manually authenticating the customer. Per Rogers policy and training documentation, representatives are required to explain the Voice ID program to the customer and obtain express consent in advance of associating the voiceprint to the customer’s account. If the customer agrees, the voiceprint is associated to the account by clicking a button.
  7. If the customer declines, there are two options. If the customer indicates that they do not wish to enrol in the Voice ID program at that time, they will be prompted again after 30 days; whereas if they indicate they do not wish to enrol at any time, they will be prompted again after 90 days. If the customer does not opt in, the “tuning” voiceprint will be discarded, and no voiceprint will be retained. Conversely, if a customer who has previously enrolled subsequently chooses to opt out, the voiceprint will be retained in Rogers’ system for “security purposes”, though we ultimately learned that it was never used for those purposes, as explained in paragraph 61.
  8. After a voiceprint has been associated to the account, the software uses this pattern to attempt to authenticate callers during subsequent calls in relation to that account. When matching a caller’s voice to the voiceprint database, it applies a “confidence interval”, indicating the closeness of the match. Rogers has determined a custom confidence interval that is required for the system to accept the voiceprint as a matchFootnote 3, and represented that it has not identified any cases where invalid users have improperly accessed an account previously enrolled in Voice ID.
  9. In the case that there is a match to the voiceprint, the system will authenticate the user and return a positive response to confirm that the CSR may continue with the call. Rogers advised that in the case that a negative or “mismatch” response is returned, the system conducts a one-to-many check against a separate “fraud database”. Rogers stated that in the context of the Voice ID program, the fraud database consists of voiceprints from callers whom Rogers has determined, after a review by its fraud team, to have fraudulently enrolled in Voice ID on another individual’s account. Our Office notes that the one-to-many check is only used in this specific context.
  10. As indicated above, it is possible for an invalid user to compromise Voice ID if they are able to pass manual authentication and set their own voice as the enrolled print, in which case the valid user would need to go through enhanced authentication to regain control of the account.Footnote 4
  11. While the issue of safeguards was outside the scope of the complaint and thus safeguards were not assessed, we note the presence of various controls and security mechanisms to protect voiceprints in the context of Voice ID. Voiceprints are stored in an encrypted and proprietary format on Canadian servers under Rogers’ control. Rogers confirmed that no third parties have access to the voiceprints for any purpose. Rogers further advised that access to the database is restricted to its Voice ID administration team, and that the voiceprints could not be used outside of their system. Our review of software documentation confirmed that the FreeSpeech solution is deployed by its customers and is not centrally managed, accessible to or controlled by Nuance. Additionally, our review confirmed that voiceprints are signed using an encryption key unique to the specific instance of FreeSpeech, to protect from use in other programs or in other FreeSpeech implementations.
  12. While it is generally our recommendation that biometrics be stored on the user’s device where possible, we recognize that in this particular context a central implementation via Rogers’ database is required, as callers are engaging with a support line and not a specific device.

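The overview above notes that voiceprints are signed using an encryption key unique to the specific instance of FreeSpeech, protecting against use in other programs or other FreeSpeech implementations. The sketch below illustrates why an instance-specific signing key has that effect, using an HMAC from Python's standard library as a stand-in; the vendor's actual signature scheme is proprietary and this is purely an assumption for illustration.

```python
import hashlib
import hmac

# Illustrative only: an HMAC over the serialized voiceprint stands in for
# the instance-specific signature described in the report.

def sign_print(voiceprint: bytes, instance_key: bytes) -> bytes:
    """Sign a serialized voiceprint with a deployment-specific key."""
    return hmac.new(instance_key, voiceprint, hashlib.sha256).digest()


def verify_print(voiceprint: bytes, signature: bytes, instance_key: bytes) -> bool:
    """A print verifies only under the key of the instance that signed it."""
    return hmac.compare_digest(sign_print(voiceprint, instance_key), signature)
```

A print signed under one deployment's key fails verification under any other key, so it cannot simply be exported and loaded into a different FreeSpeech deployment.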

Issue 1: Appropriate Purpose

  1. As set out in section 5(3) of PIPEDA: “an organization may collect, use or disclose personal information only for purposes that a reasonable person would consider are appropriate in the circumstances”.
  2. As explained in the OPC’s Guidance on inappropriate data practices: Interpretation and application of subsection 5(3),Footnote 5 (“Appropriate Purposes Guidance”) the OPC generally considers the following factorsFootnote 6 set out by the courts, in addition to the sensitivity of the information in question, in order to assist it in determining whether a reasonable person would find that an organization’s collection, use and disclosure of information is for an appropriate purpose in the circumstances:
    1. Whether the organization’s purpose represents a legitimate need / bona fide business interest;
    2. Whether the collection, use and disclosure would be effective in meeting the organization’s need;
    3. Whether there are less privacy invasive means of achieving the same ends at comparable cost and with comparable benefits; and
    4. Whether the loss of privacy is proportional to the benefits.
  1. Our Office has previouslyFootnote 7 held that biometric information is sensitive in almost all circumstances. It is intrinsically, and in most instances permanently, linked to the individual. It is distinctive, unlikely to vary over time, difficult to change and largely unique to the individual. Our Office does, however, recognize that while all biometric information is generally considered sensitive, not all biometric information is equally sensitive. For instance, genetic information and faceprints will generally be considered to be of higher sensitivity.
  2. In Rogers’ case, we note that the voiceprints are biometric information stored in an algorithmic format and are innately sensitive personal information. To further assess the degree of sensitivity in the context of Rogers’ Voice ID program, we focused on the risk of harm to individuals in the circumstances. We note that passive voiceprinting can allow for matching regardless of the content of an individual’s speech, and thus potentially provides the general capability to identify individuals by listening to them speak. We determined, however, that this risk is significantly mitigated in this context, given a number of factors.
  3. First, Rogers primarily applies the solution in a one-to-one authentication scenario, and voiceprinting is only enabled after a caller has identified an account. We have noted that where a one-to-many check is conducted, it is done only against a specified “fraud database” as explained above, for the purpose of flagging a fraud attempt, and not for identification. While this does indicate the capability to apply the voiceprints to more general use in Rogers’ system, we note that Rogers applied a number of design restrictions to minimize this. Second, the voiceprinting technology is limited to use in Rogers’ IVR system via a widget, and can only activate in set conditions. Additionally, we note that the FreeSpeech solution is specifically designed to resist matching recorded voices, and checks for live speech, limiting the use cases to active calls. We further note that even if Rogers were to decide to conduct one-to-many comparisons, and we have no evidence to indicate they plan to do so, the practical risk would be limited to identification of customers in Rogers’ IVR system. Finally, we note that the voiceprints in this context are not reversible, are encrypted and are unique to Rogers’ specific implementation of FreeSpeech, limiting opportunities for external misuse or use for alternative purposes.
  4. Notwithstanding the safeguard elements we note above and earlier in this report, we reiterate that our Office did not conduct a full assessment of Rogers’ safeguards. Given the ever-present risk of breaches and the inherent sensitivity and value of biometric information, we encourage and expect Rogers to ensure its security safeguards are commensurately high and stringent. Further, in past breach investigations, we have seen that it is not enough to just have tools and policies in place, but they must be deployed in a dynamic manner, to ensure they are properly implemented and followed, and that they remain effective in the context of evolving threats.
Legitimate Need/Bona Fide business interest
  1. Rogers’ purpose for collecting biometric voiceprints is to improve authentication protocols, and in turn better secure accounts from bad actors and prevent fraud. Rogers represented that it added Voice ID as an optional mechanism to provide an additional layer of account security and combat against fraud, given the ability of threat actors to compromise biographical information used for verification. In Rogers’ specific case, given the heightened threat environment (discussed in paragraph 36), we accept that its purposes for Voice ID represent a legitimate need and bona fide business interest.
  2. The complainant raised concerns that she was still asked verification questions despite being enrolled in the Voice ID program, and questioned whether voiceprinting was actually being conducted for authentication purposes.
  3. We have determined that the complainant was asked verification questions in the period between having been removed from the program and being erroneously re-enrolled. Additionally, Rogers explained that customers may still be asked verification questions even when enrolled in Voice ID, as an enhanced authentication mechanism. We found no evidence to indicate Rogers had any other purposes for Voice ID, aside from their identified purposes of authentication and fraud management. Additionally, Rogers confirmed that no third parties had access to Rogers’ customers’ voiceprints for any purpose.
  1. We conclude that the program, as part of a robust authentication regimen, is likely to be effective in achieving Rogers’ purposes. As stated in the OPC’s guidance on identification and authentication,Footnote 8 biometrics such as voiceprints can represent strong identifiers that provide a distinct and additional layer of security unique to the person. Paired with a properly configured confidence interval on matching and mechanisms to detect voice spoofing,Footnote 9 there is a reasonable expectation that voiceprinting technologies can serve as an effective authenticator and increase overall security and fraud prevention as part of an authentication regimen. We do note, however, that as with any technology, voiceprinting is not infallible. As illustrated in a publicized case in the United Kingdom, where such a solution was compromised, effectiveness of the voiceprinting solution will depend on a proper implementation, configuration and periodic review.Footnote 10
  2. Rogers advised that at the confidence interval it has set for its program, it had not identified any cases where its Voice ID solution had granted access to unauthorized users. Rogers represented that, based on internal anecdotal and vendor data, it believed the solution to be 99% accurate. While our Office did not review sufficient evidence to establish a specific 99% accuracy rating, based on a review of the technology and software information and Rogers’ representations, we accept that the solution was likely to be effective in this context.
  3. That said, the effectiveness of the Voice ID solution will, in turn, be dependent on proper authentication at the enrolment stage to ensure that the correct voice has been associated to the account. While we have not assessed Rogers’ other authentication measures fully in this investigation, these were the subject of our investigation into Fido Solutions Inc. (a subsidiary of Rogers), discussed further in paragraph 54 of this report,Footnote 11 and are currently under examination in another ongoing investigation into Rogers’ practices.
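As the paragraphs above note, the effectiveness of a voiceprint matcher turns on where the confidence threshold is set: raising it reduces the chance of accepting an impostor at the cost of rejecting more genuine callers. The sketch below, with wholly illustrative scores and names (nothing here reflects Rogers' actual configuration or accuracy figures), shows how the two error rates trade off against the threshold.

```python
def error_rates(genuine_scores: list[float], impostor_scores: list[float],
                threshold: float) -> tuple[float, float]:
    """Return (false_reject_rate, false_accept_rate) at a given threshold.

    genuine_scores: match scores from the true account holder's calls.
    impostor_scores: match scores from other speakers.
    All values are illustrative; real systems tune the threshold on
    large evaluation sets.
    """
    # Genuine callers scoring below the threshold are wrongly rejected.
    frr = sum(s < threshold for s in genuine_scores) / len(genuine_scores)
    # Impostors scoring at or above the threshold are wrongly accepted.
    far = sum(s >= threshold for s in impostor_scores) / len(impostor_scores)
    return frr, far
```

For example, moving the threshold from 0.5 to 0.85 over the same sample scores eliminates false accepts but begins rejecting some genuine callers, which is why a "properly configured confidence interval" requires periodic review.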
Less privacy-invasive options
  1. Our Office determined that while other options were available to achieve Rogers’ purposes, they were not comparable. While we note that there are a variety of other valid authentication methods available, such as PINs or security questions, we recognize that only biometric information represents a layer of security unique to the individual’s characteristics. As indicated in the OPC’s Guidelines for identification and authentication,Footnote 12 these other solutions only reflect what an individual “knows”, while voiceprints reflect who an individual “is”. In the context of this case, where Rogers is authenticating individuals during phone calls, voiceprints are the only biometric identifier practically available.
  2. This is not to say that biometric identifiers are always appropriate as authenticators. As will be explained, proportionality serves as a key element in making this determination.
  1. Our Office considered whether the loss of privacy associated with Voice ID in this case is proportional to the benefits. Voiceprinting presents certain risks from a privacy perspective, via collection of a biometric that may reveal vocal traits or identify individuals. However, in the context of Rogers’ implementation, as an optional authentication method, the potential for identification is quite limited. Based on the design of the Voice ID solution and Rogers’ implementation thereof, we accept that the voiceprints are not being used for other, more privacy invasive identification or surveillance purposes.
  2. We note that individual customers can derive a benefit from the Voice ID program by means of additional security for their telecommunication accounts. As explained further in the paragraph below, telecommunications companies are high-value targets for attackers, which creates significant risks for both organizations and individual customers. As indicated in our Office’s Guidelines for identification and authentication, the stringency of authentication methods should be commensurate to risks for the organization, as well as to those for the individual. It is our view that Rogers’ accounts are subject to a high level of risk, both for account holders and the company, and call for a high standard of authentication.
  3. Rogers offers package solutions, including phone/mobile, internet/email, television and home security, to millions of customers, which leads to a high-risk environment. Telecom companies, such as Rogers, are aggressively targeted by threat actors in a variety of attacks, such as “sim swaps”Footnote 13 and account takeovers, and are a gateway to additional downstream breaches. Mobile phone accounts are often used as a key authentication method for other accounts, such as financial accounts; and email accounts can be used to take over multiple associated accounts through “forgotten password” functions. This can result in a variety of serious harms to consumers, including identity theft, reputational harm, and financial fraud.
  4. Accordingly, we accept that the benefit to individuals and Rogers, via increased account security through the use of the Voice ID solution, is proportional to the loss of privacy in this context.
  5. It is our view, therefore, that for the reasons explained above, Rogers’ collection of biometric voiceprints is for a purpose that a reasonable person would consider appropriate, in the circumstances of this case.

Issue 2: Consent, Withdrawal of Consent and Retention

  1. Notwithstanding our conclusion in Issue 1, it remains that the collection, use or disclosure of personal information for an appropriate purpose must be conducted in line with PIPEDA’s consent requirements.
  2. Principle 4.3 of Schedule 1 of PIPEDA provides that the knowledge and consent of the individual is required for the collection, use or disclosure of personal information, except where inappropriate. Principle 4.3 further establishes that the form of consent may be either express or implied.
  3. The OPC’s Guidelines for Obtaining Meaningful ConsentFootnote 14 explain that organizations must generally obtain express consent when: (i) the information in question is sensitive; (ii) the collection, use or disclosure is outside of the reasonable expectations of the individual; or (iii) it creates a meaningful residual risk of significant harmFootnote 15.
  4. As explained above, our Office has previouslyFootnote 16 held that biometric information is sensitive in almost all circumstances. It is intrinsically, and in most instances permanently, linked to the individual. It is distinctive, unlikely to vary over time, difficult to change and largely unique to the individual. Additionally, in our view, individuals would not, during a call with Rogers to discuss their account or technical issues, reasonably expect their voice to be recorded and converted to a biometric, to be used for the purposes identified earlier in this report, without having given their prior consent to the specific practice. As such, it is our view that Rogers requires express consent for its collection and use of voiceprints in the context of its Voice ID program.
  5. Rogers initially pointed to a previous case (PIPEDA 2004-281), where voiceprints used for one-to-one authentication in an employer-employee context were characterized as less sensitive and “fairly benign”,Footnote 17 asserting that it could rely on implied consent in the circumstances of Voice ID. While Rogers ultimately accepted our determination that voiceprints are sensitive personal information requiring express consent, and withdrew this argument, we see value in briefly addressing this question.
  6. Our Office assesses each individual case based on its particular facts and context. In the previously referenced case, the affected individuals were employees. PIPEDA-2004-281 involved an employer-employee context and can therefore be distinguished from the context in this matter, involving a customer-provider relationship. As found in R v. Cole,Footnote 18 the “operational realities of the workplace may diminish the expectation of privacy that reasonable employees may otherwise have in their personal information”.Footnote 19 In other words, the expectation of privacy may be attenuated in the workplace. Additionally, our Office notes that environmental changes in technology from 2004 to today, such as increasingly interconnected systems, the ubiquity of breaches and the potential for abuse of biometrics, has radically changed the calculus of sensitivity surrounding this technology. As such, we do not see the abovementioned case as comparable in the context of this case or at this time.
  7. We note that Rogers’ own policy and training documentation required its CSRs to obtain express consent for enrolment, to associate voiceprints to accounts. Rogers did not, however, seek express consent at the point of developing the voiceprint (tuning). Rather, Rogers represented that an automated message advised individuals calling into its support line that recordings could be used for “quality assurance training, security and identification purposes [emphasis added]”. Rogers relied on the individual’s decision to proceed with the call as their consent to the tuning process.
  8. Our Office does not accept that this is sufficient. In our view, “tuning” is in fact the collection of a voiceprint, and constitutes the collection of sensitive biometric personal information, whether or not the individual is ultimately enrolled in Voice ID. Rogers should therefore have obtained express consent in advance of beginning the tuning process. The voiceprint is collected even if it is ultimately discarded when the customer decides not to enrol. We recognize that Voice ID is a protection against fraud. However, it serves as only one aspect of Rogers’ safeguards and anti-fraud regimen, and one that customers are able to opt out of. In any event, the potential benefits of Voice ID do not relieve Rogers of its obligation to obtain individuals’ express consent before collecting and using their sensitive biometric information.
  9. Section 6.1 of PIPEDA states that for consent to be valid, it must be reasonable to expect that the individual to whom the organization’s activities are directed would understand the purpose, nature and consequences of the collection, use and disclosure of their information. Additionally, Principle 4.3.2 states that the purposes must be stated in such a manner that the individual can reasonably understand how their information will be used or disclosed.
  10. The OPC’s Guidelines for obtaining meaningful consentFootnote 20 further elaborate that individuals should be made aware of all purposes for which information is collected, used or disclosed. These purposes must be described in meaningful language. Furthermore, where consent is not a condition of service, individuals’ choice to consent must be explained clearly and made easily accessible, with a clear option to say ‘yes’ or ‘no’.
  11. Rogers’ reliance on a one-line recorded message indicating that “recordings” can be used for “identification purposes” is not, in our view, sufficient to obtain meaningful consent for “tuning”. Callers would not reasonably understand that “identification” meant that a biometric representation of their voice would be collected and used for purposes of authentication.
  12. With respect to the Complainant specifically, despite the fact that Rogers’ policy instructed CSRs to explain the program and obtain express consent for Voice ID (post-tuning), Rogers acknowledged that it improperly retained and associated the complainant’s voiceprint to her account without her consent on at least one occasion. From our review of the third call, we noted that the representative did not ask for consent or even advise the complainant of the Voice ID enrolment. Rogers acknowledged that this was the case and stated that it addressed the matter with the employee, but suggested that this could have been an error on the part of the CSR, who may have accidentally clicked the enrolment button when transferring the complainant to another department.
  13. Rogers was unable to verify whether its protocol was correctly followed during the first call, as the recording had been deleted in accordance with its retention schedule.
  14. Given the above, namely the CSR’s failure to request or obtain consent in call 3 and Rogers’ inability to demonstrate that consent was obtained in call 1, we determined it appropriate to further examine Rogers’ Voice ID protocols, as well as its associated monitoring and enforcement.
  15. We note that Rogers relies on CSRs to obtain consent orally. Furthermore, given the design of the voiceprint “widget”, it appears to be fairly easy for CSRs to erroneously enrol individuals by either failing to request consent, or quickly clicking through the widget to enrol callers, accidentally or otherwise. In fact, Rogers itself offered this latter accidental scenario as a potential reason for the erroneous enrolment of the complainant.
  16. As we noted in our recent finding in relation to Fido Solutions Inc., a subsidiary of Rogers, where several CSRs had failed to follow the company’s authentication protocols, it is important that organizations implement measures to ensure their processes are actually followed, particularly when employees may face pressures related to speed or customer satisfaction that could incentivize bypassing protocols.Footnote 21 It is not enough to have tools and processes in place if they are not successfully implemented and respected.
  17. In this case, certain training materials we reviewed contained erroneous or incomplete information, which could have led to systemic gaps in CSRs’ compliance with Rogers’ Voice ID consent protocols. For example, the scripts provided to our Office lacked detail regarding opting out, provided a vague explanation of the tuning process, and included incorrect timelines for re-engaging individuals who had opted out. Additionally, Rogers explained that its review of CSRs’ compliance with Voice ID consent protocols was carried out on an ad-hoc basis, and that it could not provide any records of, or statistics regarding, its monitoring of compliance with those protocols. In our view, this is insufficient to ensure CSRs consistently obtain valid consent for Rogers’ collection and use of sensitive biometric information.
  18. For the reasons explained above, we find that Rogers failed to obtain valid and meaningful consent for the collection and use of personal information in the context of its Voice ID program, in contravention of Principle 4.3 and section 6.1 of PIPEDA.
Withdrawal of Consent and Retention
  1. While we found that the complainant did not provide valid consent, we also considered, more generally, whether Rogers allowed individuals to withdraw consent, considering, among other factors, the complainant’s experience in this case.
  2. In accordance with Principle 4.3.8 of PIPEDA, and as reflected in the OPC’s Guidelines for obtaining meaningful consent,Footnote 22 individuals may withdraw their consent at any time. Consent choices can be re-considered, and individuals should have full information available to them as they make decisions regarding maintaining or withdrawing consent.
  3. Rogers has developed an internal procedure to allow individuals to withdraw consent and have their voiceprint disassociated from their account, as well as a separate process to have their voiceprint deleted. However, a review of Rogers’ scripts and training materials indicated that the withdrawal and request for deletion processes are not easily accessible or clearly explained to individuals either at time of consent or in a privacy policy.
  4. Rogers does explain, in a frequently asked questions documentFootnote 23 on its website, that enrolled account holders can opt out of Voice ID, but does not explain that there is a separate process for voiceprint deletion. If an individual proactively asks to opt out of the Voice ID program, the CSR will effect the opt-out and explain that the individual’s voiceprint will be retained “for security purposes”. The CSR would not proactively advise the individual that they could have their voiceprint deleted via a separate process.
  5. Rogers represented that it retained the voiceprints of opted-out account holders for “security purposes”. However, at the conclusion of our investigation, Rogers corrected its previous representations and clarified that this was never implemented. While these voiceprints were retained, they were never used for any purpose. Principle 4.5.3 of Schedule 1 of PIPEDA provides that personal information that is no longer required to fulfill the identified purpose should be destroyed, erased or made anonymous. Given Rogers had no purpose for the retention of these voiceprints, it contravened Principle 4.5.3 and should have deleted the voiceprints upon customers’ opt-out of the program.
  6. While an adequate opt-out (or deletion) mechanism would not have rendered Rogers’ retention compliant with PIPEDA, we still considered whether the manner in which Rogers allowed account holders to delete their voiceprints would have complied with PIPEDA requirements had it actually used those voiceprints post opt-out for security purposes.
  7. Rogers explained that while its CSRs can complete the withdrawal process for individuals, disassociating an account from Voice ID upon request, they cannot delete any voiceprints. If the individual proactively requests to have their voiceprint deleted, they are advised that the request must be sent to Rogers’ privacy office, via a separate process. The deletion process takes 1-2 days after a request is received. Given these factors, we determined that Rogers’ process for withdrawal of consent concerning deletion would not have complied with PIPEDA requirements even if it had used the retained voiceprints for security purposes.
  8. Rogers does not clearly explain, other than in an online “frequently asked questions” document that an individual may or may not read, the option for account holders to opt out of Voice ID. As such, we find that Rogers’ mechanism for opting out was not compliant with Principle 4.3.8 of PIPEDA.

Recommendations
  1. The OPC made the following recommendations with a view to bringing Rogers into compliance with PIPEDA:
    1. Implement measures going forward, to consistently obtain express opt-in consent before creating any voiceprint (“tuning”) and associating it to the account, by amending its protocols and procedures, and providing a more meaningful explanation regarding the creation and use of voiceprints.
    2. Clearly inform customers, in privacy communications and during the enrolment process, of their ability to opt out of the program after they have been enrolled, and to have their voiceprint deleted upon opt-out.
    3. Delete the voiceprints of any individuals who have previously opted out of Voice ID.
    4. Implement measures to ensure that revised consent practices are properly implemented, including: (i) developing clear procedures, consistent with Rogers’ policies, for CSRs or others who will implement them; (ii) training to communicate those revised procedures to relevant staff; and (iii) monitoring, tracking and remediation to ensure procedures are followed. We also encourage Rogers to consider technological measures to ensure more consistent application of individuals’ choices (to minimize the risk of CSR/human error).
    5. For all account holders believed to have previously opted in, implement the new consent approach described in recommendation 1 above, as they call in, to reconfirm their consent.

Response to our Recommendations

  1. Rogers agreed to take a number of steps to address our recommendations with respect to its Voice ID program.
  2. Rogers agreed to redesign its Voice ID protocols to explain the program and obtain express consent through its IVR system prior to conducting any tuning or enrolment. Rogers advised that going forward, all customers would be given a choice between setting a PIN or enrolling in Voice ID. Rogers agreed to re-confirm, through the new mechanism, the consent of all individuals who were previously enrolled in the program.
  3. Rogers agreed to delete the voiceprints of users who opt out of Voice ID, and to delete the voiceprints of any individuals who previously opted out, but had their voiceprints retained.
  4. Rogers agreed to implement the following measures to ensure CSR compliance with established Voice ID consent-related requirements:
    1. Providing regular refresher training in respect of existing enrolment and deletion protocols. This training will also ensure that employees understand that: (i) there are significant consequences for failing to obtain customer consent for enrolment; (ii) the organization takes steps to monitor and ensure proper enrolment protocols are followed; and (iii) consequences will be enforced;
    2. Providing refresher training for Managers and staff about the existing and documented consequences that may flow when an employee has not followed these protocols (e.g., coaching; progressive discipline up to and including potential termination); and
    3. Implementing proactive feedback related to non-compliance with authentication protocols, using identified internal customer complaint and fraud management tools to complement existing monitoring of employee compliance with all enrolment protocols, and maintaining records of the results of this monitoring to measure compliance.
  5. Finally, Rogers committed to provide our Office:
    1. by 30 April 2022, a detailed plan for implementation of the above measures; and
    2. by 30 September 2022, documentary evidence to establish that it has implemented all commitments to comply with our recommendations.

Conclusion
  1. With respect to Appropriate Purposes, we find the matter to be not well-founded.
  2. On the matters of consent, withdrawal of consent and retention, considering Rogers’ commitments to bring itself into compliance with the Act, we find this matter to be well-founded and conditionally resolved.