The Privacy Impact Assessment: Your GPS Through the New Landscape of Privacy Protection


Remarks at the Privacy Information Agency’s 2nd Annual Privacy Workshop

September 28, 2011
Ottawa, Ontario

Address by Chantal Bernier
Assistant Privacy Commissioner of Canada

(Check against delivery)

Thank you for your kind invitation. I am very happy to start the ball rolling at this retreat, where you will have the opportunity to exchange ideas and recharge your batteries.

I know that your work is complex, that you spend much of your time responding to access requests and answering questions from your colleagues. I am positive that a day away from the phone and email will be more than worthwhile.

I would like to talk to you this morning first of all from a philosophical point of view about the impact of the digital revolution in our field, about privacy as a basic value, and then from a more practical point of view about a very important tool to safeguard that basic value, namely the Privacy Impact Assessment (PIA). I am fully aware that I am preaching to the choir this morning, but the fact remains that the PIA is an essential tool in risk management and the cornerstone of the federal government’s compliance framework as regards the protection of personal information.

What is changing

The digital revolution is changing civilization as profoundly as the printing press did, and at a pace far faster than that of the industrial revolution. It may be a truism, but it is no less true.

Increased capacity for data collection, storage and analysis is opening up new possibilities across many areas of human activity.

Digitalization, networkization and globalization, at a breakneck pace of change, are unfolding just as new public safety measures impose a new relationship between the citizen and the state.

Of course, all of these social changes brought on by the digital revolution have a profound impact on the right to privacy, because they affect how our personal information is being collected, used, protected and disclosed. But more importantly, they have a significant impact on our ability to control our personal information.

What stays the same

One thing the digital revolution has not changed is how we value our privacy. It seems there has been a privacy angle to every news story to hit the front page in the last decade or so.

Privacy is more than ever at the core of public debate. It is far from dead, and people don't want to get over it, as you will hear from Judy Benvie later today.

Privacy is a fundamental and immutable right. But its modalities are to be redefined in light of the new realities of digitalization, networkization and globalization, the breakneck pace of technological advances, and the new relationship between the citizen and the state.

Organic relationship between privacy and policy objectives

This new reality is also reflected in policy priorities, in both the public and private sectors. On both sides, there is often a tendency to give in to a false dichotomy in which privacy and security (whether of data or of persons) are pitted against each other in a zero-sum game. In reality, privacy and security are not at odds.

On the contrary: I would put to you that measures to protect privacy must be integral to any security initiatives.

Why? Because we live in a free and democratic society where individuals enjoy the right to live, to move around, to communicate and to go about their daily lives, free from unwarranted interference by the state.

And for practical reasons too:

Any effort towards greater security that is strictly tailored to the actual risk – and that therefore minimizes the infringement of privacy or other rights – will be more targeted and more effective.

For example, an investigation that is carried out in accordance with the law, and in a way that respects privacy and other rights, will yield cleaner evidence and a more compelling case for the prosecution.

In other words, all the work that is poured into greater security is more likely to pay off if it is carried out in a strategic, targeted manner. And an essential consideration in that regard is due respect for the right to privacy.

Challenges in integrating privacy and policy objectives

In 2008, the Public Policy Forum conducted a workshop on our behalf with 40 leading privacy experts, including senior executives from the public sector. The topic of discussion was Modernizing the Federal Privacy Regime.

One theme that flowed through their various comments was the need for guidance. They stressed the need for a baseline of skills and standards, the need for a principles-based approach to privacy, and the importance of the OPC’s leadership in providing information and resources to the ATIP community.

They explained how a lack of standards contributes to the general perception of privacy as a barrier, rather than as a fundamental part of a public servant’s job, going on to explain how principles are critical when there is a need to weigh privacy against other values, such as public safety. They also mentioned that PIAs are an essential tool for integrating privacy considerations into new programs, policies and technologies.

How you can help—and how we can help you

You may be an active member of the team performing the PIA or you may act as a consultant to the program managers who take the lead in the assessment. You may work for a public organization mandated by the Treasury Board to perform PIAs or you may work for a private organization that conducts PIAs as part of its overall risk-management strategy. Either way, you play a critical role in a process that is at the forefront of privacy protection in Canada. And we want to support you in that role.

An effective PIA is not an exercise in style. It is not a public relations vehicle. And it is definitely not limited to detailing the safeguards surrounding personal information that was collected without due consideration.

2007 Audit: Assessing the Privacy Impacts of Programs, Plans and Policies

In 2007, five years after the launch of the first Treasury Board Directive on PIAs, we performed an audit to measure the extent to which privacy issues were being managed appropriately within the Government of Canada.

Our audit focussed on both policy and operational issues, and served to highlight the effectiveness of PIAs as a risk-management tool. I will share with you some of the findings that are relevant for your practice.

The two themes from that audit that I would like to highlight for you today are:

  • the importance of a privacy management framework to support completion of PIAs, and
  • the importance of key control elements to support the framework.

Firstly, PIAs are only as good as the processes that support them. Once privacy is understood as a strategic variable for organizations, the need to factor privacy analysis into the management of organizational risk becomes increasingly compelling for program managers.

Secondly, frameworks need three essential control mechanisms:

  • for initiating PIAs,
  • for tracking PIAs, and
  • for monitoring PIAs.

Initiation of PIAs: In order to ensure that PIAs are performed when they should be, an organization must be aware of the conditions under which a PIA is required, and thereafter have a process in place to initiate a PIA if and when those conditions are met. This “trigger point” is a critical first step in the PIA process.

Tracking of PIAs: Programs with potential privacy impacts need to be tracked and monitored. A centralized tracking system for programs that involve the use of personal information may help monitor PIA activities and assess whether all potential PIA candidates have successfully passed the trigger test.

A program repository within a department could also serve as an indicator of potential privacy risks associated with programs already in place.

Monitoring of PIAs: A strong PIA isn’t worth much if the risks it identifies aren’t mitigated — by implementing the measures identified in the PIA and the recommendations made by the OPC. Our 2007 audit found that institutions were generally slow to mitigate risks identified through the PIA process, especially if these risks were considered medium or low.

PIAs are only effective when the recommendations are implemented.

The new Treasury Board Directive on PIAs underscores the main recommendations from our audit: PIA integration into an organization’s broader risk management practices; the need for an administrative infrastructure to support PIAs; and the need for a sensible (pragmatic) and risk-based approach to conducting PIAs.

We have noticed a significant improvement in the quality of PIA submissions over the past several years. Yet over these same few years, we have witnessed the federal government embark on several programs with potentially serious effects on the public’s privacy — Government On-line, Beyond the Border, the creation of a “no-fly” list and the establishment of a single point-of-service kiosk for federal social services, to name but a few.

Overview of preliminary PIA on the Passenger Behaviour Observation Pilot Project and how it was reviewed by the OPC

I will now give you an example of how we reviewed the preliminary PIA that was submitted to us by the Canadian Air Transport Security Authority (CATSA) regarding the Passenger Behaviour Observation Pilot Project. This is an excellent example of a new program being contemplated by the government and that could have a significant impact on people’s right to privacy.

This initiative involves observing passengers in the pre-boarding security screening line-up, looking for suspicious activity. Of particular concern is the potential that anyone exhibiting unusual behaviour, even for perfectly legitimate reasons such as coming down with a cold or a fear of flying, becomes a target for extra security.

Under this initiative, CATSA officers may approach passengers and engage them in a brief conversation, and ask to see their identification and travel documents.

Depending on the outcome of the conversation, passengers may be directed to secondary screening. Following each interaction, the officer fills out a case card, which describes the incident and the passenger’s appearance, but contains no personally identifying information such as names or addresses.

After looking at the preliminary PIA, we were concerned about the effectiveness of this initiative in identifying threats to aviation security. We questioned its necessity, in light of the many other security procedures and programs already in place.

We also raised the potential for inappropriate risk profiling, based on characteristics such as race, ethnicity, age or gender.

CATSA appears to be moving toward identity-based screening. This represents a significant shift in operations that had been previously focussed on screening for objects that pose a risk to aviation security, as opposed to persons.

As part of our assessment, members of the OPC’s Audit and Review Branch went to the Vancouver International Airport in June to see the pilot program in action.
In Vancouver, they:

  • interviewed the program manager,
  • reviewed the recruitment and training procedures,
  • interviewed the behaviour observation officers,
  • reviewed their “rules of engagement” and procedures,
  • reviewed their method for recording incidents and results of interactions with passengers,
  • discussed issues of human rights, ethnic profiling, privacy intrusions, and other issues that we felt were relevant in context,
  • reviewed the physical setup where passenger interactions take place,
  • observed actual interactions with passengers selected because of their behaviour patterns and reviewed results with officers and managers.

We are continuing our consultations with CATSA and look forward to receiving the evaluation report from the Vancouver pilot project.

We have asked CATSA for a full PIA should they move beyond the pilot project stage. In particular, we’ve asked that it include a business case justifying the need for this program and also explaining the expected effectiveness, and how this effectiveness would be measured.

Tools to assist you in preparing PIAs

Optimally, the PIA process should help government institutions justify privacy-invasive programs and activities against a four-part test: Is the project absolutely necessary? Is it likely to be effective in achieving its objectives? Is the project’s anticipated infringement on privacy proportionate to any potential benefit to be derived? And are less intrusive alternatives available?

When the four-part test has been met, government institutions must still demonstrate that the information that was collected will be protected.

We therefore also encourage proponents to consider the ten internationally recognized fair information principles for the stewardship of personal information. Among other things, these principles call for data collection that is minimized and appropriate, and for mechanisms to ensure it is secure, so as to lower the risk of future privacy invasions.

We believe that this logical progression can aptly and usefully be applied by security agencies, policymakers or others in searching for that elusive equilibrium between public security and privacy rights.

Stage 1: Making the Case

Key to the effective integration of privacy into policymaking is to start early. As such, the first crucial step in privacy protection begins when a policy or program is first being conceived.

This stage, which we refer to as “Making the Case,” vets any proposed initiative against the four-part test I mentioned a minute ago. Allow me to go into further detail about it now.

The test is used by courts and legal experts to determine whether any law, program or exercise of power ought to be allowed to supersede or intrude on basic freedoms and rights such as privacy.

In this test, one would first consider whether a proposed initiative is truly necessary to achieve the stated purpose, understanding that the purpose must correspond to a pressing societal concern.

If it is, in fact, essential, then the next question is whether the program can be demonstrated to be clearly effective in achieving the stated objective. This demonstration must be supported empirically or, at the very least, by the cogency of its assumptions.

What I mean is that we cannot always know in advance whether a new measure will be effective, but our expectations in that regard must be robust, based on facts, not suppositions, and constantly rechecked.

The third question asks whether the intrusion on privacy can be viewed as proportionate to the purported security benefits.

That means that authorities should not collect or use information beyond what is strictly relevant to support the security measure at hand.

And the final question is: Could there be other means to achieve the same ends, with less impact on privacy? We should always strive for the most minimal collection or use of information and, as a rule, avoid all privacy-invasive measures. Privacy should be invaded only under exceptional circumstances.

We acknowledge that privacy is quite moot in the absence of security.

We also recognize that secrecy and covert methods can be essential to protect public safety and national security. The contents of a threat and risk assessment are typically classified as well.

But while it may not be possible or advantageous to tell the world exactly how you carry out your analysis, the point of the exercise is to ensure it is carried out in as thorough and systematic a manner as possible.

As such, we have used this phased approach in two guidance documents we have issued within the last year: in the Expectations document, which I hope you are already familiar with, as it is specifically targeted to members of the ATIP community; and in A Matter of Trust, a guidance document that explains how the OPC applies this analytical framework to public safety measures.

Stage 2: Setting the Stage

Having established a rationale for the collection of personal information, the next step is about “Setting the Stage.” It’s about planning for the secure handling of collected data — including how it is stored, used, linked and shared with others.

Fortunately, there’s no need to reinvent the wheel here. A set of internationally recognized standards already exists. Referred to as the Fair Information Principles, they guide commercial and government organizations in the development of initiatives where personal information is used.

These ten principles, in fact, serve as the foundation for many countries’ data-protection laws, including our own private-sector Personal Information Protection and Electronic Documents Act.

I won’t speak to all of them now, as you are no doubt very familiar with them — you know the principles deal with such important concepts as identifying the purposes for the collection of personal information, obtaining consent where appropriate, safeguarding the data, and limiting the collection, use, disclosure and retention of the data.

Stage 3: Running the Program

I know it sounds like a lot of planning when you’re anxious to get your initiative off the ground.

But, as they say in sewing and woodworking: “Measure twice, cut once.”

It is so much easier to spend the time at the front end, plotting out the justification for your program and planning the architecture in a way that embeds all necessary safeguards. It also helps reduce risks to an organization's operations, reputation and public goodwill.

Stage three then elaborates on the internal policies and practices that are necessary to ensure that privacy will actually be respected, once the program is up and running.

Here again, no wheels require reinvention. Treasury Board Secretariat, for example, administers a comprehensive suite of policies, guidelines and best practices in this area, and our document provides references to all of them.

In brief, though, here are some examples of what we’re talking about:

  1. Designating a Chief Privacy Officer to ensure accountability and senior-level representation when matters related to personal information handling arise. For example, during the Olympics, we recommended that the Integrated Security Unit designate a Chief Privacy Officer and post his contact information on their website. They did this, thus offering a point of accountability for addressing privacy concerns.
  2. Making sure everybody’s roles and responsibilities for the handling of personal information are crystal clear, and that responsible personnel receive ongoing training in privacy issues. As we have seen in our recent investigations at the Department of Veterans Affairs, privacy must also be upheld by a governance system designed to protect personal information.
  3. Documenting privacy policies and practices in plain language, and developing straightforward processes to handle errors or inaccuracies, public complaints, data breaches or other problems. An example of that safeguard is the Office of Reconsideration for the Specified Persons List, also known as the no-fly list.
  4. Detailing the sharing of personal information in proper agreements. One example is the restrictions around information sharing at the National DNA databank.
  5. Creating an audit mechanism to oversee such matters as data security and the transfer of information to others. For example, in relation to the Enhanced Drivers Licence, we obtained an assurance from the Canada Border Services Agency that the EDL database would remain in Canada.
  6. And, finally, some form of public access and reporting to bolster accountability.

On this last point, I often get a look of disbelief from security professionals. But, in fact, there are excellent precedents. For example:

  • An online form on Transport Canada’s website lets people review their passenger travel information, even though this data feeds highly sensitive aviation security programs.
  • And Public Safety Canada’s annual reports furnish detailed information on the use of electronic surveillance by federal officers – how many interceptions take place and for how many days, whether it was audio or video, and the number of arrests that resulted.

Stage 4: Calibrating the System

The fourth and final stage in the analysis we propose in this document relates to external review, oversight mechanisms, and redress. We refer to this stage as “Calibrating the System.”

Many inquiries and legislative reviews that have examined Canada’s national security regimes have zeroed in on the same problems, including poor information-handling practices, patchwork accountability mechanisms, and limited oversight.

Review mechanisms should include a systematic process for handling complaints and concerns from the public, as well as a method for appeal and redress when problems arise.

There could also be regular external oversight by Parliamentarians or other specially-mandated bodies. Our own Office ensures compliance through the investigation of complaints and by auditing federal institutions.

So why is all this important?

Because privacy is key to a free and democratic society.

Because personal information is sensitive, in that it can be used against us.

And yet, public security programs invariably collect and use a great deal of personal information, much of it highly sensitive.

These tend to be extraordinary powers – broad and discretionary. In a democratic society, invasive measures by the state must be held in check by effective oversight mechanisms – be those judicial controls on authorization or rigorous administrative checks and balances.

To be effective, oversight needs to be independent, properly resourced, and equipped with powers commensurate with those entrusted to the security program it is overseeing. The oversight mechanism must, moreover, serve as a credible avenue for redress, a place where citizens can turn if they feel their privacy rights have been violated.


The Office of the Privacy Commissioner took a bold step in 2011 when we published Expectations. Our goal was to make sure that all relevant public policy is firmly rooted in the basic right to privacy. To be blunt, we wanted to make sure that PIAs did not become a rubber stamp by which public policies were simply given operational approval without their legitimacy with respect to basic rights having been examined.

We realize that this is no small challenge at a time of profound change within an unchanging framework and we hope that the guidance document on our expectations with respect to PIAs will indeed be a useful guide.
