Integrating Privacy and Public Safety in the 21st Century: The Canadian Experience
Remarks at the International Conference on Information Flow organized by the Centre for Public Law Research at the University of Montreal
October 17, 2011
Address by Chantal Bernier
Assistant Privacy Commissioner of Canada
(Check against delivery)
I would like to thank the Centre de recherche en droit public for organizing this very timely conference. The timing is right not only because many are looking back over the last ten years, but also because many states, including Canada, are considering new security measures to protect us over the coming years.
There is, of course, the Canada-US agreement on perimeter security, but there are also biometric passports, facial recognition technology, and passenger behaviour observation. After setting the scene, I will share some thoughts with you this afternoon on the right to privacy and security measures, and then present the model proposed by the Office of the Privacy Commissioner of Canada for a rigorous framework for security measures.
We believe that privacy protection in a world in which the way of looking at public safety is changing requires both a certain latitude and a rigorous framework based on established principles. The Canadian approach is based on a combination of set parameters and flexible methods.
As everyone in this room knows all too well, the public safety and national security landscape has changed dramatically in the past decade. The events of 9/11 did not so much create this new reality as they accelerated and amplified changes already unfolding across Western societies.
The Chief Justice of the Supreme Court, Beverley McLachlin, made a speech two years ago on the challenge of fighting terrorism while maintaining civil liberties. And she based her speech on this premise: threats to national security are not new; but their modalities are new. For our Office, that telling remark points to how we should respond to the new public safety challenges. Starting from established principles, we need to develop an analysis framework to adapt the methods of application.
Two main factors are redefining the methods of ensuring public and national security. In addition to the sombre events of ten years ago, technological developments have also had an impact on the field of security. A new generation of mobile devices, remote sensors, high-resolution cameras and analytic software has revolutionized surveillance practices. Today, the collection, processing and sharing of data unfolds on a truly global scale and at the speed of light.
No one is opposed to safer streets. However, public safety authorities must be accountable to a degree commensurate with the significant powers they are given to ensure public safety.
The exercise of such powers must be in line with fundamental rights and Canadian values. Otherwise, the bond of trust between the state and the citizen—the very fabric of a democratic state—would be jeopardized.
Edward Shils asserted in the mid-1960s that democracy affords full privacy to the individual, but none to the state. This touches on the defining role of information in a democracy, on what might be called an information social contract.
In that context, policymakers, as well as citizens, need to understand why security and privacy are both essential in a free and cohesive society.
Social cohesion hinges on trust between citizens and their neighbours. It also presupposes a level of trust between citizens and the state.
Citizens, in fact, need to trust that the state will protect them—but not at the cost of other fundamental rights, including the right to privacy.
The former Auditor General of Canada, Sheila Fraser, reminded us in her March 2009 Annual Report that “for Canadians to have confidence in their security and intelligence organizations, they need to know that government agencies and departments maintain a balance between protecting the privacy of citizens and ensuring national security.”
The Canadian approach is based on the integration of these two equally valid rights. Before describing that approach, I would like to lay down certain premises.
Our first premise recognizes that the right to privacy is not an absolute right. It is a fundamental right, but since it concerns one's relationship with others, it cannot be exercised except in relation to collective rights.
Our second premise is that although this right is subject to collective imperatives, the collectivity cannot infringe on the right to privacy beyond those imperatives, which must be precisely defined and applied within a collective framework.
From that flows our third premise: public safety and national security authorities must be accountable for any invasion of privacy. The challenge of holding accountable public safety authorities who need to operate in secret calls for an even more rigorous framework.
Security measures are necessary to protect a democratic state. However, democracy must produce results, in particular transparency, fairness and justice.
On the basis of those premises, we developed the analytical framework A Matter of Trust. Our objective was two-fold: first, to structure the debate around the relationship between privacy protection and public safety beyond ideology and emotion, and second, to provide a specific framework, based on lasting principles but flexible methods, that would make it possible to respond to the ever-changing relationship between public safety and privacy. We wanted to provide analytical support to governments and public safety authorities, as well as citizens. The number of initiatives to which our analytical framework has already been applied shows that we were meeting a real need.
We also wanted to counter the excessive broadening of security measures based on an overestimation of the risks. Let me explain: as regards security, risk management is based on an overestimation of the threat. Risk management is composed of two elements: the probability that the risk will materialize and the consequences if it does. As regards public safety, the probability that the risk will materialize is low, but the consequences are catastrophic. That means that the risk is considered high, although very few people can justifiably be suspected of plotting harm. We conclude that the surveillance of persons must be subject to an especially strict discipline. To illustrate my point, I would like to share some figures drawn from a study by Dr. Marc Sageman, author of Understanding Terror Networks.
After examining 66 trials for attempted or actual terrorist attacks in the West since September 11, 2001, Dr. Sageman concluded that 4 people out of 100,000,000 are terrorists. This is in contrast to the margin of error in anti-terrorist investigations: 1%.
I doubt those figures are strictly accurate, but they are telling: the number of people falsely suspected of terrorist activities, and the potential consequences of such suspicions, are such that security measures must be subject to a well-defined normative framework.
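The weight of these figures becomes concrete with a simple base-rate calculation. The sketch below is my own illustration, not part of Dr. Sageman's study: it applies his prevalence figure of 4 terrorists per 100,000,000 people and the 1% margin of error to that population, assuming for simplicity that every actual plotter is flagged.

```python
# Illustrative base-rate calculation (an assumption-laden sketch,
# not drawn from the speech or the study itself).
population = 100_000_000
true_positives = 4  # actual plotters, all assumed to be flagged
false_positives = 0.01 * (population - true_positives)  # 1% of innocents flagged

flagged = true_positives + false_positives
ppv = true_positives / flagged  # chance that a flagged person is a real threat

print(f"People flagged: {flagged:,.0f}")
print(f"Probability a flagged person is a real threat: {ppv:.6%}")
```

On these assumptions, roughly a million people would be flagged in order to identify four, which illustrates why the surveillance of persons demands such strict discipline.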
How to integrate respect for privacy and public safety measures
The perilous juncture of the integration of the right to privacy and public safety measures requires that such integration be built on an empirical, verifiable foundation to counter the ever present risk of slippage.
In order to propose such a normative framework that was both empirical and verifiable, the Office followed the advice of experts in both privacy and security from the academic and legal communities, civil society, community groups, politicians, the media, law enforcement and surveillance.
This input helped us to develop a reference document entitled A Matter of Trust. We prepared it to help policymakers, practitioners and citizens examine closely the issues raised by integrating privacy protections with new public safety and national security objectives.
I would, in fact, like to warmly thank Arthur Cockfield, who spoke to us this morning, and Karim Benyekhlef, our host for this conference, who guided our project at the consultation and reflection stage. Their support provided invaluable assistance.
The document approaches the challenge from both a conceptual and practical point of view.
It begins with an overview of the context I just described for you. It then examines the main legal concepts that are essential for any discussion about privacy and security.
For example, what does “personal information” mean at a time when security agencies can gather a myriad of information about individuals?
The document also examines the meaning of “reasonable expectation of privacy” in comparison with threats to national security and public safety, as defined by Canadian courts.
On the basis of these fundamental concepts, the document then describes the framework an organization must use if it wishes to include the main privacy issues in the process of designing, creating, implementing and evaluating a public safety program or policy.
I would like to add that it is not simply a normative document. Even the Office uses this four-stage analytical process when it assesses proposed legislation, does an audit in a federal department or carries out an investigation of a government program.
The goal is not to provide the right answers, which in fact cannot be decided in advance. Its aim is rather to ask the right questions, those that would help us all protect both our safety and our privacy.
We believe that this logical progression can aptly and usefully be applied by security agencies, policymakers or others in searching for that elusive equilibrium between public security and privacy rights.
I would now like to quickly explain the four stages:
- making the case;
- setting the stage;
- running the program;
- calibrating the system.
Stage 1: Making the case
Key to the effective integration of privacy into policymaking is to apply it to the initial concept in order to establish its legitimacy. The first crucial step in privacy protection begins when a policy or program is first being conceived.
This stage, which we refer to as “making the case,” vets any proposed initiative against a four-part test.
In order to structure the debate, as I mentioned, beyond emotion and ideology, we have taken a legal approach to our analytical framework. The test established by the Supreme Court of Canada in its 1986 decision in R. v. Oakes is the one used to determine under what circumstances it is reasonable to limit individual rights and freedoms in a free and democratic society.
In this test, one would first consider whether a proposed initiative is truly necessary to achieve the stated purpose, assuming that the purpose corresponds to a pressing societal concern.
If it is, in fact, essential, then the next question is whether the program can be demonstrated to be clearly effective in achieving the stated objective. This demonstration must be supported empirically or, at the very least, by the cogency of its assumptions.
What I mean is that we cannot always know in advance whether a new measure will be effective, but our expectations in that regard must be robust, based on facts—not suppositions—and constantly reassessed. At the very least, public safety authorities must present convincing arguments at the start, which can be verified after the implementation of the measures concerned.
The third question asks whether the intrusion on privacy can be viewed as proportionate to the purported security benefits.
That means that authorities should not collect or use information beyond what is strictly relevant to support the security measure at hand. In this regard also, the arguments must be convincing and ultimately verifiable.
And the final question is: Could there be other means to achieve the same ends, with less impact on privacy?
We should always strive for the most minimal collection or use of information and, as a rule, avoid all privacy-invasive measures. Privacy should be invaded only under exceptional circumstances.
In that respect, the same technological developments that can be used to invade privacy can also be used to protect it: anonymization can be built into identification processes, requests for information can be targeted to the minimum needed, and encryption can strengthen the protection of the information gathered.
We acknowledge that privacy is quite moot in the absence of security.
We also recognize that secrecy and covert methods can be essential to protect public safety and national security. The contents of a threat and risk assessment are typically classified as well.
But while it may not be possible or advantageous to tell the world exactly how you carry out your analysis, the point of the exercise is to ensure it is carried out in as thorough and systematic a manner as possible. Some members of our staff have access to secret documents and can therefore apply this accountability framework.
Stage 2: Setting the stage
Having established a rationale for the collection of personal information, the next step is about “setting the stage.” It’s about planning for the secure handling of collected data—including how it is stored, used, linked and shared with others.
Fortunately, there is no need to reinvent the wheel here. A set of internationally recognized standards already exists.
Referred to as the Fair Information Principles, they guide commercial and government organizations in the development of initiatives where personal information is used.
These ten principles, in fact, serve as the foundation for many countries’ data-protection laws, including the Personal Information Protection and Electronic Documents Act, which covers the federally regulated private sector.
I will not speak to each of them now, but the principles deal with such important concepts as identifying the purposes for the collection of personal information, obtaining consent where appropriate, safeguarding the data, and limiting the collection, use, disclosure and retention of the data.
Stage 3: Running the program
We systematically remind federal departments and agencies that submit privacy impact assessments for their programs that it is much easier to take the necessary time up front to plot out the justification for a program, and to plan an architecture that embeds all necessary safeguards, than to retrofit protections after the fact. We point out that doing so also helps reduce risks to their operations, reputation and public goodwill.
Stage three then elaborates on the internal policies and practices that are necessary to ensure that privacy will actually be respected, once the program is up and running.
Here again, no need to start from scratch. Treasury Board Secretariat, for example, administers a comprehensive suite of policies, guidelines and best practices in this area, and our document provides references to all of them.
Here are some examples of what we’re talking about:
- Designating a Chief Privacy Officer to ensure accountability and senior-level representation when matters related to personal information handling arise. For example, during the Olympics, we recommended that the Integrated Security Unit designate a Chief Privacy Officer and post that officer's contact information on its website. They did so, thus offering a point of accountability for addressing privacy concerns.
- Making sure everybody’s roles and responsibilities for the handling of personal information are crystal clear, and that responsible personnel receive ongoing training in privacy issues. For example, in one of our recent investigations at the Department of Veterans Affairs, we discovered major shortcomings, and even the lack of an effective governance system to protect personal information.
- Documenting privacy policies and practices in plain language, and developing straightforward processes to handle errors or inaccuracies, public complaints, data breaches or other problems. An example of that safeguard is the Office of Reconsideration for the Specified Persons List, also known as the no-fly list.
- Detailing the sharing of personal information in proper agreements. An example of that are the restrictions around information sharing at the National DNA databank, which is used for penal purposes.
- Creating an audit mechanism to oversee such matters as data security and the transfer of information to others. For example, in relation to the Enhanced Drivers Licence, we obtained an assurance from the Canada Border Services Agency that the EDL database would remain in Canada.
- And, finally, some form of public access and reporting to bolster accountability.
On this last point, I often get a look of disbelief from security professionals. But, in fact, there are excellent precedents. For example:
- An online form on Transport Canada’s website lets people review their passenger travel information, even though this data feeds highly sensitive aviation security programs.
- And Public Safety Canada’s annual reports furnish detailed information on the use of electronic surveillance by federal officers—how many interceptions take place and for how many days, whether it was audio or video, and the number of arrests that resulted.
Stage 4: Calibrating the system
The fourth and final stage in the analysis we propose in this document relates to external review, oversight mechanisms, and redress. We refer to this stage as “calibrating the system.” Many inquiries and legislative reviews that have examined Canada’s national security regimes have zeroed in on the same problems, including poor information-handling practices, patchwork accountability mechanisms, and limited oversight.
Review mechanisms should include a systematic process for handling complaints and concerns from the public, as well as a method for appeal and redress when problems arise.
There could also be regular external oversight by Parliamentarians or other specially-mandated bodies. Our own Office ensures compliance through the investigation of complaints and by auditing federal institutions.
Why do we think our approach is useful?
First, because it makes it possible to accommodate public safety imperatives while affirming the lasting aspects of the right to privacy.
Second, because it is founded on the premise that public safety and the protection of privacy are not mutually exclusive. On the contrary, the two objectives coexist in a socio-political context in which both define the democratic nature of that context.
That being said, public safety powers tend to be extraordinary powers, broad and discretionary. In a democratic society, invasive measures by the state must be held in check by effective oversight mechanisms, be they judicial controls on authorization or rigorous administrative checks and balances.
To be effective, oversight needs to be independent, properly resourced, and equipped with powers commensurate with those entrusted to the security program it is overseeing. The oversight mechanism must, moreover, serve as a credible avenue for redress, a place where citizens can turn if they feel their privacy rights have been violated.
In conclusion, I would like to state again what motivated us to develop this model framework for the integration of respect for privacy into public safety measures: it is precisely because such integration must happen, and it must be based on an approach that is both principled and empirical, so as to resist the slippage dictated by ideology, technological advances or practical considerations.
Such integration must take place within a framework sufficiently rigorous to allow the flexibility necessary for implementing safety measures in constant evolution. Implementation must be based on both the specific circumstances of the measures called for and the basic principles of the right to privacy.
With A Matter of Trust, we are trying to structure the debate on the reconciliation of the right to privacy and security imperatives. We hope that the discussion will thus be based on facts and the law, rather than on fear and raw emotion.
I am thrilled at the idea of discussing this approach with such distinguished thinkers as my co-panellists.