Ottawa Centre for Research and Innovation

GOL - OCRI Series

June 4, 2003
Ottawa, Ontario

Julien Delisle
Executive Director

(Check against delivery)


I want to talk today about the meaning and importance of privacy, about the role of the Office of the Privacy Commissioner, and about building privacy into information systems.

Privacy is a fundamental human right. It's a critical element of a free society-as Justice La Forest of the Supreme Court once said, it's "at the heart of liberty in a modern state."

There's no real freedom without privacy. In fact, some people call privacy "the right from which all others flow." A private sphere of thought and action, one that's your business and no one else's, is fundamental to freedom of thought, freedom of conscience, freedom of speech, and so on.

Now, what do I mean by privacy?

It's often called "the right to be let alone," and that's a good definition. It reflects people's visceral reaction to being monitored or scrutinized or bothered. That's what "invasion of privacy" means to many people.

But there's another kind of privacy invasion that's less obvious, and that's the collection, use and disclosure of information about us without our knowledge or consent.

The Privacy Commissioner of Canada, George Radwanski, defines privacy as the right to control access to one's person and information about oneself. This broader, informational concept of privacy is useful for understanding how privacy is threatened.

Until recently, privacy was protected by default. When information about us was in scattered paper records, it was a lot of trouble to compile a detailed dossier on any individual.

Unless you were famous or important, or notorious, your privacy was pretty safe.

But the barriers of time, distance and cost that once guarded our privacy are gone.

Now a stranger at a computer can compile a detailed dossier on our whole life in minutes.

One way we've responded to this in Canada is to legislate to protect privacy.

The Privacy Commissioner and the staff of his Office oversee two pieces of legislation: the Privacy Act, which applies to the federal public sector, and the Personal Information Protection and Electronic Documents Act-the PIPED Act, as we call it-which applies to the private sector.

Right now the PIPED Act applies to federal works, undertakings, and businesses - chiefly banks and the telecommunications, broadcasting and transportation sectors - as well as to the sale of personal information across provincial or national borders.

By 2004, where provinces have not passed similar legislation, it will apply to all commercial activities in Canada. Eventually, all of the private sector in Canada will be required to comply either with the PIPED Act or with a similar provincial law.

So, in a short time, information systems throughout the country, in the public and private sectors, are going to have to be built to respect privacy principles that underlie Canada's laws.

And what are those privacy principles? In a nutshell:

Personal information should be collected directly from individuals wherever possible and appropriate.

Individuals should be able to control the use and disclosure of their information by exercising or withholding consent.

Personal information collected for one purpose should not be available for use for unrelated purposes.

Information should be kept only as long as it's needed for the purpose the individual consented to.

Third-party access to personal information should be limited to those with a need to know, for authorized purposes.

Most importantly, collection, use, and disclosure of personal information should be for purposes that a reasonable person would consider appropriate.

These principles should be the starting-point for any system that uses personal information. Protecting the privacy of your clients and employees is not just about complying with the letter of the law. It's about building a culture of privacy in your organization, and building privacy into your systems at the outset.
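
To make that concrete, here is a minimal sketch of what purpose limitation and consent might look like at the level of code. The record structure, field names, and error type are illustrative assumptions, not a prescribed design:

```python
from dataclasses import dataclass, field


class AccessDenied(Exception):
    """Raised when a use of personal information is not authorized."""


@dataclass
class PersonalRecord:
    subject: str
    data: dict                      # the personal information itself
    collected_for: str              # the purpose stated at collection
    consented_uses: set = field(default_factory=set)


def use_record(record: PersonalRecord, purpose: str) -> dict:
    """Release the data only for the purpose it was collected for,
    or for a further use the individual has consented to."""
    if purpose != record.collected_for and purpose not in record.consented_uses:
        raise AccessDenied(f"'{purpose}' is not a consented purpose "
                           f"for {record.subject}'s information")
    return record.data


# Example: information collected for payroll can't silently
# feed an unrelated purpose like marketing.
rec = PersonalRecord("J. Smith", {"sin": "000-000-000"}, "payroll")
use_record(rec, "payroll")          # permitted
# use_record(rec, "marketing")      # raises AccessDenied
```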

You may have noticed in my list of privacy principles that I didn't say anything about security or confidentiality. That's because, although the terms tend to be used interchangeably with privacy, they're very different.

Privacy is our fundamental right to control information about ourselves-including the collection, use, and disclosure of that information.

Confidentiality is your obligation to protect personal information in your care, to maintain its secrecy and not misuse or wrongfully disclose it.

Security is the process of assessing and countering threats and risks to information.

If privacy is not respected, ensuring confidentiality and security is not enough. If you collect, use, or disclose information about someone, without their consent, you've violated their privacy. Ensuring confidentiality and security of the information doesn't change that.

A good example to illustrate the differences between privacy, confidentiality and security is what was known as the HRDC longitudinal file case. Canada's largest ministry, the Department of Human Resources Development, or HRDC, developed a longitudinal labour force file for research, evaluation, and analysis to support departmental programs. It contained records on over 30 million individuals, drawn from widely separated internal and external files, such as welfare and income tax records. The profile on any given individual could contain as many as 2,000 data elements.

This huge database was relatively invisible to the public. When its existence was made public, more than 70,000 Canadians demanded access to their personal information contained in it. As a result of the public outcry, the database was dismantled.

The security and confidentiality of this database were impeccable. HRDC had in place strict protocols for access to the database - access was strictly limited to only a very few public servants and researchers - and no information was ever improperly disclosed from the database.

But Canadians were still concerned that their privacy had been violated. They were concerned about the vast collection of personal information without a specific defined purpose. They were concerned that information had never been purged from the database. They were concerned that the state had unduly pried into their private lives, and that they had been kept in the dark about it.

Since GOL is the main focus of your deliberations here today, let me give you some observations about that initiative.

Government On-Line proposes the elimination of walls between agencies and programs, within government and across levels of government.

That sounds progressive and efficient.

But those walls are also walls between collections of personal information about individuals and their interactions with government.

That information has been collected for specific purposes. When it's held in separate databases or "silos" specifically for those purposes, it's segregated.

When the walls of those silos come down, two things can happen. One is that someone with a need to know only one piece of information has access to lots more. The people processing my application for a CPP disability pension, for example, have a need to know my personal health information. No other government official needs to, or should.

That's one problem. The other is that information can be combined, to reveal new information and create profiles of individuals.

Profiling is the hallmark of surveillance societies. Dossiers on individuals, tracking their activities and their interaction with government, have no place in an open, democratic society.

Sometimes there's justification for matching personal information from different sources. Both the Privacy Act and the PIPED Act allow it in certain circumstances. But those circumstances are strictly limited, and they have to be justified.

Separate databases are a built-in protection against unrelated uses and against profiling. The advantages of this can be lost when databases are merged-unless you take steps to build in protections.
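
One protection you can build in, for example, is to keep identifiers segregated even when the data is merged: give each program its own pseudonym for a person, derived with a program-specific key, so records can't be linked across programs without explicit authorization. A minimal sketch of that idea, with invented program names and keys:

```python
import hashlib
import hmac

# Each program derives pseudonyms with its own secret key (illustrative
# values only -- real keys would come from a key-management system).
PROGRAM_KEYS = {
    "pension": b"pension-program-secret",
    "tax": b"tax-program-secret",
}


def pseudonym(program: str, citizen_id: str) -> str:
    """Derive a per-program pseudonym. The same person gets a different,
    unlinkable identifier in each program's records."""
    key = PROGRAM_KEYS[program]
    return hmac.new(key, citizen_id.encode(), hashlib.sha256).hexdigest()


# The same individual appears under unrelated identifiers, so merging
# the two tables doesn't automatically produce a combined profile.
print(pseudonym("pension", "SIN-123456789"))
print(pseudonym("tax", "SIN-123456789"))
```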

Another of our concerns is client authentication.

Governments often need to identify who they're dealing with, particularly when people are accessing benefits and programs electronically. So they've been looking at "e-identities" and smart cards.

Authentication mechanisms are fraught with privacy problems.

Smart cards, for example, have the capacity to store or access large amounts of personal information, relating to different programs and services. If they're designed right, they can protect privacy.

But a single card that holds all the information about our interactions with government would accelerate the centralization and sharing of personal information, and raise the problem of combined databases to a whole new level.
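
One privacy-friendly design, for instance, is a card partitioned into compartments, where each program can open only its own compartment and nothing on the card allows a global read. A rough sketch of the idea; the programs, keys, and data are invented for illustration:

```python
import hmac


class CompartmentedCard:
    """A toy model of a smart card whose data is partitioned per program.
    There is deliberately no method that reads the whole card."""

    def __init__(self):
        self._compartments = {}  # program -> (access_key, data)

    def provision(self, program: str, access_key: bytes, data: dict):
        self._compartments[program] = (access_key, data)

    def read(self, program: str, access_key: bytes) -> dict:
        stored_key, data = self._compartments[program]
        # Constant-time comparison; a wrong key opens nothing.
        if not hmac.compare_digest(stored_key, access_key):
            raise PermissionError(f"not authorized for '{program}' data")
        return data


card = CompartmentedCard()
card.provision("health", b"health-key", {"plan": "A"})
card.provision("licence", b"licence-key", {"class": "G"})

card.read("health", b"health-key")       # the health program sees its data
# card.read("health", b"licence-key")    # raises PermissionError
```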

So these are a couple of our concerns. How would you address them?

I can't give you a prescriptive answer. That's your area of expertise, not mine. I will simply say that you need to build privacy into your systems at the outset. It can't be an afterthought. You can't say, we'll cross that bridge when we come to it, or if someone complains. Privacy principles have to be as fundamental to the design of your systems as hardware and software.

I want to say a few words now specifically concerning information about employees.

This is a critical privacy issue, since the workplace is where we spend most of our waking days and where, to a large degree, our lives are defined.

Some people still believe that employees have no rights to privacy in the workplace. They argue that if you're on the employer's time and premises, you have no right to expect privacy.

Our view is that employees don't hang up their rights along with their coats when they step through the door of the office or factory.

We're not alone on this; judges and arbitrators recognize that employees have rights to privacy-for example, in their personal communications, their lockers and desk drawers, and their personal effects.

It's also a view that's supported by the Privacy Act and the Personal Information Protection and Electronic Documents Act, both of which apply to employment.

Of course, employers have to manage. An employer has obvious information needs-a social insurance number, for instance, to meet the requirements of the tax and social benefits systems. To employ someone, an organization has to get information about that person's education and work experience, and verify it. And an employer obviously has to collect and use information about work performance, attendance, and potential for advancement.

But the employer doesn't have the same obvious need for other sorts of personal information-an employee's religion, for instance, or sexual orientation, or personal financial circumstances.

Now, what's to stop an employer from collecting information like this? What if people who object to it are told to choose between having their privacy and having a job?

That's where a fundamental privacy principle comes in. In the PIPED Act, it's known as "the reasonable person test," the principle that organizations may collect, use, or disclose personal information "for purposes that a reasonable person would consider appropriate in the circumstances."

This means that an employer can't require something unreasonable as a condition of employment. The request that the employee give up some of his or her privacy has to be appropriate under the circumstances-in other words, it has to be justified.

A privacy-friendly information system begins with a focus on who really needs to know what. It requires a clear picture of what personal information is collected and what's done with it. It requires employers to be honest with themselves about what they need to know-and sometimes to restrain their curiosity.

If that's the basic picture of a privacy-friendly information system, then let me just touch on how you can get there.

A good starting point is to assess your existing practices and systems with the reasonable person test-asking yourself whether personal information is being collected, used, or disclosed for purposes that a reasonable person would consider appropriate in the circumstances.

Your responsibility for personal information extends "from end to end." From the moment personal information first touches your hands, to the moment it is properly disposed of, it is your responsibility to respect the person's privacy. It doesn't just start when you secure the information in a database, or end when you pass the information on to another party.
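
That end-to-end responsibility includes disposal. Here is a minimal sketch of retention enforcement, assuming an invented record structure and retention periods chosen purely for illustration:

```python
from dataclasses import dataclass
from datetime import date, timedelta


@dataclass
class HeldInformation:
    subject: str
    purpose: str
    collected_on: date
    retention: timedelta   # how long the purpose justifies keeping it


def purge_expired(holdings: list, today: date) -> list:
    """Dispose of information whose retention period has lapsed;
    end-to-end responsibility includes proper disposal."""
    kept = []
    for h in holdings:
        if today - h.collected_on < h.retention:
            kept.append(h)
        else:
            # In a real system: secure deletion, plus an audit record
            # of the disposal itself.
            print(f"disposing of {h.subject}'s {h.purpose} record")
    return kept


holdings = [
    HeldInformation("J. Smith", "benefits claim", date(2001, 5, 1),
                    timedelta(days=730)),
]
holdings = purge_expired(holdings, date(2003, 6, 4))
```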

For human resource information systems, you need to use the guideline of "who needs to know what." Employees should be able to reveal information about themselves where they need to, as much as they need to-and no more.

Some people in your organization need access to some specific elements of employees' personal information; other people need access to other elements. A supervisor needs to know certain information about an employee's performance, for example. The human resources officer responsible for pay and benefits doesn't need to know any of that information. Conversely, the HR officer may need to know, for pay and benefits purposes, how many dependants the employee has, or his or her Social Insurance Number. The supervisor has no need to know any of that information.

It has always been good practice, and a challenge, to segregate this kind of information in an organization. It's better practice, and more of a challenge, in an organization using modern digital information systems.
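
In a digital HR system, that segregation can be enforced as field-level views keyed to roles. A minimal sketch, with the role names and fields assumed for illustration rather than prescribed:

```python
# Which fields each role is permitted to see -- the "who needs to
# know what" guideline, expressed as data.
ROLE_FIELDS = {
    "supervisor": {"name", "performance_rating", "attendance"},
    "pay_and_benefits": {"name", "sin", "dependants", "salary"},
}


def view(employee_record: dict, role: str) -> dict:
    """Return only the fields this role needs to know."""
    allowed = ROLE_FIELDS.get(role, set())
    return {k: v for k, v in employee_record.items() if k in allowed}


record = {
    "name": "J. Smith",
    "performance_rating": "exceeds",
    "attendance": "regular",
    "sin": "000-000-000",
    "dependants": 2,
    "salary": 58000,
}

print(view(record, "supervisor"))        # no SIN, no salary
print(view(record, "pay_and_benefits"))  # no performance data
```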

Above all, it's crucial to remember the importance of knowledge and consent: employees should know what information you are collecting about them, and how you are using and disclosing it. And they should have the opportunity to exercise control over that collection, use, and disclosure, through the power of consent.

So these are some of the considerations you need to look at in designing information systems for handling human resource information.

I've said a number of times already that the key to making these systems privacy-friendly is to build privacy in at the outset.

The way to ensure that privacy is built in, when you're designing a system, is to do a Privacy Impact Assessment, or PIA.

This is something that the federal government has made mandatory for all its new or redesigned information systems, including Government On-Line systems. A similar kind of process, on a smaller scale, can be adopted by any organization.

What it means, simply, is analyzing the likely impacts on privacy of a project, practice, or system. It involves looking at all the personal information practices that go into the system, such as what kinds of information are collected, how consent is obtained, how and for how long the information is kept, how it's used, and to whom it's disclosed.

It means looking at things like the purposes for collection, use, and disclosure of personal information, the authority you have for them, what kinds of linkages there will be between this and other information, and how individuals will be able to exercise their right of access to their information. And of course it requires assessing how the system complies with privacy legislation and principles.

Now here are some of the kinds of questions you should ask yourselves:

Will the system limit access on a need-to-know basis?

Will it be possible to make inferences about an individual, based on matching disparate bits of information?

Will the system effectively amount to surveillance? Will it track clients' or employees' activities, or at least facilitate tracking? If so, is that justified?

Besides questions about possible violations of privacy, there are questions about the resources to deal with them-such as whether you have an accountability structure in place. Is there somewhere in the organization that individuals can go if they have a privacy problem? Can you give people timely access to their own personal information? Do you have someone who understands privacy, who can advise them? And can that person swing some weight in the organization, and ensure respect for privacy?
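
If it helps, questions like these can be carried straight into the design process as a living checklist that the project team answers and revisits. A sketch of that idea; the structure is an assumption on my part, not the format of the federal policy:

```python
from dataclasses import dataclass


@dataclass
class PIAItem:
    question: str
    answer: str = "unanswered"   # "yes" / "no" / "unanswered"
    notes: str = ""


# Seed the assessment with the kinds of questions above.
checklist = [
    PIAItem("Does the system limit access on a need-to-know basis?"),
    PIAItem("Can disparate bits of information be matched to make "
            "inferences about an individual?"),
    PIAItem("Does the system track, or facilitate tracking of, "
            "clients' or employees' activities?"),
    PIAItem("Is there an accountability structure for privacy complaints?"),
    PIAItem("Can individuals get timely access to their own information?"),
]


def open_items(items):
    """Flag anything unanswered so it can't be quietly forgotten."""
    return [i.question for i in items if i.answer == "unanswered"]


print(open_items(checklist))
```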

Doing this kind of assessment also enables you to sensitize the people in your organization to privacy issues. It can help you to create an organizational culture of respect for privacy, where everyone supports and understands privacy as part of the corporate goal.

And it's a very useful tool for monitoring your information system over time. You've identified privacy risks, you can see whether your means of addressing them are working, and you're alert and attuned for new, unforeseen ones coming up.

I might add that one of the interesting things we've noted, in the early stages of the implementation of the Privacy Impact Assessment Policy, is that this process is allowing project designers and managers to catch more than the impact on privacy. It's given them a valuable second look at their proposed projects-and sometimes that's sending them back to the drawing board, as they realize that they've overlooked something crucial. In short, this is a useful management tool for more reasons than just privacy.

But even with this advantage, some people will argue that PIAs will get in the way of what systems designers are supposed to seek-and that's efficiency.

So am I selling inefficiency? No, I'm not.

"Efficiency" means choosing the best use of resources to achieve defined goals.

What's critical is how we define the goals.

The fundamental human right of privacy has to be as much a part of your goal as the bottom line of the ledger.

We at the Office of the Privacy Commissioner believe that it's possible to run a business or a country, and provide services efficiently and conveniently, without sacrificing privacy.

Protecting privacy is part of what defines a successful information systems design.

So build it into your system. Respect that fundamental human right of privacy, and you'll win the trust and respect of Canadians, as clients and as employees.
