ISED’s Bill C-27 + AIDA. Part 1: Tech, Human Rights, and the Year 2000
What Are We Actually Worried About Here?
This is a series of posts on the Artificial Intelligence and Data Act (AIDA) portion of Bill C-27. To begin, we’re going to go back in time a bit to put some longer context around the current government effort to legislate AI in Canada. There is already some great background and analysis out there on this; I will be putting together a little library shortly. See this from Erica Ifill on harms related to C-27’s approach, this from Mardi Witzel on governance, and Teresa Scassa’s full series of posts on a range of issues with the bill, starting here. This list is not exhaustive; if you’ve got stuff, please pass it along.
Mandatory Canadian Legislative History on Tech
Months ago, I finally had the good luck to be in a conversation with Val Steeves on the topic of automation and harm in the workplace. She shared reminders of the history of human rights and industry in Canada, at one point referencing Fordism, which is an interesting aside in itself. As a result, we talked a bit more, and Val shared a chapter of her writing with me, from 2015. It’s called Now You See Me — Privacy, Technology, and Autonomy in the Digital Age. It’s 30 pages, and I strongly recommend you read it in full. What I’m going to do below is pull out a few key bits, so that we understand how privacy, human rights, industry, and legislative history factor into the moment we are in today.
I am neither a privacy nor a human rights scholar, but there were decisions made in these realms in the late 1990s and in 2000 that help a lot in understanding, and ideally challenging, where we are today. Bill C-27 and how it relates to our privacy regimes, both the Privacy Act and PIPEDA, matters, and we’ll pick that up in a future post. For now, here are a few key pieces of background. This is important not only for how to manage the proposed AIDA text; it is also instructive if you’re wondering what we could and should be doing instead, or perhaps in concert with C-27.
The crux of this series of posts is to make clear that we should be stopping, or at least slowing, any legislation of AI. If there is a general belief that this legislation is helpful (I do not agree), then on the basic principles of doing it right and doing it democratically, it should be withdrawn for now and put through a proper consultative process. I believe that slowing down to take the time for that process may end up revealing more of the reasons the approach is fundamentally flawed.
Below is an excerpt from the aforementioned chapter, pages 19–23. All of the below, until I come back in, is Val Steeves’s writing. Emphasis is mine.
Back to the Beginning: Protecting Privacy as a Human Right
“Our brief review of the history of privacy protection indicates that privacy rights as set out in the UDHR were expressly linked to human dignity and autonomy. The UDHR also acknowledged the important role that privacy plays in allowing us to enjoy other human rights; it is hard to exercise our freedom of speech or association when everything we say and do is routinely and constantly monitored and then shared with governments and corporations.
Ironically, when the government first began talking about the need for PIPEDA in the late 1990s, a parallel process was initiated by the House of Commons Standing Committee on Human Rights and the Status of Persons with Disabilities (HURAD) that expressed privacy protection firmly in the human rights language of the UDHR.
HURAD conducted hearings and public consultations to explore legislative options that could account for the effect of new technologies; it concluded that, although data protection legislation was “clearly a critical part of the spectrum of privacy interest, in a world of increasingly intrusive technologies, it is by no means the only game in town.” 35
HURAD argued that truly effective privacy protection can be sustained only if the value of privacy as a human right is given greater weight than the bureaucratic efficiencies and economic benefits of an unconstrained flow of personal information. To do this, it recommended that the government enact a privacy rights charter with quasi-constitutional status and require all federal laws to respect everyone’s “physical, bodily and psychological integrity and privacy; privacy of personal information; freedom from surveillance; privacy of personal communications; and privacy of personal space.” 36
The proposed charter was intended to be “umbrella legislation” that would help guide the development and application of all federal laws, including data protection legislation. By giving the charter precedence over the latter, HURAD hoped to “capture the full breadth of privacy, like a wide angle lens taking in a panoramic view, as opposed to the data protection framework … that focuses, like a close-up lens, tightly on informational privacy rights.” 37
In making its recommendations, HURAD drew expressly on the early work of the United Nations and the Council of Europe:
“Ultimately, [the privacy charter] is about taking privacy seriously as a human right. To do that, we must invoke recent history and remind ourselves why the right to privacy was entrenched in the UDHR and subsequent human rights instruments. Otherwise, we may be seduced into believing that privacy is simply a consumer rights issue that can be fixed by a few codes of conduct and some new, privacy enhancing technologies.” 38
A Privacy Rights Charter modelled on HURAD’s recommendations was introduced in the Senate of Canada by Senator Sheila Finestone in 2000. However, the Charter died on the order table after the government refused to support it, because of fears that legal recognition of privacy as a human right would place many of the government’s information practices in jeopardy.
The Department of Justice’s senior general counsel for public law policy, Elizabeth Sanderson, told the Senate Standing Committee on Social Affairs, Science, and Technology at the time that, although the government was “sympathetic” to the Charter, legally protecting privacy as a human right “would create a good deal of uncertainty and quite possibly may pose obstacles to many government programs and policy”:
Let me give you a concrete example where the [Charter] could affect departmental legislation and operations. Citizenship and Immigration Canada (CIC) collects a great deal of personal information relating to immigration applications and to the enforcement of deportation orders and immigration offences. [The Charter] would potentially require CIC to defend its information gathering and sharing activities in court … In conclusion, while [the Charter] can be praised as intending to enhance the privacy of Canadians, the devil may be in the detail. Changes could come at the expense of certainty, public safety, operational efficiency and fiscal responsibility. 39
As Ursula Franklin notes, the choice of language is particularly important:
When human rights informs the language in which the discussion among you and the general public and Parliament takes place, you speak then, rightfully, about citizens and all that comes with that. On the other hand, if the emphasis is primarily on the protection of data, one does look at a market model, one does look at an economic model, and all the things you’ve heard about the new economy. Then it is the language of the market that informs your discourse.
That’s it for excerpts for today. I’m back now:
Human Rights Subsumed to Industry + State Tech Practice Is Not New. That Does Not Make It Acceptable, Nor Something to Continue With.
As Mardi’s piece focuses on, this legislation is coming through ISED, the branch of government whose mandate is to prioritize and support economic development. What we need to understand here is that this continues to frame the conversation in the wrong way.
I do not believe that a privacy frame gets at much of it either, but that does not diminish the importance of this piece of history or its relationship to where we are today. The foundational error that informs both data protection and AI legislation is the idea that human rights should be subsumed to commercial interests and state efficiencies. Fast forward 20+ years, and industry and the state are getting blended into one another through the use of private technologies in public service delivery; that is another element of this conversation that requires expansion.
The point of this first post is not yet to make an argument; it is to make sure we understand the direction we’re travelling in and the legislative history of that direction, and to consider both as we think about moving forward from here.