ISED’s Bill C-27 + AIDA. Part 4: Calling on Federal MPs For a Necessary Defense of Democratic Process

Bianca Wylie
8 min read · Apr 22, 2023


AIDA is anti-democratic and needs to be shut down

This is the fourth in a four-part series on AIDA. Post one is here, two is here, and three is here.

How Did We Get Here?

Before the last federal election, there were public consultations and conversations that fell under the broad idea of a “Digital Charter” for Canadians. The government sensed public discomfort with the ways tech and data misuse were impacting society and wanted to show leadership, or at least action, on the issue. After the election, the Liberal government’s Digital Charter thinking re-emerged as Bill C-27. The draft bill is fundamentally about strengthening privacy protections. But a new Frankenstein bit also appeared, sloppily glommed onto the proposed law. It’s called the Artificial Intelligence and Data Act, or AIDA for short.

As per Ana Brandusescu’s research on the sector, Canada has sunk one billion dollars of public funds to date into the idea, the economy, and the promise of AI. The government has also signaled its concern about the harms and risks that pursuing the development and use of AI brings. Those harms are real and serious. The federal government’s approach to dealing with them in AIDA is not.

The Ministry of Innovation, Science, and Economic Development, ISED for short, is the primary backer and funder of the AI sector in Canada. It is both cheerleader and fearmonger for industry. Having played both roles for long enough, it has decided that it is also well-positioned to author a law that purports to deal with the most harmful uses of AI. Unsurprisingly, as Erica Ifill has written and talked about often, misuse of AI disproportionately impacts racialized and other communities made marginal by society.

AIDA was not informed by any kind of broad and wide-scale public discussion, as the rest of C-27 was. (Broad and wide-scale by the standards of a federal government-run consultation, at least; the consultation on the Digital Charter left a lot to be improved on too, but we'll file that for another day. It was still far more thorough than what went on for AIDA.) The general public, including the communities most impacted by these technologies, as well as their representatives, have had, and continue to have, minimal access to informed conversations about AI. There is also an argument to be made here that ISED should be familiar enough with these harms that its proposed law shouldn't be empty on approaches to deal with them.

To make matters worse, media has been persistently distorting the issue, as recently researched and reported on by Guillaume Dandurand, Fenwick McKelvey, and Jonathan Roberge. News media generally surfaces technically-framed and expert-led conversations, rather than organizing conversations around the reality of consolidated corporate power and the lack of both private sector and state accountability for AI development and use.

Most of the organizations and people calling on MPs to support AIDA in the House of Commons next week because of the “major potential harms” of AI, such as those that signed a recent letter, are either a) funded by ISED or b) reliant on the notion of AI as a legitimate topic for their professional livelihood. This, of course, does not make their concerns invalid or lacking in substance. But it must also be acknowledged that they represent a very narrow set of vested interests, and that their concerns do not tell anywhere near the whole story of what is going on in the sector.

Last week, the Minister in charge of ISED, François-Philippe Champagne, called an emergency meeting of the Canadian Advisory Council on AI. After this meeting, the letter calling on MPs to support AIDA emerged. The terms of reference for this advisory council go beyond ISED as lead to include Global Affairs Canada, the Treasury Board Secretariat, the Privy Council Office, and other federal departments.

The letter that emerged directly after this emergency meeting of the advisory council reflects a closed-loop chorus of voices. None of the signatories, whether corporate, academic/research, or non-profit, are a replacement for formally and properly engaging the broader public in talking about AI in context: AI in real-life use, in terms that do not feed into the hype, the euphoria, the always near but never quite here promise.

Again, lest anyone have a meltdown about what I'm saying because this is a Canadian sore spot: the voices that have a vested interest in the professionalization and productization of AI, whether they receive state funding (many do) or not, need to be heard. The joke here is that they have been heard, loud and clear, over and over again, the whole time the bill was being written. ISED has been talking to people throughout the process of crafting AIDA. Just not the public it has a mandate to serve. In the absence of these conversations, the volume of those who have the capacity to be present in this moment of law-making draws a different picture for MPs than the one they should be seeing.

Process Problems and Law as a Press Release

It is insidious to democratic process to sign off on this level of insider influence on law-making. Not only does it block the aperture of the conversation from being appropriately broad, it scares off and silences most of the public. Who has the confidence to challenge this set of highly decorated and lauded technical experts? How would they get there?

While ISED and others gesture at public opinion polls about AI, the public is not getting any kind of informed and balanced understanding of the topic or of the range of ways that harms related to AI could be addressed. Rather than fearmonger about the technology as though it has fallen from the sky and can't be slowed down, we should be directing people squarely at the core issues: major concentrations of corporate power and the erosion of democracy.

We need to be looking at the actors and dynamics behind the tech. There is nothing to be solved in the code or the maths that isn't downstream of the fact that we've got major industrial and military capture driving us. It is directing us to think about tech in ways that serve industry, and it actively steers us away from considering the implications of corporate and state power acting hand in glove.

As things stand, we’ve got a tiny number of people that both say we have a big problem with AI and then call on the government to pass a bill that says nothing substantive about how to address said harms. Michael Geist lays this out in his latest post on C-27. There is a basic logic problem in the argument being made for AIDA.

To make this even more inane, the government itself admits the law is basically a placeholder without content, as though this is some sort of agile and innovative approach to law-making. It says the law will be properly backfilled with specific regulations after it is passed. This is a clear order-of-operations problem.

Why are industry and ISED so excited to urgently pass a law that says nothing? Because then we, the publics, will have ceded the rhetorical, narrative, and political ground upon which future conversations about AI will take place. As an organizing logic. Capture.

If we’re going to write a law, we should do it properly. We should not be passing a law that, as Teresa Scassa writes: “really lends itself to a game of statutory Madlibs….some of the most important parts of the bill are effectively left blank — either the Minister or the Governor-in-Council is tasked in the Bill with filling out the details in regulations.”

As Jim Balsillie wrote: “The proposed law fails to provide even the shell of a framework for responsible artificial-intelligence regulation and oversight. All the regulations will be determined at some future date and decisions will come from the Minister of Innovation, Science and Economic Development (ISED) or the minister’s designate. Digital-governance experts have pointed out that the same person drafting laws and regulations as well as providing oversight and enforcement runs completely counter to the OECD’s guide on AI regulations.”

Where to Go From Here?

Next week, MPs should move to have AIDA removed from Bill C-27 wholesale. From there, we can set up a new process to explore if/how to address AI from a regulatory perspective. In a recent op-ed, Andrew Clement laid out some proposed approaches to start this process again. Likewise, a recent comprehensive report from Christelle Tessono, Yuan Stevens, Momin M. Malik, Sonja Solomun, Supriya Dwivedi and Sam Andrey explores a range of approaches to include in this process.

This process should also consider taking any law-making on the matter out of ISED's sole remit, which would remove it from the closed circuit in which it currently resides. ISED's mandate is economic development; the ministry and industry are functionally the same thing.

A new process must also be expansive enough to ask questions about other ways to address the harms of AI. One that would open the door for other parts of government and pre-existing laws/regulations to be brought to bear on the issue. And one that also allows us to ask whether it makes sense to regulate AI as a “thing” at all.

Legislation as hype and legislation as performance are not things ISED should be doing. Getting hung up on and challenging the specifics (if you can call them that) of AIDA is a trap, because one of the things we need to do is to consider what it means to reject the frame wholesale.

It is arguable that formalizing AI via law-making is not an approach that will age well, and that it adds complexity to an already shaky foundational approach to our management and treatment of data and information. When PIPEDA was written, implications for human rights were subsumed under the need for global trade. All of these issues should be brought back to the table for ongoing and future consideration.

If MPs want us to take their interests in addressing the harms of technology seriously, they'll move to get rid of AIDA and the flawed democratic process used to create it. If MPs of all parties are comfortable being quiet on this kind of democratic process capture, they all deserve a hard look from us.

Process capture is insidious. It destroys public engagement. What is being done with AIDA is corrosive. On a personal level, I can't in good faith ask people to spend their time on this matter when it's so clearly corrupt. That's a deeply bad situation for our democratic landscape.

If MPs want to be serious about their efforts related to technology and society, they should know we don't benefit by scaffolding ever more complexity into inaccessible systems of law-making and access to justice.

There are old problems we need to address, and we should do so thoughtfully, not entrench where we are ever further because we're in a rush. As ever, the problems with technology inevitably lead back to people. At this 11th hour, it's up to the people we elected to show that they too think it matters to protect and value our law-making processes. Nothing thoughtful is born of false urgency.

Image: black and white fabric texture, by Abby Lanes
