ISED’s Bill C-27 + AIDA. Part 3: Contravention by Compliance

Bianca Wylie
8 min read · Oct 30, 2022


Five Major Problems with AIDA

This is the third in a four-part series of posts on the Artificial Intelligence and Data Act that is part of the Canadian federal Bill C-27. To start with what AIDA is, in very basic terms, see the second post. And for some history and context on the human rights issues with AIDA, see the first post, with ongoing gratitude to Val Steeves for her work on this topic.

Also, here is a public mark-up copy of AIDA; it's about 18 pages.

Five major problems with AIDA are as follows: the mixing of human rights protection with market framing; the lack of public process informing the bill; ISED being the one in charge of this issue, and the resulting incoherence with other federal efforts around AI; the incompleteness of the bill; and the one I'll focus on most here and start with: the compliance construct and industry that AIDA sets in motion.

The fourth and final post will focus a bit more on how to engage on these issues (and others).

Contravention By Compliance

When you read AIDA, you understand that, because this is a bill written primarily for the private sector, the government is trying to set rules about how artificial intelligence is fundamentally understood by the state and the public. It moves the conversation away from dealing with harms in practice and access to our rights, and toward the idea of disclosure as a regulatory success unto itself.

Because we have seen the government's efforts to prop up the tech industry over time, it should also make sense that it is eager to underwrite a new industry of audit and compliance professionals to support these new rules. What the government is doing with AIDA goes beyond mere theatre, the pretence that this regulatory activity is somehow beneficial: it actively creates a framework for a new sub-industry to thrive while the government evades its responsibility to the public.

To make matters more confusing, a small percentage of the firms that take on algorithmic auditing and compliance will do genuinely good work. When I read this recent interview with Cathy O’Neil about shame, the focus of her new book, I was reminded that she is one such professional engaged in this space.

But for every O’Neil, there are at least twenty professionals who will not engage with the same depth and intent to uphold human rights in their auditing practices. There will be clients and AI creators that actively avoid approaching these processes in a way that genuinely scrutinizes their activity. And AIDA creates plenty of opportunities for anyone to hold their hands up and say “we didn’t really know this was a problem.”

O’Neil’s thoughts on this industry, from the interview, are quoted below:

After Weapons was published you started ORCA, an algorithmic auditing company. What does the company’s work entail?

Algorithmic auditing, at least at my company, is where we ask the question “For whom does this algorithmic system fail?” That could be older applicants in the context of a hiring algorithm, or obese folks when it comes to life insurance policies, or Black borrowers in the context of student loans. We have to define the outcomes that we’re concerned about, the stakeholders that might be harmed, and the notion of what it means to be fair. [We also need to define] the thresholds that determine when an algorithm has crossed the line.

So can there ever be a “good” algorithm?

It depends on the context. For hiring, I’m optimistic, but if we don’t do a good job defining the outcomes of interest, the stakeholders who might be harmed, and — most crucially — the notion of fairness as well as the thresholds, then we could end up with really meaningless and gameable rules that produce very problematic algorithmic hiring systems. In the context of, say, the justice system, the messiness of crime data is just too big a problem to overcome — not to mention the complete lack of agreement on what constitutes a “successful” prison stay.
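To make the idea of outcomes, stakeholders, and thresholds a bit more concrete, here is a minimal, hypothetical sketch in Python of the kind of disparity check an audit like this might run against a hiring model's decisions. The group names, the toy data, and the 0.8 threshold are all illustrative assumptions on my part; nothing here is drawn from AIDA or from O’Neil’s firm.

# A minimal sketch of one check an algorithmic audit might run:
# measure how often a hiring model selects candidates from each group,
# and flag any group whose selection rate falls well below the best group's.
# Group names, data, and the 0.8 threshold are illustrative assumptions only.

from collections import defaultdict

def selection_rates(decisions, groups):
    """Return the fraction of positive decisions per group."""
    totals, positives = defaultdict(int), defaultdict(int)
    for decision, group in zip(decisions, groups):
        totals[group] += 1
        positives[group] += int(decision)
    return {g: positives[g] / totals[g] for g in totals}

def flag_disparities(decisions, groups, threshold=0.8):
    """Flag groups whose selection rate is below `threshold` times
    the best-performing group's rate (a crossed-the-line rule)."""
    rates = selection_rates(decisions, groups)
    best = max(rates.values())
    return {g: r for g, r in rates.items() if r < threshold * best}

# Example: a hypothetical hiring model's yes/no decisions by age band.
decisions = [1, 1, 0, 1, 0, 0, 1, 0, 0, 0]
groups = ["under_40"] * 5 + ["over_40"] * 5
print(flag_disparities(decisions, groups))  # prints {'over_40': 0.2}

The point of the sketch is only that every one of those choices, which outcomes to count, which groups to compare, and where to set the threshold, is a judgment call made by whoever runs the audit, which is exactly where gaming and box-ticking enter.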

Compliance and the third-party consulting industry have not fundamentally addressed access-to-justice issues related to rights and harms. The construct and incentive structure do not make sense when we consider the industry as a whole. By focusing much of AIDA on a disclosure and audit frame, ISED sets up ever more convoluted ways for people to act as though harm is being managed, via complexity, obfuscation, and the birth of a new market of service providers.

Human Rights and AIDA

AIDA gestures at notions of harm that can be caused by artificial intelligence, but ISED (Innovation, Science and Economic Development Canada) is not the appropriate venue for these conversations, given its commercial mandate. This is not to say ISED has no role in this conversation, but it should not be tasked with the implementation of something that falls so far outside its basic remit. The crux of this argument sits in older decisions, made more than 20 years ago, to complicate human rights protection with market expansion and harmonization principles, as well as state use of technology.

Lack of Public Process around AIDA

The bulk of C-27 is grounded in work informed by the public consultations ISED held on the Digital Charter back in 2018. But AIDA, and its proposed approach, were not tabled in that conversation.

There have been smaller-scale and small-group conversations held about AIDA, which I am seeking more information about, but there has not been proper public discourse on the approach and concept. This is significant because the nature of this approach would fall apart under wide-scale public scrutiny that explores the unaddressed definitions of harm, the unclear definitions of what AI even is, the lack of access-to-justice approaches available to the public as things stand now, and the way these systems are used in actual practice to date.

By avoiding the public on this matter, ISED has created a bill that it is not able to clearly describe, educate the public on, or ask for feedback about. Without proper public process around the bill, it creates an ongoing situation where the issue is never properly dealt with, but rather constantly kicked into the future. ISED is well resourced; it has a mandate to be clear about why it wrote AIDA, who wrote it, and how it will work in implementation. It is failing on these counts. This reason alone is enough to refuse AIDA as it stands, remove it from C-27, and reconsider the approach entirely, with a focus beyond this bill: one that makes the conversation about tech regulation coherent.

The government should have set the table not only to properly explain this bill, but also to put it in conversation with other bills that touch on the use of artificial intelligence and machine learning. Without looking at this coherently, it splits and fractures the reality of how these systems work in practice. Fundamental and coherent explanations of what the government is doing and why, in the tech regulation space, may not be the norm at this point, but we should never accept the exclusive and anti-democratic processes that the government is growing and expanding with AIDA. We are simply too far into conversations about tech and regulation to accept this effort as an acceptable way to write law.

ISED’s Role — Rule Writer and Rule Oversight

There are governance issues in play that have been well summarized by Mardi Witzel in this piece from August. They are also explored by Jim Balsillie in this piece from last week. The crux of the issue relates back to the framing of AIDA as a circular oversight construct.

From Witzel’s piece: “The most curious aspect of the proposed law is also the most foundational thing about it: the overarching governance arrangement. A single ministry, ISED, is proposed as the de facto regulator for AI in terms of law and policy making and administration and enforcement.”

and

“In the case of the AIDA, however, ISED drafts, interprets and enforces the legislation. Further, the AIDA states that “the Minister may designate a senior official of the department over which the Minister presides to be called the Artificial Intelligence and Data Commissioner, whose role is to assist the Minister in the administration and enforcement of this Part.” There is no independence from ISED or separation of roles.”

To make matters more confusing (and I do struggle with this, as someone who believes it is helpful to have different privacy laws for public-sector and private-sector use, as we do in Canada), ISED has failed to put this work in clear conversation with other federal efforts on tech legislation. Teresa Scassa’s first piece in her series of posts on AIDA considers some of these gaps, as well as the jurisdictional issues at play.

The Incompleteness of AIDA

As Scassa has written in her series of posts on AIDA, there are definition problems related to what “high-impact” systems mean in practice. She also has a post that explores the narrow scope and lack of clarity around the terms “harm” and “biased output”.

These terms would be important to get clarity on, given that they are what AIDA governs, but this too has been kicked into the future. Another issue with AIDA is its borrowing of the consumer protection frame, which focuses intently and consistently on the notion of individual harm, when it is screamingly clear and well known that there are major issues related to collective harms and discriminatory practices against groups with the use of automated decision-making systems. Erica Ifill explores some of these issues in this piece from July. Jamie Duncan and Wendy Wong explore the error of the individual data rights frame in this recent piece.

David Fraser has created a nine-minute video on AIDA, focusing on its incompleteness. From Scassa’s August post on how AIDA works in practice: “The AIDA itself provides no mechanism for individuals to file complaints regarding any harms they may believe they have suffered, nor is there any provision for the investigation of complaints.”

In Closing

There are more than these five things wrong with AIDA, but hopefully these examples lay out the kinds of problems that are general and structural, and that provide more than enough reason to refuse AIDA as it stands today. More on some ways to participate in the next and final post. It is still unclear which federal committee this bill will go to, and when, beyond “soon”.

UPDATE Oct 31, 2022: Politico is reporting C-27 and AIDA are going to their second reading debate in the House of Commons this Friday, November 4.

photo credit: Carlos ZGZ
