ISED’s Bill C-27 + AIDA. Part 6: ISED Fails to Engage Office of the Privacy Commissioner of Canada on Voluntary Generative AI Code of Practice

Bianca Wylie
5 min read · Aug 21, 2023


Major Process Error

This is the sixth in a series on AIDA. Post one is here, two is here, three is here, four is here, and five is here. And in related writing, a recent piece on AIDA in CCPA’s The Monitor.

Voluntary Generative AI Code of Practice Consultation Skipped The Office of the Privacy Commissioner of Canada

On Friday, August 11, Michael Geist tweeted about a listing for a consultation running August 4 to September 14 on the topic of a voluntary code of practice for Generative AI. Big thank you to Michael Geist for finding this. He noted that there was no additional info, and no way to participate. (Of note: the consultation period had already started a week prior to this no-info listing appearing.)

The story the government is telling about this no-info listing has been confusing. It warrants more attention. But let’s pin that for another day.

In the days following, ISED put out more information on the consultation, including its rationale for this code of practice, which it stated as follows:

“Recent advances in AI tech prove how urgent AI regulation is. The Government of Canada is engaging with experts to discuss the creation of a code that would provide guidance to AI developers for now. In the coming days, the Government of Canada will be releasing a brief document outlining the context for this consultation.” (source: Ask ISED Twitter account)

The experts that ISED has engaged with so far do not include those at the Office of the Privacy Commissioner of Canada (OPC), as the OPC shared with me via email. I asked them, via their comms alias, if ISED had consulted with them on this code, and if so, if they could share their feedback with me.

From the OPC, via email, Friday August 18th:

“Our Office has not yet been consulted by ISED on the development of a Canadian code of practice for generative artificial intelligence systems.

Staying ahead of technological advancements that could impact privacy, including generative AI, is a strategic priority for our Office. You may be interested in the recent joint statement by the G7 Data Protection and Privacy Authorities on generative AI, which outlines key areas of concern with respect to the technology.”

Why Does this Matter?

The process for AIDA has been rushed. The process for this voluntary code of practice is being rushed. One of the most persistent, and arguably most important, critiques of what ISED is doing on this entire file is their lack of adequate process.

It’s an error to get too concerned with the content (especially on this code) as ISED’s missteps are deeply process-related. Engaging with the content over the process drags you into their screwy frame of urgency and “they’re (US, EU etc.) doing it so we have to too”. This urgency and this thoughtlessness with our domestic approach to this topic — with our democracy — must be resisted with major persistence.

Bad Process vs. Good Process — Some Basics

One non-intuitive part of good public consultation process is the ‘internal to the organization’ rounds of consultation you do (in this case, across government and government agencies) *before* you go out to experts and the broader public.

Privacy is but one plank of how to consider any technology. But the OPC would absolutely have beneficial insights to provide on this generative AI code. They have capacity and expertise that are valuable to apply to this topic.

ISED skipping over them in the rush of its process is a major error. There is no question that ISED would know this is a mistake and out of step with expectations. Why did ISED skip over them? That's a very good question. The OPC has critical expertise.

Speed? Speed feels like a plausible culprit. There are other possible reasons, but there's no need to start a guessing game. We can start with what ISED might say about this, if they reply to the email I'm sending them. If I had to guess, it would be that they are looking forward to engaging with the OPC now and in the future. Or that they wanted to bring the OPC a more fulsome draft after the process. Or or or. Whatever they say, this was a big process mistake.

Secondarily, on process, and more on order of operations: one of the reasons that good process requires you to go *inside* the organization first is that you can get on the same page about contentious issues. By shoving its work out into the public without engaging the OPC, ISED is putting the OPC in an awkward political position, one which might require it to disagree in public with ISED. That's not to say the OPC wouldn't. But the political culture in Canada is not strong on this front.

In any case, this is messy. It might seem internecine in nature, but it's genuinely bad, and pretty brazen on ISED's part. We're talking about respect for the OPC, professional conduct, courtesy, and good working relations.

This is very much a stepping-in-it, unforced-error type of situation.

As I wrap up, I have to say that I would welcome people who know more about the political dynamics here than I do to fill in any blanks I might be missing. I am writing my opinion based on my desire, as a member of the public, to see any of these kinds of processes done in a way that maximizes public sector expertise, coordination, and collaboration spanning as much of the government as possible. I am also writing from the vantage point of a process expert. I am admittedly not someone who knows the very inside ways that a lot of this works in Ottawa, and if someone has good reasons for how this is all aligned with good-actor, good-faith process that I can't see or understand from my vantage point, I am here for it.

More About the Generative AI Voluntary Code of Practice Consultation

From ISED’s updated page on the consultation, which can be found here:

Who: “These consultations involve roundtables with stakeholders with expertise and experience in this area, including Canada’s Advisory Council on Artificial Intelligence, and representatives from academia, civil society, Canada’s AI research institutes, and industry.

The key elements for discussion during these roundtables are outlined in the consultation scene setter, entitled Canadian Guardrails for Generative AI — Code of Practice.

Following the consultation period, we will publish an update on this web page.”

How This Relates to AIDA

To be very clear, this exercise is about a voluntary code of practice, which is separate from, but related to, AIDA. There has been some commentary about how these two things relate, but I am still too early in my review to get into it.

Urgency is Not the Way

ISED is claiming urgency because of the technology. In that urgency, ISED is becoming single-minded and hardened in its approaches to the problem. These approaches do not appear to involve major swathes of existing Canadian public sector capacity, policy communities, or the general public.

[Image: screenshot of an error message dialogue window that says "Something bad happened"]
