Bianca Wylie
3 min read · Sep 22, 2023

ISED’s Bill C-27 + AIDA. Part 8: Canada’s AI Regulation is Moving Fast and Breaking (Legislative Process) Things

With the House of Commons back in session, a federal committee (INDU) is about to review Canada's proposed artificial intelligence law, AIDA. While it's tempting to dive into how AIDA does or does not address the harms of AI, the problem with the proposed law is bigger than that. The federal government is playing fast and loose with the very idea of law-making in a democratic society. AIDA should be voted down as a matter of maintaining adequate process for how we make laws in Canada.

In response to the constant refrain that technology moves too fast to be regulated adequately (a myth that industry and industry alone continues to profit from), Minister Champagne, through ISED, has tabled an incomplete shell of a law, 18 pages in total. It is, by design, vague and incomplete.

The federal government is rationalizing its unfinished and rushed thoughts on the matter of AI regulation as a new "agile" approach to law-making. ISED says they'll figure out how the law will work after they pass it. Apart from the obvious order-of-operations problem this poses, another red flag with the idea of agile process is that it comes from the software development industry. An agile approach says that it is better to approach the situation of regulating AI with maximum flexibility.

In its race to catch up to international peers and their regulatory approaches for AI, the federal government is threatening to replicate one of the most troubling historical tendencies of the technology industry: move fast and break things. As AI researcher and scholar Ana Brandusescu explains, the federal government is currently porting an ill-conceived approach from the tech industry to the idea of regulation: better to go fast and be imperfect. In this context, it’s “move fast and break how we make laws in a democracy”.

Some of the urgent calls to regulate AI draw on the historical approach used to regulate automobiles. This analogy falls apart when one looks at our current situation. Automobiles are highly regulated. And yet people die because of cars, in large numbers, daily. Why? Because the automobile was regulated as an independent object, rather than as something embedded in an environment. The context we created for automobiles was a much bigger factor in how unsafe and deadly they are.

With AIDA, we’re set to do the same overly narrow and ill-conceived thing with AI, a general purpose technology. The answer to how to safely use (or not use) AI sits squarely, and only squarely, in the thousands of different ways it is and could be used. There is no version of a safe car that overrides the environment in which it exists and the people that are using it and being impacted by it.

If regulating AI like a car is a good idea, why haven’t we regulated mathematics, statistics, databases, or general computer code? One answer to this question is that these are fields that cannot be properly managed outside of specific context. Location, people, relationships, use. Beyond that, these topics are not inherently profit-oriented fields. Artificial Intelligence is. Even the name is marketing.

Truth is, if we were to slow down long enough to not replicate the regulation of the automobile, we would likely see this proposed legislation fall apart quickly. Take just a few cases of AI use — medical research, climate modelling, automation of office administration, legal advice, etc. — and one quickly sees that each comes with its own unique set of requirements for safety. We can't regulate AI outside of regulating its context. The AI industry is tiny. The broader economy is large.

Properly dealing with the legal liabilities and risks that automation poses — in addition to human rights concerns and cultural impacts on labour and society — is overwhelming. It's also necessary. It's groundwork the federal government hasn't done. Long-held legislative norms are being tossed aside in an effort to shape Canadian society in the name of an over-hyped technology. One would struggle to find any recent bill or law so explicitly political, and so fundamentally in need of a rethink.
