Supremacy of Error and Argument. We Know Enough to Say No to AI Law.

Bianca Wylie
8 min read · Oct 10, 2022

The Allure of False Complexity

As I consider the days and weeks ahead of reading, writing, administrative hustling, and organizing around Bill C-27, a few baseline thoughts about why. Why spend some time, not a lot, but not none, on this topic? Since I'm asking for others' time and attention on this, it seems helpful to share a bit of that calculation.

As I read more into the posts and the law, I know that trying to argue with its fundamentals is a trap of sorts. My position is that legislating AI as an object, as a thing, is a hard no. In the weeks and months prior to today, many have pointed to the work that other countries are doing or have done on this topic. I come to my position not from any deep experience with AI (I don't have it) or the associated math. I come to my position from 20+ years of dealing with software, in one way or another, but that's not even the heart of it. I come to my position from a much longer history, the world we are all trying to live in, and what this so-called country owes to itself in defining our future democracy.

Working, for now, within what are borders and notions of sovereignty means that approaches can and should differ from place to place, depending on the will of the people living there. I am not interested in harmonizing bad practice to validate other people's choices and their relationships to their governments.

As with climate collapse, I don't need to know the intricacies of the science or the arguments to understand that our behaviours, systems, and modes of life are not serving us well. I am not saying AI is climate collapse; I'm saying that none of it, tech, software, AI, requires expertise to understand the heart of its damage.

AI is not a public good. The state seeking to negotiate and perform influence in a space where it has no footing only furthers this creep of private influence while pretending otherwise. There is no big mystery here. There is only further evasion of long-held inaction, and it is a mistake to give this a pass. More on process soon, for this is where the idea of AI legislation in canada falls apart entirely.

I know that even within the small-ish community of tech experts and critics in canada, few have read this bill in full. The largest number of people who have read it are likely commercial lawyers. I would put the number of civil society actors who have read it at under 100, and I may still be high there. This in a country of 30 million-plus adults who could theoretically read it, or read analysis of it.

This is an issue I'll take up in another post. You should know more about the tables of people involved in the creation of this law. For now, the heart of this problem is that we know enough to know what software is actually eating. It's not eating away at clarity or sense or decision-making. It's eating at our belief that we already have the tools at our current disposal to address our collective ruin.

In this moment, we see economists, politicians, civil society, and many others lamenting the fact that capital, as it is organized today, is destroying us. Software will not stop that. Software might, in modes removed from the market model, be of some help to us in organizing and reorganizing how we share things. But this is still a weak argument. And it is fully tethered to a model much larger than code.

When in the recent past have the benefits of commercial innovation been leveraged for the poor? In a systemic way. We need to grapple with the idea that the world is not tilting towards better. We are in austerity. It is deepening. This is highly relevant to the arguments being made in favour of what we might miss out on with AI. How does the trajectory of how capital will continue to move look in relation to the wealth that is driving tech's legitimization by legislation?

It may seem that the legislation of AI in canada is already too far down a path to stop, but even if this is the case, the mitigation, the insertion of doubt, is work worth doing. Let me go back a few decades to make this make more sense.

When I began to understand what software could do, I fixated on it as an object and worked for a short time on educational software. It was a mistake and a business failure. But along that road, there were many who were buoyed enough by my confidence in its usefulness that they were happy to keep me going. From educators to hedge fund people, there was a joy and delirium about potential and possibility. It was social and it was confusing. Perhaps there was some magical way out of our deepening and hardening problems in how we were trying to do education that technology could somehow get at. That buoyancy was ill-informed, but it did not stop any of what has happened since.

Fast-forward past the business failure and into the days of sitting on my mom's balcony, sending resume after resume, just trying to get a job. I eventually got lucky and found one, and that put me on a path to learning about technology and meeting encouraging people along the way. In the years since, I have never worried about being able to have some kind of job. I knew enough about this new genre of machine that I could always be some kind of a mechanic of it.

This is part of the reason I'm spending some time on this bill. The older, longer memories and experiences I carry have created a certainty that this mode of assigning power to systems and objects is the wrong path. It doesn't matter in the least which container we are trying to legitimize; this object model for justice and harm reduction is not it. Software is eating at our relationships to each other in ways too numerous to count.

It is not the right direction to create more of this problem. The same ways we in canada have failed to wrangle the power and violence done through the objects of land (understood as such by some) and capital will not offer a way out of here. The complexity of more systems is not the path towards dealing with the simplicity of how we look at harm. What happened, who did it, and what do we do? We are creating more of a mess with black boxes, absolutely. But opening them, or fixating on them, will not by any means do much to address the primacy of the error.

The reckonings of every day relate to people and their actions. People hold all kinds of different positions of power: some elected, some corporate, some friends, some employers. It's all relational. There is no sense in seeing stopping the AI part of C-27 as detached from this longer work. Stopping it helps us refocus on what we already have to use when people get hurt. But what we have to be able to know and see with confidence is that the problem of AI is not the problem it is being made out to be in this law. It is a secondary problem on top of foundational problems, and legislating it in this simplistic way, with wilful ignorance as to how that has worked out in the past, makes getting at the foundational problems harder.

I will not, in my few total posts planned for this topic, get too mired in the specifics of how the bill is a problem, because I know that argument space. Arguments that rely on laws and policies and other detached modes of power management aren't the part of the conversation that I'm interested in. I'm interested in making it feel calm, confident, and clear for those outside of this narrow field to know that we don't need to know more to refuse it.

Sometimes the answer to whether to do something is as simple as: no, this is not a good idea on the whole. I would not feel comfortable making too many sub-arguments about this topic, because we lose the point. The point is that there is nothing about the current structure and arrangement of power that suggests legislating AI will help us in our democracy, or that it will help us take back our confidence to fight against the state's worst, and its worst is very old.

There is no happy, comfortable, reasonable relationship to be had with the state. Managing its power is by nature adversarial. When it does something helpful, you don't lean back and let the rest of it slide. You move on to the next site of violence and harm. If you aren't comfortable being adversarial with the state, I suggest you reconsider what democracy means in practice. There is no one who is not an activist in a democracy. There is no civility to civil society. This is not a matter of being grateful that the state is finally “doing something” about AI when it has absolutely shown nothing but error in how it has managed public technology since the advent of its use. If you believe legislating AI is a good idea, that's fine. But then you have to answer for the how. And the how here is not well-founded.

This is not something that cannot be changed. But it is not something to laud, this performance of attention and action that will placate the few who are tied into the topic. It is not the general public's problem that those most attuned to this topic have livelihoods that depend on this model, this industry, this science, or anything else. But I would argue that those tied deeply into this space have a real responsibility to those who are trying to keep democracy functional.

I am, as is likely clear, committed to not leaving this topic, tech, entirely. But the long work of putting the idea of data and tech and software and AI into genuine conversation with the world at large is the only way to try. And so I hope in these small writings I can help you feel confident if you have any instinct that managing our lives through these rational modes of objects, untethered from too much, is wrong. It's wrong. It's not going to do what we need it to do. Having a bigger public conversation about how to put this all in better context will take time and money. There is much of that within the state, should it seek to have this conversation properly.

For now, it's enough to say no on the basic grounds of history, patterns, and the point in time we are in. There are elected officials who profess to be in office to protect and expand democracy and improve our lives in this country. They do not have adequate and solid ground to rewrite the world in the name and face of technical objects driven by markets. They would never describe their interest in politics as such. And so one next baby step in this particular work on C-27, logistically speaking, is to know which of them have the power to make the next decisions, and to hold them accountable for their actions.

photo: Wendy Wei