Interview with Prof. Dr. Aimee van Wynsberghe
I wanted to remind people that there is a physical infrastructure required to make and use AI technology. And this physical infrastructure is currently unsustainable because it generates carbon emissions. The infrastructure required for the training and use of algorithms – things like batteries and microprocessors – requires minerals, and the conditions under which humans have to work in order to acquire these minerals are horrific. They are commonly referred to as blood minerals. There’s also the issue of water and land usage to maintain this infrastructure. And what do we do with the electronic waste, the servers? We dump them in Asian countries and the people there have to suffer the environmental consequences. So, what I was trying to do with the distinction was to say it’s not enough for us to say we will use this technology to achieve sustainability. We have to assess the technology itself for sustainability.
The discourse on AI for sustainability is progressing quite rapidly. Why isn’t the discourse on the sustainability of AI keeping pace with it?
Once we uncover more of the hidden costs, we realize just how costly AI is, and then we have to burst the bubble all these companies are creating – Google, Amazon, Facebook, Apple. There’s a strategic reason for them to hide these costs because Europe invested 3.2 billion euros in AI in 2020. The Big Tech companies try to avoid being required to measure the costs and to establish a complete understanding of their procurement chains. We don’t see the discourse on the sustainability of AI progressing much because a) it would mean a lot more work for Big Tech, and b) we don’t like to acknowledge that the hidden ecological costs are much higher than we think. The sheer complexity of the problem is an obstacle to the progression of the discourse. And furthermore, acknowledging the ecological costs of AI would curb our society’s enthusiasm for the technology.
Do you expect the Big Tech companies to address the question anytime soon?
No. I think it really comes down to regulations. If we leave it up to the companies, we have to keep in mind that economics is their bottom line. They have an obligation to their shareholders, they have to make money. Having an AI ethics board was a strategic decision that made them look responsible, and that was basically their sole motivation. We need regulations that require Big Tech to evaluate the ecological impact of AI so that we can get the full picture of how bad the situation is.
How bad is the situation?
We have to act immediately. We are now in a state of “code red for humanity,” as a recent IPCC report put it. Every technology we use needs to be evaluated for its impact on the environment. When it comes to AI, we’ve already reached a point where billions are invested in it around the globe, in every sector that you can imagine, and not just the algorithms but also the infrastructure being built on top of them. AI is spread out on a global scale, it’s an incredibly pervasive technology. If we don’t act now, it will be too late and we will be stuck in a kind of carbon lock-in caused by unsustainable infrastructure. The global community will bear the burden of the ecological costs.
I have frequently heard that imposing ethics on AI would stifle innovation by creating all these annoying checks-and-balances procedures. All we are doing is trying to push for good innovation that promotes social and ecological sustainability. There was a time when AI itself wasn’t considered feasible, and now it’s everywhere. So, why wouldn’t implementing it in the right way be feasible? This is where regulation comes in. We need governments, the European Commission and the European Parliament to oblige Big Tech to measure and track carbon emissions and to look into the procurement conditions of minerals used in the infrastructure, regardless of the difficulty. If it’s not possible to do it in a sustainable way, then don’t do it at all. Otherwise, let’s innovate. It’s feasible if you push yourself to be innovative.
The AI Act currently being discussed at the European level is supposed to minimize risks stemming from AI and protect fundamental rights. What’s your stance on the AI Act in its current form?
In the sustainability department, it’s not doing anything at all really. My biggest problem with the AI Act is that it doesn’t conceptualize ecological risks as risks that require a risk assessment. We need more transparency on the sustainability of AI. This would be the first step towards a discussion about a carbon cap or a training cap allowing a certain number of GPUs to be used in algorithm training for a certain number of hours. Before we have the data on how much electricity is needed to train an algorithm or how much water is needed to cool down the servers, formulating demands would be making uneducated guesses. That’s why I advocate for mandatory measuring and transparency.