Issue #1 | Summer 2022

Stuck in an Unsustainable Infrastructure

- in an interview with Prof. Dr. Aimee van Wynsberghe

AI systems are not only data, nodes in a network or computational code – as popular visualizations would have us believe. They heavily rely on the exploitation of natural and social resources. The AI ethicist Aimee van Wynsberghe considers regulation to be the only way to hold Big Tech companies accountable for these hidden sustainability costs of AI.

In your paper on sustainable AI, you differentiate between AI for sustainability and the sustainability of AI. Why is it relevant to have a discussion on the sustainability of AI?

I wanted to remind people that there is a physical infrastructure required to make and use AI technology. And this physical infrastructure is currently unsustainable because it generates carbon emissions. The infrastructure required for the training and use of algorithms – things like batteries and microprocessors – requires minerals, and the conditions under which humans have to work to acquire these minerals are horrific. They are commonly referred to as blood minerals. There’s also the issue of water and land usage to maintain this infrastructure. And what do we do with the electronic waste, the servers? We dump them in Asian countries and the people there have to suffer the environmental consequences. So, what I was trying to do with the distinction was to say it’s not enough for us to say we will use this technology to achieve sustainability. We have to assess the technology itself for sustainability.

The discourse on AI for sustainability is progressing quite rapidly. Why isn’t the discourse on the sustainability of AI keeping pace with it?

Once we uncover more of the hidden costs, we realize just how costly AI is, and then we have to burst the bubble all these companies are creating – Google, Amazon, Facebook, Apple. There’s a strategic reason for them to hide these costs because Europe invested 3.2 billion euros in AI in 2020. The Big Tech companies try to avoid being required to measure the costs and to establish a complete understanding of their procurement chains. We don’t see the discourse on the sustainability of AI progressing much because a) it would mean a lot more work for Big Tech, and b) we don’t like to acknowledge that the hidden ecological costs are far higher than we assume. The sheer complexity of the problem is an obstacle to the progression of the discourse. And furthermore, acknowledging the ecological costs of AI would curb our society’s enthusiasm for the technology.

Do you expect the Big Tech companies to address the question anytime soon?

No. I think it really comes down to regulations. If we leave it up to the companies, we have to keep in mind that economics is their bottom line. They have an obligation to their shareholders; they have to make money. Having an AI ethics board was a strategic decision that made them look responsible, and that was basically their sole motivation. We need regulations that require Big Tech to evaluate the ecological impact of AI so that we can get the full picture of how bad the situation is.

How bad is the situation?

We have to act immediately. We are now in a state of “code red for humanity,” as a recent IPCC report put it. Every technology we use needs to be evaluated for its impact on the environment. When it comes to AI, we’ve already reached a point where billions around the globe are invested in it, in every sector that you can imagine, and not just the algorithms but also the infrastructure being built on top of them. AI is spread out on a global scale; it’s an incredibly pervasive technology. If we don’t act now, it will be too late and we will be stuck in a kind of carbon lock-in caused by unsustainable infrastructure. The global community will bear the burden of the ecological costs.

We are often confronted with the claim that it’s not easy for industry actors to be transparent regarding the sustainability of AI. How feasible do you think it would be for industry actors to design and produce sustainable AI?

I have frequently heard that imposing ethics on AI would stifle innovation by creating all these annoying checks-and-balances procedures. All we are doing is trying to push for good innovation that promotes social and ecological sustainability. There was a time when AI itself wasn’t considered feasible, and now it’s everywhere. So, why wouldn’t implementing it in the right way be feasible? This is where regulation comes in. We need governments, the European Commission and the European Parliament to oblige Big Tech to measure and track carbon emissions and to look into the procurement conditions of minerals used in the infrastructure, regardless of the difficulty. If it’s not possible to do it in a sustainable way, then don’t do it at all. Otherwise, let’s innovate. It’s feasible if you push yourself to be innovative.

The AI Act currently being discussed at the European level is supposed to minimize risks stemming from AI and protect fundamental rights. What’s your stance on the AI Act in its current form?

In the sustainability department, it’s not doing anything at all really. My biggest problem with the AI Act is that it doesn’t conceptualize ecological risks as risks that require a risk assessment. We need more transparency on the sustainability of AI. This would be the first step towards a discussion about a carbon cap or a training cap allowing for a certain number of GPUs in algorithm training for a certain number of hours. Before we have the data on how much electricity is needed to train an algorithm or how much water is needed to cool down the servers, formulating demands would be making uneducated guesses. That’s why I advocate for mandatory measuring and transparency.
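To make the scale of such a training cap concrete: the emissions of a single training run can be approximated from exactly the quantities van Wynsberghe names, the number of GPUs and the hours they run. Below is a minimal back-of-the-envelope sketch in Python; the function name and all default values (per-GPU power draw, data-center overhead, grid carbon intensity) are illustrative assumptions, not figures from the interview.

def training_emissions_kg(num_gpus: int,
                          hours: float,
                          gpu_power_watts: float = 300.0,    # assumed per-GPU draw
                          pue: float = 1.5,                   # assumed data-center overhead (PUE)
                          grid_kg_co2_per_kwh: float = 0.4):  # assumed grid carbon intensity
    """Estimate the CO2 emissions (kg) of a single training run."""
    energy_kwh = num_gpus * hours * gpu_power_watts / 1000.0 * pue
    return energy_kwh * grid_kg_co2_per_kwh

# Example: 512 GPUs running around the clock for two weeks
print(f"{training_emissions_kg(512, 14 * 24):,.0f} kg CO2")

Under these assumed values, a two-week run on 512 GPUs would emit roughly 31 tonnes of CO2; mandatory measurement would replace such guesses with actual per-provider numbers.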

Background

Indirect resource consumption

Dimension: ecological sustainability
Criterion: indirect resource consumption
Indicator: key recycling figures

The production of computer hardware requires “conflict raw materials” or “conflict minerals,” rare earths or precious metals whose extraction is linked to human rights violations, appalling working conditions and environmental pollution. If the recyclable materials contained in hardware are recovered at the time of disposal, they do not need to be extracted again for new hardware. Certified recycling specialists can separate hardware into the recyclable materials it contains and thus make it recyclable. Alternatively, used hardware can be collected and reused by Original Equipment Manufacturers (OEMs) or refurbishment companies. They remove individual hardware components and reinsert them into used or new products. The percentage by weight of recycled or reused materials is an important metric for assessing the environmental sustainability of hardware.
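As a worked illustration of that indicator: the share by weight is simply the recovered mass divided by the device’s total mass. The component names and masses in this Python sketch are invented example values.

# "Percentage by weight" recycling indicator: share of a device's total
# mass that is recovered or reused at end of life (example values only).
recovered_grams = {
    "aluminium housing": 320.0,
    "copper wiring": 45.0,
    "circuit-board metals": 12.5,
}
total_device_mass_grams = 1200.0

recycled_share = sum(recovered_grams.values()) / total_device_mass_grams * 100
print(f"Recycled/reused share by weight: {recycled_share:.1f}%")  # -> 31.5%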

Working conditions and jobs

Dimension: economic sustainability
Criterion: working conditions and jobs
Indicator: fair wages along the value chain

Problematic working conditions in the development of AI exist not only in hardware production, but also in the preparation of data. The data sets needed for training AI systems usually must first be labeled, i.e., classified and annotated by crowd- or clickworkers. These workers often perform small tasks (per click) under precarious conditions for companies without being formally employed. When developing as well as purchasing AI, care should be taken to ensure that working conditions are fair throughout an AI’s entire lifecycle. This includes adequate pay, good working conditions and opportunities for advancement along with further training, even for clickworkers.

The Desirable Digitalization Project

The Rethinking AI for Just and Sustainable Futures research project examines AI development based on ethical principles. Led by Prof. Aimee van Wynsberghe, researchers involved in the project are working with the AI industry to develop sustainable and just principles for AI design and education. The project began in April 2022 and will run for five years.

PROF. DR. AIMEE VAN WYNSBERGHE

Professor for Applied Ethics of Artificial Intelligence at the University of Bonn

is the Alexander von Humboldt Professor for Applied Ethics of Artificial Intelligence at the University of Bonn in Germany. She is Co-Founder and Co-Director of the Foundation for Responsible Robotics and serves as a member of the European Commission’s High-Level Expert Group on AI as well as a member of the World Economic Forum’s Global Futures Council on Artificial Intelligence and Humanity. She is the recipient of a Dutch Research Council personal grant for the study of the responsible design of service robots. Van Wynsberghe is a 2018 L’Oréal-UNESCO For Women in Science laureate, a founding Board Member of the Netherlands AI Alliance, a Founding Editor of the international peer-reviewed journal AI & Ethics (Springer Nature), and author of the book “Healthcare Robots: Ethics, Design and Implementation.”