Issue #1 | Summer 2022
The EU’s AI Act: Dangerously Neglecting Environmental Risks
The EU has recognized the need to address risks associated with AI on a political level. But when it comes to the technology's resource consumption and environmental impacts, the AI Act is turning a blind eye.
In April 2021, the European Commission published its proposal for the Artificial Intelligence Act (AI Act) as a response to the growing need to regulate the technology. Even though the Act claims to place the fundamental risks that AI poses to societies and individuals at its heart, the proposal is a disappointment when it comes to the environmental risks of AI. This is all the more upsetting since the Commission's White Paper on Artificial Intelligence, which preceded the Act, explicitly pointed out that AI development must proceed in an environmentally friendly way.
The AI Act proposes a regulation laying down harmonized rules for the protection of safety, health and fundamental rights against potential harms stemming from AI, while at the same time fostering innovation. In order to achieve its goal, the AI Act takes a risk-based approach, setting rules based on the perceived level of risk of AI systems or of their deployment. But the AI Act fails to account for the environmental risks stemming from the development and deployment of AI systems.
In our view, if the AI Act's aim is to protect our safety, health and fundamental rights, it would be negligent for the European Union not to account for the protection of the environment. Given the plethora of available evidence, the European institutions can hardly be in doubt about the environmental risks of AI systems, be it the tremendous resource consumption associated with some AI systems or with their underlying infrastructures. In its initial proposal, the Commission thus did not do justice to the Charter of Fundamental Rights of the European Union, which in Art. 37 explicitly states that the European Union must consider the protection of the environment in its policy-making.
Following the promising start in the White Paper on Artificial Intelligence, the draft AI Act is a disappointment. The White Paper stressed that the “environmental impact of AI systems needs to be duly considered throughout their lifecycle and across the entire supply chain, e.g. as regards resource usage for the training of algorithms and the storage of data.” But the current draft of the AI Act places no environmental mandates on providers and/or deployers. As it stands, providers may create and apply codes of conduct, which can include voluntary commitments regarding environmental sustainability. But voluntary codes of conduct can hardly be considered an adequate response to an increasingly pervasive and resource-intensive technology such as AI.
As currently written, the AI Act thus misses a crucial opportunity to ensure that AI systems are developed and used in a sustainable, resource-friendly manner that respects our planetary boundaries. This shortcoming is at odds with our collective endeavor to combat climate change as well as with the objectives of the European Green Deal and other EU policy initiatives. The European institutions are still negotiating the Act, so there is time to correct this neglect of one of the most central risks of AI technologies. A necessary first step would be for the AI Act to acknowledge the vast environmental risks of AI by making them a relevant criterion for assessing whether AI systems should be classified as high-risk or not. Consequently, organizations developing and deploying AI systems should monitor their AI-related resource consumption, be required to make such data transparent, and take adequate steps to develop and deploy AI in an environmentally friendly way.
We need more insight into the actual resource consumption of AI in order to establish a more evidence-based regulation of AI technologies. The AI Act provides an opportunity that should not be missed. It is now up to the European Parliament and the member states to compensate for the Commission's omission.
NIKOLETT ASZÓDI
Policy and Advocacy Manager at AlgorithmWatch
Her work focuses on the use of automated decision-making (ADM) systems in the public sector and on horizontal EU regulations in the field of ADM – in particular the EU AI Act.