Governing the environmental costs and transformative regulatory power of AI
Despite the increasingly central role of AI in modern economies, its sustainability implications remain insufficiently examined. This project investigates two critical and underexplored dimensions of the AI–sustainability nexus.
First, we examine the environmental footprint of AI itself. The rapid proliferation of data-intensive, large-scale generative AI systems is driving steep increases in energy and water consumption across the AI lifecycle, from model training to storage and deployment. These costs remain largely opaque to users and regulators, and current market structures provide little incentive for their internalization. The project will explore mechanisms to make AI’s environmental impacts more transparent, such as environmental labelling, lifecycle disclosures, and sustainability dashboards, and to govern these impacts more effectively at scale. Policy options to be assessed include green taxes, tradable permits, and design-based interventions such as integrating AI infrastructure with distributed renewable energy generation and storage. The goal is to ensure that AI innovation supports, rather than undermines, climate and energy transitions.
Second, the project investigates the rise of “algorithmic regulation”: the use of AI, sensors, and satellite imaging to autonomously monitor, detect, and enforce environmental standards. As these technologies become more affordable and precise, they could decouple environmental regulation from its traditional reliance on self-reporting by regulated entities. Yet this shift raises foundational legal and institutional questions. What should a legitimate regulatory system governed by algorithms look like? How can accountability, procedural fairness, and the right to appeal be safeguarded, particularly where regulation becomes self-executing? What are the privacy risks and broader societal implications of this transition? The project will assess the institutional design, legal safeguards, and potential adverse spillover effects of algorithmic regulation, and will develop normative and technical frameworks to ensure that these systems promote, rather than compromise, environmental sustainability and democratic legitimacy.