Using technology to monitor, guide and enforce regulatory objectives
The Lab’s research in this area focuses on the ability of algorithms to improve regulation in general and environmental regulation in particular. Algorithmic regulation refers to the use of computational systems, often powered by real-time data, predictive analytics, and machine-learning models, to monitor, guide, and enforce regulatory objectives. Such systems promise faster, more adaptive oversight, but they also carry risks for transparency, accountability, and fairness, and weighing these promises against these risks motivates the three lines of research described below.
The first line of research examines algorithmic regulation in the environmental sphere. Computational systems powered by real-time data, predictive analytics, and machine learning models increasingly enable dynamic oversight of pollution, resource use, and ecological risks by integrating data from sensors, satellites, and digital reporting tools. Although these systems can automatically detect violations, optimize permitting and inspection priorities, and support adaptive policy responses, they also raise concerns about transparency, accountability, bias, and the distributive effects of data-driven decision-making. Understanding and addressing these concerns forms the central challenge of this line of research.
The second line of research investigates the potential of algorithmic approaches to enhance both formal regulatory processes and informal governance through social norms. On the regulatory side, the work examines whether machine learning and predictive analytics can improve ex ante policy evaluation by systematically analyzing historical regulatory outcomes to predict which regulatory designs are most likely to achieve their stated objectives. On the social-norms side, the research explores how computational methods might better capture and measure shared perceptions of prevailing norms within a given society, enabling more accurate assessments of whom individuals trust and how normative expectations vary across social contexts.
The third line of research investigates the use of large language models to evaluate the quality of voluntary sustainability disclosures, which form the backbone of corporate sustainability efforts and increasingly inform mandatory disclosure regulations. Despite their centrality to ESG debates and regulatory design, we lack systematic methods to assess whether these disclosures actually serve their intended informational purpose. This line of research develops computational approaches to measure disclosure quality at scale, examining dimensions such as specificity, quantitative evidence, and puffery across thousands of corporate reports. By providing empirical grounding for what voluntary disclosures look like in practice, this work aims to inform ongoing debates about the design of both voluntary reporting standards and mandatory disclosure regimes.
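To make the three dimensions concrete, the following is a minimal illustrative sketch of how disclosure passages might be scored at scale. The lexical proxies here (numeric-sentence share for specificity, unit-bearing figures for quantitative evidence, a small list of promotional terms for puffery) are my own assumptions for illustration, not the Lab's actual models or word lists:

```python
import re

# Assumed, illustrative puffery lexicon -- not the Lab's actual list.
PUFFERY_TERMS = {"world-class", "leading", "committed", "best-in-class",
                 "passionate", "unwavering", "excellence"}

def score_disclosure(text: str) -> dict:
    """Return crude per-dimension scores for one disclosure passage."""
    words = re.findall(r"[A-Za-z'-]+", text.lower())
    sentences = [s for s in re.split(r"[.!?]+", text) if s.strip()]
    # Specificity proxy: share of sentences containing at least one digit.
    numeric_sentences = sum(bool(re.search(r"\d", s)) for s in sentences)
    # Quantitative-evidence proxy: figures followed by a unit or percent sign.
    quantities = re.findall(r"\d[\d,.]*\s*(?:%|tonnes?|tCO2e|MWh|kg)", text)
    # Puffery proxy: density of vague promotional terms.
    puffery_hits = sum(w in PUFFERY_TERMS for w in words)
    return {
        "specificity": numeric_sentences / len(sentences),
        "quant_evidence": len(quantities),
        "puffery": puffery_hits / max(len(words), 1),
    }

vague = "We are committed to world-class excellence in sustainability."
concrete = "Scope 1 emissions fell 12% to 48,000 tCO2e in 2023."
print(score_disclosure(vague))
print(score_disclosure(concrete))
```

A language-model pipeline would replace these regex proxies with model judgments per dimension, but the output shape, one score vector per report, and the comparison it enables across thousands of filings would be the same.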