Ori Aronson, Yuval Feldman & Orly Lobel, 2026
The rapid adoption of artificial intelligence (AI) by regulatory agencies marks a fundamental shift in legal compliance and enforcement. While law and policy debates have focused intensely on the threats that AI biometric tools such as facial recognition technology (FRT) pose to privacy and equality, the field of AI-driven behavioral recognition technology (BRT) remains far less charted. In contrast to FRT, BRT uses machine learning not simply to identify who a person is but to predict what they are likely to do. Agencies from the IRS to the EPA increasingly deploy these algorithmic tools to predict individual and corporate behavior and the likelihood of regulatory violations, yet legal scholarship has focused more narrowly on government use of AI in policing and criminal law enforcement. This Article demonstrates that as governments and administrative agencies embrace BRT to make fine-grained determinations about whom to trust and whom to monitor, investigate, audit, or sanction—and in turn to determine how to allocate limited enforcement resources—behavioral prediction becomes a central technology of governance.
In providing the first comprehensive legal framework for evaluating and ethically implementing BRT, this Article makes three key contributions. First, it maps how regulatory agencies are using AI-driven technology to make nuanced determinations about whom to trust and monitor, revealing an emerging regulatory model that accelerates the shift from traditional command-and-control approaches toward data-driven trust assessments. Second, it argues that while deploying BRT as a regulatory practice raises significant constitutional and administrative law concerns regarding privacy, equality, and autonomy, properly designed systems can enhance these values by enabling more individualized and evidence-based enforcement decisions, as opposed to random selection or demographic-based proxies. Third, it develops a novel normative blueprint for responsible deployment of BRT in regulatory enforcement. The Article establishes a set of principles, including privileging individual over group-based predictions, requiring strong empirical justification for public sector cross-domain data use, and dynamically balancing predictive accuracy with civil rights protections. Rather than embracing or rejecting BRT wholesale, as has too often been the case with debates and policies on FRT, we chart a dynamic path forward that harnesses these tools’ benefits while preserving democratic values and individual rights. The Article concludes by offering practical guidance for courts and agencies evaluating and implementing these emerging technologies.