In 2024, the EU adopted the AI Act, a new set of rules for trustworthy artificial intelligence. The AI Act relies on standardisation, a regulatory technique that consists of crafting so-called harmonised technical standards, to facilitate legal compliance by the AI industry. While technical standards have long been used to ensure product safety, this is the first time that standardisation aims to foster "human-centred" AI in compliance with fundamental rights. Our working group explores how standardisation processes shape and stabilise notions of justice in the algorithmic society.
In this talk, we will present first insights and open questions from our working group. We will discuss how technical expertise translates complex issues such as bias, fairness, and fundamental and environmental rights into measurable norms and procedures, and what socio-legal and political consequences may arise. Bringing together scholars from law, philosophy, STS, critical algorithm studies, and computer science, we look forward to an engaging discussion with fellow CAIS colleagues.