In 2024, the EU adopted the AI Act, a new set of rules for trustworthy artificial intelligence. The AI Act relies on standardisation, a regulatory technique that consists of crafting so-called harmonised technical standards, to facilitate legal compliance by the AI industry. While technical standards have long been used to ensure product safety, for the first time standardisation aims to foster "human-centred" AI in compliance with fundamental rights. Our working group asks how standardisation processes shape and stabilise notions of justice in the algorithmic society.
To answer this, we bring together scholars from law, philosophy, STS, critical algorithm studies and computer science. We study the work of EU standardisation bodies, examining how technical experts translate complex issues like bias, fairness, and fundamental rights into measurable norms and procedures.
The group aims to unpack the hidden power of AI standards and to make their social impact more visible. We plan to publish our results in a joint publication, foster further interdisciplinary research, and pursue public engagement activities.
Academic reference:
Raphaële Xenidis, Miriam Fahimi. 2025. Standardizing Equality in the Algorithmic Society? A Research Agenda. In Proceedings of the Fourth European Workshop on Algorithmic Fairness (EWAF’25). Proceedings of Machine Learning Research.
https://proceedings.mlr.press/v294/xenidis25a.html
Main research areas
- Artificial Intelligence
- Standardisation
- Social Justice
- European Union
- Algorithmic society
