Hate comments, incitement to violence, racist remarks – billions of pieces of content are published on social media every day, including problematic statements. But how is such content identified and moderated? Why do hateful posts often remain online while seemingly harmless ones are deleted? Who makes those decisions? How exactly does content moderation work? And what role does artificial intelligence play in all of this?
Together with researchers Anna Ricarda Luther (Institute for Information Management Bremen, University of Bremen), David Hartmann (Weizenbaum Institute & TU Berlin), and Prof. Dr. Hendrik Heuer (CAIS, research program “Designing Trustworthy Artificial Intelligence”), host Dr. Matthias Begenat discusses how content is moderated on social media platforms.
A special focus is placed on the social consequences. Hate speech doesn’t remain confined to digital spaces – it impacts the lived realities of activists, local politicians, journalists, and engaged users. Those who are regularly targeted often withdraw – with direct consequences for political participation and the diversity of opinions.
Despite the challenges, there are developments that offer hope. The researchers discuss new regulatory frameworks such as the Digital Services Act and call for greater transparency, stronger oversight, and better access to platform data. They also highlight civil society initiatives working to create safer digital spaces: How can moderation become more effective, fair, and accountable? What responsibilities do platforms carry – and what rights should users have?
Disclaimer:
This episode discusses hateful language, offensive terms, and discriminatory concepts – including references to the genocide of the Rohingya in Myanmar and the civil war in Tigray, Ethiopia, as well as racist statements. For the sake of transparency, we have chosen to present these terms in their original form. If this content is distressing to you, please feel free to skip the relevant sections.
- 4:28 – 5:06: Genocide of the Rohingya in Myanmar
- 5:08 – 6:09: Civil war in Ethiopia
- 10:44 – 11:14: Leaked Meta documents containing offensive statements
- 38:30 – 38:50: Hate against people with disabilities
- 40:00 – 40:36: Insults against Muslims
Support services:
Victims of hate speech can contact HateAid: https://hateaid.org/
beratung@hateaid.org
030 / 252 088 38 (Mon 10 AM–1 PM | Thu 3–6 PM)
Recommendations on the topic:
- Initiative Save Social
- Article by Jason Koebler on Mark Zuckerberg and Facebook’s content moderation: “The Impossible Job: Inside Facebook’s Struggle to Moderate Two Billion People”
- Article on the lack of content moderation during the civil war in Tigray, Ethiopia
- Article by Nilay Patel in The Verge: “Welcome to hell, Elon. You break it, you buy it.”
- Meta Video: “More Speech, Fewer Mistakes”
- Article on “Relaxed Hate Speech Rules”
- Article “From content moderation to visibility moderation: a case study of platform governance on TikTok”
- Talk by Hendrik Heuer, Anna Ricarda Luther, and David Hartmann at re:publica 25
- Public Spaces Incubator/ARD
- Public Spaces Incubator/ZDF
- Digital Services Act
- Project “Data Workers Inquiry”
- “The Cleaners” (2018) – Documentary film
- “Careless People” by Sarah Wynn-Williams
- “Code & Vorurteil – Über Künstliche Intelligenz, Rassismus und Antisemitismus” (Code & Prejudice – On Artificial Intelligence, Racism, and Antisemitism)
- “Empire of AI” by Karen Hao
