
CAISzeit #23

Hidden Rules, Visible Consequences: Who Moderates Our Timeline?

Billions of pieces of content are published on social media every day – including hate speech, incitements to violence, and racist remarks. Dr. Matthias Begenat speaks with Anna Ricarda Luther, David Hartmann, and Prof. Hendrik Heuer about how such content is identified, moderated, and assessed with the help of artificial intelligence – and why some posts are taken down while others remain online.

16 July 2025

Hate comments, incitements to violence, or racist remarks – billions of pieces of content are published on social media every day, including problematic statements. But how is such content identified and moderated? Why do hateful posts often remain online while seemingly harmless ones are deleted? Who makes those decisions? How exactly does content moderation work? And what role does artificial intelligence play in all of this?

Together with researchers Anna Ricarda Luther (Institute for Information Management Bremen, University of Bremen), David Hartmann (Weizenbaum Institute & TU Berlin), and Prof. Dr. Hendrik Heuer (CAIS, research program “Design of Trustworthy Artificial Intelligence”), host Dr. Matthias Begenat discusses how content is moderated on social media platforms.

A special focus is placed on the social consequences. Hate speech doesn’t remain confined to digital spaces – it impacts the lived realities of activists, local politicians, journalists, and engaged users. Those who are regularly targeted often withdraw – with direct consequences for political participation and the diversity of opinions.

Despite the challenges, there are developments that offer hope. The researchers discuss new regulatory frameworks like the Digital Services Act, call for greater transparency and oversight, and demand improved access to platform data. They also highlight civil society initiatives working to create safer digital spaces: How can moderation become more effective, fair, and accountable? What responsibilities do platforms carry – and what rights should users have?

Disclaimer:
This episode discusses hateful language, offensive terms, and discriminatory concepts – including references to the genocide in Myanmar and Tigray, as well as racist statements. For the sake of transparency, we have chosen to present these terms in their original form. If this content is distressing to you, please feel free to skip the relevant sections.

  • 4:28 – 5:06: Genocide of the Rohingya in Myanmar
  • 5:08 – 6:09: Civil war in Ethiopia
  • 10:44 – 11:14: Leaked Meta documents containing offensive statements
  • 38:30 – 38:50: Hate against people with disabilities
  • 40:00 – 40:36: Insults against Muslims

Support services:
Victims of hate speech can contact HateAid: https://hateaid.org/
beratung@hateaid.org
030 / 252 088 38 (Mon 10 AM–1 PM | Thu 3–6 PM)


Guests:

David Hartmann is a research associate at the Weizenbaum Institute and a PhD candidate at TU Berlin.

Anna Ricarda Luther is a research associate at the Institute for Information Management Bremen GmbH and a PhD candidate at the University of Bremen. She is also an affiliated researcher at CAIS.

Prof. Dr. Hendrik Heuer is head of the CAIS research program “Design of Trustworthy Artificial Intelligence”.
