On July 23, 2025, employees of the social media platform TikTok went on strike in Berlin. The strike was prompted by the announced dissolution of the entire Trust and Safety department and of parts of Live Operations at the Berlin site, where around 150 qualified specialists are to be replaced by AI systems and external service providers.
For the employees and the union ver.di, the strike is not only about fair working conditions but also about the quality of content moderation on the platform, a question that the Center for Advanced Internet Studies (CAIS) has also examined in a joint study with the Weizenbaum Institute and the Hertie School. The researchers found that AI systems for content moderation frequently delete permissible posts by mistake and often fail to correctly identify hate speech, especially hate speech directed at disadvantaged groups. AI moderation alone is therefore insufficient: it cannot function reliably without careful tuning, transparency, and human oversight.
Ahead of the strike announced for July 28, 2025, and in coordination with ver.di, we submitted the following statement to be read at today’s strike assembly:
Dear Workers at TikTok Germany,
When we learned about TikTok’s plans to dissolve the entire German Trust and Safety department, as well as parts of the so-called Live Operations team, we reached out to ver.di to express our solidarity. We firmly believe that your work is invaluable and cannot be replaced.
In a recent study conducted by the Weizenbaum Institute, the Hertie School, and the Center for Advanced Internet Studies (CAIS), we systematically analyzed five million decisions made by widely used commercial content moderation APIs (Application Programming Interfaces). Our findings confirmed what you already know: AI systems lack the contextual understanding needed to reliably detect hate speech and disinformation.
Our research shows that AI tools consistently under-moderate implicit hate speech while over-moderating counter-speech, reclaimed slurs, and content related to Black, LGBTQIA+, Jewish, and Muslim communities.
Our study highlights a simple truth: content moderation cannot be fully automated. The work you do is critical. It was critical yesterday, it is critical today, and it will remain critical tomorrow.
David Hartmann, Weizenbaum Institute, Forum for Computer Scientists for Peace and Social Responsibility
Prof. Dr. Hendrik Heuer, Center for Advanced Internet Studies, University of Wuppertal, and Forum for Computer Scientists for Peace and Social Responsibility
Rainer Rehak, Weizenbaum Institute, Forum for Computer Scientists for Peace and Social Responsibility
