Miriam Fahimi

University of Klagenfurt


(Un)Folding Algorithmic Fairness. Gender as Absent Presence in Credit Scoring

THE PROBLEM: WHAT IS F|AI|R IN GENDER?
The project focuses on gender as one important dimension of AI fairness, studying how gender is enacted in multiple ways through computational practices. Amid the complexity of translating a social concept like fairness into computational procedures, attempts to implement fairness in algorithms both reflect and contribute to the public demand for accountable and trustworthy AI.

FOLDED OBJECT AND ETHNOGRAPHY
Empirically, the project draws on ethnographic fieldwork in a credit scoring agency that assessed and implemented fairness in its scoring procedures. Conceptually, it builds on the notion of the “folded object” to study the practices and knowledges involved in unfolding and refolding gender in the scoring algorithm. Understanding algorithms as folded objects allows for analysing how gender is made absent and present, reflecting its historical, temporal, and political enactments.

CONTRIBUTION
The project contributes to STS by exploring how gender is mediated in various ways in development practices, as well as to material-semiotic approaches to AI. Academic outreach of the findings is planned through the submission of a paper to a leading journal.

Main Research Topics

  • Artificial Intelligence and Justice
  • Socio-technical Values and Infrastructures
  • Technology and Care
  • Feminist Theory
  • Critical Realism

Research Results

My fellowship project, part of a larger PhD study examining the practices and tensions involved in making AI fairness in computer science, specifically investigated how gender fairness is constituted within a credit scoring company (CreditAI).

At CAIS, I analysed an extensive body of material gathered through my ethnographic research at CreditAI (conducted from October 2021 to January 2023). I found that just as my interlocutors at CreditAI struggled to grasp the blurry relation between gender and the credit scoring algorithm after the data label gender had been excluded in 2016, so did I struggle to analyse what gender fairness meant to them. And just as the inherent features of the algorithm changed to produce a reliable and trustworthy prediction once gender was excluded, so did what counted as a good prediction. To understand these two dimensions, I assembled scholarly work on the idea of the “fire object” (Law & Singleton, 2005) and “ghost variables” (Karkazis & Jordan-Young, 2020; M’charek, 2014; M’charek et al., 2014). I suggest that algorithms are folded to such an extent that it sometimes becomes challenging even for the actors in the field themselves to understand (and measure) their politics.

  • Scientific Talk: “Friction in Transparency. Materialities of Value in Algorithmic Credit Scoring” at the EASST-4S Conference, Amsterdam, Netherlands, July 2024
  • Participation: Workshop “Translating critical data and algorithm studies into impact”, Center for Tracking & Society, Copenhagen, Denmark, August 2024
  • CAISZeit Podcast: Is Justice Programmable? Fairness and Transparency in Algorithms. Guest: Miriam Fahimi, 27 August 2024

Curriculum Vitae

  • 2021 – ongoing: PhD Candidate in Science and Technology Studies at University of Klagenfurt. Forthcoming thesis: F|AI|R. On Practicing Algorithmic Fairness.
  • 2023 – ongoing: Lecturer on “Feminist AI” at HTW Berlin
  • 2020 – 2024: Marie Skłodowska-Curie Fellow in the EU-H2020 ITN “NoBIAS – Artificial Intelligence without Bias”
  • 2020: Research Assistant at the Department of Care Work and Care Politics at the Vienna Chamber of Labour, Austria
  • 2016 – 2018: Research Assistant at Department of Development Studies, University of Vienna

Publications and Presentations

Kinder-Kurlanda, K., & Fahimi, M. (2024). Making Algorithms Fair: Ethnographic Insights from Machine Learning Interventions. In J. Jarke, B. Prietl, S. Egbert, Y. Boeva, H. Heuer, & M. Arnold (Eds.), Algorithmic Regimes. Methods, Interactions, and Politics. (pp. 309–330). Amsterdam University Press. https://www.aup.nl/en/book/9789463728485/algorithmic-regimes

State, L., & Fahimi, M. (2023). Careful Explanations: A Feminist Perspective on XAI. Proceedings of the 2nd European Workshop on Algorithmic Fairness. EWAF’23: European Workshop on Algorithmic Fairness, Winterthur, Switzerland.

Fahimi, M. (2022). Caring 4.0. Geschlechter(un)ordnungen in der digitalen Pflegearbeit [Caring 4.0. Gender (dis)orders in digital care work]. In M. Kastein & L. Weber (Eds.), Care-Arbeit und Gender in der digitalen Transformation (pp. 200–216). Beltz Juventa.

Feindt, H., & Fahimi, M. (2022). Ethics of Doing Critical Research on the State in the Global South. In M. Fahimi, E. Flatschart, & W. Schaffar (Eds.), State and Statehood in the Global South (pp. 67–87). Springer International Publishing. https://doi.org/10.1007/978-3-030-94000-3_4

Fellow at CAIS from April 2024