Arna Wömmel

  Wednesday, 14th June 2023

  14:15 - 15:15

   Butler Room - Nuffield College

   Algorithmic Fairness and Human Discretion: How Do Human Decision-Makers Integrate Non-Discriminatory Algorithmic Predictions?

ABSTRACT

Machine-learning algorithms are increasingly used to assist humans in high-stakes decision-making. For example, loan officers rely on algorithmic credit scores to inform lending decisions, HR managers use data-driven predictions when screening applicants, and judges turn to recidivism risk tools when setting bail. Despite their pervasiveness, there are growing concerns that such predictive tools may discriminate against certain groups, which has prompted numerous efforts to exclude information about protected group membership (e.g., race) from the input data. While such interventions can, in principle, raise overall fairness, there is little evidence on how the human decision-makers who take these predictions as input ultimately react to them. Do they take into account whether protected characteristics have been excluded from algorithmic predictions when making decisions about others?
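The intervention described above, excluding protected group membership from a predictor's input data, is often referred to as "fairness through unawareness". The sketch below is a minimal, purely illustrative example of that idea and is not taken from the study; the toy data, variable names, and logistic-regression model are assumptions for illustration only.

```python
# Minimal illustrative sketch (not the study's method): train a predictor with
# and without a protected attribute in its input data ("fairness through
# unawareness"). All data and variable names here are hypothetical.
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)
n = 200

protected = rng.integers(0, 2, size=n)   # hypothetical protected group membership
covariate = rng.normal(size=n)           # hypothetical non-protected covariate
y = (covariate + 0.5 * protected + rng.normal(scale=0.5, size=n) > 0).astype(int)

# "Unaware" predictor: the protected attribute is excluded from the inputs.
X_unaware = covariate.reshape(-1, 1)
model_unaware = LogisticRegression().fit(X_unaware, y)

# "Aware" predictor: the protected attribute is included.
X_aware = np.column_stack([covariate, protected])
model_aware = LogisticRegression().fit(X_aware, y)

print("unaware accuracy:", model_unaware.score(X_unaware, y))
print("aware accuracy:  ", model_aware.score(X_aware, y))
```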

To address this question, I conduct a lab experiment in which subjects predict the other-regarding behaviour of other participants in an economic game. Subjects receive (i) information about the other participants’ social identities and (ii) an algorithmic prediction of the other participants’ behaviour based on previous experimental data. I vary the algorithm’s fairness properties, i.e., whether or not the prediction draws on protected social identity variables, and communicate this to the subjects. Moreover, I explore how reactions to these fairness properties may be shaped by subjects’ biased beliefs about differences in other-regarding behaviour across protected groups.