
Fields Changed

Registration

Abstract

Before: Algorithmic decision support tools used in high-stakes domains such as criminal justice, hiring, and lending are often exclude protected variables from input data to comply with anti-discrimination regulations and fairness principles. While these interventions are typically assessed based on quantitative disparity measures, their behavioral implications are less explored. This experiment examines how human decision makers, who are assisted by these tools in making accurate decisions about others, respond to algorithmic predictions that omit protected group membership, and whether these responses are related to their prior statistical beliefs about protected groups. It examines how such interventions affect belief updating, and how this in turn affects discrimination in subsequent decision outcomes. The experiment uses a hiring context in which participants predict the performance of individual workers on a math and science quiz, both before and after receiving algorithmic performance predictions for each worker. The workers are selected to be balanced by gender, with otherwise identical characteristics, allowing for the measurement of prior statistical beliefs about gender differences in test performance. The algorithm’s input data varies between subjects regarding the inclusion of the gender variable. Participants are informed about the input data. Prediction results remain constant as gender is neither a significant predictor nor correlated with significant predictors. At the end of the experiment, participants make consequential hiring decisions. This allows for disentangling the extent to which discrimination is driven by prior statistical beliefs, differences in belief updating, and preferences (taste).

After: Algorithmic decision support tools used in high-stakes domains such as criminal justice, hiring, and lending often exclude protected variables from input data to comply with anti-discrimination regulations and fairness principles. While these interventions are typically assessed based on quantitative disparity measures, their behavioral implications are less explored. This experiment examines how human decision makers, who are assisted by these tools in making accurate decisions about others, respond to algorithmic predictions that omit protected group membership, and whether these reactions are related to their prior statistical beliefs about protected groups. It examines how such interventions affect belief updating, and how this in turn affects discrimination in subsequent decision outcomes. The experiment uses a hiring context in which participants predict the performance of individual workers on a math and science quiz, both before and after receiving algorithmic performance predictions for each worker. The workers are selected to be balanced by gender, with otherwise identical characteristics, allowing for the measurement of prior statistical beliefs about gender differences in quiz performance. The algorithm’s input data varies between subjects regarding the inclusion of the gender variable. Participants are informed about the input data. Prediction results remain constant as gender is neither a significant predictor nor correlated with significant predictors.
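The identification argument in the abstract can be made concrete with a simple benchmark. As an illustrative sketch (the notation below is ours, not part of the registration): because worker profiles are gender-balanced and otherwise identical, the gap in average pre-algorithm predictions identifies prior statistical beliefs about gender differences, the change in that gap after the algorithmic signal reflects differential belief updating, and any remaining gap in hiring conditional on post-algorithm beliefs points to preference-based (taste) discrimination.

\[
\underbrace{\bar{b}^{\text{prior}}_{M} - \bar{b}^{\text{prior}}_{F}}_{\text{prior statistical beliefs}}, \qquad
\underbrace{\bigl(\bar{b}^{\text{post}}_{M} - \bar{b}^{\text{post}}_{F}\bigr) - \bigl(\bar{b}^{\text{prior}}_{M} - \bar{b}^{\text{prior}}_{F}\bigr)}_{\text{differential belief updating}}, \qquad
\underbrace{\Pr(\text{hire} \mid M, b^{\text{post}}) - \Pr(\text{hire} \mid F, b^{\text{post}})}_{\text{taste component}}
\]

where \(b^{\text{prior}}\) and \(b^{\text{post}}\) denote performance predictions elicited before and after the algorithmic prediction, and \(M\), \(F\) index male and female workers.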
Trial Start Date
Before: August 23, 2024
After: September 06, 2024

Trial End Date
Before: September 15, 2024
After: September 30, 2024

Last Published
Before: August 27, 2024 03:37 PM
After: September 05, 2024 04:15 AM

Intervention Start Date
Before: August 23, 2024
After: September 06, 2024

Intervention End Date
Before: September 15, 2024
After: September 30, 2024

Planned Number of Observations
Before: 700-800 participants
After: 700-900 participants

Sample size (or number of clusters) by treatment arms
Before: 350-400 participants per treatment
After: 350-450 participants per treatment
Secondary Outcomes (End Points)

Before:
- Estimated accuracy of algorithm (calculate implied Bayesian Posterior)
- Failures in Bayesian updating; biases (conservatism bias, confirmation bias)
- Preferences (tastes) in hiring
- Asymmetric updating w.r.t. positive vs. negative signal (i.e., low vs. top prediction) and female vs. male worker

After:
- Estimated accuracy of algorithm (calculate implied Bayesian Posterior)
- Failures in Bayesian updating; cognitive biases (conservatism bias, confirmation bias)
- Preferences, not explained by beliefs (e.g., tastes, social image concerns), in hiring decisions
- Asymmetric updating w.r.t. positive vs. negative signal (i.e., low vs. top prediction) and female vs. male worker
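The "implied Bayesian Posterior" outcome can be read against the standard updating benchmark. As a minimal sketch (the symbols below are illustrative assumptions, not taken from the registration): if a participant holds prior belief \(p\) that a worker is a top performer and attributes accuracy \(q\) to the algorithm's prediction, then after observing a "top performance" prediction the Bayesian posterior is

\[
\Pr(\text{top} \mid \text{top prediction}) \;=\; \frac{q\,p}{q\,p + (1-q)(1-p)} .
\]

Comparing elicited post-algorithm beliefs with this benchmark, or solving for the \(q\) that rationalizes the observed update, gives the accuracy participants implicitly attribute to the algorithm; systematic shortfalls or signal-dependent gaps correspond to the conservatism bias and asymmetric updating listed above.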