|
Field
Abstract
|
Before
Algorithmic decision support tools used in high-stakes domains such as criminal justice, hiring, and lending often exclude protected variables from input data to comply with anti-discrimination regulations and fairness principles. While these interventions are typically assessed based on quantitative disparity measures, their behavioral implications are less explored. This experiment examines how human decision makers, who are assisted by these tools in making accurate decisions about others, respond to algorithmic predictions that omit protected group membership, and whether these responses are related to their prior statistical beliefs about protected groups. It examines how such interventions affect belief updating, and how this in turn affects discrimination in subsequent decision outcomes.
The experiment uses a hiring context in which participants predict the performance of individual workers on a math and science quiz, both before and after receiving algorithmic performance predictions for each worker. The workers are selected to be balanced by gender, with otherwise identical characteristics, allowing for the measurement of prior statistical beliefs about gender differences in test performance. The algorithm’s input data varies between subjects in whether the gender variable is included. Participants are informed about the input data. The algorithm’s predictions remain constant across treatments, as gender is neither a significant predictor nor correlated with significant predictors. At the end of the experiment, participants make consequential hiring decisions. This allows for disentangling the extent to which discrimination is driven by prior statistical beliefs, differences in belief updating, and preferences (taste).
|
After
Algorithmic decision support tools used in high-stakes domains such as criminal justice, hiring, and lending often exclude protected variables from input data to comply with anti-discrimination regulations and fairness principles. While these interventions are typically assessed based on quantitative disparity measures, their behavioral implications are less explored. This experiment examines how human decision makers, who are assisted by these tools in making accurate decisions about others, respond to algorithmic predictions that omit protected group membership, and whether these reactions are related to their prior statistical beliefs about protected groups. It examines how such interventions affect belief updating, and how this in turn affects discrimination in subsequent decision outcomes.
The experiment uses a hiring context in which participants predict the performance of individual workers on a math and science quiz, both before and after receiving algorithmic performance predictions for each worker. The workers are selected to be balanced by gender, with otherwise identical characteristics, allowing for the measurement of prior statistical beliefs about gender differences in quiz performance. The algorithm’s input data varies between subjects in whether the gender variable is included. Participants are informed about the input data. The algorithm’s predictions remain constant across treatments, as gender is neither a significant predictor nor correlated with significant predictors.
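For intuition on why omitting the gender variable can leave the predictions unchanged in this design, the following is a minimal simulated sketch, assuming a simple OLS prediction model (the registration does not specify the prediction algorithm); all variable names and numbers are illustrative, not taken from the study.

```python
# Minimal simulated sketch (not part of the registration): when gender has no
# true effect and is uncorrelated with the other predictors, a model trained
# with or without the gender variable produces essentially the same predictions.
import numpy as np

rng = np.random.default_rng(0)
n = 10_000
x = rng.normal(size=n)                # predictive worker characteristic
gender = rng.integers(0, 2, size=n)   # independent of x by construction
score = 2.0 * x + rng.normal(size=n)  # quiz score: gender has zero true effect

def ols_predict(features, y):
    """Fit OLS with an intercept and return in-sample fitted values."""
    design = np.column_stack([np.ones(len(y)), features])
    beta, *_ = np.linalg.lstsq(design, y, rcond=None)
    return design @ beta

pred_with_gender = ols_predict(np.column_stack([x, gender]), score)
pred_without_gender = ols_predict(x.reshape(-1, 1), score)

# The two prediction vectors are nearly identical (the gap shrinks with n).
print(np.max(np.abs(pred_with_gender - pred_without_gender)))
```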
|
|
Field
Trial Start Date
|
Before
August 23, 2024
|
After
September 06, 2024
|
|
Field
Trial End Date
|
Before
September 15, 2024
|
After
September 30, 2024
|
|
Field
Last Published
|
Before
August 27, 2024 03:37 PM
|
After
September 05, 2024 04:15 AM
|
|
Field
Intervention Start Date
|
Before
August 23, 2024
|
After
September 06, 2024
|
|
Field
Intervention End Date
|
Before
September 15, 2024
|
After
September 30, 2024
|
|
Field
Planned Number of Observations
|
Before
700-800 participants
|
After
700-900 participants
|
|
Field
Sample size (or number of clusters) by treatment arms
|
Before
350-400 participants per treatment
|
After
350-450 participants per treatment
|
|
Field
Secondary Outcomes (End Points)
|
Before
- Estimated accuracy of algorithm (calculate implied Bayesian Posterior)
- Failures in Bayesian updating; biases (conservatism bias, confirmation bias)
- Preferences (tastes) in hiring
- Asymmetric updating w.r.t. positive vs. negative signal (i.e., low vs. top prediction) and female vs. male worker
|
After
- Estimated accuracy of algorithm (calculate implied Bayesian Posterior; see the sketch after this list)
- Failures in Bayesian updating; cognitive biases (conservatism bias, confirmation bias)
- Preferences in hiring decisions not explained by beliefs (e.g., tastes, social image concerns)
- Asymmetric updating w.r.t. positive vs. negative signal (i.e., low vs. top prediction) and female vs. male worker
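As a reference point for the first outcome above, this is a minimal sketch of how the accuracy of the algorithm implied by a participant's elicited prior and posterior beliefs could be backed out via Bayes' rule; it assumes a binary top/low performer classification and a symmetric signal, and the function names and numbers are illustrative, not taken from the registration.

```python
# Minimal sketch (not part of the registration): back out the algorithm
# accuracy implied by a participant's elicited beliefs, assuming a binary
# top/low performer classification and a symmetric signal.

def bayesian_posterior(prior: float, accuracy: float) -> float:
    """Posterior that the worker is a top performer after a 'top' prediction
    from an algorithm that is correct with probability `accuracy`."""
    return accuracy * prior / (accuracy * prior + (1 - accuracy) * (1 - prior))

def implied_accuracy(prior: float, posterior: float) -> float:
    """Accuracy that rationalizes the observed belief update, i.e. the value
    solving posterior = bayesian_posterior(prior, accuracy) for accuracy."""
    return posterior * (1 - prior) / (
        posterior * (1 - prior) + prior * (1 - posterior)
    )

# Example: a prior of 0.4 and an elicited posterior of 0.7 imply an
# accuracy of roughly 0.78.
print(implied_accuracy(0.4, 0.7))
print(bayesian_posterior(0.4, implied_accuracy(0.4, 0.7)))  # recovers 0.7
```

Comparing elicited posteriors with the Bayesian posterior implied by a stated accuracy would then flag conservatism (under-updating) or confirmation-style asymmetries, in line with the outcomes listed above.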
|