Fair AI in personnel selection

Last registered on July 05, 2021

Pre-Trial

Trial Information

General Information

Title
Fair AI in personnel selection
RCT ID
AEARCTR-0007912
Initial registration date
July 01, 2021

First published
July 01, 2021, 8:30 PM EDT

Last updated
July 05, 2021, 5:45 AM EDT

Locations

Region

Primary Investigator

Affiliation
University of Zurich

Other Primary Investigator(s)

Additional Trial Information

Status
In development
Start date
2021-07-01
End date
2021-07-10
Secondary IDs
Abstract
Hiring algorithms are designed to help companies save time and resources in recruitment. Higher efficiency, however, might come at the cost of fairness, i.e. create disparities across social groups. In this experiment, we investigate how people perceive the efficiency vs. fairness trade-off in personnel selection in a STEM context. We vary the degree of the trade-off and explore the role of (incentivized) beliefs about the ground-truth difference in qualifications across genders and about the background inequalities in the gender composition of the labour force.
External Link(s)

Registration Citation

Citation
Kandul, Serhiy and Ulrich Leicht-Deobald. 2021. "Fair AI in personnel selection." AEA RCT Registry. July 05. https://doi.org/10.1257/rct.7912-2.0
Sponsors & Partners

There is information in this trial unavailable to the public.
Experimental Details

Interventions

Intervention(s)
We present a scenario on hiring in a STEM context and introduce the efficiency vs. fairness trade-off.
We explore two fairness metrics, statistical parity and equality of opportunity, and vary the degree of the trade-off by looking at different levels of disparity across genders.
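For concreteness, a minimal Python sketch of the two metrics as acceptance-rate ratios; the variable names and counts are illustrative assumptions, not the study's materials:

def statistical_parity_ratio(accepted_f, total_f, accepted_m, total_m):
    # Statistical parity compares acceptance rates across all candidates.
    rate_f = accepted_f / total_f
    rate_m = accepted_m / total_m
    return min(rate_f, rate_m) / max(rate_f, rate_m)

def equal_opportunity_ratio(accepted_qual_f, qual_f, accepted_qual_m, qual_m):
    # Equality of opportunity compares acceptance rates among
    # qualified candidates only.
    rate_f = accepted_qual_f / qual_f
    rate_m = accepted_qual_m / qual_m
    return min(rate_f, rate_m) / max(rate_f, rate_m)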
Intervention Start Date
2021-07-02
Intervention End Date
2021-07-07

Primary Outcomes

Primary Outcomes (end points)
The frequency of choices of the fair/unfair algorithms; fairness perceptions of the hiring algorithms; beliefs about the capabilities of female and male candidates
Primary Outcomes (explanation)

Secondary Outcomes

Secondary Outcomes (end points)
Reasons behind choices (open text); control variables (general attitudes towards algorithmic vs. human decision-making)
Secondary Outcomes (explanation)

Experimental Design

Experimental Design
Participants face a scenario on hiring in a STEM context and then evaluate several screening algorithms that vary in efficiency (operationalized as the number of evaluation rounds taken to fill a position) and parity (operationalized as the acceptance rates of female and male candidates). We elicit participants' fairness perceptions of the algorithms and their choices among the algorithms.
We explore two fairness metrics (between-subject): statistical parity (disparities in acceptance rates across genders) and equality of opportunity (disparities in acceptance rates of qualified female and male candidates).
We also vary the level of the efficiency vs. fairness trade-off (within-subject), with disparities in acceptance rates either low (0.97 ratio), medium (0.81, satisfying the 4/5 rule) or high (0.49 ratio), as illustrated in the sketch below. Conditions favour either female or male candidates (between-subject).
After the choices of the algorithms, we elicit (incentivized) beliefs about the ground-truth differences in qualifications and the background inequalities in the gender composition of the labour force in computer occupations. Finally, participants self-report general attitudes towards algorithmic and human decision-making in an HR context.
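A hedged illustration of the three disparity levels against the 4/5 rule; only the ratios 0.97, 0.81 and 0.49 come from the design, while the check itself is the standard reading of the rule:

DISPARITY_RATIOS = {"low": 0.97, "medium": 0.81, "high": 0.49}

def satisfies_four_fifths_rule(ratio):
    # The 4/5 (80%) rule: the disadvantaged group's acceptance rate
    # should be at least 80% of the advantaged group's rate.
    return ratio >= 0.8

for level, ratio in DISPARITY_RATIOS.items():
    print(level, ratio, satisfies_four_fifths_rule(ratio))
# Prints: low 0.97 True / medium 0.81 True / high 0.49 False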
Experimental Design Details
Randomization Method
Randomization is done by the software (Qualtrics)
Randomization Unit
individual
Was the treatment clustered?
No

Experiment Characteristics

Sample size: planned number of clusters
280
Sample size: planned number of observations
280
Sample size (or number of clusters) by treatment arms
70 per treatment arm
Minimum detectable effect size for main outcomes (accounting for sample design and clustering)
IRB

Institutional Review Boards (IRBs)

IRB Name
IRB Approval Date
IRB Approval Number

Post-Trial

Post Trial Information

Study Withdrawal

There is information in this trial unavailable to the public.

Intervention

Is the intervention completed?
No
Data Collection Complete
Data Publication

Data Publication

Is public data available?
No

Program Files

Program Files
Reports, Papers & Other Materials

Relevant Paper(s)

Reports & Other Materials