AI in personnel selection: the role of baseline algorithmic accuracy

Last registered on January 12, 2022

Pre-Trial

Trial Information

General Information

Title
AI in personnel selection: the role of baseline algorithmic accuracy
RCT ID
AEARCTR-0008804
Initial registration date
January 12, 2022

First published
January 12, 2022, 7:26 PM EST

Locations

Region

Primary Investigator

Affiliation
University of Zurich

Other Primary Investigator(s)

PI Affiliation
University of St.Gallen

Additional Trial Information

Status
Ongoing
Start date
2022-01-12
End date
2022-01-31
Secondary IDs
Prior work
This trial does not extend or rely on any prior RCTs.
Abstract
Hiring algorithms are designed to help companies save time and resources in recruitment. Higher efficiency, however, might come at the cost of fairness, i.e., create disparities across social groups. In this experiment, we investigate how people perceive the efficiency vs. fairness trade-off in personnel selection in a STEM context. We vary the degree of the efficiency-fairness trade-off and the benchmark accuracy of the algorithm. We use gender as the social group.
External Link(s)

Registration Citation

Citation
Leicht-Deobald, Ulrich and Serhiy Kandul. 2022. "AI in personnel selection: the role of baseline algorithmic accuracy." AEA RCT Registry. January 12. https://doi.org/10.1257/rct.8804-1.0
Experimental Details

Interventions

Intervention(s)
We use the same basic set-up as the study pre-registered under AEARCTR-0007912.
Intervention Start Date
2022-01-13
Intervention End Date
2022-01-26

Primary Outcomes

Primary Outcomes (end points)
The frequency of choosing the fair vs. the efficient algorithm; fairness perceptions
Primary Outcomes (explanation)

Secondary Outcomes

Secondary Outcomes (end points)
Participants' reasoning behind their choices; their beliefs regarding the relative capability of algorithms in the HR context
Secondary Outcomes (explanation)

Experimental Design

Experimental Design
The basic design follows the previous study AEARCTR-0007912. We drop one of the fairness metrics and instead vary the benchmark accuracy of the algorithm (between-subject).
Experimental Design Details
We keep only statistical parity as the fairness metric and vary the accuracy of the algorithm: half of the participants face a low-performing algorithm and the other half a high-performing algorithm. The level of the trade-off is now a between-subject manipulation.
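For reference, the sketch below illustrates the statistical-parity metric named above: an algorithm satisfies statistical parity when both gender groups are selected at the same rate. The function name and the example data are illustrative assumptions, not taken from the study materials.

```python
# Minimal sketch of statistical parity, assuming binary selection decisions
# and two gender groups; names and data are illustrative, not from the study.

def statistical_parity_difference(selected, group):
    """Difference in selection rates between the two groups (0.0 = parity)."""
    labels = sorted(set(group))
    assert len(labels) == 2, "metric sketched here for exactly two groups"
    rates = {}
    for g in labels:
        decisions = [s for s, gr in zip(selected, group) if gr == g]
        rates[g] = sum(decisions) / len(decisions)
    return rates[labels[0]] - rates[labels[1]]

# Hypothetical example: 3 of 5 women vs. 4 of 5 men selected.
selected = [1, 1, 1, 0, 0, 1, 1, 1, 1, 0]
group = ["F"] * 5 + ["M"] * 5
print(statistical_parity_difference(selected, group))  # -0.2
```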
Randomization Method
Randomization is done by the survey software (Qualtrics); see the illustrative sketch below.
Randomization Unit
individual
Was the treatment clustered?
No
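
The assignment itself happens inside Qualtrics; purely as an illustration, a minimal Python sketch of balanced individual-level assignment is given below. The four cell labels (2 accuracy levels x 2 trade-off levels) are an assumption inferred from the 240-participant, 60-per-arm sample-size plan, not the registered condition names.

```python
# Illustrative balanced assignment of individuals to four between-subject
# arms; the actual randomization is done by Qualtrics, and the arm labels
# below are assumptions, not the registered condition names.
import random

ARMS = [
    ("low_accuracy", "low_tradeoff"),
    ("low_accuracy", "high_tradeoff"),
    ("high_accuracy", "low_tradeoff"),
    ("high_accuracy", "high_tradeoff"),
]

def assign(participant_ids, seed=0):
    """Assign an equal number of participants to each arm (60 each for n=240)."""
    rng = random.Random(seed)
    slots = [arm for arm in ARMS
             for _ in range(len(participant_ids) // len(ARMS))]
    rng.shuffle(slots)
    return dict(zip(participant_ids, slots))

assignment = assign(range(240))
print(assignment[0])  # e.g. ('high_accuracy', 'low_tradeoff')
```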

Experiment Characteristics

Sample size: planned number of clusters
40 for the pilot + 200 for the main study: 240 in total
Sample size: planned number of observations
240
Sample size (or number of clusters) by treatment arms
60 per treatment arm
Minimum detectable effect size for main outcomes (accounting for sample design and clustering)
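This field is left blank in the registration. Purely as an illustration of how a minimum detectable effect could be derived for the 60-per-arm design, the sketch below assumes a two-sided two-proportion test on the primary choice outcome with alpha = 0.05, power = 0.80, and a 50% baseline choice rate; all of these values are assumptions, not registered parameters.

```python
# Illustrative MDE calculation for two independent groups of n = 60,
# under assumed alpha = 0.05, power = 0.80, and a 50% baseline rate.
from math import asin, sin, sqrt

from statsmodels.stats.power import NormalIndPower

# Smallest detectable effect size (Cohen's h) for the assumed design.
h = NormalIndPower().solve_power(nobs1=60, alpha=0.05, power=0.80,
                                 ratio=1.0, alternative="two-sided")

# Convert Cohen's h back into a difference in proportions at the
# assumed 50% baseline choice rate.
p1 = 0.5
p2 = sin(asin(sqrt(p1)) - h / 2) ** 2
print(f"Cohen's h = {h:.2f}; detectable shift from 50% to about {p2:.0%}")
```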
IRB

Institutional Review Boards (IRBs)

IRB Name
IRB Approval Date
IRB Approval Number

Post-Trial

Post Trial Information

Study Withdrawal

Intervention

Is the intervention completed?
No
Data Collection Complete
Data Publication

Data Publication

Is public data available?
No

Program Files

Program Files
Reports, Papers & Other Materials

Relevant Paper(s)

Reports & Other Materials