OK Computer: Job applicants' choice regarding algorithmic screening
Last registered on May 26, 2020

Pre-Trial

Trial Information
General Information
Title
OK Computer: Job applicants' choice regarding algorithmic screening
RCT ID
AEARCTR-0004411
Initial registration date
May 22, 2020
Last updated
May 26, 2020 5:06 PM EDT
Location(s)

This section is unavailable to the public.
Primary Investigator
Affiliation
Utrecht University
Other Primary Investigator(s)
PI Affiliation
Utrecht University
PI Affiliation
Utrecht University
PI Affiliation
Utrecht University
Additional Trial Information
Status
Ongoing
Start date
2019-07-03
End date
2021-08-31
Secondary IDs
Abstract
The screening of job applicants using data-driven technologies such as artificial intelligence (AI) has grown rapidly in the U.S. and is starting to pick up pace in Europe. Using a behavioral experiment, we study the decision process of job applicants who are offered a choice between algorithmic and human evaluation. An emerging literature suggests that algorithmic screening technologies are indeed valuable hiring tools for firms. However, concerns have been raised that such technologies may perpetuate or exacerbate existing labor market biases, exclude vulnerable groups from the labor market, provide no legal basis for appeals, and are poorly understood by the people judged by them (Barocas and Selbst, 2016). We study how job applicants perceive and value algorithmic vs. human screening, and how they change their behavior in response to algorithmic settings. These questions have so far been neglected.

We would like to answer the following questions: Do job applicants prefer to be evaluated by an algorithmic or by a human recruiter? Which demographic characteristics, perceptions, and attitudes are correlated with this choice? Are job applicants willing to pay to change the recruiting method they have been assigned to? Does job applicants' skill level correlate with their choice of a recruiter?

External Link(s)
Registration Citation
Citation
Fumagalli, Elena et al. 2020. "OK Computer: Job applicants' choice regarding algorithmic screening." AEA RCT Registry. May 26. https://doi.org/10.1257/rct.4411-1.0.
Sponsors & Partners

There are documents in this trial unavailable to the public.
Experimental Details
Interventions
Intervention(s)
Intervention Start Date
2020-05-25
Intervention End Date
2020-07-25
Primary Outcomes
Primary Outcomes (end points)
- A discrete choice variable indicating whether the participant chose the human or the algorithmic recruitment method (0 = human; 1 = algorithmic)
- Willingness to pay for their preferred recruitment method
Primary Outcomes (explanation)
- Task performance (absolute, relative to own average performance, and relative to performance of others)
- Individual Risk Aversion
- Subjective beliefs (fairness, transparency, simplicity, the participant's ability to understand the recruiter, the variability of the recruiter's predictions, familiarity with both recruitment methods, and the extent to which either method might discriminate against them or make errors)
- Big 5 personality traits (extroversion, conscientiousness, agreeableness, neuroticism, and openness to experience)
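As an illustration of how the primary outcomes above might be structured for analysis (not part of the registration; the variable names and toy data below are hypothetical), a minimal Python sketch:

```python
import pandas as pd
import statsmodels.formula.api as smf

# Hypothetical toy data mirroring the registered outcomes:
# choice_algorithmic (0 = human, 1 = algorithmic), willingness to pay,
# and treatment status (1 = shown the median task performance).
df = pd.DataFrame({
    "choice_algorithmic": [1, 0, 1, 0, 1, 1, 0, 0],
    "wtp": [0.50, 0.10, 0.30, 0.00, 0.80, 0.20, 0.40, 0.05],
    "treated": [1, 1, 0, 0, 1, 0, 1, 0],
})

# Logistic regression of the discrete recruiter choice on treatment status
choice_model = smf.logit("choice_algorithmic ~ treated", data=df).fit()
print(choice_model.summary())

# OLS of willingness to pay on treatment status
wtp_model = smf.ols("wtp ~ treated", data=df).fit()
print(wtp_model.summary())
```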
Secondary Outcomes
Secondary Outcomes (end points)
Secondary Outcomes (explanation)
Experimental Design
Experimental Design
We experimentally mimic a hiring setting in which participants perform a number-finding task and then choose whether to be evaluated by an algorithmic or a human recruiter. We also elicit their willingness to pay to obtain their preferred recruiter. We randomly provide information about the median performance in the number-finding task to one subsample of participants (treatment group) and withhold it from another subsample (control group).
Experimental Design Details
Not available
Randomization Method
Randomization done by a computer
Randomization Unit
Individual level
Was the treatment clustered?
No
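A minimal sketch of the individual-level computer randomization described above, assuming the 300/200 split stated under the sample sizes below (the seed and variable names are illustrative, not from the registration):

```python
import random

random.seed(4411)  # hypothetical seed, used only so the sketch is reproducible

participant_ids = list(range(1, 501))  # 500 planned MTurk participants
random.shuffle(participant_ids)

# The first 300 shuffled IDs receive the median-performance information
# (treatment group); the remaining 200 do not (control group).
treatment_ids = set(participant_ids[:300])
assignment = {pid: "treatment" if pid in treatment_ids else "control"
              for pid in range(1, 501)}
```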
Experiment Characteristics
Sample size: planned number of clusters
0
Sample size: planned number of observations
500 individuals, recruited on Amazon's Mechanical Turk (MTurk)
Sample size (or number of clusters) by treatment arms
300 treatment, 200 control
Minimum detectable effect size for main outcomes (accounting for sample design and clustering)
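This field is left blank in the registration. Purely as an illustration of how a minimum detectable effect for the binary choice outcome could be computed for the 300/200 design (the 50% baseline share, 80% power, and 5% alpha below are assumptions, not registered values):

```python
import numpy as np
from statsmodels.stats.power import NormalIndPower

# Solve for the detectable effect size (Cohen's h) in a two-sample
# proportion test with n1 = 300 (treatment) and n2 = 200 (control).
h = NormalIndPower().solve_power(effect_size=None, nobs1=300, alpha=0.05,
                                 power=0.80, ratio=200 / 300,
                                 alternative="two-sided")

# Convert h back to a detectable share, assuming a 0.5 baseline share
# choosing the algorithmic recruiter (an assumption, not a registered value).
p1 = 0.5
p2 = np.sin(np.arcsin(np.sqrt(p1)) + h / 2) ** 2
print(f"Cohen's h = {h:.3f}; detectable shift: {p1:.2f} -> {p2:.2f}")
```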
IRB
INSTITUTIONAL REVIEW BOARDS (IRBs)
IRB Name
IRB Approval Date
IRB Approval Number