OK Computer: Job applicants' choice regarding algorithmic screening
Initial registration date
May 22, 2020
May 26, 2020 5:06 PM EDT
This section is unavailable to the public.
Other Primary Investigator(s)
Additional Trial Information
The screening of job applicants using data-driven technologies such as artificial intelligence (AI) has grown rapidly in the U.S. and is starting to pick up pace in Europe. We aim to study the decision process of job applicants when offered a choice between algorithmic and human evaluation, using a behavioral experiment. An emerging literature suggests that such algorithmic screening technologies are indeed valuable hiring tools for firms. However, concerns have been raised that algorithmic screening technologies may perpetuate or exacerbate existing labor market biases, exclude vulnerable groups from the labor market, provide no legal basis for appeals, and be poorly understood by the people judged by them (Barocas and Selbst, 2016). We study how job applicants perceive and value algorithmic vs. human screening, and how they change their behavior in response to algorithmic settings. These questions have so far been neglected.
We would like to answer the following questions: Do job applicants prefer to be evaluated by an algorithmic or by a human recruiter? Which demographic characteristics, perceptions, and attitudes are correlated with this choice? Are job applicants willing to pay to change the recruiting method they have been assigned to? Does job applicants' skill level correlate with their choice of a recruiter?
Intervention Start Date
Intervention End Date
Primary Outcomes (end points)
- A discrete choice variable of participants choosing either the algorithmic or human recruitment method (0 = human; 1 = algorithmic)
- Willingness to pay for their preferred recruitment method
Primary Outcomes (explanation)
- Task performance (absolute, relative to own average performance, and relative to performance of others)
- Individual risk aversion
- Subjective beliefs (fairness, transparency, simplicity, their ability to understand the recruiter, the variability of the recruiter's predictions, their familiarity with both recruitment methods, and the extent to which either method might discriminate against them or make errors)
- Big 5 personality traits (extroversion, conscientiousness, agreeableness, neuroticism, and openness to experience)
Secondary Outcomes (end points)
Secondary Outcomes (explanation)
We experimentally mimic a hiring setting in which participants perform a number-finding task and then choose whether to be evaluated by an algorithmic or a human recruiter. We also elicit their willingness to pay to obtain their preferred recruiter. We randomly provide information about the median performance in the number-finding task to one subsample of participants (treatment group) and withhold this information from another subsample (control group).
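The random assignment described above (a computer-randomized split into a treatment group that receives the median-performance information and a control group that does not) can be sketched as a simple complete randomization. This is an illustrative sketch only, not the registered randomization code; the function name, the seed, and the use of participant IDs 0–499 are assumptions, while the arm sizes (300 treatment, 200 control out of 500 MTurk participants) come from the registration.

```python
import random

def assign_arms(participant_ids, n_treatment=300, seed=2020):
    """Complete randomization: exactly n_treatment participants are
    assigned to treatment (shown the median task performance); the
    remainder form the control group. Seed is illustrative, fixed
    for reproducibility."""
    rng = random.Random(seed)
    shuffled = list(participant_ids)
    rng.shuffle(shuffled)
    treated = set(shuffled[:n_treatment])
    return {pid: ("treatment" if pid in treated else "control")
            for pid in participant_ids}

# Hypothetical: 500 participants with IDs 0..499, as in the planned sample.
arms = assign_arms(range(500))
```

With the registered sample, this yields exactly 300 treatment and 200 control assignments; stratified or blocked randomization would follow the same pattern within each stratum.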
Experimental Design Details
Randomization done by a computer
Was the treatment clustered?
Sample size: planned number of clusters
Sample size: planned number of observations
500 individuals, recruited on Amazon's Mechanical Turk (MTurk)
Sample size (or number of clusters) by treatment arms
300 treatment, 200 control
Minimum detectable effect size for main outcomes (accounting for sample design and clustering)
INSTITUTIONAL REVIEW BOARDS (IRBs)