OK Computer: Job applicants' choice regarding algorithmic screening

Last registered on May 26, 2020

Pre-Trial

Trial Information

General Information

Title
OK Computer: Job applicants' choice regarding algorithmic screening
RCT ID
AEARCTR-0004411
Initial registration date
May 22, 2020

Initial registration date is when the trial was registered.

It corresponds to when the registration was submitted to the Registry to be reviewed for publication.

First published
May 26, 2020, 5:06 PM EDT

First published corresponds to when the trial was first made public on the Registry after being reviewed.

Locations

Region

Primary Investigator

Affiliation
Utrecht University

Other Primary Investigator(s)

PI Affiliation
Utrecht University
PI Affiliation
Utrecht University
PI Affiliation
Utrecht University

Additional Trial Information

Status
Ongoing
Start date
2019-07-03
End date
2021-08-31
Secondary IDs
Abstract
The screening of job applicants using data-driven technologies such as artificial intelligence (AI) has grown rapidly in the U.S. and is starting to pick up pace in Europe. Using a behavioral experiment, we study the decision process of job applicants who are offered a choice between algorithmic and human evaluation. An emerging literature suggests that such algorithmic screening technologies are valuable hiring tools for firms. However, concerns are being raised that they may perpetuate or exacerbate existing labor market biases, exclude vulnerable groups from the labor market, and provide no legal basis for appeals, and that they are poorly understood by the people judged by them (Barocas and Selbst, 2016). We study how job applicants perceive and value algorithmic vs. human screening, and how they change their behavior in response to algorithmic settings. These questions have so far been neglected.

We would like to answer the following questions: Do job applicants prefer to be evaluated by an algorithmic or by a human recruiter? Which demographic characteristics, perceptions, and attitudes correlate with this choice? Are job applicants willing to pay to change the recruitment method they have been assigned? Does job applicants' skill level correlate with their choice of recruiter?

External Link(s)

Registration Citation

Citation
Fumagalli, Elena et al. 2020. "OK Computer: Job applicants' choice regarding algorithmic screening." AEA RCT Registry. May 26. https://doi.org/10.1257/rct.4411-1.0
Sponsors & Partners

There is information in this trial unavailable to the public.
Experimental Details

Interventions

Intervention(s)
Intervention Start Date
2020-05-25
Intervention End Date
2020-07-25

Primary Outcomes

Primary Outcomes (end points)
- A discrete choice variable indicating whether the participant chose the human or the algorithmic recruitment method (0 = human; 1 = algorithmic)
- Willingness to pay for their preferred recruitment method
Primary Outcomes (explanation)
- Task performance (absolute, relative to own average performance, and relative to performance of others)
- Individual Risk Aversion
- Subjective beliefs (fairness, transparency, simplicity, the applicant's ability to understand the recruiter, the variability of the recruiter's predictions, familiarity with both recruitment methods, and the extent to which either method might discriminate against them or make errors)
- Big 5 personality traits (extroversion, conscientiousness, agreeableness, neuroticism, and openness to experience)

Secondary Outcomes

Secondary Outcomes (end points)
Secondary Outcomes (explanation)

Experimental Design

Experimental Design
We experimentally mimic a hiring setting in which participants perform a number-finding task and then choose whether to be evaluated by an algorithmic or a human recruiter. We also elicit their willingness to pay to obtain their preferred recruiter. We randomly provide information about the median performance in the number-finding task to a subsample of participants (treatment group) and withhold it from another subsample (control group).
Experimental Design Details
We run our experiment on Amazon's Mechanical Turk (MTurk), an online labor market where employers offer real-wage tasks and exercises to a large pool of potential workers (see Horton et al., 2011). MTurk allows us to reach a pool of participants that is diversified in demographic characteristics (age, gender, ethnicity, education) as well as in risk aversion, competitiveness, and personality traits (measured using a five-factor model as in Gerlitz and Schupp, 2005). MTurk participants are the job applicants in our setting.

Job applicants' quality is measured as their performance in a number-finding task: finding the two numbers, out of a 3x3 matrix, that add up to one hundred (see also Buser et al., 2014). Job applicants perform the task 10 times; the faster an applicant is, the better her performance. Both the algorithmic and the human recruiter have imperfect information: they partially observe the job applicants' performance and some of their demographic characteristics (age, gender, education, ethnicity).

The experiment consists of two phases: a pilot experiment, which took place in spring 2019, and the main experiment, which will take place in June 2020. The pilot experiment is used to train the algorithmic and human recruiters. The algorithmic recruiter is an OLS regression trained on the performance of 345 MTurk job applicants; its coefficients are used to predict the performance of the job applicants who choose the algorithmic recruiter in the main experiment. For the human recruiter, we asked 22 Utrecht University students to evaluate the performance of 83 MTurk job applicants, with each student individually evaluating half of the 83 applicants. For each of the 22 human recruiters, we run an OLS regression to compute the weights that recruiter assigns to the demographic characteristics and to speed. We then randomly assign one of the 22 sets of weights to each job applicant who chooses to be evaluated by a human.
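The number-finding task itself can be sketched in a few lines of Python. This is illustrative only; the exact matrix values are not part of the registration.

import itertools

def find_pair(cells):
    """Return the two entries of a (flattened) 3x3 matrix that sum to 100."""
    for a, b in itertools.combinations(cells, 2):
        if a + b == 100:
            return a, b

print(find_pair([17, 42, 9, 83, 51, 6, 28, 75, 64]))  # -> (17, 83)

The two recruiters described above can likewise be sketched under stated assumptions: the feature coding, the simulated pilot data, and the scoring rule below are ours, since the registration specifies only that each recruiter is an OLS regression relating observed performance and demographics to a predicted score.

import numpy as np

rng = np.random.default_rng(0)

# Simulated pilot data: one row per pilot applicant.
# Columns: intercept, speed, age, gender, education, ethnicity
# (the coding is an assumption; the registration does not specify it).
n_pilot = 345
X = np.column_stack([np.ones(n_pilot), rng.normal(size=(n_pilot, 5))])
performance = rng.normal(size=n_pilot)  # placeholder performance scores

# Algorithmic recruiter: a single OLS fit on all 345 pilot applicants.
algo_weights, *_ = np.linalg.lstsq(X, performance, rcond=None)

# Human recruiters: one OLS fit per student. In the actual design each of
# the 22 students rated half of the 83 pilot applicants; here we draw a
# random subset of comparable size and simulate the ratings.
human_weights = []
for _ in range(22):
    idx = rng.choice(n_pilot, size=42, replace=False)
    ratings = rng.normal(size=idx.size)  # placeholder human ratings
    w, *_ = np.linalg.lstsq(X[idx], ratings, rcond=None)
    human_weights.append(w)

def predict_score(features: np.ndarray, algorithmic: bool) -> float:
    """Score a main-experiment applicant with the chosen recruiter's weights."""
    if algorithmic:
        return float(features @ algo_weights)
    # An applicant who chooses the human recruiter is assigned one of the
    # 22 weight vectors at random, as in the design above.
    return float(features @ human_weights[rng.integers(22)])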

In the main experiment we recruit around 500 job applicants. As in the pilot experiment, job applicants perform the number-finding exercise and choose between the algorithmic and the human recruiter. The computer then assigns one of the recruiters to each job applicant. We elicit job applicants' willingness to pay for their favorite recruiter by giving them the opportunity to change the recruiter assigned by the computer. Job applicants also have the opportunity to explain their choice of recruiter, their beliefs regarding how well the algorithmic and the human recruiter would score them, and their willingness to pay.
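A minimal sketch of this assign-then-switch step, assuming a simple take-it-or-leave-it switching price; the registration does not specify the elicitation mechanism, so the price rule and field names below are illustrative only.

import random
from dataclasses import dataclass

@dataclass
class Applicant:
    preferred: str  # "algorithmic" or "human"
    wtp: float      # stated willingness to pay for the preferred recruiter

def final_recruiter(applicant: Applicant, switch_price: float,
                    rng: random.Random) -> str:
    """Randomly assign a recruiter, then let the applicant pay to switch."""
    assigned = rng.choice(["algorithmic", "human"])
    if assigned != applicant.preferred and applicant.wtp >= switch_price:
        return applicant.preferred  # the applicant pays and switches
    return assigned

rng = random.Random(1)
print(final_recruiter(Applicant(preferred="human", wtp=0.50), 0.25, rng))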

We also study how job applicants' perception of their own ability in the number-finding exercise affects their choice of recruiter and their willingness to pay. To do this, we randomly provide information about the median performance in the number-finding task to a subsample of participants (treatment group) and withhold it from another subsample (control group).
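Since the randomization is done by computer at the individual level (see below), with planned arm sizes of 300 treatment and 200 control, a hypothetical assignment routine might look as follows; the function name and seed are ours, not part of the registration.

import random

def assign_information_treatment(participant_ids, n_treatment=300, seed=2020):
    """Individual-level randomization: treatment sees the pilot median
    performance in the number-finding task; control does not."""
    ids = list(participant_ids)
    random.Random(seed).shuffle(ids)
    treated = set(ids[:n_treatment])
    return {pid: ("treatment" if pid in treated else "control") for pid in ids}

arms = assign_information_treatment(range(500))
print(sum(1 for a in arms.values() if a == "treatment"))  # 300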
Randomization Method
Randomization done by a computer
Randomization Unit
Individual level
Was the treatment clustered?
No

Experiment Characteristics

Sample size: planned number of clusters
0
Sample size: planned number of observations
500 individuals, recruited on Amazon's Mechanical Turk (MTurk)
Sample size (or number of clusters) by treatment arms
300 treatment, 200 control
Minimum detectable effect size for main outcomes (accounting for sample design and clustering)
IRB

Institutional Review Boards (IRBs)

IRB Name
IRB Approval Date
IRB Approval Number

Post-Trial

Post Trial Information

Study Withdrawal

There is information in this trial unavailable to the public.

Intervention

Is the intervention completed?
No
Data Collection Complete
Data Publication

Data Publication

Is public data available?
No

Program Files

Program Files
Reports, Papers & Other Materials

Relevant Paper(s)

Reports & Other Materials