The Supply Effects of Additional Screening in Recruitment

Last registered on April 16, 2024

Pre-Trial

Trial Information

General Information

Title
The Supply Effects of Additional Screening in Recruitment
RCT ID
AEARCTR-0013356
Initial registration date
April 12, 2024

First published
April 16, 2024, 3:15 PM EDT

Last updated
April 16, 2024, 8:39 PM EDT

Locations

There is information in this trial unavailable to the public.

Primary Investigator

Affiliation
Monash University

Other Primary Investigator(s)

PI Affiliation
Monash University
PI Affiliation
University of Gothenburg
PI Affiliation
University of Exeter

Additional Trial Information

Status
In development
Start date
2024-04-22
End date
2024-05-30
Secondary IDs
Prior work
This trial does not extend or rely on any prior RCTs.
Abstract
In this project, we study the impact of screening stages in recruitment on applicant behavior.
External Link(s)

Registration Citation

Citation
Avery, Mallory et al. 2024. "The Supply Effects of Additional Screening in Recruitment." AEA RCT Registry. April 16. https://doi.org/10.1257/rct.13356-1.1
Experimental Details

Interventions

Intervention(s)
In this project, we study whether job applicants respond to different screening methods during the recruitment process for a real job. To study this, we experimentally vary the mode by which applicants are screened.

We consider this the supply-side experiment. A future demand-side experiment will be pre-registered prior to collecting the demand-side data.
Intervention Start Date
2024-04-22
Intervention End Date
2024-05-30

Primary Outcomes

Primary Outcomes (end points)
We collect the following primary outcomes on the supply side:
• Initial Dropout
• Screening Dropout
Primary Outcomes (explanation)
Initial Dropout: Upon receiving an email inviting them to take part in a screening task (or, in the control, to confirm continued interest in the job), applicants must click a link/button that confirms their interest. This dummy variable equals one if the applicant does not click the link/button and zero if they do.

Screening Dropout: This is a dummy variable equal to one if the candidate does not complete the screening. A candidate is considered to have completed the screening if they answer all interview questions.

We will analyze these outcomes overall and by job.

Secondary Outcomes

Secondary Outcomes (end points)
To understand possible mechanisms, we elicit the following secondary outcome variables:

Evaluation scores: The score of the screening assessment, as calculated by the algorithm or the human evaluator.

Time taken: The time the candidate spends completing the whole screening.

Type of language used: We will use an algorithm to assess the type of language used in applicants’ responses.

Fairness/bias perceptions: We will elicit applicants’ perceptions of whether the interview process they experienced was fair or biased.

Interview anxiety: We will elicit applicants’ interview anxiety through the index of McCarthy and Goffin (2004).

Measure of perceived meritocracy of the interview process: We will elicit applicants’ perceptions of meritocracy for different types of interview methodologies.

Recommendation intention: We will elicit applicants’ willingness to recommend the employer, as an indicator of their perception of the employer, through a modified Reichheld (2003) measure.
Secondary Outcomes (explanation)
See above

Experimental Design

Experimental Design
Our design aims to measure the impact of various types of additional screening stages in the recruitment process on the behavior of job applicants.
Experimental Design Details
Not available
Randomization Method
Randomization will be carried out by a computer.
Randomization Unit
The randomization unit will be the individual for all treatments.
Was the treatment clustered?
No

Experiment Characteristics

Sample size: planned number of clusters
Treatment is assigned at the individual level, so clustering takes place at the individual level.
Sample size: planned number of observations
The number of clusters will equal the number of legitimate candidates. Based on previous experience, we expect to have at least 2400 legitimate job applicants across 3 jobs, though this will likely vary depending on a number of factors; we plan to keep the jobs open until we have at least 2400 applicants. A legitimate candidate is someone who resides in the United States and completes the initial application form. We plan to assign an equal proportion of the sample to each treatment, except for the Zoom treatment, which will receive a smaller share depending on the available evaluating resources. Further, we will stratify by gender so that each treatment has the same proportion of men and women.
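
As an illustration of the assignment scheme described above, the Python sketch below randomizes applicants to treatments within gender strata, with equal shares across arms except a smaller Zoom arm. The treatment names, the Zoom share, and the data layout are hypothetical assumptions for illustration only; they are not taken from the registration.

    import random

    random.seed(13356)  # arbitrary seed, chosen only for reproducibility

    TREATMENTS = ["control", "written", "ai_interview", "zoom"]  # hypothetical arm names
    ZOOM_SHARE = 0.10  # assumed smaller share for the resource-constrained Zoom arm

    def assign_treatments(applicants):
        # `applicants` is a list of dicts with "id" and "gender" keys (assumed layout).
        assignments = {}
        strata = {}
        for applicant in applicants:
            strata.setdefault(applicant["gender"], []).append(applicant["id"])
        # Randomize separately within each gender stratum so that every
        # treatment receives the same proportion of men and women.
        for ids in strata.values():
            random.shuffle(ids)
            n_zoom = round(ZOOM_SHARE * len(ids))
            other_arms = [t for t in TREATMENTS if t != "zoom"]
            for i, applicant_id in enumerate(ids):
                if i < n_zoom:
                    # The Zoom arm receives its fixed, smaller share of each stratum.
                    assignments[applicant_id] = "zoom"
                else:
                    # The remaining arms split the rest of the stratum equally.
                    assignments[applicant_id] = other_arms[(i - n_zoom) % len(other_arms)]
        return assignments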
Sample size (or number of clusters) by treatment arms
We plan to have at least 2400 applicants. We will split the applicants equally across the treatments, except for the Zoom treatment, whose size will be based on available resources.
Minimum detectable effect size for main outcomes (accounting for sample design and clustering)
Assuming a sample of 400 in each treatment, 80% power, and a control-group proportion of 0.50 for the binary outcome (i.e., the proportion of the study population that would have a value of 1 in the absence of treatment), we expect an MDE of 0.10. The MDE for the Zoom treatment will be larger than for the other treatments, so this treatment may be underpowered. This arises from the practical challenges of conducting live interviews with a substantial number of candidates.
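
The stated MDE can be reproduced with a standard normal-approximation power calculation for a difference in two proportions. The sketch below is our own illustration, not the authors' code; the two-sided significance level of 0.05 is an assumption, since the registration does not state it.

    from scipy.stats import norm

    n_per_arm = 400   # stated sample per treatment arm
    power = 0.80      # stated power
    alpha = 0.05      # assumed two-sided significance level (not stated above)
    p0 = 0.50         # stated control-group proportion of the binary outcome

    z_alpha = norm.ppf(1 - alpha / 2)  # about 1.96
    z_beta = norm.ppf(power)           # about 0.84

    # Normal-approximation MDE for the difference in two proportions,
    # evaluating the variance at the control proportion in both arms.
    mde = (z_alpha + z_beta) * (2 * p0 * (1 - p0) / n_per_arm) ** 0.5
    print(f"MDE = {mde:.3f}")  # prints 0.099, i.e. roughly the registered 0.10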
IRB

Institutional Review Boards (IRBs)

IRB Name
Monash University Ethics Committee
IRB Approval Date
2018-03-07
IRB Approval Number
N/A