Pay for applications

Last registered on August 29, 2019

Pre-Trial

Trial Information

General Information

Title
Pay for applications
RCT ID
AEARCTR-0004625
Initial registration date
August 26, 2019

Initial registration date is when the trial was registered; it corresponds to when the registration was submitted to the Registry to be reviewed for publication.

First published
August 29, 2019, 8:44 AM EDT

First published corresponds to when the trial was first made public on the Registry after being reviewed.

Locations

Region

Primary Investigator

Affiliation
Gothenburg University

Other Primary Investigator(s)

PI Affiliation
Monash University
PI Affiliation
Victoria University of Wellington
PI Affiliation
Griffith University

Additional Trial Information

Status
In development
Start date
2019-09-02
End date
2020-01-15
Secondary IDs
Abstract
In this project, we study whether the competitive nature of the job application process disadvantages certain applicants in a field experiment where a programmer is recruited after completing an assessment task.
External Link(s)

Registration Citation

Citation
Feld, Jan et al. 2019. "Pay for applications." AEA RCT Registry. August 29. https://doi.org/10.1257/rct.4625-1.0
Experimental Details

Interventions

Intervention(s)
Intervention Start Date
2019-09-23
Intervention End Date
2019-10-14

Primary Outcomes

Primary Outcomes (end points)
Key outcome variables include:

The number of applicants (men and women) who complete their assessment tasks in the different treatments.

Assessment task scores (of men and women) in the different treatments.

The quality of the applicants who complete their assessment tasks in the different treatments.

Gender differences in the effect that the monetary incentive has on the quantity and quality of applications.

The number of women reaching the shortlisting criteria (e.g., scoring in the top 25% on the assessment test) in the different treatments.
Primary Outcomes (explanation)
We are interested in the number of completed applications (assessment tasks) between treatments, in any gender differences in the control group and in the treatment group, and in whether different treatments show different levels of gender difference (Male Control - Female Control vs Male Treatment - Female Treatment). Similarly, we are interested in the quality of applications between treatments, which is measured by a candidate’s score in a Python programming test that forms part of the assessment task (see the AEA RCT Registry pre-analysis plan titled Understanding Skill Gap for more details on the test and scoring). Using the test score, we will also study whether there are gender differences in treatment effects between and within treatments.
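
As a purely illustrative sketch (the registration does not specify an estimating equation), the between-treatment gender comparison could be run as an interaction regression; the data file and column names (completed, female, incentive) are placeholders:

    # Hypothetical sketch, not the authors' analysis code; column names are placeholders.
    import pandas as pd
    import statsmodels.formula.api as smf

    df = pd.read_csv("applicants.csv")  # one row per invited applicant (placeholder file)

    # completed: 1 if the assessment task was completed; female: 1 for women;
    # incentive: 1 for the incentive treatment, 0 for control.
    model = smf.ols("completed ~ female * incentive", data=df).fit(cov_type="HC1")
    print(model.summary())

    # The female:incentive coefficient estimates the difference in gender gaps:
    # (Female Treatment - Male Treatment) - (Female Control - Male Control).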

To further compare whether the quality of applicants differs between the treatments, we will compare the difference between the treatment and control group in applicants’ “predicted programming skill” based on characteristics measured in the applicant survey. We compare predicted instead of actual test performance because the incentives may have a direct effect on candidates’ performance. To calculate candidates’ “predicted programming skill” we will i) regress candidates’ Python skill on characteristics taken from the applicant survey in the base treatment, ii) save the regression coefficients from this regression, and iii) multiply these coefficients with the characteristics of candidates in all treatments taken from the survey.
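
A minimal sketch of this three-step procedure, assuming a placeholder score column (python_score) and placeholder survey covariates, and reusing df from the sketch above:

    import statsmodels.api as sm

    covariates = ["experience_years", "education_years", "age"]  # placeholder survey characteristics

    # i) regress candidates' Python test scores on survey characteristics in the base treatment
    base = df[df["incentive"] == 0]
    fit = sm.OLS(base["python_score"], sm.add_constant(base[covariates])).fit()

    # ii) the regression coefficients are stored in fit.params;
    # iii) apply them to every candidate's characteristics in all treatments
    df["predicted_skill"] = fit.predict(sm.add_constant(df[covariates]))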

We are also interested in the characteristics of applicants with completed applications in general. Characteristics are taken from the applicant baseline survey in stage 1 (e.g., education, years of programming experience, age, risk aversion, and competitiveness). Considering both the changes in quantity and quality, we would also like to examine how the characteristics of shortlisted candidates differ across treatments. Shortlisting is done according to the Python programming test score.
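
Continuing the same sketch, the shortlisting rule mentioned above (top 25% of Python test scores, given as an example) could be computed as follows; all names remain placeholders:

    # Hypothetical shortlisting at the top 25% of Python test scores
    cutoff = df["python_score"].quantile(0.75)
    shortlisted = df[df["python_score"] >= cutoff]

    # Share of women and average characteristics among shortlisted candidates, by treatment
    print(shortlisted.groupby("incentive")["female"].mean())
    print(shortlisted.groupby("incentive")[["age", "experience_years"]].mean())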

Secondary Outcomes

Secondary Outcomes (end points)
We will test whether the effect of the incentives depends on candidates’ risk aversion and competitiveness.

We will also interact the treatments with aptitude and personality test performance.
Secondary Outcomes (explanation)
We will measure candidates’ risk aversion and competitiveness using the candidate survey.
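
A sketch of such a heterogeneity test, under the same placeholder naming (risk_aversion and competitiveness as survey measures):

    import statsmodels.formula.api as smf

    # Hypothetical test of whether the incentive effect depends on risk aversion
    # and competitiveness; variable names are placeholders.
    het = smf.ols(
        "completed ~ incentive * risk_aversion + incentive * competitiveness",
        data=df,
    ).fit(cov_type="HC1")
    print(het.summary())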

Experimental Design

Experimental Design
In the experiment, invited applicants will be randomly assigned to a control or a treatment group in order to test potential instruments that aim to reduce gender differences in STEM fields.
Experimental Design Details
In stage 1, we will post a job ad for a programming job (1 month, 80 hours) across the United States on several major job portals (e.g. joinhandshake.com, dice.com, crunchboard.com, github.com) and ask job applicants to send their CV and fill out a short survey (e.g. years of programming experience). After applications close, in stage 2, we will invite a subset of applicants to do an assessment task: a Python programming test plus either an aptitude test (for half of the applicants) or a personality test. To make sure that there are similar numbers of male and female applicants in each treatment, we will invite a random sample of male applicants and a random sample of female applicants to complete the assessment task. The key feature of the experiment is that we will randomly assign all invited applicants to one of two treatments: control or incentive.

In the control treatment, applicants will not be incentivized to complete the assessment task. In the incentive treatment, when applicants are invited to take the assessment, they will be informed that after completing the assessment task they will receive a base pay of USD 5 plus additional pay that depends on their test performance. Total pay will range between USD 5 and USD 10.
Randomization Method
Randomization will be carried out by a computer.
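
The registration does not describe the procedure beyond computer randomization; one common implementation, stratifying by gender so that each treatment receives equal numbers of men and women, might look like this (names and seed are illustrative):

    import numpy as np

    rng = np.random.default_rng(seed=4625)  # illustrative seed

    def assign_treatments(df):
        """Assign half of each gender stratum to the incentive treatment at random."""
        df = df.copy()
        df["incentive"] = 0
        for _, idx in df.groupby("female").groups.items():
            shuffled = rng.permutation(np.asarray(idx))
            df.loc[shuffled[: len(shuffled) // 2], "incentive"] = 1
        return df
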
Randomization Unit
The randomization unit will be the individual applicant for all treatments.
Was the treatment clustered?
No

Experiment Characteristics

Sample size: planned number of clusters
The number of clusters will equal the number of observations.
Sample size: planned number of observations
We aim to invite 720 applicants (360 male and 360 female), which would give us a maximum of 720 observations (of these invited applicants, only a subset will complete the assessment test). The actual number of invited applicants depends on the number and gender composition of the applicant pool.
Sample size (or number of clusters) by treatment arms
We aim to have 360 applicants in each treatment group:
• Control: 360 applicants (180 male and 180 female)
• Incentive: 360 applicants (180 male and 180 female)
Minimum detectable effect size for main outcomes (accounting for sample design and clustering)
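
This field is left blank in the registration. Purely as an illustration of how a minimum detectable effect could be computed for the planned 360-per-arm design, under conventional assumptions (two-sided alpha = 0.05, power = 0.80) that the authors do not state:

    from statsmodels.stats.power import NormalIndPower

    # Illustrative only: standardized MDE for a two-group comparison with
    # 360 applicants per arm, assuming alpha = 0.05 and power = 0.80.
    mde = NormalIndPower().solve_power(nobs1=360, alpha=0.05, power=0.8, ratio=1.0)
    print(f"MDE: {mde:.3f} standard deviations")
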
IRB

Institutional Review Boards (IRBs)

IRB Name
Monash University
IRB Approval Date
2019-11-14
IRB Approval Number
14985

Post-Trial

Post Trial Information

Study Withdrawal

Intervention

Is the intervention completed?
No
Data Collection Complete
Data Publication

Data Publication

Is public data available?
No

Program Files

Program Files
Reports, Papers & Other Materials

Relevant Paper(s)

Reports & Other Materials