Job Searcher Responses to Information About Vacancy Competition

Last registered on July 21, 2022

Pre-Trial

Trial Information

General Information

Title
Job Searcher Responses to Information About Vacancy Competition
RCT ID
AEARCTR-0009344
Initial registration date
July 21, 2022

Initial registration date is when the trial was registered.

It corresponds to when the registration was submitted to the Registry to be reviewed for publication.

First published
July 21, 2022, 12:32 PM EDT

First published corresponds to when the trial was first made public on the Registry after being reviewed.

Locations

Region

Primary Investigator

Affiliation
Boston University

Other Primary Investigator(s)

PI Affiliation
Massachusetts Institute of Technology
PI Affiliation
Facebook

Additional Trial Information

Status
In development
Start date
2022-07-21
End date
2022-07-22
Secondary IDs
Prior work
This trial does not extend or rely on any prior RCTs.
Abstract
We study how job seekers respond to information about how many other people have already applied to a job vacancy. To do so, we conduct a survey in which job seekers are asked questions about three hypothetical choices of which jobs to apply to.
External Link(s)

Registration Citation

Citation
Bhole, Monica, Andrey Fradkin and John Horton. 2022. "Job Searcher Responses to Information About Vacancy Competition." AEA RCT Registry. July 21. https://doi.org/10.1257/rct.9344
Experimental Details

Interventions

Intervention(s)
This is an informational treatment in a hypothetical choice experiment. The specific information concerns the number of current applicants to the job and the AI prediction of receiving a job offer if applying.
Intervention Start Date
2022-07-21
Intervention End Date
2022-07-22

Primary Outcomes

Primary Outcomes (end points)
Whether the participant picks Blank Co, picks Brown Co, or has no preference, for each of the three choices.
Primary Outcomes (explanation)

Secondary Outcomes

Secondary Outcomes (end points)
The stated reasons for each choice, taken from the open-ended explanations.
Secondary Outcomes (explanation)
We will look for n-grams related to two potential mechanisms: competition avoidance and herding.
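The registration does not list the specific n-grams, so the keyword lists below are purely illustrative assumptions; a minimal sketch of tagging an open-ended explanation for the two candidate mechanisms might look like:

```python
from collections import Counter

# Hypothetical keyword lists -- the actual n-grams to be used are not
# specified in the registration; these are illustrative assumptions.
COMPETITION_TERMS = ["fewer applicants", "less competition", "better odds", "crowded"]
HERDING_TERMS = ["popular", "many applicants", "others applied", "must be good"]

def tag_mechanisms(explanation: str) -> Counter:
    """Count mentions of each candidate mechanism in one free-text answer."""
    text = explanation.lower()
    counts = Counter()
    for term in COMPETITION_TERMS:
        if term in text:
            counts["competition_avoidance"] += 1
    for term in HERDING_TERMS:
        if term in text:
            counts["herding"] += 1
    return counts
```

For example, `tag_mechanisms("I picked it because there was less competition and better odds.")` would tag the answer with two competition-avoidance hits and no herding hits.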

Experimental Design

Experimental Design
We will randomize which information is shown across individuals and choices.
Experimental Design Details
We are fielding a choice-based survey of individuals on Mechanical Turk. Each survey taker will be given three hypothetical choice scenarios, each consisting of two vacancies (Blank Co and Brown Co). For each comparison, the participant is asked: "Which company do you value applying to most?"

The options differ in several characteristics:
A) The name of the company (Blank Co, Brown Co). The first option is always Blank Co.
B) The hourly wage. For the first comparison, this is $18 for Blank Co and $20 for Brown Co. For the second comparison it is $19 for Blank Co and $21 for Brown Co. For the third comparison it is $22 for Blank Co and $24 for Brown Co.
C) The current number of applicants. For each comparison, there are two preset values, which are allocated with 50/50 probability to Blank Co and Brown Co or vice versa. For the first comparison, the values are 5-20 and 200+; for the second, 200+ and 0-4; and for the third, 0-4 and 5-20.
D) Whether an additional line in green is shown. This is randomized at the participant level with 50% probability. The line states "AI probability you get an offer: X%" and is identical across the two options within a choice but differs across choices: the probability is 25% for choice one, 13% for choice two, and 30% for choice three. If participants see this line, the instructions also state:
“Our job search engine uses an artificial intelligence (AI) algorithm that uses all information to predict whether you would receive an offer if you applied. This probability is also included in the job description.”
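The randomization described in A-D above can be sketched as follows. The wage, applicant-count, and AI-probability values are taken from the design; the function and variable names are our own, and this is a sketch rather than the study's actual survey code.

```python
import random

# Values from the registered design: per-comparison wages (Blank Co, Brown Co),
# applicant-count pairs, and AI offer probabilities.
APPLICANT_PAIRS = [("5-20", "200+"), ("200+", "0-4"), ("0-4", "5-20")]
WAGES = [(18, 20), (19, 21), (22, 24)]
AI_PROBS = [25, 13, 30]

def draw_survey(rng: random.Random) -> dict:
    """Draw one participant's three choice scenarios."""
    show_ai = rng.random() < 0.5  # participant-level: show the AI prediction line
    choices = []
    for i in range(3):
        a, b = APPLICANT_PAIRS[i]
        if rng.random() < 0.5:    # choice-level: which vacancy gets which count
            a, b = b, a
        choices.append({
            "blank_co": {"wage": WAGES[i][0], "applicants": a},
            "brown_co": {"wage": WAGES[i][1], "applicants": b},
            "ai_prob": AI_PROBS[i] if show_ai else None,
        })
    return {"show_ai": show_ai, "choices": choices}
```

Note that the AI line is a single draw per participant, while the applicant counts are drawn independently for each of the three choices, matching the randomization units stated below.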


We also ask an open-ended question about each choice after all choices are made. The prompt displays the information about the choice and asks: "In comparison (1/2/3), you picked (Blank Co / No Preference / Brown Co). Please explain why you made this choice."

Lastly, we ask participants whether they responded randomly and if they have any comments about the survey.

Randomization Method
We use a computer to randomize values.
Randomization Unit
At both a participant level (whether the AI prediction line is shown) and at a choice level (the specific values of the current applicants).
Was the treatment clustered?
No

Experiment Characteristics

Sample size: planned number of clusters
1200 Individuals
Sample size: planned number of observations
1200 individuals; 1200 × 3 choice sets = 3,600 choice sets; 3,600 × 3 options per choice set = 10,800 potential choices.
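The planned observation counts follow from simple multiplication over participants, choices, and options; a quick arithmetic check:

```python
participants = 1200
choices_per_participant = 3
options_per_choice = 3  # Blank Co, Brown Co, no preference

choice_sets = participants * choices_per_participant     # 3,600 choice sets
potential_choices = choice_sets * options_per_choice     # 10,800 potential choices
print(choice_sets, potential_choices)
```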
Sample size (or number of clusters) by treatment arms
600 participants who see the line about the AI prediction and 600 who do not.
Minimum detectable effect size for main outcomes (accounting for sample design and clustering)
Supporting Documents and Materials

There is information in this trial unavailable to the public.
IRB

Institutional Review Boards (IRBs)

IRB Name
Massachusetts Institute of Technology
IRB Approval Date
2022-04-29
IRB Approval Number
E-4047: Hypothetical Job Preferences
Analysis Plan

There is information in this trial unavailable to the public.

Post-Trial

Post Trial Information

Study Withdrawal

There is information in this trial unavailable to the public.

Intervention

Is the intervention completed?
No
Data Collection Complete
Data Publication

Data Publication

Is public data available?
No

Program Files

Program Files
Reports, Papers & Other Materials

Relevant Paper(s)

Reports & Other Materials