Discrimination, Rejection and Job Search

Last registered on May 13, 2024

Pre-Trial

Trial Information

General Information

Title
Discrimination, Rejection and Job Search
RCT ID
AEARCTR-0011771
Initial registration date
May 02, 2024

Initial registration date is when the trial was registered.

It corresponds to when the registration was submitted to the Registry to be reviewed for publication.

First published
May 13, 2024, 11:52 AM EDT

First published corresponds to when the trial was first made public on the Registry after being reviewed.

Locations

Some information in this trial is unavailable to the public.

Primary Investigator

Affiliation
INSEAD

Other Primary Investigator(s)

PI Affiliation
Harvard Business School
PI Affiliation
INSEAD & Sciences Po (LIEPP)
PI Affiliation
Erasmus University Rotterdam, Tinbergen Institute & Sciences Po (LIEPP)

Additional Trial Information

Status
In development
Start date
2024-04-23
End date
2025-01-31
Secondary IDs
Prior work
This trial does not extend or rely on any prior RCTs.
Abstract
The purpose of this project is to examine how job seekers react to rejection of their applications in the labor market when recruiters can or cannot discriminate. We design a randomized online survey experiment in which we assign participants to the role of recruiter or job candidate. First, we use this simulated job market to examine recruiters' willingness to hire and candidates' willingness to apply to a job opportunity under a blind or a non-blind hiring process. Second, we examine the factors to which job seekers attribute the rejection of their application, and how these factors relate to recruiters' actual discrimination in the market. Finally, we examine whether rejection and the reasons to which job seekers attribute rejection affect their willingness to apply to a new job opportunity. Our goal is to determine the potential discouraging effect of perceived discrimination on job search behaviors. In this document, we provide details about our planned study protocol, including our plans regarding experimental design, data collection, sample restrictions, and main research questions.
External Link(s)

Registration Citation

Citation
Boring, Anne et al. 2024. "Discrimination, Rejection and Job Search." AEA RCT Registry. May 13. https://doi.org/10.1257/rct.11771-1.0
Experimental Details

Interventions

Intervention(s)
Intervention Start Date
2024-04-23
Intervention End Date
2024-05-31

Primary Outcomes

Primary Outcomes (end points)
Recruiters:
- Willingness to Hire (WTH)
- Beliefs about the performance of individuals from different groups: gender (men and women), age (under 45 and 45+), educational attainment (high school, bachelor's, and advanced degree), favorite subject (humanities, social science, and STEM)
Candidates:
- Willingness to Apply (WTA)
- Beliefs about likelihood to be hired
- Beliefs about qualifications
- Attitudes towards disappointment
- Beliefs about recruiters' reasons for hiring or rejection decisions
Primary Outcomes (explanation)
See PAP

Secondary Outcomes

Secondary Outcomes (end points)
Candidates:
- Attitudes regarding discrimination and blind hiring procedures
Secondary Outcomes (explanation)
See PAP

Experimental Design

Experimental Design
We conduct a randomized online survey experiment in which we simulate a job market with a high rejection rate. We run two sessions: one with participants who act as recruiters, and one with participants who act as job candidates. Recruiters make a series of independent hiring decisions about candidates. We randomize recruiters into one of two conditions: hiring with blind resumes or hiring with non-blind resumes. In the candidate survey, participants decide whether or not to apply to four job opportunities. First, we randomize candidates into one of two conditions: applying with a blind resume or applying with a non-blind resume. Then, we switch the treatment for the second application decision. After that, we randomize candidates again to receive (negative) feedback on their blind or non-blind resume. After rejection, we ask candidates to apply again with the resume that was rejected. Finally, they switch treatment again for the fourth application decision.
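To make the candidate-side sequence concrete, the following is a minimal sketch (in Python) of the assignment logic described above. The labels ("blind", "non-blind"), the function name, and the data structure are illustrative assumptions, not the actual survey implementation.

import random

def candidate_treatment_sequence(rng: random.Random) -> dict:
    # Decision 1: randomize the candidate into a blind or non-blind application.
    first = rng.choice(["blind", "non-blind"])
    # Decision 2: switch the treatment.
    second = "non-blind" if first == "blind" else "blind"
    # Randomize which of the two resumes receives the (negative) feedback.
    rejected = rng.choice(["blind", "non-blind"])
    # Decision 3: re-apply with the resume that was rejected.
    third = rejected
    # Decision 4: switch the treatment again.
    fourth = "non-blind" if third == "blind" else "blind"
    return {"decisions": [first, second, third, fourth], "rejected_resume": rejected}

print(candidate_treatment_sequence(random.Random(0)))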

All resumes contain information on a candidate's educational attainment, favorite field of study, and sample performance on a technical test. The dimensions on which recruiters could discriminate in the non-blind treatment are gender and age. Blind resumes exclude information about a candidate's gender and age, which prevents recruiters from discriminating on the basis of these characteristics; blind resumes also prevent candidates from expecting discrimination based on these characteristics. Non-blind resumes include information about gender and age, which allows recruiters to discriminate and candidates to expect discrimination. We measure the causal effect of blind resumes on hiring and application behavior. To do so, we design a multiple price list experiment to measure recruiters' and candidates' decisions. This experiment allows us to quantify recruiters' willingness to hire depending on a candidate's age and gender, as well as these candidates' willingness to apply when discrimination is or is not possible, both before and after a rejection.
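As an illustration of how a multiple price list can be summarized, the sketch below extracts a turning point from a sequence of binary choices between hiring and an increasing fixed payment. The row amounts, labels, and function name are hypothetical; the actual lists, payments, and dominated rows are described in the PAP.

from typing import Optional, Sequence

def turning_point(choices: Sequence[str], amounts_cents: Sequence[int]) -> Optional[int]:
    # Return the fixed payment (in cents) at which the participant first switches
    # away from hiring (or applying); assumes at most one switch per list.
    for choice, amount in zip(choices, amounts_cents):
        if choice == "fixed_payment":
            return amount
    return None  # never switched: willingness exceeds the largest amount offered

# Illustrative recruiter list: this respondent prefers hiring until the fixed payment reaches 400 cents.
amounts = [50, 100, 200, 300, 400, 500, 600]  # hypothetical rows; choosing to hire at 600 would be strictly dominated
choices = ["hire", "hire", "hire", "hire", "fixed_payment", "fixed_payment", "fixed_payment"]
print(turning_point(choices, amounts))  # 400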
Experimental Design Details
Not available
Randomization Method
Randomization is done by computer on the Prolific platform.
Randomization Unit
Recruiters and job candidates are randomly assigned to treatment at the individual level.
Was the treatment clustered?
No

Experiment Characteristics

Sample size: planned number of clusters
2,000 participants as recruiters and 4,000 as candidates (unrestricted sample, see "sample size" section for restrictions)
Sample size: planned number of observations
2,000 as recruiters and 4,000 as candidates (unrestricted sample).
Sample size (or number of clusters) by treatment arms
We plan to collect data from 2,500 Prolific participants who act as recruiters. We base our planned sample size on power calculations using a pilot. We will require that all participants who act as recruiters have completed at least 100 past studies with an approval rate of 95% or above. We randomly assign 50% of recruiters to evaluate blind resumes, and the other 50% to evaluate non-blind resumes.
We also plan to collect data from 4,000 Prolific participants who act as job candidates. We will require that all candidates have completed at least 100 past studies with an approval rate of 95% or above. We will use Prolific's filters to aim for balance on our key variables. In particular, we will ask Prolific to field candidate data as follows:
- 1,000 men, age 45+
- 1,000 men, age under 45
- 1,000 women, age 45+
- 1,000 women, age under 45
Given the available participant pool on Prolific, we anticipate being able to complete our full data collection while maintaining these restrictions. However, if we have not reached 4,000 candidate participants after 2 weeks of data collection for the candidate survey, we will drop these demographic restrictions and recruit the remainder of the sample. We anticipate that collecting 4,000 candidate responses will yield enough high quality data (as identified by our exclusion restrictions described below) to remain well-powered for our main hypotheses.
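For concreteness, the planned candidate quotas can be represented as simple targets per gender-and-age cell, as in the sketch below; this is only an illustration of the fielding plan, not Prolific's actual screening interface.

# Planned candidate quotas (unrestricted sample) by gender x age cell.
CANDIDATE_QUOTAS = {
    ("man", "45+"): 1000,
    ("man", "under 45"): 1000,
    ("woman", "45+"): 1000,
    ("woman", "under 45"): 1000,
}

def quotas_met(counts: dict) -> bool:
    # True once every demographic cell has reached its target.
    return all(counts.get(cell, 0) >= target for cell, target in CANDIDATE_QUOTAS.items())

print(sum(CANDIDATE_QUOTAS.values()))  # 4000 planned candidate participants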
Importantly, these sample sizes correspond to the unrestricted sample, i.e. the responses we collect before eliminating participants who do not pass our attention and understanding checks.
We include attention questions in both recruiter and candidate surveys, as suggested by Haaland et al. (2023). In both surveys, we ask: "The next question is about the following problem. In questionnaires like ours, sometimes there are participants who do not carefully read the questions and just quickly click through the survey. This means that there are a lot of random answers which compromise the results of research studies." To recruiters, we then ask: "To show that you read our questions carefully, please enter twenty as your answer to the following question. How many resumes did you just evaluate?" To candidates, we ask: "To show that you read our questions carefully, please enter twenty as your answer to the following question. How many different resumes did we show you?" We plan to exclude from our final sample any respondent who does not answer this question correctly (we accept typos). We ask this question after the price lists in both surveys.
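A minimal sketch of how this attention-check exclusion could be applied to the collected responses follows; the set of accepted typo variants and the numeric form are assumptions on our part.

# Accepted answers to the attention check; the exact set of tolerated typos is illustrative.
ACCEPTED_ATTENTION_ANSWERS = {"twenty", "tweny", "twently", "twentey", "20"}

def passes_attention_check(raw_answer: str) -> bool:
    # True if the free-text answer matches an accepted variant of "twenty".
    return raw_answer.strip().lower() in ACCEPTED_ATTENTION_ANSWERS

print(passes_attention_check(" Twenty "))  # True
print(passes_attention_check("10"))        # False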
We also include questions to measure whether respondents understand basic instructions.
For recruiters, we ask after presenting the list instructions: "If you are selected for extra payment and you chose to hire the candidate from the computer's chosen row, what bonus payment will you receive?
- $0
- 50 cents ($0.50) per question answered correctly by the candidate I matched with
- 100 cents ($1) per question answered correctly by the candidate I matched with"
For candidates, we ask after presenting the first resume: "What information was on your resume for this opportunity?
- Your favorite subject, Your educational attainment, Sample performance on the technical test
- Your favorite subject, Your age, Your educational attainment, Your sex, Sample performance on the technical test"
Respondents have the opportunity to modify their answer if they fail to provide a correct answer. However, we plan to exclude from our main analysis any respondent who did not answer this question correctly the first time.
In addition, in the lists in both surveys, we added strictly dominated choices to check whether respondents provided rational responses. We plan to restrict our main analysis to participants who submitted turning points that are not strictly dominated. For recruiters, the maximum payment they can receive when hiring a candidate is 500 cents (if the candidate answered all ten test questions correctly), so a rational recruiter who understood the list instructions should not provide a turning point above that threshold. For candidates, the corresponding threshold is 100 cents. A strictly dominated turning point on any list leads to the exclusion of all of the participant's answers from our main analysis, with one exception: for recruiters, we allow a single mistake (that is, a single answer above 500 cents) and exclude only the list containing that mistake; recruiters who submit at least two answers above 500 cents are excluded entirely. We plan to exclude any candidate whose turning point on any list is above 100 cents.
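The dominated-choice exclusion rule can be summarized as in the sketch below; the data structure (one turning point per price list, in cents) and the function names are hypothetical.

from typing import List, Optional

def recruiter_lists_to_keep(turning_points: List[int]) -> Optional[List[int]]:
    # Recruiter rule: two or more turning points above 500 cents exclude the
    # recruiter entirely; exactly one such mistake drops only that list.
    mistakes = [tp for tp in turning_points if tp > 500]
    if len(mistakes) >= 2:
        return None  # exclude all of this recruiter's answers
    return [tp for tp in turning_points if tp <= 500]

def keep_candidate(turning_points: List[int]) -> bool:
    # Candidate rule: any turning point above 100 cents excludes the candidate.
    return all(tp <= 100 for tp in turning_points)

print(recruiter_lists_to_keep([450, 600, 300]))  # [450, 300]: one mistake, that list is dropped
print(recruiter_lists_to_keep([600, 700]))       # None: recruiter excluded
print(keep_candidate([80, 120]))                 # False: candidate excluded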
Finally, we added a timer to measure how much time participants spend on each instruction page (respondents do not know about the timer). We plan to exclude from the main analysis any respondent who does not spend sufficient time reading the main instructions.
To determine a reasonable threshold, we use findings from research in reading and cognitive psychology that highlights the trade-off between speed and accuracy in reading. This research estimates that the average silent reading speed for English readers of non-fiction is around 250 words per minute, and that thorough comprehension drops once reading exceeds two or three times that speed (Brysbaert, 2019; Rayner et al., 2016). Qualtrics states that the average human reading speed is 300 words per minute and uses this speed to estimate survey duration (see https://www.qualtrics.com/support/survey-platform/survey-module/survey-checker/survey-methodology-compliance-best-practices/). We plan to restrict our final sample to participants whose implied reading speed on the main instructions is at most approximately 400 words per minute. We use the Qualtrics timer embedded within our survey pages to measure how long participants spend on each page of the survey. Participants who do not spend the required minimum time on the price list instructions page are excluded from the final sample (see the sketch after the list below):
- For recruiters: the price list instructions page includes 266 words (excluding the list example), which corresponds to a minimum reading time of 40 seconds at 400 words per minute.
- For candidates: the price list instructions page includes 242 words (excluding the list example), which corresponds to a minimum reading time of 36 seconds.
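The minimum-time thresholds above follow directly from the word counts and the 400-words-per-minute cap; below is a minimal sketch of the computation (variable and function names are ours).

MAX_READING_SPEED_WPM = 400  # fastest reading speed accepted as attentive

def minimum_seconds(word_count: int, max_wpm: int = MAX_READING_SPEED_WPM) -> int:
    # Minimum time (in seconds) a participant must spend on an instructions page.
    return round(word_count / max_wpm * 60)

print(minimum_seconds(266))  # 40 seconds for the recruiter instructions page
print(minimum_seconds(242))  # 36 seconds for the candidate instructions page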
Minimum detectable effect size for main outcomes (accounting for sample design and clustering)
IRB

Institutional Review Boards (IRBs)

IRB Name
Erasmus School of Economics IRB-E
IRB Approval Date
2024-03-26
IRB Approval Number
ETH2324-0654
IRB Name
IRB at Harvard Business School
IRB Approval Date
2024-03-19
IRB Approval Number
MOD22-1697-02
IRB Name
INSEAD INSTITUTIONAL REVIEW BOARD
IRB Approval Date
2024-05-02
IRB Approval Number
Protocol: Self-censorship in the Labor Market, ID: 2023-05A
Analysis Plan

Some information in this trial is unavailable to the public.