Rejection timing in the job market and candidate satisfaction

Last registered on May 04, 2023


Trial Information

General Information

Rejection timing in the job market and candidate satisfaction
Initial registration date
April 27, 2023

Initial registration date is when the trial was registered.

It corresponds to when the registration was submitted to the Registry to be reviewed for publication.

First published
May 03, 2023, 4:20 PM EDT

First published corresponds to when the trial was first made public on the Registry after being reviewed.

Last updated
May 04, 2023, 2:46 AM EDT

Last updated is the most recent time when changes to the trial's registration were published.



Primary Investigator


Other Primary Investigator(s)

PI Affiliation

Additional Trial Information

Ongoing
Start date
End date
Secondary IDs
Prior work
This trial does not extend or rely on any prior RCTs.
Preparing job applications is time-consuming. Applicants expect their application files to be reviewed carefully and fairly in return for the effort they put in. A rejection that arrives soon after submission understandably generates frustration and a perception of unfair treatment, and negative reciprocity follows. Once a candidate has been judged unsuitable, the timing of the rejection letter therefore matters for the company's image, the candidate experience, and job-seekers' morale.
External Link(s)

Registration Citation

Huang, Lidingrong and Paul Smeets. 2023. "Rejection timing in the job market and candidate satisfaction." AEA RCT Registry. May 04.
Experimental Details


Job applicants constantly apply to this nationally leading firm in the debt recovery industry. Applicants are first assessed by a computer algorithm based on their online tests and scores before being passed on to (human) HR staff. Most applicants are automatically rejected as objectively 'unqualified' for the role.

Randomisation is done at the individual applicant level based on unique candidate IDs; the only intervention is varying the timing of the generic rejection letter.

The control variables will be extracted from applicants' application packages, and the dependent variables are obtained from their responses to a survey embedded in the rejection letter. Whether an applicant chooses to complete the survey, which requires exerting some effort, will also be used as a binary outcome variable.
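The outcome coding described above can be sketched as follows. This is a minimal illustration, not the study's actual pipeline; the field names (`q1`-`q3`) and the schema are assumptions, since the registration does not specify one.

```python
def code_outcomes(survey_response):
    """Code the registry's outcomes: a binary completion indicator (secondary
    outcome) plus the Likert-scale answers (primary outcomes, None if absent).

    survey_response: dict of answers, or None if the applicant never responded.
    Keys q1-q3 are hypothetical placeholders for the three Likert questions.
    """
    completed = survey_response is not None
    return {
        "completed_survey": int(completed),  # 1 if the applicant answered at all
        "satisfaction": survey_response.get("q1") if completed else None,
        "would_recommend": survey_response.get("q2") if completed else None,
        "would_reapply": survey_response.get("q3") if completed else None,
    }

# A non-respondent contributes only the binary outcome:
no_response = code_outcomes(None)
# A respondent contributes the Likert outcomes as well:
response = code_outcomes({"q1": 4, "q2": 3, "q3": 5})
```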
Intervention Start Date
Intervention End Date

Primary Outcomes

Primary Outcomes (end points)
Only a small number of rejected applicants will fill in the survey, but it is the only way for them to share their dissatisfaction and complaints directly with the firm, a form of emotional venting. We will keep the experiment running until a sufficient sample is collected.

The questions included in the survey are conventional candidate application experience questions with Likert scales:
1. How satisfied are you with the application experience?
2. How likely would you be to encourage your peers and friends to apply to us?
3. We hope to be in touch again when more suitable roles appear; how likely would you be to accept our invitation to reapply in the future?
4. Open question on what we can do better to improve the recruitment process.
Primary Outcomes (explanation)

Secondary Outcomes

Secondary Outcomes (end points)
Binary variable on whether applicants decide to answer the survey.
Secondary Outcomes (explanation)

Experimental Design

Experimental Design
The subjects are unqualified job applicants who must be rejected. The rejection letters are generic, and the only experimental manipulation is the timing of when we send them.

Experimental Design Details
Randomization Method
Randomisation at the individual applicant level using a computer algorithm in the office.
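The registration does not specify the algorithm, so the following is only a minimal sketch of individual-level randomisation keyed to unique candidate IDs. The arm names ("immediate", "delayed"), the salt, and the IDs are all assumptions for illustration; hashing the ID makes each assignment deterministic and reproducible.

```python
import hashlib

def assign_arm(candidate_id, arms=("immediate", "delayed"), salt="trial-salt"):
    """Deterministically assign a candidate to a rejection-timing arm.

    Hashing the salted candidate ID yields a stable pseudo-random integer,
    so the same ID always maps to the same arm and assignments are roughly
    balanced across arms in expectation.
    """
    digest = hashlib.sha256(f"{salt}:{candidate_id}".encode()).hexdigest()
    return arms[int(digest, 16) % len(arms)]

# Example with hypothetical candidate IDs:
assignments = {cid: assign_arm(cid) for cid in ("A001", "A002", "A003")}
```

Because assignment depends only on the ID and a fixed salt, the allocation can be audited or re-derived later without storing a separate randomisation table.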
Randomization Unit
Individual applicant
Was the treatment clustered?

Experiment Characteristics

Sample size: planned number of clusters
Sample size: planned number of observations
Sample size (or number of clusters) by treatment arms
Minimum detectable effect size for main outcomes (accounting for sample design and clustering)

Institutional Review Boards (IRBs)

IRB Name
The Economics & Business Ethics Committee (EBEC) at the University of Amsterdam
IRB Approval Date
IRB Approval Number


Post Trial Information

Study Withdrawal

There is information in this trial unavailable to the public.


Is the intervention completed?
Data Collection Complete
Data Publication

Data Publication

Is public data available?

Program Files

Program Files
Reports, Papers & Other Materials

Relevant Paper(s)

Reports & Other Materials