Worth the Effort: Measuring and Predicting Investment in a Costly Application Process

Last registered on June 23, 2023


Trial Information

General Information

Initial registration date
April 07, 2023


First published
April 13, 2023, 3:54 PM EDT


Last updated
June 23, 2023, 1:22 PM EDT




Primary Investigator

Boston College

Other Primary Investigator(s)

Additional Trial Information

In development
Start date
End date
Secondary IDs
Prior work
This trial does not extend or rely on any prior RCTs.
Before firms can hire workers from a pool of applicants, they must attract the right candidates to apply for the job in the first place. On the worker’s side, applying for a job takes time and effort, and (in states where this is legal) sometimes even costs money. While these costs are guaranteed, the desired outcome of an application (a job offer) is usually not; so workers must weigh the real, present cost against the expected future benefit of applying. This project frames applying for a job as a type of investment, and studies how the factors that influence a person’s investment decisions might also affect the decision to apply for a job. We test whether changing the cost of applying for a job affects who decides to apply.
External Link(s)

Registration Citation

Opanasets, Alexandra. 2023. "Worth the Effort: Measuring and Predicting Investment in a Costly Application Process." AEA RCT Registry. June 23. https://doi.org/10.1257/rct.11182-1.1
Sponsors & Partners

There is information in this trial unavailable to the public. Use the button below to request access.

Experimental Details


"Applicant" subjects will face random variation in the marginal cost of applying for a particular job. They may face no marginal cost, an effort cost, a monetary cost, or both an effort and a monetary cost.

"Hiring recruiter" subjects will be asked to hire a fixed proportion of applicants from one of the four treatment arms.
Intervention Start Date
Intervention End Date

Primary Outcomes

Primary Outcomes (end points)
1) Whether or not a subject applies for each of the job opportunities they are shown.
2) Which resumes a recruiter selects.
3) Whether or not hired applicants actually follow through and attempt the task they are hired for.
4) Whether or not hired applicants are able to successfully complete the task they are hired for.
Primary Outcomes (explanation)
The tasks for which subjects will apply are modified GRE exam questions in either English or math. The math task will consist of two multiple choice tests, and success will be defined as answering over 50% of questions correctly on both tests. For the English task, there is a multiple choice test followed by an Analytical Writing prompt. Success on the English task will involve answering over 50% of questions correctly on the multiple choice test and receiving a score of at least 4/6 on the Analytical Writing prompt. The prompt responses will be reviewed by external evaluators recruited by the PI.

Note: The original pre-registration defined "success" as scoring exactly 50% or greater on any particular task. AFTER hiring decisions were made, but BEFORE hired subjects were invited to actually complete the assigned task(s), the indicator of success was amended to scoring ABOVE 50% on any particular task (at least 51% on any given multiple choice test, and at least 4/6 on the English essay task).
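The amended success criterion can be sketched as a pair of simple checks. This is an illustrative sketch only; the function names and the assumption that multiple-choice performance is expressed as a percentage of correct answers are ours, not the registry's.

```python
# Sketch of the amended success criterion. Assumption: multiple-choice
# performance is expressed as a percentage of questions answered correctly.

def math_success(test1_pct: float, test2_pct: float) -> bool:
    """Amended rule: strictly above 50% on BOTH multiple-choice math tests."""
    return test1_pct > 50 and test2_pct > 50

def english_success(mc_pct: float, essay_score: int) -> bool:
    """Amended rule: strictly above 50% on the multiple-choice test AND
    at least 4/6 on the Analytical Writing prompt."""
    return mc_pct > 50 and essay_score >= 4

# Under the amendment, scoring exactly 50% no longer counts as success.
print(math_success(50, 80))    # False
print(english_success(60, 4))  # True
```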

Secondary Outcomes

Secondary Outcomes (end points)
Secondary Outcomes (explanation)

Experimental Design

Experimental Design
We will randomly vary the cost of applying for each of two job opportunities shown to survey respondents. We will observe their resume and demographic information as well as their application decisions. We will then conduct a hiring experiment wherein we will randomly vary the treatment-type of resumes shown to "recruiter" subjects and then observe the recruiters' hiring decisions. Finally, we will observe hired applicants' performance on the assigned tasks.
Experimental Design Details
The target population comprises current full-time U.S.-based undergraduate students aged 18-25 AND individuals aged 18-25 who already hold undergraduate degrees but have not engaged in any graduate study. There are just under 3,000 such subjects on the platform. The prescreening options offered by the subject recruitment platform used for this experiment do not allow recruiting from two disjoint populations for the same study at the same time. We will therefore recruit exclusively from the population of current undergraduate students for the first week of the experiment; the following week, we will recruit from a similar sample of recent college graduates aged 25 and under with no graduate education; we will then return to undergraduate students for the week after that, alternating in this way until 800 applicant subjects have been recruited.

For the application stage, the experiment will be run in consecutive "waves," with the number of subjects recruited per wave alternating between 50 and 100. The PI will flip a coin: if it lands on heads, the first wave will have 100 subjects; if it lands on tails, the first wave will have 50 subjects. Wave sizes will alternate from that point until the 800-subject total is reached. This design tests whether the size of the visible recruitment pool affects a subject's willingness to apply.
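The wave schedule above can be sketched as follows. This is a minimal illustration of the alternating-size rule, not the registry's actual recruitment code; the function name and seeded coin flip are our assumptions.

```python
import random

def wave_sizes(total: int = 800, seed=None) -> list:
    """Illustrative sketch: alternate wave sizes of 100 and 50, with the
    first wave's size set by a coin flip, until `total` applicant
    subjects have been recruited. The final wave is truncated if needed."""
    rng = random.Random(seed)
    current = 100 if rng.random() < 0.5 else 50  # coin flip: heads -> 100
    sizes, recruited = [], 0
    while recruited < total:
        size = min(current, total - recruited)   # cap the last wave
        sizes.append(size)
        recruited += size
        current = 50 if current == 100 else 100  # alternate 100 <-> 50
    return sizes

print(sum(wave_sizes(800, seed=1)))  # 800
```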

Hiring recruiter subjects will be randomly assigned to view a random subset of resumes from one treatment arm (control, money, effort, money + effort). The chances of being assigned to each of these treatment arms will be scaled based on how many applications we receive in each treatment arm (e.g. if 15% of applications come from the money + effort treatment arm, the chance of being assigned to evaluate resumes from that arm will be 15%).
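The proportional assignment rule above can be sketched as a weighted random draw. The arm shares below are hypothetical illustrative numbers (only the 15% money + effort figure comes from the registry's example), and the function name is our own.

```python
import random

def assign_recruiter_arm(application_shares: dict, rng: random.Random) -> str:
    """Illustrative sketch: assign a hiring recruiter to a treatment arm
    with probability equal to that arm's share of received applications."""
    arms = list(application_shares)
    weights = [application_shares[a] for a in arms]
    return rng.choices(arms, weights=weights, k=1)[0]

# Hypothetical application shares (the 15% money + effort share matches
# the example in the text; the other shares are made up for illustration).
shares = {"control": 0.40, "money": 0.20, "effort": 0.25, "money+effort": 0.15}

rng = random.Random(0)
draws = [assign_recruiter_arm(shares, rng) for _ in range(10_000)]
print(draws.count("money+effort") / len(draws))  # roughly 0.15
```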

In addition to "hiring recruiters," there will also be "evaluating recruiters," whose only task will be to review resumes and rate how qualified the resume-holders appear. Because evaluating recruiters see only resumes, regardless of the resume-holder's treatment status, the resumes they review will be randomly selected from the full subject pool.

There will be two "attention checks" included in the first portion of the applicant survey, before applicants see which treatment arm they have been sorted into. If a participant incorrectly answers both attention check questions, they will be immediately excluded from the survey.

There will also be four comprehension questions included in the applicant survey after treatment is assigned. Any respondent who answers at least two of the four comprehension questions incorrectly will still be allowed to complete the entire survey, but will be classified as a "confused" subject for the purposes of analysis. Analyses will be presented separately for the full respondent pool and for the non-"confused" subject pool.
Randomization Method
Randomization performed on a computer and via coin flip.
Randomization Unit
Randomization into treatment will be done at the individual level.
Was the treatment clustered?

Experiment Characteristics

Sample size: planned number of clusters
800 "applicant" subjects
31* "hiring recruiter" subjects
27 "evaluating recruiter" subjects

*This quantity is an estimate based on estimated application rates gleaned from a pilot study.
Sample size: planned number of observations
Each applicant subject will view two job postings, so there will be two primary outcome (apply or don't apply) observations per subject, for a total of 1,600. Each "hiring" recruiter will view 30 resumes, so there will be ~31 × 30 = 930 primary outcome (hire or no hire) observations. Each "evaluating" recruiter will view ~30 resumes, such that every subject resume receives one evaluation, for a total of 800 observed resume evaluations.
Sample size (or number of clusters) by treatment arms
Applicant subjects:
200 control
200 effort cost
200 monetary cost
200 monetary AND effort cost

Hiring recruiter subjects (estimates based on pilot):
11 for control group
6 for monetary cost treatment
8 for effort cost treatment
6 for monetary AND effort cost treatment
Minimum detectable effect size for main outcomes (accounting for sample design and clustering)

Institutional Review Boards (IRBs)

IRB Name
Boston College IRB
IRB Approval Date
IRB Approval Number


Post Trial Information

Study Withdrawal

There is information in this trial unavailable to the public. Use the button below to request access.



Is the intervention completed?
Data Collection Complete
Data Publication

Data Publication

Is public data available?

Program Files

Program Files
Reports, Papers & Other Materials

Relevant Paper(s)

Reports & Other Materials