Search Frictions and Self-Selection in Job Matching

Last registered on July 11, 2023

Pre-Trial

Trial Information

General Information

Title
Search Frictions and Self-Selection in Job Matching
RCT ID
AEARCTR-0011711
Initial registration date
July 03, 2023

First published
July 10, 2023, 9:15 PM EDT

Last updated
July 11, 2023, 4:55 AM EDT

Locations

Region

Primary Investigator

Affiliation
University of Oxford

Other Primary Investigator(s)

PI Affiliation
University of Oxford
PI Affiliation
University of Oxford
PI Affiliation
University of Oxford

Additional Trial Information

Status
Ongoing
Start date
2023-07-03
End date
2023-10-31
Secondary IDs
Prior work
This trial does not extend or rely on any prior RCTs.
Abstract
This project aims to investigate the relationship between the cost of a job application, from the viewpoint of the jobseeker, and match quality. The main mechanism we wish to investigate is self-selection into the decision to apply for a job. When jobs vary in their cost of application, we hypothesize that different types of people will self-select into applying for the position depending on their relative cost of effort and/or their outside options, which in turn impacts match quality.

We run an experiment on the online task-matching platform Prolific, where we invite jobseekers to apply for short-term work and randomly vary both the time and effort costs associated with submitting an application. We hypothesize that a jobseeker who is a good "fit" for a position incurs a lower cost of effort to submit an application. However, high-ability jobseekers also tend to have better outside options and may, as a result, be less willing to undertake applications with higher costs. We measure the overall impact on match quality, as proxied by task performance.
External Link(s)

Registration Citation

Citation
Krkoska, Eduard et al. 2023. "Search Frictions and Self-Selection in Job Matching." AEA RCT Registry. July 11. https://doi.org/10.1257/rct.11711-1.1
Sponsors & Partners

There is information in this trial unavailable to the public.

Experimental Details

Interventions

Intervention(s)
This will be available on completion of the study.
Intervention Start Date
2023-07-03
Intervention End Date
2023-10-31

Primary Outcomes

Primary Outcomes (end points)
1. Self-selection into applying (application rates)
2. Quality of the pool of applicants (as measured by a Raven’s test)
3. Task performance (as measured by number of correct entries made in the data entry task)
Primary Outcomes (explanation)
1. Application rates: For each treatment group we compute the percentage of prospective applicants who choose to apply for the advertised position.

2. Match quality:

(a) Quality of the pool of applicants: We measure the average ability of jobseekers who self-select into applying for the data entry task using the score from a short set of Raven’s progressive matrices in the baseline survey, following Abebe, Caria, and Ortiz-Ospina (2021).

(b) Task performance: We measure performance on the data entry task by the total number of correct entries made by the individual. This takes into account both speed and accuracy as relevant dimensions of interest.

Please see the pre-analysis plan for more details on variable construction.
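
As a rough illustration of this variable construction, a minimal sketch in Python (the dataset layout and column names such as applied, ravens_score, and correct_entries are hypothetical; the pre-analysis plan is authoritative):

```python
import pandas as pd

# Hypothetical data layout; see the pre-analysis plan for authoritative definitions.
df = pd.read_csv("experiment_data.csv")

# 1. Application rates: share of prospective applicants who apply, by treatment arm.
application_rates = df.groupby("treatment")["applied"].mean()

# 2a. Quality of the applicant pool: mean Raven's score (share of correct matrices)
#     among those who self-selected into applying.
pool_quality = df[df["applied"] == 1].groupby("treatment")["ravens_score"].mean()

# 2b. Task performance: mean number of correct entries in the data entry task,
#     among those who completed it.
task_performance = df[df["completed_task"] == 1].groupby("treatment")["correct_entries"].mean()
```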

Secondary Outcomes

Secondary Outcomes (end points)
1. Hypothetical tenure: openness to completing a similar task again.

2. Top performers: number of top ability applicants as measured by the Raven’s test.
Secondary Outcomes (explanation)
1. Hypothetical tenure: Job tenure is an important indicator of match quality in the labor market. While our data entry task is a one-off, after participants have completed the digitization task we ask whether they would also be interested in doing similar work in the future.

2. Top performers: In addition to measuring average ability, we look at the proportion of top-ability applicants, defined as the proportion of applicants whose Raven’s score lies above the upper quartile and above the 90th percentile of the control group’s score distribution, following Abebe et al. (2021).
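
Continuing the hypothetical sketch above, the top-performer thresholds would be taken from the control group's score distribution, for example:

```python
import pandas as pd

df = pd.read_csv("experiment_data.csv")  # same hypothetical dataset as above

# Thresholds from the control group's Raven's score distribution
# (the arm label "control" is a placeholder).
control_scores = df.loc[df["treatment"] == "control", "ravens_score"]
q75, q90 = control_scores.quantile([0.75, 0.90])

# Proportion of applicants scoring above each threshold, by treatment arm.
applicants = df[df["applied"] == 1]
share_above_q75 = applicants.groupby("treatment")["ravens_score"].apply(lambda s: (s > q75).mean())
share_above_q90 = applicants.groupby("treatment")["ravens_score"].apply(lambda s: (s > q90).mean())
```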

Experimental Design

Experimental Design
This will be available on completion of the study.
Experimental Design Details
We advertise a task on the Prolific platform, described as a survey on participants’ work experiences and preferences. This survey is used to collect background information on participants and to advertise a future task to them. The advertised task is a data digitization task in which participants read scanned archives of data records and manually transcribe the data into a digital format, all integrated within Qualtrics survey software. As the task is advertised, we vary the screening procedure that the applicant will face, as described in the Interventions section. We then send the task, also hosted on Qualtrics, through Prolific to those who self-select into applying.

All assumptions made are purposefully conservative and presume that the comparison of treatment and control groups is done by testing the difference in means. In practice, this comparison will be done using the regression-based empirical strategy outlined in the PAP, which improves the precision of the estimates through the use of control variables.
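
As a sketch of what such a regression-based comparison might look like (the specification and control set here are hypothetical; the PAP defines the actual model):

```python
import pandas as pd
import statsmodels.formula.api as smf

df = pd.read_csv("experiment_data.csv")  # same hypothetical dataset as above

# Outcome regressed on treatment indicators plus baseline controls,
# with heteroskedasticity-robust standard errors. The control set here
# is illustrative only.
model = smf.ols(
    "correct_entries ~ C(treatment) + age + education + ravens_score",
    data=df[df["completed_task"] == 1],
)
results = model.fit(cov_type="HC1")
print(results.summary())
```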
Randomization Method
Randomization will be carried out by a computer (Qualtrics, where the survey is hosted); see the illustrative sketch below.
Randomization Unit
Individual 
Was the treatment clustered?
No
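
Qualtrics performs the assignment internally; purely as an illustration, a balanced individual-level assignment consistent with the planned sample sizes (2,240 observations at 320 per arm, implying seven equally sized arms with placeholder labels) could look like:

```python
import numpy as np

rng = np.random.default_rng(seed=2023)  # seed is arbitrary

# 2,240 planned observations at 320 per arm imply seven equally sized arms;
# the arm labels below are placeholders, not the study's actual arm names.
arms = ["control", "T1", "T2", "T3", "T4", "T5", "T6"]
assignment = rng.permutation(np.repeat(arms, 320))  # exactly 320 per arm, shuffled
```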

Experiment Characteristics

Sample size: planned number of clusters
Randomization is not clustered
Sample size: planned number of observations
2240
Sample size (or number of clusters) by treatment arms
320 per treatment arm
Minimum detectable effect size for main outcomes (accounting for sample design and clustering)
The treatments and outcomes of interest in this project are novel in the literature, so there is a scarcity of parameters that can be taken from prior work for power calculations. We therefore make conservative assumptions about baseline means and standard deviations of outcomes, and calculate the minimum detectable effects (MDEs) our experiment could statistically detect. Our choices of significance level and power are standard: alpha = 0.05 and power = 0.8.

Application rates: assuming a control group mean of 0.5, n = 320 per group, and a variance of 0.25 in both groups (equivalently, a standard deviation of 0.5, the most conservative possible for a binary outcome), the MDE from a test of two proportions is 0.1099. As applying is a binary outcome, this is interpreted as a change in the likelihood of applying of 10.99 percentage points.

Raven’s scores: scores are measured as the proportion of correct answers, so they lie between zero and one but are not binary. Assuming a control group score of 0.75, a standard deviation of 0.4 in each treatment group, and n = 320 per group, the MDE is 0.0887, interpreted as an increase in the percentage of correctly answered puzzles of 8.87 percentage points.

Correctly digitized entries: we assume a control mean of 100 entries and a standard deviation of 30 in both treatment groups. By design, some individuals screen themselves out of applying for the role; conservatively assuming that half of the individuals continue with the application and complete the task leaves 160 individuals per treatment group. The MDE is 9.4253, interpreted as a change of 9.4253 in the number of correctly digitized cells.
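
The reported MDEs can be approximated with the standard two-sample normal-approximation formula (a sketch; the binary-outcome figure above uses a test of two proportions, so it differs slightly in the third decimal):

```python
from scipy.stats import norm

def mde(sd, n_per_group, alpha=0.05, power=0.8):
    """MDE for a two-group difference in means with equal group sizes
    and equal standard deviations, under the normal approximation."""
    z = norm.ppf(1 - alpha / 2) + norm.ppf(power)
    return z * sd * (2 / n_per_group) ** 0.5

print(mde(0.5, 320))  # application rate (binary, SD = 0.5): ~0.11
print(mde(0.4, 320))  # Raven's score:                       ~0.089
print(mde(30, 160))   # correct data entries:                ~9.40
```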
IRB

Institutional Review Boards (IRBs)

IRB Name
CUREC 
IRB Approval Date
2023-05-01
IRB Approval Number
ECONCIA22-23-16
Analysis Plan

There is information in this trial unavailable to the public.

Post-Trial

Post Trial Information

Study Withdrawal

There is information in this trial unavailable to the public.

Intervention

Is the intervention completed?
No
Data Collection Complete
Data Publication

Data Publication

Is public data available?
No

Program Files

Program Files
Reports, Papers & Other Materials

Relevant Paper(s)

Reports & Other Materials