Intervention(s)
The study has two waves of experiments.
In the first wave, employers evaluate profiles of workers and workers evaluate job offers of employers.
In the second wave, employers predict how workers evaluated job offers and workers predict how employers evaluated worker profiles in the first wave. In an additional version of the experiments, employers instead predict how other employers evaluated worker profiles and workers predict how other workers evaluated job offers in the first wave. Based on their predictions, participants are then provided with an information treatment informing them about their prediction errors, allowing us to test how correcting misperceptions affects intended behavior on the platform.
See the attached document for the two full questionnaire versions of the first wave and the four full questionnaire versions of the second wave.
Overview
The experiments are incentivized and without deception.
We construct hypothetical worker profiles, independently randomizing worker characteristics on each profile, and hypothetical job offers, independently randomizing employer/job characteristics on each offer. To populate fields on profiles/offers, we draw on administrative data from real profiles and job offers on the platform. For example, the range of wages indicated on profiles follows the range of wages on platform profiles within each job category, and the work experiences shown on a profile all represent real experiences of workers on the platform, standardized and reformatted to be comparable and consistent with the rest of the profile. For some relatively rare characteristics of interest, we deviate from their frequency on the platform in order to test their impact on evaluations/predictions, balancing concerns of realism with statistical power. Details on the construction of profiles and the randomization of each characteristic are included in the attached supplemental documentation.
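The randomization described above can be sketched as follows. This is a minimal illustration only: the characteristic names, categories, wage ranges, and the oversampling probability are hypothetical placeholders, whereas the actual study draws distributions from administrative platform data.

```python
import random

# Hypothetical category-specific wage ranges (EUR); the real study takes these
# from the distribution of wages on actual platform profiles per job category.
WAGE_RANGES = {"Backend Development": (300, 800), "Copywriting": (200, 500)}
# Placeholder experience entries; the study uses real, reformatted experiences.
EXPERIENCE_POOL = ["5 years at a software startup", "freelance since 2018"]

def draw_profile(category, rng=random):
    """Independently randomize each characteristic of one hypothetical profile."""
    low, high = WAGE_RANGES[category]
    return {
        "category": category,
        # Daily wage drawn within the category-specific range, in steps of 50.
        "daily_wage_eur": rng.randrange(low, high + 1, 50),
        "experience": rng.choice(EXPERIENCE_POOL),
        # A rare characteristic can be oversampled relative to its platform
        # frequency to gain statistical power, as noted above (50% is made up).
        "rare_badge": rng.random() < 0.5,
    }

profile = draw_profile("Backend Development")
```

Independent randomization across characteristics is what later allows the effect of each characteristic on evaluations to be estimated by simple regression.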
In the first wave of experiments, we recruit real freelance workers and employers on the platform and have employers evaluate worker profiles and workers evaluate job offers. Each participant makes 25 evaluations. Similar to the Incentivized Resume Rating approach (Kessler et al., 2019), we incentivize truthful reporting by recommending, based on their evaluations, 5 real freelancers on the platform to participating employers. For workers, for 25% of respondents, we provide information based on each freelancer's answers to the platform's matching team, so that they can be recommended for one real job on the platform. See below for details on how recommendations were issued.
The first wave provides us with employers' preferences over worker characteristics and workers' preferences over employer/job characteristics. Employers (workers) also receive 50 (20) Euros for completing the experiment and a chance to win a larger prize of 1,000 Euros.
In the second wave of experiments, we recruit another set of workers and employers on the platform, split into four groups, each of which completes a distinct variant of the experiment.
Half of participating workers first predict, for 25 profiles, how employers evaluated such a profile (the mean) in the first wave, and then directly predict how much each randomized profile characteristic, ceteris paribus, influenced employers' evaluations. They are then presented with an information treatment based on their predictions. Some of these participants may be recruited from those who completed wave 1.
The other half of participating workers first predict, for 25 offers, how workers evaluated such a job offer (the mean) in the first wave, and then directly predict how much each randomized offer characteristic, ceteris paribus, influenced workers' evaluations. They are then presented with an information treatment based on their predictions. These participants are recruited from those who did not complete wave 1.
Half of participating employers first predict, for 25 profiles, how employers evaluated such a profile (the mean) in the first wave, and then directly predict how much each randomized profile characteristic, ceteris paribus, influenced employers' evaluations. They are then presented with an information treatment based on their predictions. These participants are recruited from those who did not complete wave 1.
The other half of participating employers first predict, for 25 offers, how workers evaluated such a job offer (the mean) in the first wave, and then directly predict how much each randomized offer characteristic, ceteris paribus, influenced workers' evaluations. They are then presented with an information treatment based on their predictions. Some of these participants may be recruited from those who completed wave 1.
All predictions, whether elicited through profile/offer evaluations or through directly predicting the impact of characteristics, are incentivized against the real evaluations and characteristic impacts from the first wave using Binarized Scoring Rule approaches, as detailed below.
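One common way to implement a binarized scoring rule of the kind referenced here (Hossain and Okui, 2013) can be sketched as follows. This is a simplified illustration: the prize amount and the loss-scaling constant are hypothetical, not the study's actual payment parameters.

```python
import random

def bsr_win_probability(prediction, realized, scale=100.0):
    """Quadratic-loss binarized scoring rule: the participant wins a fixed
    prize with probability 1 - (loss / scale)^2. Because the reward is a
    lottery over a fixed prize, truthful reporting of the mean is optimal
    regardless of the participant's risk preferences."""
    loss = (prediction - realized) / scale
    return max(0.0, 1.0 - loss ** 2)

def bsr_payout(prediction, realized, prize=10.0, rng=random):
    """Resolve the lottery: pay the prize if a uniform draw falls below
    the win probability, otherwise pay nothing."""
    return prize if rng.random() < bsr_win_probability(prediction, realized) else 0.0
```

A perfect prediction wins the prize with certainty, and the win probability falls with the squared prediction error.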
The second wave provides us with workers' beliefs about employers' preferences over worker characteristics and about other workers' preferences over employer/job characteristics, as well as employers' beliefs about workers' preferences over employer/job characteristics and about other employers' preferences over worker characteristics. Using these two waves of data, we estimate "wedges" between what employers (workers) value and what workers (employers) think they value, through simple regressions defined below. Similarly, we estimate "wedges" between what employers (workers) value and what other employers (workers) think they value.
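The wedge estimation could be sketched as two regressions on the same randomized characteristics: wave-1 evaluations recover what one side values, wave-2 predictions recover what the other side believes they value, and the wedge is the difference in coefficients. The data below are simulated and all effect sizes are hypothetical.

```python
import numpy as np

rng = np.random.default_rng(0)
n = 500

# X: intercept plus one randomized binary profile characteristic
# (e.g., an indicator for some credential shown on the profile).
X = np.column_stack([np.ones(n), rng.integers(0, 2, n)])

# Simulated wave-1 employer evaluations (true effect of the characteristic: 2.0)
# and wave-2 worker predictions of those evaluations (believed effect: 1.0).
eval_w1 = X @ np.array([5.0, 2.0]) + rng.normal(0, 1, n)
pred_w2 = X @ np.array([5.0, 1.0]) + rng.normal(0, 1, n)

beta_eval, *_ = np.linalg.lstsq(X, eval_w1, rcond=None)  # what employers value
beta_pred, *_ = np.linalg.lstsq(X, pred_w2, rcond=None)  # what workers think they value

# Wedge: the gap between believed and actual impact of the characteristic.
wedge = beta_pred[1] - beta_eval[1]
```

Because characteristics are randomized independently, each coefficient identifies the ceteris-paribus impact of that characteristic, so the wedge directly measures the misperception the information treatment is meant to correct.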
The second wave also provides us with estimates of the effect of the information treatment on intended behavior on the platform.
Job categories in the experiments
When making evaluations and predictions, participants are segmented into job categories. For employers, these are jobs they are interested in hiring for. For workers, these are jobs they do on the platform. These categories determine the profiles/offers they are shown, as characteristics are randomized with different distributions across categories.
We consider the following job categories, which are among the biggest on the platform:
A. Tech, separated into
1. Backend Development
2. Frontend Development
3. Fullstack Development
4. Mobile Development
5. Web Integration / CMS Development / Webmaster
B. Marketing and Communication, separated into
1. Growth Marketing
2. Content Manager
3. Copywriting
C. Translation
Employers/workers who report not being interested in any of the above categories are excluded from the experiment (we do not count them toward our targeted sample sizes). In wave 2, we drop translation jobs, since they were selected too infrequently in wave 1 for participants to predict wave 1 evaluations.