
Do workers and firms understand their labor market?

Last registered on November 17, 2025

Pre-Trial

Trial Information

General Information

Title
Do firms and workers know what the other side wants?
RCT ID
AEARCTR-0016570
Initial registration date
November 13, 2025


First published
November 17, 2025, 2:21 PM EST


Locations

Some information in this trial is unavailable to the public.


Primary Investigator

Affiliation
Stockholm University

Other Primary Investigator(s)

PI Affiliation
Stockholm University
PI Affiliation
Université Paris 1 Panthéon-Sorbonne

Additional Trial Information

Status
In development
Start date
2025-06-01
End date
2026-08-01
Secondary IDs
Prior work
This trial does not extend or rely on any prior RCTs.
Abstract
The study consists of a series of experiments on a large online platform for freelance work. The objectives are to i) identify employer preferences for worker characteristics and worker preferences for employer/job characteristics, ii) identify employers' beliefs about worker preferences and workers' beliefs about employer preferences, iii) identify differences between employer (worker) preferences and worker (employer) beliefs about those preferences, and iv) evaluate interventions to mitigate information frictions on the platform. This will be achieved by creating hypothetical worker profiles and hypothetical employer job offers in which a rich set of characteristics is independently randomized. Real workers and employers recruited on the platform will evaluate these profiles/offers. We then plan to compare employers' evaluations of profiles to workers' predictions of those evaluations, and workers' evaluations of job offers to employers' predictions of those evaluations, isolating each side's incorrect beliefs about the other side's preferences. Lastly, we plan to construct an information treatment to correct participants' incorrect beliefs.
External Link(s)

Registration Citation

Citation
Deschamps, Pierre, Morgane Laouénan and Louis-Pierre Lepage. 2025. "Do firms and workers know what the other side wants?." AEA RCT Registry. November 17. https://doi.org/10.1257/rct.16570-1.0
Experimental Details

Interventions

Intervention(s)
The study will have two waves of experiments and potentially additional experiments designed based on their results.

In the first wave, employers evaluate profiles of workers and workers evaluate job offers of employers.

In the second wave, employers predict how workers evaluated job offers and workers predict how employers evaluated worker profiles in the first wave. The second wave will be pre-registered in a subsequent version of this document.

Based on these findings, we will study the possibility of designing and conducting interventions on the platform to correct misperceptions, and/or assess the scope of these misperceptions for affecting outcomes on the platform. These interventions will be pre-registered in a subsequent version of this document.
Intervention Start Date
2025-11-13
Intervention End Date
2026-05-01

Primary Outcomes

Primary Outcomes (end points)
In the first wave, employers evaluate profiles of workers and workers evaluate job offers of employers. The two primary outcomes are the evaluations made by employers/workers regarding their interest in hiring the worker/accepting the job, each on a 10-point Likert scale. See the attached document for the full questionnaire of the first wave.
Primary Outcomes (explanation)
In the first wave, our primary specifications will be linear regressions of each of our two primary outcomes on the set of variables randomized in profiles and job offers.

In particular, for employers, we regress their evaluation of a profile (1-10) on the following profile characteristics:
- Asked wage (continuous)
- Number of completed projects on the platform (continuous)
- At least one completed project (discrete indicator)
- Share of completed projects with a review (continuous, 0 for 0 projects completed)
- Experience badge (discrete indicator, compared to no badge, no badge if 0 projects completed)
- Years of work experience (continuous)
- Number of professional experiences on the profile (discrete, 2 versus 1)
- Prestigious professional experiences (discrete indicator, compared to non-prestigious experiences)
- Master's degree (discrete indicator, compared to less than a Master's)
- Preference for remote work (discrete indicator, compared to availability for both remote and in-person work)
- Full-time availability (discrete indicator, compared to only part-time availability)
- Female (discrete indicator)
- Arab (discrete indicator)

For freelancers, we regress their evaluation of a job offer (1-10) on the following offer/employer characteristics:
- Offered wage (continuous)
- Number of completed projects by employer on the platform (continuous)
- At least one completed project by employer (discrete indicator)
- Share of completed projects by employer with a review (continuous, 0 for 0 projects completed)
- Prestigious employer mention (discrete indicator, compared to non-prestigious mention)
- Availability of remote work (discrete indicator, compared to in-person requirement)
- Full-time requirement (discrete indicator, compared to part-time requirement)
- Job duration (continuous)
- Teamwork (discrete indicator, compared to no mention of whether there is teamwork)
- Urgent/tight deadlines (discrete indicator, compared to no mention of deadlines)
- Female employer (discrete indicator)
- Arab employer (discrete indicator)

Details on the randomization/distribution of each characteristic are included in the attached supplemental documentation.

We will also consider specifications which include respondent fixed effects and fixed effects for the order in which a profile/offer was shown.

Standard errors will be clustered at the respondent level.

To assess sensitivity to the linearity assumption along the Likert scale imposed by these linear regressions, we will also estimate ordered probit regressions.

Since these estimates represent average effects, we will also describe how effects vary across the distribution of our primary outcome.
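
For concreteness, here is a minimal sketch of the employer-side estimation in Python (statsmodels), assuming a long-format dataset df with one row per profile evaluation; all column names are placeholders rather than the study's actual variable names:

import statsmodels.formula.api as smf
from statsmodels.miscmodels.ordinal_model import OrderedModel

# Randomized profile characteristics (placeholder names).
controls = ("asked_wage + n_projects + any_project + share_reviewed + badge + "
            "years_exp + two_experiences + prestigious + masters + "
            "remote_pref + full_time + female + arab")

# Baseline linear specification, standard errors clustered by respondent.
ols = smf.ols(f"evaluation ~ {controls}", data=df).fit(
    cov_type="cluster", cov_kwds={"groups": df["respondent_id"]})

# Variant adding respondent and display-order fixed effects.
ols_fe = smf.ols(f"evaluation ~ {controls} + C(respondent_id) + C(order)",
                 data=df).fit(cov_type="cluster",
                              cov_kwds={"groups": df["respondent_id"]})

# Ordered probit to probe the linearity assumption along the Likert scale.
X = df[controls.replace(" ", "").split("+")]
oprobit = OrderedModel(df["evaluation"].astype(int), X,
                       distr="probit").fit(method="bfgs")

The freelancer-side regressions are analogous, with the job-offer characteristics listed above in place of the profile characteristics.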

Secondary Outcomes

Secondary Outcomes (end points)
Employers will also evaluate the competence of each profile, as well as how likely a worker such as the one in the profile would be to accept an offer if the employer made one, also on 10-point Likert scales.

When evaluating job offers, workers also rate the extent to which they feel competent for the job and how likely they would be to receive such an offer, each on a 10-point Likert scale.
Secondary Outcomes (explanation)
The auxiliary evaluation outcomes will be used as outcome variables in regressions analogous to those for the main outcomes.

We will also consider specifications that add the auxiliary evaluations as controls in the main specifications outlined above, to shed light on the channels through which hiring and job-acceptance decisions operate.

Questions regarding the likelihood of a worker accepting/receiving a job offer are incentivized, since they also factor into the recommendations made upon completion of the experiment. Questions regarding competence are not incentivized.

Experimental Design

Experimental Design
Overview

The experiments are incentivized and without deception.

We construct hypothetical worker profiles, independently randomizing worker characteristics on each profile, and hypothetical job offers, independently randomizing employer/job characteristics on each offer. To populate fields on profiles/offers, we draw on administrative data from real profiles and job offers on the platform. For example, the range of wages indicated on profiles follows the range of wages on platform profiles for each job category, and the work experiences shown on a profile all represent real experiences of workers on the platform, standardized and reformatted to be comparable and consistent with the rest of the profile. For some relatively rare characteristics of interest, we deviate from their frequency on the platform in order to test their impact on evaluations/predictions, balancing realism against statistical power. Details on the construction of profiles and the randomization of each characteristic are included in the attached supplemental documentation.
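
As an illustration, constructing a single hypothetical profile amounts to independent draws for each characteristic, along the following lines (Python; the attribute values, probabilities, and helper inputs here are placeholders, not the actual distributions, which follow platform data as described above):

import random

def draw_profile(wage_range, real_experiences):
    # Platform-history block: the badge and review share only exist when
    # the profile has at least one completed project.
    n_projects = random.choice([0, 1, 3, 10, 30])
    return {
        "asked_wage": round(random.uniform(*wage_range)),  # category-specific range
        "n_projects": n_projects,
        "share_reviewed": random.random() if n_projects else 0.0,
        "badge": n_projects > 0 and random.random() < 0.5,
        "years_exp": random.randint(1, 15),
        "experiences": random.sample(real_experiences, random.choice([1, 2])),
        "prestigious": random.random() < 0.5,
        "masters": random.random() < 0.5,
        "remote_only": random.random() < 0.5,
        "full_time": random.random() < 0.5,
        "female": random.random() < 0.5,
        "arab": random.random() < 0.5,  # rarer traits may be oversampled for power
    }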

In the first wave of experiments, we recruit real freelance workers and employers on the platform and have employers evaluate worker profiles and workers evaluate job offers. Each participant makes 25 evaluations. Following the Incentivized Resume Rating approach (Kessler et al., 2019), we incentivize truthful reporting by recommending, based on their evaluations, 5 real freelancers on the platform to each participating employer. For 25% of freelancer respondents, we provide information based on their answers to the platform's matching team, so that they can be recommended for one real job on the platform. The first wave provides us with employers' preferences over worker characteristics and workers' preferences over employer/job characteristics. Employers (workers) also receive 50 (20) Euros for completing the experiment and a chance to win a larger prize of 1,000 Euros.


Job categories and technical expertise in the experiments

When making evaluations, participants are segmented into job categories. For employers, these are jobs they are interested in hiring for. For workers, these are jobs they do on the platform. These categories determine the profiles/offers they are shown, as characteristics are randomized with different distributions across categories. Within each category, participants are also matched to profiles/offers based on specific expertise keywords chosen by participants from a list specific to each job category, to ensure relevance. For example, a client looking for a Spanish translator is only shown profiles consistent with the capacity to do such translations.

We consider the following job categories, which are among the biggest on the platform:

A. Tech, separated into
1. Backend Development
2. Frontend Development
3. Fullstack Development
4. Mobile Development
5. Web Integration / CMS Development / Webmaster

B. Marketing and Communication, separated into
1. Growth Marketing
2. Content Manager
3. Copywriting

C. Translation


Heterogeneity, additional analyses, and data quality

Employers/workers who report not being interested in any of the above categories will be excluded from the experiment (we do not count these in our targeted sample sizes).

We will also screen for poor-quality answers, excluding from the analysis participants who spend an average of less than 10 seconds per profile/offer, as in the sketch below.
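
A minimal sketch of this screen (Python/pandas; df and its column names are placeholders for the evaluation-level dataset):

import pandas as pd

# df: one row per evaluation, with a respondent ID and time spent on screen.
avg_time = df.groupby("respondent_id")["seconds_on_screen"].mean()
df = df[df["respondent_id"].isin(avg_time[avg_time >= 10].index)]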

We plan to conduct the following heterogeneity analyses:

- Differences in client preferences for Arab/female freelancers between those who report that diversity is important in their hiring decisions and those who do not.
- Differences in preferences between men and women (both workers and employers).
- Differences in preferences between Arab and non-Arab participants (both workers and employers).
- Differences in preferences between profiles with and without completed projects on the platform.
- Differences in preferences between tech and non-tech job categories (both workers and employers).
- Differences in preferences between participants by number of completed projects on the platform (both workers and employers).
- Differences in preferences between participating freelancers with more or less years of experience.



Recommendations

Recommendations are implemented following the established procedure in Kessler et al. (2019). We use ridge regressions, allowing us to estimate preferences for attributes at the individual level while disciplining coefficients by shrinking them towards 0. We select optimal penalization parameters for employers and workers through cross-validation, splitting our samples into estimation and hold-out samples with probability 0.5. We run pooled regressions in the estimation samples with different values of the penalization parameter and select the one which minimizes prediction error in the hold-out samples. We repeat this process 100 times for employers and for workers with different estimation and hold-out samples, and use the average of the best-performing penalization parameters as the optimal penalization parameter, one for employers and one for workers. We then run ridge regressions at the individual level to recover the preferences of each client and freelancer.
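
A simplified sketch of this penalty-selection step (Python/scikit-learn), where X stacks the randomized characteristics, y the evaluations, and groups the respondent IDs (all placeholder names):

import numpy as np
from sklearn.linear_model import Ridge
from sklearn.metrics import mean_squared_error
from sklearn.model_selection import train_test_split

alphas = np.logspace(-2, 3, 20)           # candidate penalization parameters
best_alphas = []
for rep in range(100):                    # 100 estimation/hold-out splits
    X_est, X_hold, y_est, y_hold = train_test_split(
        X, y, test_size=0.5, random_state=rep)
    errors = [mean_squared_error(
                  y_hold, Ridge(alpha=a).fit(X_est, y_est).predict(X_hold))
              for a in alphas]
    best_alphas.append(alphas[int(np.argmin(errors))])
alpha_star = float(np.mean(best_alphas))  # average of best-performing penalties

# Individual-level ridge regressions using the pooled optimal penalty.
indiv_coefs = {g: Ridge(alpha=alpha_star).fit(X[groups == g], y[groups == g]).coef_
               for g in np.unique(groups)}

The same procedure is run separately for employers and for workers, yielding one penalty for each side.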

For each employer, we use the resulting estimates, combined with information on the pool of workers available on the platform, to assess how suitable each worker would be for that employer, excluding gender and ethnicity. To further guarantee relevant recommendations, we incorporate additional filters based on the specific expertise keywords chosen by the employer (we sort workers based on the share of matched expertise keywords and keep those with a share at least as high as the hundredth-ranked worker) and on questions we ask employers at the end of the experiment: minimum years of experience, whether they require either remote or in-person work, their maximum budget, and the city in which the job would be done if they request on-site work. Finally, we use the ridge regression estimates to predict the five best-matched workers for that employer from this subsample, weighting parameters estimated from the interest-to-hire question by 2/3 and from the worker's probability of applying by 1/3.
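
Sketched in Python (placeholder names), the final ranking step for one employer combines the two coefficient vectors and scores the pre-filtered candidate pool:

import numpy as np

def top_five(beta_hire, beta_apply, candidate_X):
    # candidate_X: characteristics of workers passing the keyword/budget/
    # location filters, with gender and ethnicity columns excluded.
    beta = (2 / 3) * beta_hire + (1 / 3) * beta_apply  # weighted preferences
    scores = candidate_X @ beta                        # predicted suitability
    return np.argsort(scores)[::-1][:5]                # indices of best 5 matches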

For each worker, we standardize the distribution of the weighted parameters corresponding to each attribute in order to identify workers' relative preferences. Again, we weight parameters estimated from the question about interest in the job by 2/3 and from the employer's probability of selecting them by 1/3. We set +0.75 and -0.75 standard deviations as thresholds determining whether a worker has a relative preference or aversion for a particular attribute, compared to other workers. We inform the platform's matching team, who are in charge of recommending workers for jobs, of these relative preferences, to be used to recommend the worker for one project.
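
A corresponding sketch of the worker-side profiling (Python, placeholder names), where B stacks the weighted coefficient vectors of all workers (rows) by attribute (columns):

import numpy as np

def relative_preferences(B):
    # Standardize each attribute's coefficients across workers, then flag
    # relative preferences/aversions beyond +/-0.75 standard deviations.
    Z = (B - B.mean(axis=0)) / B.std(axis=0)
    return np.where(Z > 0.75, "preference",
                    np.where(Z < -0.75, "aversion", "neutral"))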
Experimental Design Details
Not available
Randomization Method
Each characteristic on a profile/job offer that we construct is independently randomized by computer.
Randomization Unit
The randomization is at the profile/job offer level; each participant evaluates 25 profiles/offers.
Was the treatment clustered?
No

Experiment Characteristics

Sample size: planned number of clusters
350 employers and 800 workers
Sample size: planned number of observations
8,750 profile evaluations and 20,000 job offer evaluations in the first wave.
Sample size (or number of clusters) by treatment arms
N/A
Minimum detectable effect size for main outcomes (accounting for sample design and clustering)
Supporting Documents and Materials

Documents

Document Name
Randomized characteristics
Document Type
other
Document Description
File
Randomized characteristics

MD5: de0ac0a6c343a7cef922b0a7f7bdd49b

SHA1: 2a7cec0054d3449a9b30f13bdb745b9551a9ba75

Uploaded At: November 06, 2025

Document Name
Questionnaire
Document Type
survey_instrument
Document Description
File
Questionnaire

MD5: bde6a8e9b0f4c8f8a191a2d628f611a1

SHA1: 2b51fd66252969fdbbfe01e4aec2eec747e2c685

Uploaded At: November 13, 2025

IRB

Institutional Review Boards (IRBs)

IRB Name
Paris School of Economics Institutional Review Board
IRB Approval Date
2025-09-21
IRB Approval Number
2025-021