Do workers and firms understand their labor market?

Last registered on February 09, 2026

Pre-Trial

Trial Information

General Information

Title
Do workers and firms understand their labor market?
RCT ID
AEARCTR-0016570
Initial registration date
November 13, 2025

First published
November 17, 2025, 2:21 PM EST

Last updated
February 09, 2026, 12:03 PM EST

Locations

There is information in this trial unavailable to the public.

Primary Investigator

Affiliation
Stockholm University

Other Primary Investigator(s)

PI Affiliation
Stockholm University
PI Affiliation
Université Paris 1 Panthéon-Sorbonne

Additional Trial Information

Status
Ongoing
Start date
2025-06-01
End date
2026-09-01
Secondary IDs
Prior work
This trial does not extend or rely on any prior RCTs.
Abstract
The study consists of a series of experiments on a large online platform for freelance work. The objectives are to i) identify employer preferences for worker characteristics and worker preferences for employer/job characteristics, ii) identify beliefs of employers about worker preferences and beliefs of workers about employer preferences, iii) identify differences between employer (worker) preferences and worker (employer) beliefs about those preferences, and iv) evaluate the importance of these information frictions on the platform and interventions to mitigate them.

This will be achieved by creating hypothetical worker profiles and hypothetical employer job offers in which a rich set of characteristics are independently randomized. Real workers and employers recruited on the platform will evaluate these profiles/offers. We then compare evaluations of profiles by employers to predictions of those evaluations made by workers and evaluations of job offers by workers to predictions of those evaluations made by employers, isolating incorrect beliefs of each side of the market about the preferences of the other side. Similarly, we also compare evaluations of profiles by employers to predictions of those evaluations made by other employers and evaluations of job offers by workers to predictions of those evaluations made by other workers, isolating incorrect beliefs of each side of the market about the preferences of their competitors in the market.

We then construct an information treatment that informs participants about their incorrect beliefs and provides them with related advice. We test the impact of the information on participants' intentions to change their behavior on the platform.

Lastly, we present a model of the labor market which helps interpret and quantify the importance of misperceptions.
External Link(s)

Registration Citation

Citation
Deschamps, Pierre, Morgane Laouénan and Louis-Pierre Lepage. 2026. "Do workers and firms understand their labor market?." AEA RCT Registry. February 09. https://doi.org/10.1257/rct.16570-2.0
Experimental Details

Interventions

Intervention(s)
The study has two waves of experiments.

In the first wave, employers evaluate profiles of workers and workers evaluate job offers of employers.

In the second wave, employers predict how workers evaluated job offers and workers predict how employers evaluated worker profiles in the first wave. In an additional version of the experiments, employers instead predict how other employers evaluated worker profiles and workers predict how other workers evaluated job offers in the first wave. Based on their predictions, participants are then provided with an information treatment that informs them about their prediction errors and tests how correcting misperceptions affects intended behavior on the platform.

See the attached document for the two full questionnaire versions of the first wave and the four full questionnaire versions of the second wave.

Overview

The experiments are incentivized and without deception.

We construct hypothetical worker profiles, independently randomizing worker characteristics on each profile, and hypothetical job offers, independently randomizing employer/job characteristics on each offer. To populate fields on profiles/offers, we draw on administrative data from real profiles and job offers on the platform. For example, the range of wages indicated on profiles follows the range of wages on profiles on the platform for each job category, and the work experiences shown on a profile all represent real experiences of workers on the platform, standardized and reformatted to be comparable and consistent with the rest of the profile. For some relatively rare characteristics of interest, we deviate from their frequency on the platform in order to test their impact on evaluations/predictions, balancing concerns of realism with statistical power. Details on the construction of profiles and the randomization of each characteristic are included in the attached supplemental documentation.
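
To fix ideas, here is a minimal sketch of how such independent randomization could look in code. All field names, categories, and distributions below are illustrative assumptions; the actual distributions follow the platform's administrative data and are documented in the supplemental materials.

```python
# Illustrative sketch only: draws each profile characteristic independently.
# Field names and distributions are hypothetical placeholders, not the
# study's actual values.
import random

# Hypothetical category-specific daily wage ranges (euros), mimicking the
# idea that wages follow the observed range within each job category.
WAGE_RANGES = {"Backend Development": (300, 800), "Copywriting": (200, 500)}

def draw_profile(job_category: str) -> dict:
    """Draw each characteristic independently of all the others."""
    n_projects = random.choice([0, 1, 3, 10, 30])  # completed projects
    return {
        "wage": random.randint(*WAGE_RANGES[job_category]),
        "n_projects": n_projects,
        # Fields that presuppose completed projects are zeroed out when there
        # are none (e.g. the share of projects with a review).
        "share_reviewed": random.random() if n_projects > 0 else 0.0,
        "masters_degree": random.random() < 0.5,
        "remote_only": random.random() < 0.5,
        # Rarer characteristics can be shown more often than on the platform
        # to preserve statistical power, as noted above.
        "female": random.random() < 0.5,
    }

profile = draw_profile("Backend Development")
```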

In the first wave of experiments, we recruit real freelance workers and employers on the platform and have employers evaluate worker profiles and workers evaluate job offers. Each participant makes 25 evaluations. Similar to the Incentivized Resume Rating (IRR) approach (Kessler et al., 2019), we incentivize truthful reporting by recommending, based on their evaluations, 5 real freelancers on the platform to participating employers. For 25% of participating freelancers, we provide information based on their answers to the platform's matching team, so that they can be recommended for one real job on the platform. See below for details on how recommendations were issued.

The first wave provides us with the preferences of employers over worker characteristics and of workers over employer/job characteristics. Employers (workers) also receive 50 (20) euros for completing the experiment and a chance to win a bigger prize of 1,000 euros.

In the second wave of experiments, we recruit another set of workers and employers on the platform, split into four groups, each of which completes a distinct variant of the experiment.

Half of participating workers first predict, for 25 profiles, how employers on average evaluated such a profile in the first wave, and then directly predict how each randomized profile characteristic, ceteris paribus, influenced employers' evaluations. They are then presented with an information treatment based on their predictions. Some of these participants may be recruited from those who completed wave 1.

Half of participating workers first predict, for 25 offers, how workers on average evaluated such a job offer in the first wave, and then directly predict how each randomized offer characteristic, ceteris paribus, influenced workers' evaluations. They are then presented with an information treatment based on their predictions. These participants are recruited from those who did not complete wave 1.

Half of participating employers first predict, for 25 profiles, how employers on average evaluated such a profile in the first wave, and then directly predict how each randomized profile characteristic, ceteris paribus, influenced employers' evaluations. They are then presented with an information treatment based on their predictions. These participants are recruited from those who did not complete wave 1.

Half of participating employers first predict, for 25 offers, how workers on average evaluated such a job offer in the first wave, and then directly predict how each randomized offer characteristic, ceteris paribus, influenced workers' evaluations. They are then presented with an information treatment based on their predictions. Some of these participants may be recruited from those who completed wave 1.

All predictions, whether of profile/offer evaluations or of the direct impact of characteristics, are incentivized against the real evaluations and estimated impacts of each characteristic from the first wave, using a Binarized Scoring Rule approach, as detailed below.

The second wave provides us with the beliefs of workers about employers' preferences over worker characteristics and about other workers' preferences over employer/job characteristics, as well as the beliefs of employers about workers' preferences over employer/job characteristics and about other employers' preferences over worker characteristics. Using these two waves of data, we estimate "wedges" between what employers (workers) value and what workers (employers) think they value through simple regressions defined below. Similarly, we estimate "wedges" between what employers (workers) value and what other employers (workers) think they value.

The second wave also provides estimates of the effect of the information treatment on intended behavior on the platform.

Job categories in the experiments

When making evaluations and predictions, participants are segmented into job categories. For employers, these are jobs they are interested in hiring for. For workers, these are jobs they do on the platform. These categories determine the profiles/offers they are shown, as characteristics are randomized with different distributions across categories.

We consider the following job categories, which are among the biggest on the platform:

A. Tech, separated into
1. Backend Development
2. Frontend Development
3. Fullstack Development
4. Mobile Development
5. Web Integration / CMS Development / Webmaster

B. Marketing and Communication, separated into
1. Growth Marketing
2. Content Manager
3. Copywriting

C. Translation

Employers/workers who report not being interested in any of the above categories will be excluded from the experiment (we do not count these in our targeted sample sizes). In wave 2, we remove translation jobs, since they were too infrequently selected in wave 1 to have participants predict wave 1 evaluations.
Intervention Start Date
2025-11-13
Intervention End Date
2026-04-01

Primary Outcomes

Primary Outcomes (end points)
In the first wave, employers evaluate profiles of workers and workers evaluate job offers of employers. The two primary outcomes are the evaluations made by employers/workers regarding their interest in hiring the worker/accepting the job, each on a 10-point Likert scale.

In the second wave, participants predict evaluations from the first wave: they are shown profiles/job offers as in the first wave and asked to predict how each would have been evaluated by employers/workers in the first wave. The first primary outcomes are the four sets of predictions made by employers and workers regarding the interest of employers in hiring the worker and the interest of workers in accepting the job. Participants are presented with the same 10-point Likert scales, now allowing one decimal for predictions.

The second primary outcomes in the second wave are the intentions of participants to change their behavior on the platform after seeing the information treatment. Workers are asked the likelihood that they will modify their profiles, their project selection criteria, and their job modalities, each as a separate question with a 5-point answer scale ranging from very improbable to very probable. We will report the share of participants who report that it is probable or very probable that they will change at least one of the three aspects.
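
As a concrete illustration of how this share could be computed, consider the sketch below. Column names and data are hypothetical, with answers coded 1 (very improbable) to 5 (very probable).

```python
# Minimal sketch: share of participants answering "probable" (4) or "very
# probable" (5) on at least one of the three behavior-change questions.
# Column names and values are hypothetical.
import pandas as pd

answers = pd.DataFrame({
    "modify_profile":    [5, 2, 3],
    "modify_criteria":   [1, 4, 2],
    "modify_modalities": [2, 2, 2],
})
intends_change = (answers >= 4).any(axis=1)  # at least one of the three
share = intends_change.mean()                # the reported primary outcome
```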

Combining the two waves, another set of primary outcomes is the differences between evaluations from the first wave and predicted evaluations from the second wave, for both profiles and job offers and for both employers and workers, corresponding to four sets of "wedges" between preferences and expected preferences.
Primary Outcomes (explanation)
In the first wave, our primary specifications will be linear regressions of each of our two primary outcomes on the set of variables randomized in profiles and briefs.

In particular, for employers, we regress their evaluation of a profile (1-10) on the following profile characteristics:
- Asked wage (continuous)
- Number of completed projects on the platform (continuous)
- At least one completed project (discrete indicator)
- Share of completed projects with a review (continuous, 0 for 0 projects completed)
- Experience badge (discrete indicator, compared to no badge, no badge if 0 projects completed)
- Years of work experience (continuous)
- Number of professional experiences on the profile (discrete, 2 versus 1)
- Prestigious professional experiences (discrete indicator, compared to non-prestigious experiences)
- Master's degree (discrete indicator, compared to less than a Master's)
- Preference for remote work (discrete indicator, compared to availability both remote and in person)
- Full-time availability (discrete indicator, compared to only part-time availability)
- Female (discrete indicator)
- Arab (discrete indicator)

For freelancers, we regress their evaluation of a job offer (1-10) on the following offer/employer characteristics:
- Offered wage (continuous)
- Number of completed projects by employer on the platform (continuous)
- At least one completed project by employer (discrete indicator)
- Share of completed projects by employer with a review (continuous, 0 for 0 projects completed)
- Prestigious employer mention (discrete indicator, compared to non-prestigious mention)
- Availability of remote work (discrete indicator, compared to in-person requirement)
- Full-time requirement (discrete indicator, compared to part-time requirement)
- Job duration (continuous)
- Teamwork (discrete indicator, compared to no mention of whether there is teamwork)
- Urgent/tight deadlines (discrete indicator, compared to no mention of deadlines)
- Female employer (discrete indicator)
- Arab employer (discrete indicator)

Details on the randomization/distribution of each characteristic are included in the attached supplemental documentation.

We will also consider specifications which include respondent fixed effects and fixed effects for the order in which a profile/offer was shown.

Standard errors will be clustered at the respondent level.
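
A minimal sketch of this specification, assuming a long-format dataset with one row per evaluation; toy data and an abbreviated characteristic list stand in for the real sample, and variable names are illustrative.

```python
# Sketch of the wave 1 primary specification: evaluation (1-10) regressed on
# randomized characteristics, with order fixed effects and respondent-
# clustered standard errors.
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

rng = np.random.default_rng(0)
n = 200  # toy data: 8 respondents x 25 evaluations each
df = pd.DataFrame({
    "respondent_id": np.repeat(np.arange(8), 25),
    "display_order": np.tile(np.arange(25), 8),
    "wage": rng.integers(300, 800, n),
    "female": rng.integers(0, 2, n),
    "arab": rng.integers(0, 2, n),
})
df["evaluation"] = rng.integers(1, 11, n)  # placeholder 1-10 ratings

# Characteristic list abbreviated; respondent fixed effects could be added
# with "+ C(respondent_id)".
res = smf.ols(
    "evaluation ~ wage + female + arab + C(display_order)", data=df
).fit(cov_type="cluster", cov_kwds={"groups": df["respondent_id"]})
print(res.summary())
```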

To assess sensitivity to the linearity assumption along the Likert scale imposed by these linear regressions, we will also estimate ordered probit regressions.

Since these estimates represent average effects, we will also describe how effects vary across the distribution of our primary outcome.

In the second wave, our main specifications to present prediction results will be analogous to those from the first wave, including how variables are defined, but now using as outcome the predictions of profiles/offers rather than their ratings.

To estimate wedges, we will first use analogous specifications, adding an indicator for whether a respondent was part of wave 1 or wave 2, interacted with each randomized characteristic. In these specifications, we will also reweight observations from wave 2 to match the distribution of wave 1 participants across job categories, to ensure that wedges do not arise from a different mix of job categories across respondents in wave 1 versus wave 2. Second, to obtain a single simplified estimate of the average prediction error (wedge) for each characteristic, taking into account potential heterogeneity across job types, we will calculate the absolute value of job category-specific wedges and average them across categories.
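
A sketch of this wedge specification under assumed variable names: toy wave 1 evaluations are pooled with wave 2 predictions, wave 2 is reweighted to wave 1's job-category mix, and the wedges are read off the interaction coefficients.

```python
# Sketch only: the interaction terms wave2:characteristic are the "wedges".
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

rng = np.random.default_rng(1)
n = 400
pooled = pd.DataFrame({
    "wave2": np.repeat([0, 1], n // 2),  # 0 = evaluation, 1 = prediction
    "category": rng.choice(["backend", "copywriting"], n),
    "wage": rng.integers(300, 800, n),
    "female": rng.integers(0, 2, n),
    "respondent_id": rng.integers(0, 40, n),
})
pooled["outcome"] = rng.integers(1, 11, n).astype(float)

# Reweight wave 2 so its job-category mix matches wave 1.
mix1 = pooled.loc[pooled.wave2 == 0, "category"].value_counts(normalize=True)
mix2 = pooled.loc[pooled.wave2 == 1, "category"].value_counts(normalize=True)
pooled["w"] = np.where(pooled.wave2 == 1,
                       pooled["category"].map(mix1 / mix2), 1.0)

res = smf.wls("outcome ~ wave2 * (wage + female)", data=pooled,
              weights=pooled["w"]).fit(
    cov_type="cluster", cov_kwds={"groups": pooled["respondent_id"]})
print(res.params.filter(like="wave2:"))  # the estimated wedges
```

For the single simplified wedge per characteristic, the same regression would be run separately by job category and the absolute values of the resulting interaction coefficients averaged across categories.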


Secondary Outcomes

Secondary Outcomes (end points)
In wave 1, employers also evaluate the competence of each profile as well as how likely a worker such as the one in the profile would be to accept if they made them an offer, also on 10-point Likert scales. When evaluating job offers, workers also evaluate to what extent they feel competent for the job and how likely they would be to receive such an offer, each on a 10-point Likert scale. In the second wave, participants do not have to predict these auxiliary evaluations.

In wave 2, to make the information treatment more salient and after they complete their predictions, we also tell participants which characteristics were randomized and ask them to tell us explicitly what they think was the impact of a change in each characteristic on the evaluations from wave 1. These constitute a second set of predictions, where participants directly predict the estimated impact of different characteristics on profiles/offers, rather than predicting the overall rating. We will present this auxiliary set of predictions as a secondary set of outcomes along with the wedges that arise from using these direct predictions.
Secondary Outcomes (explanation)
Wave 1

The auxiliary evaluation outcomes will be used as outcome variables in regressions analogous to those for the main outcomes. We will also investigate specifications using the auxiliary evaluations as additional controls in the main specifications outlined above, to investigate the channels through which decisions to hire or accept jobs operate. Questions regarding the likelihood of a worker accepting/receiving a job offer are incentivized by also factoring them into the recommendations issued from the experiment. Questions regarding competence are not incentivized.

Wave 2

When presenting results using direct predictions, we will proceed in two ways. First, we will compute and present the mean predicted impact of each characteristic across participants, along with 95% confidence intervals, and test whether it differs statistically significantly from the impact of each characteristic estimated in wave 1. Second, since the first approach may underestimate wedges by combining positive and negative prediction errors, we will compute and present, for each characteristic, the mean of the absolute value of the difference between the predicted impact and the estimated impact from wave 1. In addition, to take into account potential heterogeneity across job types, we will calculate the absolute value of these measures for each job category and average them across categories.
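
The two summaries could be computed as in the following sketch; the values are toy placeholders, and the wave 1 estimate for a characteristic is a single number from the first-wave regression.

```python
# Sketch of both summaries for one characteristic: (i) mean prediction with a
# t-test against the wave 1 estimate, (ii) mean absolute prediction error,
# which does not let positive and negative errors cancel out.
import numpy as np
from scipy import stats

wave1_impact = 0.8                                 # toy wave 1 estimate
predictions = np.array([0.5, 1.2, 0.9, 0.1, 1.0])  # toy direct predictions

t, p = stats.ttest_1samp(predictions, popmean=wave1_impact)  # summary (i)
mae = np.abs(predictions - wave1_impact).mean()              # summary (ii)
```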

Experimental Design

Experimental Design
Technical expertise in the experiments

Within each job category, in wave 1, participants are also matched to profiles/offers based on specific expertise keywords, chosen by participants from a list specific to each category. These keywords are shown on profiles/offers, as they are on the platform, to ensure that evaluations are relevant to the participants. For example, a client looking for a Spanish translator is only shown profiles consistent with the capacity to do such translations.

The characteristics on the profiles/briefs shown to participants in wave 2 follow the same distributions as those shown on profiles/briefs in wave 1. Expertise keywords selected by participants in wave 1 are not shown on the profiles/briefs of wave 2, but participants are told that wave 1 participants were only shown profiles/offers fitting their expertise (for workers) or desired expertise (for employers).


Recommendations (wave 1)

Recommendations are implemented following the established procedure in Kessler et al. (2019). We use ridge regressions, allowing us to estimate preferences for attributes at the individual level while disciplining coefficients by shrinking them towards 0. We select optimal penalization parameters for employers and workers through cross-validation, splitting our samples into estimation and hold-out samples with probability 0.5. We run pooled regressions in the estimation samples with different values of the penalization parameter and select the one which minimizes prediction error in the hold-out samples. We repeat this process 100 times for employers and for workers with different estimation and hold-out samples, and use the average of the best-performing penalization parameters as the optimal parameters, one for employers and one for workers. We then run ridge regressions at the individual level to recover the preferences of each client and freelancer.
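
A minimal sketch of this cross-validation loop, on toy data and with an assumed grid of penalization parameters:

```python
# Sketch: pick the ridge penalty by repeated 50/50 estimation/hold-out splits,
# then average the winning penalties (done separately for employers/workers).
import numpy as np
from sklearn.linear_model import Ridge
from sklearn.metrics import mean_squared_error
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(2)
X = rng.normal(size=(1000, 13))              # toy: 13 randomized characteristics
y = rng.integers(1, 11, 1000).astype(float)  # toy 1-10 evaluations

alphas = np.logspace(-2, 3, 20)              # assumed penalty grid
best = []
for rep in range(100):
    X_est, X_hold, y_est, y_hold = train_test_split(
        X, y, test_size=0.5, random_state=rep)
    errors = [mean_squared_error(
                  y_hold, Ridge(alpha=a).fit(X_est, y_est).predict(X_hold))
              for a in alphas]
    best.append(alphas[int(np.argmin(errors))])

alpha_star = float(np.mean(best))  # used for the individual-level regressions
```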

For each employer, we use the resulting estimates, combined with information on the pool of workers available on the platform, to assess how suitable each worker would be for that employer, excluding gender and ethnicity. To further guarantee relevant recommendations, we incorporate additional filters based on the specific expertise keywords chosen by the employer (we sort workers based on the share of matched expertise keywords and keep workers with a share at least as high as that of the hundredth-ranked worker) and on questions we ask employers at the end of the experiment: minimum years of experience, whether they require either remote or in-person work, their maximum budget, and the city in which the job would be done if they request on-site work. Finally, we use the ridge regression estimates to predict the five best-matched workers for that employer from this subsample, weighting parameters estimated from the question about interest to hire by 2/3 and those from the worker's probability of applying by 1/3.
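
The final ranking step could look like the following sketch, where `beta_hire` and `beta_apply` stand for an employer's individual-level ridge coefficients from the two questions and `W` for the filtered pool of worker characteristic vectors (all names assumed):

```python
# Sketch of the 2/3-1/3 weighting used to rank the filtered worker pool.
import numpy as np

rng = np.random.default_rng(3)
W = rng.normal(size=(100, 11))    # toy worker pool; gender/ethnicity excluded
beta_hire = rng.normal(size=11)   # "interest to hire" coefficients
beta_apply = rng.normal(size=11)  # "probability to apply" coefficients

score = W @ ((2 / 3) * beta_hire + (1 / 3) * beta_apply)
top5 = np.argsort(score)[::-1][:5]  # indices of the five best-matched workers
```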

For each worker, we standardize the distribution of the weighted parameters corresponding to each attribute in order to identify the relative preferences of workers. Again, we weight parameters estimated from the question about interest in the job by 2/3 and those from the employer's probability of selecting them by 1/3. We set 0.75 and -0.75 standard deviations as thresholds determining whether a worker has a relative preference or aversion for a particular attribute, compared to other workers. We inform the platform's matching team, who are in charge of recommending workers for jobs, of these relative preferences and that they should be used to recommend the worker for one project.
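
A sketch of the thresholding, with `B` standing for the matrix of workers' weighted coefficients (rows are workers, columns are attributes; names assumed):

```python
# Sketch: standardize each attribute's coefficient across workers, then flag
# attributes beyond +/-0.75 standard deviations as relative (dis)preferences.
import numpy as np

rng = np.random.default_rng(4)
B = rng.normal(size=(500, 12))            # toy weighted coefficients
Z = (B - B.mean(axis=0)) / B.std(axis=0)  # standardize across workers

prefers = np.where(Z[0] > 0.75)[0]        # worker 0's relative preferences
averse = np.where(Z[0] < -0.75)[0]        # worker 0's relative aversions
```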


Incentivization of predictions (wave 2)

For 5 randomly selected evaluations out of 25, employers (workers) receive a bonus of 5 (2) euros with probability decreasing in their quadratic prediction error relative to the evaluation of a random wave 1 participant. For 3 randomly selected characteristics, employers (workers) also receive a bonus of 3 (1.5) euros with probability decreasing in their quadratic prediction error relative to the estimated effect of that characteristic on the evaluations of a random wave 1 participant. Employers (workers) also receive 10 (5) euros for completing the experiment and a chance to win a bigger prize of 1,000 euros.
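
One standard way to map a quadratic prediction error into a win probability is sketched below; this illustrates the general binarized-scoring-rule logic, not necessarily the exact mapping used in the study (which is detailed in the attached questionnaires).

```python
# Sketch of a binarized-scoring-rule payout: the bonus is won with probability
# decreasing in the quadratic error, normalized here by the largest possible
# error on the 1-10 scale. Illustrative mapping only.
import random

def bsr_wins_bonus(prediction: float, realized: float,
                   max_error: float = 9.0) -> bool:
    """Award the bonus with probability 1 - (error / max_error)**2."""
    p_win = 1.0 - ((prediction - realized) / max_error) ** 2
    return random.random() < p_win

# e.g. predicting 7.5 when the random wave 1 participant's evaluation was 6:
won = bsr_wins_bonus(7.5, 6.0)
```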

To aid in their predictions, participants in wave 2 are given job-category-specific information on participants from wave 1, namely the number of participants who made evaluations for their job categories, the share of them with at least one completed project on Malt, the average budget of projects completed on Malt (employers) or average daily wage indicated on the Malt profile (workers), years of freelancing experience (workers), and firm size category (employers).


Information treatment (wave 2)

After making their direct predictions, i.e., giving the predicted impact of each characteristic on the evaluation of profiles/offers from wave 1, participants are shown the true estimated impact of each characteristic from wave 1 side by side with their prediction, along with the difference (prediction error), whether their prediction differs from the wave 1 estimate statistically significantly at the 90% level, and whether the estimated impact of each characteristic in wave 1 was itself statistically significantly different from 0.

They are then given information about what their prediction errors might imply for their behavior on the platform. For example, for a client who underestimates freelancers' valuation of remote work, this information would explain that if they do not offer remote work, they might have to pay freelancers more than they expected, or find it harder to recruit for their jobs. They are then asked whether they find the information useful, whether they are likely to modify their behavior on the platform (as described above when presenting outcomes), and, optionally, whether they learned anything else from the information we provided.

Heterogeneity, additional analyses, and data quality (waves 1 and 2)

We will guard against poor-quality answers by excluding from the analysis participants who spend an average of less than 10 seconds per profile/offer in the first wave or less than 6 seconds per profile/offer in the second wave. We use a shorter cutoff for wave 2 since we ask 1 question per profile/offer rather than 3 and because the content of each item is slightly reduced in the absence of expertise tags.
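
As a sketch, the exclusion could be implemented as follows (column names are hypothetical):

```python
# Sketch: drop respondents whose average seconds per item fall below the
# wave-specific cutoff (10s in wave 1, 6s in wave 2).
import pandas as pd

times = pd.DataFrame({
    "respondent_id": [1, 1, 2, 2],
    "wave": [1, 1, 2, 2],
    "seconds_per_item": [12.0, 14.0, 4.0, 5.0],
})
cutoffs = {1: 10.0, 2: 6.0}

avg = (times.groupby(["respondent_id", "wave"], as_index=False)
            ["seconds_per_item"].mean())
avg["cutoff"] = avg["wave"].map(cutoffs)
kept = avg.loc[avg["seconds_per_item"] >= avg["cutoff"], "respondent_id"]
```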

We plan to conduct the following heterogeneity analyses for wave 1:

- Differences in client preferences for Arab/female freelancers between those who report that diversity is important in their hiring decisions and those who do not.
- Differences in preferences between men and women (both workers and employers).
- Differences in preferences between Arab and non-Arab (both workers and employers).
- Differences in preferences between profiles with and without completed projects on the platform.
- Differences in preferences between tech and non-tech job categories (both workers and employers).
- Differences in preferences between participants by number of completed projects on the platform (both workers and employers).
- Differences in preferences between participating freelancers with more or less years of experience.

And for wave 2:
- Differences in accuracy of predictions by number of completed projects on the platform (both workers and employers).
- Differences in accuracy of predictions between tech and non-tech job categories (both workers and employers).
- Differences in accuracy of predictions about gender discrimination between men and women and about ethnic discrimination between workers of European versus Arab/Muslim origin (workers).

Furthermore, after the experiments are complete, if we can access the necessary platform data on project histories and freelancers’ profile updating, we will also explore additional analyses, namely relating preferences and/or predictions elicited from the IRR data to realized outcomes on the platform and testing the impact of the information treatment on freelancers’ profile updating.
Experimental Design Details
Not available
Randomization Method
Each characteristic on a profile/job offer that we construct is independently randomized by computer.
Randomization Unit
Wave 1: The randomization is at the profile/job offer level and participants have to provide evaluations of 25 profiles/offers.

Wave 2: The randomization is at the profile/job offer level and participants have to provide predictions of 25 profiles/offers.
Was the treatment clustered?
No

Experiment Characteristics

Sample size: planned number of clusters
Wave 1: 350 employers and 800 workers

Wave 2: 700 employers and 1,600 workers
Sample size: planned number of observations
8,750 profile evaluations and 20,000 job offer evaluations in the first wave. 28,750 profile predictions (25 for each of 350 employers and 25 for each of 800 workers) and 28,750 job offer predictions (25 for each of 350 employers and 25 for each of 800 workers), as well as 700 direct predictions by employers and 1,600 direct predictions by workers for each randomized characteristic (one prediction per characteristic per participant) in the second wave.
Sample size (or number of clusters) by treatment arms
N/A
Minimum detectable effect size for main outcomes (accounting for sample design and clustering)
Supporting Documents and Materials

Documents

Document Name
Randomized characteristics
Document Type
other
Document Description
File
Randomized characteristics

MD5: de0ac0a6c343a7cef922b0a7f7bdd49b

SHA1: 2a7cec0054d3449a9b30f13bdb745b9551a9ba75

Uploaded At: November 06, 2025

Document Name
Questionnaires Wave 2
Document Type
survey_instrument
Document Description
File
Questionnaires Wave 2

MD5: 6c819d75031de34df8bfd16205fbc969

SHA1: 66d6acc5202d25d6bd3865074c4d6ee62fba659a

Uploaded At: February 09, 2026

Document Name
Questionnaires Wave 1
Document Type
survey_instrument
Document Description
File
Questionnaires Wave 1

MD5: 1870d53eaf4a02acaf61e00645c9fa1a

SHA1: a8478c2f742cbd790b95e0923b62d73b9aeb4840

Uploaded At: February 09, 2026

IRB

Institutional Review Boards (IRBs)

IRB Name
Paris School of Economics Institutional Review Board
IRB Approval Date
2025-09-21
IRB Approval Number
2025-021