Learning in Job Search

Last registered on February 19, 2026

Pre-Trial

Trial Information

General Information

Title
Learning in Job Search
RCT ID
AEARCTR-0017896
Initial registration date
February 16, 2026

Initial registration date is when the trial was registered.

It corresponds to when the registration was submitted to the Registry to be reviewed for publication.

First published
February 19, 2026, 7:25 AM EST

First published corresponds to when the trial was first made public on the Registry after being reviewed.

Locations

There is information in this trial unavailable to the public.

Primary Investigator

Affiliation
University of Southampton

Other Primary Investigator(s)

PI Affiliation
University of Bologna

Additional Trial Information

Status
In development
Start date
2026-02-17
End date
2026-12-31
Secondary IDs
Prior work
This trial does not extend or rely on any prior RCTs.
Abstract
This study investigates how individuals adjust their job search decisions when learning from others or from artificial intelligence (AI). We conduct an online experiment with 1,600 participants from the US, adapting the sequential job search design of Cortés et al. (2023). The experiment begins with a real-effort typing task that determines participants’ likelihood of receiving higher or lower wage offers. Participants then engage in up to five rounds of simulated job search, using the strategy method: in each round, they state beliefs about their typing ability and a minimum acceptable wage (reservation wage), but receive no feedback until the end of the study.
We introduce new information treatments to examine how people learn from different sources. Some participants receive information about top performers of the same or different gender (“social learning”), while others are shown an algorithm-generated optimal strategy (AI strategy) for a risk-neutral agent. Before information is revealed, participants state their willingness to pay (WTP) for access, using an incentivized Becker–DeGroot–Marschak (BDM) mechanism.
The experiment measures how social and algorithmic information influence belief accuracy, reservation wage choices, and job search outcomes, and whether gender identity moderates learning and the perceived value of information.
External Link(s)

Registration Citation

Citation
Rattini, Veronica and Martina Vecchi. 2026. "Learning in Job Search." AEA RCT Registry. February 19. https://doi.org/10.1257/rct.17896-1.0
Experimental Details

Interventions

Intervention(s)
The experiment consists of four main parts: a real-effort task, a five-round simulated job search task, a risk elicitation task using the multiple price list framework of Andersen et al. (2006), and a final questionnaire.

Real-effort typing task:
Participants type 15 sequences of randomly generated letters as quickly as possible. Based on typing speed, participants are classified as fast typists or slow typists, using the top quartile cutoff from a pilot sample. This classification determines the probability of receiving higher or lower wage offers in the subsequent job search task.

Job search task (five rounds, strategy method):
After the typing task, participants complete a job search task lasting five rounds. In each round, they set a reservation wage and report their belief about being a fast typist. Wages are drawn from a fixed discrete distribution, with higher draws being more likely for fast typists. Participants receive no feedback about wage offers until the end of the experiment.
The structure of the job search task is as follows:
1. Baseline beliefs and reservation wage (round 1 only): participants report their belief about being a fast typist and their reservation wage for the round.
2. Willingness-to-pay (WTP) for information (round 1 only): participants state their WTP to access one of three types of information: reservation wage and beliefs of a same-gender top performer, reservation wage and beliefs of a different-gender top performer, or the optimal strategy for a risk-neutral fast-typist agent determined by an algorithm. For a randomly selected subset of participants, WTP is implemented through a Becker-DeGroot-Marschak (BDM) mechanism to determine whether the information is purchased.
3. Information treatment (rounds 1 to 5): Participants are randomly assigned to one of four treatment arms. Depending on treatment, they receive no information, social information about peers, or algorithmic information.
4. Beliefs and reservation wage (rounds 1 to 5, strategy method): After receiving information (if applicable), participants report their belief about being a fast typist and their reservation wage for that round (in round 1, they may revise them). In rounds 2–5, they continue to report these variables under the assumption that previous offers were below their reservation wage. No feedback about actual wage draws is provided until the end of the study.
5. Outcome determination: After all five rounds, wage offers are drawn according to each participant’s true typist classification. The first round in which the offer equals or exceeds the reservation wage determines the accepted job and final payment. If no offer meets the reservation wage by round 5, participants receive a fixed outside option.
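As an illustrative sketch (not the experimental software), the outcome-determination rule in step 5 can be expressed as follows; the 2-token outside option follows the value stated later in the design:

```python
OUTSIDE_OPTION = 2  # tokens if no offer is accepted (value stated in the design)

def search_outcome(reservation_wages, offers):
    """Apply the acceptance rule: the first offer that equals or exceeds
    that round's reservation wage determines the accepted job; otherwise
    the participant receives the fixed outside option."""
    for rnd, (rw, offer) in enumerate(zip(reservation_wages, offers), start=1):
        if offer >= rw:
            return offer, rnd          # accepted wage and acceptance round
    return OUTSIDE_OPTION, None        # no offer accepted in five rounds
```

For example, with reservation wages [20, 18, 16, 14, 12] and offers [14, 10, 18, 30, 30], the round-3 offer of 18 is the first to meet that round's reservation wage of 16, so it determines the final payment.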
Intervention Start Date
2026-02-17
Intervention End Date
2026-12-31

Primary Outcomes

Primary Outcomes (end points)
Reservation wage (RW)
Optimality of the reservation wage

Primary Outcomes (explanation)
• Reservation wage (RW):
In each round of the job-search task, we elicit participants’ reservation wage, defined as the minimum wage at which they would be willing to accept an offer. This variable will be analysed across all five rounds.
• Optimality of the reservation wage:
For each round, we calculate the difference between the participant’s stated reservation wage and the optimal reservation wage given their type and risk-neutrality. This measure captures whether participants’ decisions deviate from the optimal benchmark.
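For a risk-neutral agent over the five-round horizon, the optimal benchmark can be computed by backward induction: in each round, accepting is optimal exactly when the offer is at least the expected value of continuing to search, so that continuation value is the optimal reservation wage. A minimal sketch, assuming no discounting; the two-point wage distribution below is a placeholder, not the actual experimental distribution:

```python
def optimal_reservation_wages(wages, probs, outside_option, rounds=5):
    """Backward induction for a risk-neutral searcher without discounting.
    The optimal reservation wage in each round equals the continuation
    value of rejecting the offer and searching on."""
    continuation = outside_option            # value of ending with no accepted offer
    rws = []
    for _ in range(rounds):
        rws.append(continuation)
        continuation = sum(p * max(w, continuation) for w, p in zip(wages, probs))
    return list(reversed(rws))               # index 0 = round 1

# e.g. a placeholder distribution over {2, 32} with equal odds yields
# reservation wages that decline toward the outside option in round 5
```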

Secondary Outcomes

Secondary Outcomes (end points)
A. Job search outcomes:
1. Reservation wage updating (round 1)
2. Earnings (final accepted wage)
3. Rounds to acceptance
4. Reservation-wage dynamics

B. Belief updating and accuracy:
5. Beliefs about being a fast typist
6. Beliefs updating
7. Belief accuracy - Simple binary correctness (dummy)
8. Belief accuracy – Brier score

C. Information demand and valuation:
9. Willingness-to-pay (WTP) for information
10. Net benefit of information
11. BDM purchase indicator / compliance

D. Heterogeneity and mechanisms:
Treatment effect heterogeneity.
We will explore whether treatment effects vary by:
• participant gender
• participant beliefs
• baseline typing quartile
• baseline risk aversion
• whether the participant paid for information under the BDM mechanism
Secondary Outcomes (explanation)
A. Job search outcomes
1- Reservation wage updating (round 1)
Change in reservation wage between first (pre-information) and final (post-information) choice in round 1, capturing the immediate effect of information.
2- Earnings (final accepted wage)
The final accepted wage, determined by the first round in which the wage offer is greater than or equal to the stated reservation wage, or the outside option if no offer is accepted.
3- Rounds to acceptance
The round in which the participant first accepts an offer, reflecting search duration.
4- Reservation-wage trajectory
The evolution of reservation wage choices across rounds 1–5. This may be summarized as the slope of the line connecting reservation wage choices from round 1 to round 5.
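One possible implementation of this summary is the least-squares slope of the five reservation wages on round number (an illustrative sketch; the analysis plan may specify a different estimator):

```python
def rw_slope(reservation_wages):
    """Least-squares slope of reservation wages on round number (1..n):
    negative values indicate declining reservation wages over the search."""
    n = len(reservation_wages)
    xs = range(1, n + 1)
    mx = sum(xs) / n
    my = sum(reservation_wages) / n
    num = sum((x - mx) * (y - my) for x, y in zip(xs, reservation_wages))
    den = sum((x - mx) ** 2 for x in xs)
    return num / den

# rw_slope([20, 19, 18, 17, 16]) -> -1.0 (one token lower each round)
```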

B. Belief updating and accuracy
5- Beliefs about being a fast typist and beliefs updating
Participants’ subjective probability of being a fast typist. After receiving information (if applicable), participants could revise this belief, so we create an indicator variable equal to 1 if the participant updates their belief upward (or downward), and a variable measuring the distance between the prior and the posterior belief.
6- Belief accuracy - Simple binary correctness (dummy)
Indicator equal to 1 if the participant’s belief correctly predicts their true typing classification (e.g., “fast” if actual typing speed ≥ threshold)
7- Belief accuracy – Brier score
The Brier score is the mean squared error between stated probability and actual outcome:
Brier_i = (p_i − y_i)^2
where p_i is participant i’s stated probability of being a fast typist (between 0 and 1), and y_i is the objective outcome for participant i: y_i = 1 if actual typing speed ≥ threshold (top quartile in the pilot), else y_i = 0. A lower score indicates higher accuracy (0 is perfect accuracy). This measure uses the full probabilistic information, penalising both overconfidence and underconfidence.
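The score reduces to a one-line computation per participant (illustrative sketch):

```python
def brier_score(p, y):
    """Squared gap between the stated probability p of being a fast typist
    (in [0, 1]) and the realized classification y (1 = fast, 0 = slow).
    0 is perfect accuracy; 1 is the worst possible score."""
    return (p - y) ** 2

# a participant stating 0.8 who is in fact a fast typist scores about 0.04
```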

C. Information demand and valuation
8- Willingness-to-pay (WTP) for information
In round 1, participants state their WTP for each type of information (same-gender top performer, different-gender top performer, optimal strategy). This is a continuous measure.

D. Heterogeneity and mechanisms
9- Treatment effect heterogeneity
We will explore whether treatment effects vary by:
participant gender
participant beliefs
typing speed
baseline risk aversion
WTP for information under the BDM mechanism (testing whether higher WTP predicts stronger belief updating or behavioural adjustment)

Experimental Design

Experimental Design
The experiment consists of four main parts: a real-effort task, a five-round simulated job search task, a risk elicitation task using the multiple price list framework of Andersen et al. (2006), and a final questionnaire.

Participants complete a real-effort typing task, typing 15 sequences of random letters as quickly and accurately as possible. Based on typing speed relative to a pilot sample, participants are classified as fast or slow typists. This classification determines their probabilities of receiving higher or lower wage offers in the subsequent job-search task.

The job-search task consists of five rounds, implemented using the strategy method. In each round, participants state a reservation wage (the minimum acceptable wage) and their belief about the probability of being a fast typist. Wage offers are then drawn from a fixed wage grid (2–32 tokens, the same grid as in Cortés et al. (2023)), conditional on the participant’s typist type. The first offer that equals or exceeds the stated reservation wage is accepted; otherwise, participants receive an outside option (2 tokens, again following Cortés et al. (2023)). No feedback on realized offers is given until the end, so in each round participants state their reservation wage under the assumption that the previous draws fell below their reservation wages.
Before round 1, participants report:
• their beliefs about being a fast typist
• their reservation wage for the round
• their willingness-to-pay (WTP) for access to different information
Participants are randomly assigned to one of four treatments:
1. Control (no information)
2. Same-gender top performer – reservation wage of a top performer of the same gender
3. Different-gender top performer – reservation wage of a top performer of the opposite gender
4. Optimal strategy (AI) – algorithm-based optimal reservation wage for a risk-neutral fast typist
About 90% receive assigned information for free; 10% face a BDM mechanism, where information is shown only if WTP ≥ a randomly drawn price.
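The BDM step can be sketched as follows; the price range is a placeholder, since the registration does not state it. Under BDM, stating one's true valuation is incentive-compatible because the price paid is the random draw, not the stated WTP:

```python
import random

def bdm_purchase(wtp, price_low=0.0, price_high=10.0, rng=random):
    """Draw a random price; the participant sees the information iff
    their stated WTP is at least the drawn price, and then pays the
    drawn price (not the WTP). Price bounds here are placeholders."""
    price = rng.uniform(price_low, price_high)
    if wtp >= price:
        return True, price     # information shown, price deducted from earnings
    return False, 0.0          # no purchase, nothing deducted
```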
After seeing information (if any), participants may revise their beliefs and reservation wage (in round 1) and continue reporting them for rounds 2–5.
Payment: Final earnings are determined by the first accepted wage offer (or the outside option). BDM purchase costs of the information are deducted if applicable.
Experimental Design Details
Not available
Randomization Method
In the first wave, all recruited participants will be assigned to the control and AI conditions. This allows us to collect peer data that will then be shown to participants in the other information treatments. In the second wave, participants will be assigned to all four groups.

The study will use a random incentive system. Each participant will be informed ex ante that only one out of every two participants will be randomly selected for payment for one of the activities (either the risk-elicitation task or the job-search task). For participants selected for payment, earnings will be determined according to the rules of either the risk-elicitation task or the job search task. The random selection for payment will be conducted independently of treatment assignment, choices, and outcomes. All participants face the same expected probability of payment, and no deception is used.
Randomization Unit
Individual level
Was the treatment clustered?
No

Experiment Characteristics

Sample size: planned number of clusters
1600 participants
Sample size: planned number of observations
We aim to recruit around 1600 participants.
Sample size (or number of clusters) by treatment arms
We aim to recruit around 1,600 participants, evenly split across treatment arms: 400 in the control group, 400 in the Optimal strategy (AI) group, 400 in the Same-gender top performer group, and 400 in the Different-gender top performer group.
Minimum detectable effect size for main outcomes (accounting for sample design and clustering)
Power calculations were conducted using Monte Carlo simulations that mirror the planned experimental design and analysis. The study includes N = 1,600 participants, each observed over 5 decision rounds, randomized within gender to one of four arms (control, Same-gender top performer, Different-gender top performer, or Optimal strategy (AI)) with equal allocation across arms. The outcome is the reservation wage, which is bounded between 2 and 32 tokens. Following Cortés et al. (2023), we assume a total outcome standard deviation of 5. All tests are two-sided with α = 0.05, and power is estimated from 1,000 simulation replications. Under these assumptions, the study has approximately 80% power to detect treatment effects of about 1 token in reservation wages, corresponding to 0.20 standard deviations, or roughly 5% of the baseline mean reservation wage.
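A stripped-down version of such a simulation, for a single pairwise comparison with one observation per participant (the actual calculations mirror the full 4-arm, 5-round design), assuming normally distributed reservation wages with SD 5:

```python
import random
import statistics

def simulated_power(effect=1.0, sd=5.0, n_per_arm=400, reps=1000):
    """Monte Carlo power for a two-sided z-test (alpha = 0.05) comparing
    one treatment arm to control on mean reservation wages."""
    rejections = 0
    for _ in range(reps):
        control = [random.gauss(0.0, sd) for _ in range(n_per_arm)]
        treated = [random.gauss(effect, sd) for _ in range(n_per_arm)]
        diff = statistics.mean(treated) - statistics.mean(control)
        se = (statistics.pvariance(control) / n_per_arm
              + statistics.pvariance(treated) / n_per_arm) ** 0.5
        if abs(diff / se) > 1.96:       # two-sided 5% critical value
            rejections += 1
    return rejections / reps

# with effect = 1 token, sd = 5, and 400 per arm this returns roughly 0.8
```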
IRB

Institutional Review Boards (IRBs)

IRB Name
Faculty of Social Sciences Research Ethics Committee at the University of Southampton
IRB Approval Date
2025-12-18
IRB Approval Number
110336
Analysis Plan

There is information in this trial unavailable to the public.