Overconfidence and the Gender Application Gap: A Laboratory Experiment

Last registered on December 26, 2024

Trial Information

General Information

Title
Overconfidence and the Gender Application Gap: A Laboratory Experiment
RCT ID
AEARCTR-0014968
Initial registration date
December 11, 2024

First published
December 26, 2024, 12:47 PM EST

Locations

Not available

Primary Investigator

Affiliation
Technische Universität Dortmund

Other Primary Investigator(s)

PI Affiliation
Goethe University Frankfurt

Additional Trial Information

Status
In development
Start date
2024-12-13
End date
2025-06-30
Secondary IDs
Prior work
This trial does not extend or rely on any prior RCTs.
Abstract
Women apply for jobs that pay less than the jobs men apply for (Fluchtmann, Glenny, Harmon, and Maibom, 2024). Despite its relevance for the gender wage gap, the mechanisms driving this application gap remain largely underexplored. This study investigates whether gender differences in overconfidence about one's relative ability can partially explain this phenomenon.
From a directed search model, we derive two predictions about the role of overconfidence for the gender application gap. First, since overconfident job seekers overestimate their chances of securing high-wage jobs, they have a stronger incentive to apply to them (the direct effect). Second, job seekers who anticipate others' overconfidence also expect stiffer competition for high-wage jobs, discouraging them from applying there (the externality of overconfidence). Hence, if men are more overconfident than women, they apply more often to high-wage jobs, while women's anticipation of male overconfidence further deters them.
To test these effects, we conduct a lab experiment. In part one, participants complete a logic task, followed by the measurement of their overconfidence and their second-order beliefs about others' overconfidence. In part two, pairs of participants compete in a stylized labor market where job allocation probabilities depend on their relative performance in the logic task. Pairs are randomized into treatment and control groups: in treatment pairs, one participant is informed about their actual relative performance in the logic task, while the other remains uninformed but knows their competitor has this information. Control group participants receive no performance feedback. This design identifies the direct effect of overconfidence by comparing informed treatment participants to controls, and the externality of overconfidence by comparing uninformed treatment participants to controls.
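
To fix ideas, here is a minimal sketch of the two forces; the notation is ours, not taken from the registered model. An applicant who assigns subjective probability p to being the stronger candidate, and expects the competitor to put weight y on the high-wage job, chooses the weight x that maximizes

    U(x) = x\,\pi_H(p, y)\,w_H + (1 - x)\,\pi_L(p, y)\,w_L,
    \qquad \frac{\partial \pi_H}{\partial p} > 0, \quad \frac{\partial \pi_H}{\partial y} < 0.

Overconfidence inflates p and hence the perceived \pi_H, tilting x upward (the direct effect); anticipating the competitor's overconfidence raises the expected y, which lowers \pi_H and tilts x downward (the externality).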

External Link(s)

Registration Citation

Citation
Jost, Gregor and Vincent Selz. 2024. "Overconfidence and the Gender Application Gap: A Laboratory Experiment." AEA RCT Registry. December 26. https://doi.org/10.1257/rct.14968-1.0
Experimental Details

Interventions

Intervention(s)
Groups of two compete for two jobs that differ only in the wages they offer. The job allocation probabilities depend on the relative performances of the two group members in a prior logic task. In control group pairs, neither member receives feedback on their relative performance. In treatment group pairs, one member is informed of their actual performance relative to the entire session, which allows us to identify the direct effect of overconfidence. The other member is informed that their competitor received such feedback, enabling us to identify the externality of overconfidence.
Intervention Start Date
2024-12-13
Intervention End Date
2025-06-30

Primary Outcomes

Primary Outcomes (end points)
The application weight x (between 0 and 1) that participants assign to the high-paying job and the complementary weight (1 - x) that participants assign to the low-paying job.
Primary Outcomes (explanation)

Secondary Outcomes

Secondary Outcomes (end points)
1. Whether participants obtained a job offer
2. If so, the wage participants realized
Secondary Outcomes (explanation)

Experimental Design

Experimental Design
Part 1: All participants have 15 minutes to solve up to 11 Raven matrices. Participants are not directly incentivized for this task but are informed that their payoff in Part 2 increases with the number of correctly solved matrices and also depends on the time taken to solve them.
After this, participants are randomized into groups of two and assigned a rank, High or Low, based on their relative performance within their pair. Then we elicit how likely they think it is that they are ranked High within their pair (Q1). Additionally, we ask participants what they think the average response of the other session participants to Q1 was (Q2). We incentivize truthful responses by monetarily rewarding answers close to the actual values.
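
The registry does not publish the scoring rule behind this incentive. One standard choice that pays more the closer the answer is to the truth is a quadratic rule, sketched here purely for illustration:

    \text{reward} = \max\{0,\; a - b\,(\hat{p} - p)^2\},

where \hat{p} is the stated probability, p the realized value, and a, b > 0 are experimenter-chosen constants.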

Part 2: Each pair competes for two jobs differing only in the wages they offer. Participants compete for these jobs by allocating application weights (x, 1 - x) between them. Each job offers one position to one of the two participants in the group. The assignment rule is set up so that the likelihood of receiving an offer from a particular job increases with both the weight the subject assigns to that job and the subject's rank; one illustrative rule with these properties is sketched below. If a subject receives only one job offer, the wage from that job is realized. If a subject receives offers from both jobs, the higher wage is realized. If a subject receives no offers, their payoff for Part 2 is zero.
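
The exact assignment rule is not published (the detailed design is listed below as not available). A minimal Python sketch with the stated comparative statics (offer probability increasing in the weight placed on a job and in the applicant's rank) could look as follows; the contest functional form, the strength parameter, and the wage levels are our assumptions:

    import random

    # Illustrative wages; the actual values are not published in the registry.
    W_HIGH, W_LOW = 20.0, 10.0

    def offer_winner(weight_a, weight_b, rank_a_high, strength=1.5):
        """Each job offers its single position to one of the two applicants.
        Here the odds are proportional to the application weight, scaled up
        for the High-ranked applicant. The contest form and `strength` are
        illustrative assumptions, not the registered rule."""
        score_a = weight_a * (strength if rank_a_high else 1.0)
        score_b = weight_b * (1.0 if rank_a_high else strength)
        total = score_a + score_b
        if total == 0:
            return None                      # nobody applied to this job
        return "A" if random.random() < score_a / total else "B"

    def part2_payoffs(x_a, x_b, rank_a_high):
        """Weights (x, 1 - x) per applicant; a participant who wins both
        jobs realizes the higher wage, one who wins neither earns zero."""
        win_high = offer_winner(x_a, x_b, rank_a_high)
        win_low = offer_winner(1 - x_a, 1 - x_b, rank_a_high)
        pay = {"A": 0.0, "B": 0.0}
        for winner, wage in ((win_high, W_HIGH), (win_low, W_LOW)):
            if winner is not None:
                pay[winner] = max(pay[winner], wage)
        return pay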
Afterwards, all participants are presented with two hypothetical scenarios. In these vignettes, we ask participants how they would have applied had they been given the information from each respective scenario. Within subjects, we hold the participant's hypothetical performance and that of their competitor constant across both scenarios; the only within-subject variation is the competitor's belief about their own performance. Across subjects, we vary the participant's own performance and the order of the vignettes.

Part 3: We elicit risk preferences via a multiple price list. Specifically, in a series of decisions, participants choose between a varying amount they receive with certainty and a lottery paying either 0€ or 20€. One decision is randomly drawn; conditional on the choice in that decision, either the certain amount is paid or the lottery is played and its payoff is realized. After Part 3, a post-experimental survey is administered.
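
A minimal sketch of the price-list mechanics, assuming a 50/50 lottery and a 1€ grid of certain amounts (neither is specified here):

    import random

    def run_price_list(choose_certain, certain_amounts=range(1, 20)):
        """One row per certain amount: the subject takes either the certain
        amount or a lottery paying 0 or 20 euros. `choose_certain` maps a
        certain amount to the subject's choice (True = take the certain
        amount). The grid and the 50/50 odds are illustrative assumptions."""
        decisions = {c: choose_certain(c) for c in certain_amounts}
        drawn = random.choice(list(decisions))     # one decision is drawn
        if decisions[drawn]:
            return float(drawn)                    # certain amount is paid
        return random.choice([0.0, 20.0])          # lottery is played

    # Example: a subject who switches to the certain amount at 8 euros.
    payoff = run_price_list(lambda c: c >= 8)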

Treatments: In Part 2, pairs are randomized into a treatment and a control group. Pairs in the control group do not obtain any feedback on performance in the logic task. In the treatment group, one of the two pair members receives information about their relative standing within the whole session. This removes any erroneous beliefs about their relative performance and thereby allows us to isolate the direct effect of overconfidence. The other person in the treatment group does not get any information about their own performance but is informed that their competitor obtains this information, allowing us to evaluate the externality that overconfidence imposes on others.


Measures: Our primary measures of overconfidence and anticipated overconfidence are
1. Overconfidence: the answer to Q1 minus the actual probability of being ranked High.
2. Anticipated overconfidence: the answer to Q2 minus the actual average probability of being ranked High among the other participants in the session.
In addition to our primary measures, we elicit how many matrices participants think they solved correctly, from which we construct an additional measure of overconfidence. We also survey their general risk attitudes in the post-experimental survey, using the general risk question from Dohmen et al. (2011). These duplicate measures are elicited to account for measurement error (Gillen, Snowberg, and Yariv, 2019).
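
In code, the two primary measures reduce to simple differences; the variable names are ours:

    def overconfidence(q1_answer, true_prob_high):
        """Primary measure 1: stated probability of being ranked High
        minus the actual probability of being ranked High."""
        return q1_answer - true_prob_high

    def anticipated_overconfidence(q2_answer, others_true_probs):
        """Primary measure 2: stated belief about the other session
        participants' average Q1 answer, minus their actual average
        probability of being ranked High."""
        return q2_answer - sum(others_true_probs) / len(others_true_probs)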

Payoffs: The computer randomly selects either Part 2 or Part 3 for payment; the selected payoff is paid out in addition to the show-up fee and the Part 1 payoffs.
Experimental Design Details
Not available
Randomization Method
Randomization is done by a computer.
Randomization Unit
The computer randomly assigns participants into pairs. It then randomly assigns each pair to the treatment or control group with probabilities 2/3 and 1/3, respectively. Lastly, within each treated pair, the computer randomly assigns one member to the treatment of receiving information about their own performance, while the other member receives the treatment of being informed that their competitor was informed accordingly.
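
A minimal Python sketch of this three-step procedure (function and label names are ours):

    import random

    def randomize_session(participant_ids):
        """Step 1: shuffle and pair participants. Step 2: assign each pair
        to treatment with probability 2/3, control with 1/3. Step 3: in
        treated pairs, pick one member to receive own-performance feedback;
        the other is only told that their competitor was informed."""
        ids = list(participant_ids)
        random.shuffle(ids)
        assignment = {}
        # An odd participant, if any, is left unassigned in this sketch.
        for a, b in zip(ids[::2], ids[1::2]):
            if random.random() < 2 / 3:             # treated pair
                informed, uninformed = random.sample((a, b), 2)
                assignment[informed] = "treatment_informed"
                assignment[uninformed] = "treatment_uninformed"
            else:                                    # control pair
                assignment[a] = assignment[b] = "control"
        return assignment

With 600 participants (300 pairs), this yields in expectation the 200/200/200 split reported under the sample sizes below.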
Was the treatment clustered?
No

Experiment Characteristics

Sample size: planned number of clusters
600 individuals (or until the subject pool is exhausted)
Sample size: planned number of observations
600 individuals (or until the subject pool is exhausted). From this sample, we exclude any individuals who perform worse than random chance on the Raven matrices (fewer than 2 correctly solved matrices).
Sample size (or number of clusters) by treatment arms
200 individuals in control, 200 individuals in each of the two treatments.
Minimum detectable effect size for main outcomes (accounting for sample design and clustering)
IRB

Institutional Review Boards (IRBs)

IRB Name
Gemeinsame Ethikkommission Wirtschaftswissenschaften der Goethe-Universität Frankfurt und der Johannes Gutenberg-Universität Mainz
IRB Approval Date
2024-05-21
IRB Approval Number
N/A