Equal Pay for Equal Work when Assignments Aren't Equal

Last registered on November 18, 2022


Trial Information

General Information

Equal Pay for Equal Work when Assignments Aren't Equal
Initial registration date
November 15, 2022


First published
November 18, 2022, 11:20 AM EST




Primary Investigator

University of Pittsburgh

Other Primary Investigator(s)

PI Affiliation
University of Melbourne
PI Affiliation
University of Pittsburgh

Additional Trial Information

In development
Start date
End date
Secondary IDs
Prior work
This trial does not extend or rely on any prior RCTs.
We explore how to modify institutional designs to provide the information needed to give workers equal opportunities in the labor market. For example, performance evaluations often discount effort on non-promotable tasks (committee service, helping others with their work, etc.) that do not impact the firm's key performance indicators. Because women hold a larger share of this unrecognized work, their performance reviews will be worse than men's due to differences in work assignments, not effort. We explore performance evaluation options that lessen the impact of differential work assignments on pay. We use a laboratory experiment to examine how providing managers with time-use and productivity measures for both promotable and non-promotable tasks impacts wages, and whether managers avoid these disaggregated sources of information as an excuse to better compensate effort on promotable tasks. This project provides concrete policy recommendations to reduce the gender gap in pay and promotion.
External Link(s)

Registration Citation

Lepper, Marissa, Maria Recalde and Lise Vesterlund. 2022. "Equal Pay for Equal Work when Assignments Aren't Equal." AEA RCT Registry. November 18. https://doi.org/10.1257/rct.10418-1.0
Experimental Details


We will vary the amount of information about worker productivity that we give managers.
Intervention Start Date
Intervention End Date

Primary Outcomes

Primary Outcomes (end points)
Share of joint earnings given to each worker
Primary Outcomes (explanation)
The managers split the joint earnings of the two workers between them. Our main outcome is the percentage of joint earnings given to each worker.

We hypothesize that in the control condition, the worker whose assignment has fewer sliders will be given a larger share of joint worker earnings, and that the intervention will attenuate this effect and decrease the difference between the two shares.

Secondary Outcomes

Secondary Outcomes (end points)
-Information acquisition choice
-Task assignment
-Difficulty of task
Secondary Outcomes (explanation)
After completing the main task, workers and managers will complete various other tasks.
- Both workers and the manager will re-assign tasks (slider-intensive vs. letter-intensive). We hypothesize that the worker initially given the letter-intensive task will be assigned it again more often in this round.
- We hypothesize that workers will assess their relative rank differently based on assignment, and that workers assigned the letter-intensive task will be more confident of how they rank compared to others in the session.
- We hypothesize that workers' assessments of how difficult the task is will differ by assignment.
- In another treatment, we allow managers to choose whether they want to learn how workers performed. The information revealed is the same as in our main intervention. We hypothesize that managers will not always acquire the information.

Experimental Design

Experimental Design
Managers split joint earnings that two workers contributed to between the two workers. Workers differ in terms of work assignment. We vary the level of information the manager has about worker productivity.
Experimental Design Details
Not available
Randomization Method
Any randomization is done through z-Tree.
Randomization Unit
The unit of observation is the group. Data are collected in sessions, with each session run under a single treatment.
Was the treatment clustered?

Experiment Characteristics

Sample size: planned number of clusters
15 sessions (5 of each of the three treatments) each consisting of 4 - 7 groups of 3 participants
Sample size: planned number of observations
150 participants
Sample size (or number of clusters) by treatment arms
5 sessions for each of the three treatments. Each session will consist of 4 - 7 groups of 3 participants.
Minimum detectable effect size for main outcomes (accounting for sample design and clustering)
We are hypothesizing that there will be a 20% reduction in the share assigned to the

Institutional Review Boards (IRBs)

IRB Name
University of Pittsburgh
IRB Approval Date
IRB Approval Number