Mobility and Dynamics of Competition

Last registered on October 13, 2023


Trial Information

General Information

Mobility and Dynamics of Competition
Initial registration date
September 06, 2023

Initial registration date is when the trial was registered.

It corresponds to when the registration was submitted to the Registry to be reviewed for publication.

First published
October 04, 2023, 5:10 PM EDT

First published corresponds to when the trial was first made public on the Registry after being reviewed.

Last updated
October 13, 2023, 12:57 PM EDT

Last updated is the most recent time when changes to the trial's registration were published.


There is information in this trial unavailable to the public.

Primary Investigator

Texas A&M University

Other Primary Investigator(s)

PI Affiliation
Texas A&M University
PI Affiliation
Texas A&M University

Additional Trial Information

In development
Start date
End date
Secondary IDs
Prior work
This trial does not extend or rely on any prior RCTs.
We study mobility and dynamics of competition in the lab using multi-tiered competitive tournament environments with different performance requirements, built around a real-effort task -- the addition of five two-digit numbers. Our design allows us to study the extent to which we observe efficient sorting across genders, and under different information provision, over repeated interactions with the same group members. We investigate whether feedback eliminates or reduces the gender gap in willingness to compete in a dynamic setting, and whether there are gender differences in persistence and in staying in a comfort zone. Lastly, we investigate whether there are differences in the dynamics of upward and downward mobility by gender.
External Link(s)

Registration Citation

Palma, Marco, Brian Toney and Valon Vitaku. 2023. "Mobility and Dynamics of Competition." AEA RCT Registry. October 13.
Experimental Details


The intervention we employ is revealing feedback about the performance of other group members in the four-member competitive environment. We compare this to a control in which participants receive information about their own performance, and about competitive thresholds only if competing. This question is important because many governments, companies, and other entities are evaluating the implementation of fully transparent wage policies as a way to promote competition and reduce gender and other wage inequities.
Intervention Start Date
Intervention End Date

Primary Outcomes

Primary Outcomes (end points)
Performance in the real-effort task, efficient sorting based on tier levels, the rate of upward and downward mobility, persistence (sticking with a tier after losing), comfort zone (moving up a tier after winning), confidence based on rank-belief elicitation. This analysis is performed over 10 competitive choice periods to understand the dynamics and mobility of competition.
Primary Outcomes (explanation)

Secondary Outcomes

Secondary Outcomes (end points)
Risk aversion, facial-expression valence and emotions, demographics.
Secondary Outcomes (explanation)
Eckel-Grossman task, and the quality and strength of emotions

Experimental Design

Experimental Design
Subjects are randomly assigned to groups of four members and groups are randomly assigned to one of two feedback conditions. In the Baseline condition, subjects learn the outcome of the round if competing, their own score and the score they had to beat, earnings for the round and the payment scheme they selected. In the Tier-Score-Disclosure treatment, subjects receive information about the scores of the other group members (and tier-scores) regardless of the endogenously selected payment scheme along with the information provided in Baseline. In both treatments, subjects complete a forced piece-rate in round 1, followed by a forced tournament in round 2. Following these two rounds, subjects make endogenous competitive-tiered or piece-rate entries for the next 10 rounds with feedback following round 3.
Experimental Design Details
Not available
Randomization Method
Randomization done by a computer. Subjects randomly seated in the lab.
Randomization Unit
Experimental groups.
Was the treatment clustered?

Experiment Characteristics

Sample size: planned number of clusters
60 groups.
Sample size: planned number of observations
240 student subjects.
Sample size (or number of clusters) by treatment arms
120 student subjects for each feedback condition for a total of 240 subjects.
Minimum detectable effect size for main outcomes (accounting for sample design and clustering)
We rely on the meta-analysis by Markowsky & Beblo (Journal of Economic Behavior & Organization, 2022) for sample size estimation based on power analysis. Cohen's d in the seminal Niederle & Vesterlund (The Quarterly Journal of Economics, 2007) paper is roughly 0.77 (73% and 35% competitive entry rates for men and women, respectively; 40 men and 40 women), and Markowsky & Beblo's meta-analysis suggests that informational feedback or advice, as an intervention moderator, reduces the gender gap by about 8 percentage points. Accounting for the reduced gap due to informational feedback lowers the effect size to roughly 0.61. Using a 5% significance level and 80% power as the basis for the power calculation, a sample of 88 subjects (44 men, 44 women) is necessary to detect similar effects in each of the two feedback conditions. The novelty of our design introduces a secondary (less competitive) level of competition between the piece rate and the tournament prize, which could further reduce the effect size for competing at the highest tier. While we expect more efficient sorting in the treatment where we disclose tier scores relative to the baseline, we only expect small (0.3) to medium (0.5) effect sizes. We therefore take a conservative approach by committing to a total sample size of 240 subjects (120 per feedback treatment), which allows us to detect minimum effect sizes of 0.364.
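The arithmetic above can be sketched with standard normal-approximation power formulas. This is our own illustrative reconstruction (the function names and rounding are ours, not the registry's exact procedure): Cohen's effect size for the two entry-rate proportions via the arcsine transform, per-group sample size for a two-sample test, and the minimum detectable effect for a fixed per-arm n.

```python
from math import asin, ceil, sqrt
from statistics import NormalDist

def cohens_h(p1, p2):
    """Cohen's effect size h for two proportions (arcsine transform)."""
    return 2 * asin(sqrt(p1)) - 2 * asin(sqrt(p2))

def n_per_group(d, alpha=0.05, power=0.80):
    """Approximate per-group n to detect effect size d in a two-sample test."""
    z = NormalDist().inv_cdf
    z_a, z_b = z(1 - alpha / 2), z(power)
    return ceil(2 * ((z_a + z_b) / d) ** 2)

def mde(n, alpha=0.05, power=0.80):
    """Minimum detectable effect size with n subjects per group."""
    z = NormalDist().inv_cdf
    z_a, z_b = z(1 - alpha / 2), z(power)
    return (z_a + z_b) * sqrt(2 / n)

# Niederle & Vesterlund entry rates: 73% (men) vs 35% (women)
print(round(cohens_h(0.73, 0.35), 2))  # ≈ 0.78, close to the cited d ≈ 0.77
print(n_per_group(0.61))               # ≈ 43-44 per group, i.e. ~88 total
print(round(mde(120), 3))              # ≈ 0.36 with 120 subjects per arm
```

The MDE with 120 subjects per arm comes out near the registration's stated 0.364; small discrepancies reflect rounding and the normal approximation.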

Institutional Review Boards (IRBs)

IRB Name
IRB Approval Date
IRB Approval Number
Analysis Plan

There is information in this trial unavailable to the public.