Signals We Give: Gender, Feedback, and Competition
Last registered on January 27, 2021

Pre-Trial

Trial Information
General Information
Title
Signals We Give: Gender, Feedback, and Competition
RCT ID
AEARCTR-0006966
Initial registration date
January 06, 2021
Last updated
January 27, 2021 8:36 AM EST
Location(s)

This section is unavailable to the public.
Primary Investigator
Affiliation
Nova School of Business and Economics
Other Primary Investigator(s)
PI Affiliation
University of Portsmouth
PI Affiliation
University of East Anglia
Additional Trial Information
Status
In development
Start date
2021-02-01
End date
2021-12-31
Secondary IDs
Abstract
How do managers convey information to employees about their performance? Are they averse to giving bad news, or are they more likely to withhold good information? This project examines the biases that managers hold when providing feedback to their employees. Biased feedback communication can distort employees’ perceptions about their abilities, which may, in turn, alter their willingness to compete for promotion or roles with better career prospects. Using a series of controlled experiments, we examine biases in feedback provision, and how these interact with the employee’s gender. Understanding these biases and their impact on behavior is an important step to breaking down the barriers inhibiting women from taking up leadership roles. Our experimental design will allow us to examine the relationship between feedback provision and the impact that biased performance feedback may have on employees’ beliefs and behavior.
External Link(s)
Registration Citation
Citation
Coutts, Alexander, Boon Han Koh and Zahra Murad. 2021. "Signals We Give: Gender, Feedback, and Competition." AEA RCT Registry. January 27. https://doi.org/10.1257/rct.6966-2.0.
Sponsors & Partners

There are documents in this trial unavailable to the public.
Experimental Details
Interventions
Intervention(s)
Intervention Start Date
2021-02-01
Intervention End Date
2021-12-31
Primary Outcomes
Primary Outcomes (end points)
Our primary outcome variables are:
1. Managers’ choice of feedback to be sent to their Worker [FeedbackSent]
2. Worker’s choice of payment scheme [ChoiceTournament]
Primary Outcomes (explanation)
Manager feedback is ordinal (precise, moderate, vague). For part of our analysis we will consider binary indicators for whether feedback is precise, as well as whether feedback is vague.
Secondary Outcomes
Secondary Outcomes (end points)
Our secondary outcome variables are:
1. Manager’s prior and posterior beliefs about the Worker’s performance [BeliefWorkerPrior; BeliefWorkerPosterior]
2. Worker’s prior and posterior beliefs about their own performance [BeliefPrior; BeliefPosterior]
Secondary Outcomes (explanation)
During the experiment, beliefs will be elicited over a distribution, where subjects will allocate 10 tokens across four possible quartile ranks. Hence, two possible measures of beliefs are used: (i) number of tokens assigned to each quartile rank, separately; and (ii) the expected rank (i.e., mean of the distribution of tokens).
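The two belief measures can be illustrated with a short sketch (our illustration, not the study's software): a subject allocates 10 tokens across the four quartile ranks, and the expected rank is the token-weighted mean rank.

```python
def expected_rank(tokens):
    """tokens: list of 4 non-negative ints summing to 10, one per quartile rank 1-4.
    Returns the expected rank, i.e., the mean of the token distribution."""
    assert len(tokens) == 4 and sum(tokens) == 10
    return sum(rank * t for rank, t in zip((1, 2, 3, 4), tokens)) / 10

print(expected_rank([10, 0, 0, 0]))  # certain of top quartile -> 1.0
print(expected_rank([0, 5, 5, 0]))   # split between ranks 2 and 3 -> 2.5
```

Measure (i) uses the four token counts directly; measure (ii) collapses them into the single summary statistic computed above.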
Experimental Design
Experimental Design
Participants will be divided into groups of two. One participant is assigned the role of the Manager while the other is assigned the role of the Worker. Managers will receive a signal about their matched Worker’s task performance and choose how they would like to provide feedback to their Worker. The Worker then chooses how they would like to be compensated for their task performance.

NOTE: This project is a follow-up experiment from a previously registered trial (https://www.socialscienceregistry.org/trials/5543), which was designed as a laboratory experiment. That experiment was abandoned due to the COVID-19 pandemic. In this study, the experiment has been redesigned to be conducted online and to address a revised set of research questions.
Experimental Design Details
Not available
Randomization Method
Our experiment follows a 2 × 2 × 2 between-subject treatment design and is conducted entirely with participants recruited from Prolific. We will recruit a gender-balanced sample of both Workers and Managers.

All treatment assignment is determined randomly using the following protocols:
1. Worker’s gender: Managers will be matched with either a male Worker or a female Worker. This is randomly determined at the individual level by the computer software, such that there are an equal number of male and female Workers to be matched to Managers.
2. Instrumentality of Manager’s feedback (i.e., timing of feedback): Each Manager-Worker pair will be randomly assigned to either the Instrumental treatment or the Non-Instrumental treatment. This will again be randomly determined at the individual level by the computer software, such that there are an equal number of Managers in each treatment.
3. Precision of signal received by Managers: Each Manager will receive either a precise, moderate, or vague signal about their Worker’s performance. This is randomly determined at the individual level by the computer software. The probability distribution over signal types {Precise, Moderate, Vague} is fixed at {0.65, 0.30, 0.05}. Note that Managers receiving a vague signal will be excluded from the analysis.
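The signal draw in protocol 3 can be sketched as an independent weighted draw per Manager (a minimal sketch of ours; the sampling function and seed are assumptions, not the actual experiment software):

```python
import random

# P(Precise) = 0.65, P(Moderate) = 0.30, P(Vague) = 0.05, as in protocol 3.
def draw_signal(rng):
    return rng.choices(["Precise", "Moderate", "Vague"],
                       weights=[0.65, 0.30, 0.05])[0]

rng = random.Random(0)  # fixed seed for a reproducible illustration
draws = [draw_signal(rng) for _ in range(10_000)]
print({s: draws.count(s) / len(draws) for s in ("Precise", "Moderate", "Vague")})
```

Over many draws the empirical shares converge to the fixed distribution, while any individual Manager's signal remains random.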
Randomization Unit
The unit of randomization is the Manager-Worker pair.
Was the treatment clustered?
Yes
Experiment Characteristics
Sample size: planned number of clusters
1,200 Manager-Worker pairs (1,200 Workers matched with 1,200 Managers).
Sample size: planned number of observations
2,400 (1,200 Workers and 1,200 Managers).
Sample size (or number of clusters) by treatment arms
We plan to recruit 1,200 Managers who will be pre-assigned to one of these four treatment cells: (i) female Workers × instrumental feedback; (ii) female Workers × non-instrumental feedback; (iii) male Workers × instrumental feedback; and (iv) male Workers × non-instrumental feedback. Participants will be recruited via Prolific so that there is a gender-balanced sample of Workers to be matched to Managers. The experiment software will ensure that the Manager-Worker pairs are uniformly assigned to the instrumental and non-instrumental treatments.

Hence, within each treatment cell, we expect to have 300 Manager-Worker pairs (i.e., 600 participants). Given the distribution of signal precision, this implies that about 200 Managers within each cell will receive precise signals, while about 100 Managers will receive moderate signals. The probability of receiving a vague signal is negligible, and these observations will be excluded from the analysis.
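The expected cell sizes above can be checked with a few lines of arithmetic (our illustration; "about 200" and "about 100" correspond to expectations of 195 and 90):

```python
# 1,200 Managers split evenly across the 4 treatment cells, with signal
# types drawn per Manager with probabilities {0.65, 0.30, 0.05}.
n_managers = 1200
per_cell = n_managers // 4  # 300 Manager-Worker pairs per treatment cell
signal_probs = {"Precise": 0.65, "Moderate": 0.30, "Vague": 0.05}
expected = {s: round(per_cell * p) for s, p in signal_probs.items()}
print(per_cell, expected)  # 300 {'Precise': 195, 'Moderate': 90, 'Vague': 15}
```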
Minimum detectable effect size for main outcomes (accounting for sample design and clustering)
The primary outcome variable is the Manager’s choice of feedback. Specifically, we consider whether Managers choose to increase the noise of the signal by sending a less precise message to the Worker, or to keep the signal precision as it is. Hence, the Manager’s feedback choice will be transformed into a binary outcome variable. As a main test of our theoretical predictions, we consider pairwise comparisons of the Manager’s feedback choice between the non-instrumental and instrumental treatments for a given signal that they have received. With 6 possible signal types (ranks between 1 and 4, top half, and bottom half), 2 treatments, and 1,200 Managers, this implies that we have 100 Managers in each cell. Our power calculation is based on: (i) baseline proportions of 0.01, 0.25, 0.50, or 0.75; (ii) one-tailed z-tests of differences between two independent proportions; and (iii) a Type I error rate of 0.05 and power of 0.80. Given these parameters, the minimum detectable effect size is an increase in proportion of between 0.074 and 0.172 in the treated group.

The analysis of gender differences in feedback provision will primarily be conducted using parametric regressions. Using simple linear probability models, we first consider a baseline model of the Manager’s feedback choice against the Worker’s gender, the Manager’s gender, the signal type received by the Manager, the treatment variable (instrumental vs. non-instrumental), and the Manager’s posterior and second-order beliefs about the Worker’s ability. This gives a total of 10 predictors, with 1 predictor to be tested. Considering an F-test of an increase in R² with 1,200 Managers, a Type I error rate of 0.05, and power of 0.80, the minimum detectable effect size is 0.0066. To allow for heterogeneity in treatment effects, we also consider interaction terms between the Worker’s gender and the Manager’s signals in the regression model. This gives a total of 15 predictors, of which 6 are to be tested. Using the same parameters as above, the minimum detectable effect size is 0.0114.

On the Workers’ end, the main outcome variable is a binary decision of incentive choice on their Part 3 rank (competition versus piece-rate). Our treatment comparisons will be similar to those for Managers, except that we will now consider the feedback sent by the Managers instead of the signal that they have received. Hence, the power calculations and minimum detectable effect sizes follow those of the preceding paragraphs.
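The two-proportion power calculation above can be reproduced approximately with a short sketch (ours, not the authors' code). It solves for the minimum detectable increase d with n = 100 Managers per group, one-tailed alpha = 0.05, and power = 0.80, using the unpooled-variance normal approximation; exact figures depend on the approximation chosen (pooled, unpooled, or arcsine), so the results differ slightly from the registered 0.074–0.172.

```python
from statistics import NormalDist

def mde_two_proportions(p1, n=100, alpha=0.05, power=0.80):
    """Smallest detectable increase d for a one-tailed z-test comparing
    independent proportions p1 vs p1 + d, with n observations per group."""
    z = NormalDist().inv_cdf(1 - alpha) + NormalDist().inv_cdf(power)
    lo, hi = 0.0, 1.0 - p1
    for _ in range(60):  # bisection on the detectable difference d
        d = (lo + hi) / 2
        p2 = p1 + d
        se = ((p1 * (1 - p1) + p2 * (1 - p2)) / n) ** 0.5  # unpooled SE
        if d < z * se:
            lo = d  # difference too small to reach the target power
        else:
            hi = d
    return hi

for p1 in (0.01, 0.25, 0.50, 0.75):
    print(f"baseline {p1:.2f}: MDE ~ {mde_two_proportions(p1):.3f}")
```

Under these assumptions the MDE is smallest at the 0.01 baseline (roughly 0.07) and largest near the 0.50 baseline (roughly 0.17), matching the registered range to within rounding.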
IRB
INSTITUTIONAL REVIEW BOARDS (IRBs)
IRB Name
UEA School of Economics Research Ethics Committee
IRB Approval Date
2020-12-18
IRB Approval Number
0347
Analysis Plan

There are documents in this trial unavailable to the public.