Sticky Stereotypes: Inaccurate Beliefs and Observability

Last registered on April 11, 2024


Trial Information

General Information

Sticky Stereotypes: Inaccurate Beliefs and Observability
Initial registration date
March 05, 2024

The initial registration date corresponds to when the registration was submitted to the Registry to be reviewed for publication.

First published
April 11, 2024, 9:38 AM EDT

First published corresponds to when the trial was first made public on the Registry after being reviewed.


There is information in this trial unavailable to the public.

Primary Investigator


Other Primary Investigator(s)

PI Affiliation
Harvard University

Additional Trial Information

In development
Start date
End date
Secondary IDs
Prior work
This trial does not extend or rely on any prior RCTs.
Our study explores the impact of a supervisor’s racial identity on how hiring managers utilize new information about racial groups to revise their beliefs regarding worker productivity. This proposed intervention is grounded in literature indicating that the identity of supervisors can influence employees’ behavior and alter their perception of their supervisor’s priorities (Jeanquart-Barone 1996; Roberts 2005; Bradley et al. 2018). Additionally, we examine whether sharing a supervisor’s racial identity shifts how hiring managers express their racial attitudes. To determine whether non-Bayesian updating among hiring managers depends on the supervisor’s racial identity, we will conduct an experiment using the Prolific survey platform. In this experiment, we will randomly present a photo of one of the supervising researchers of this study, along with a prompt. In doing so, we will leverage the racial diversity of the two researchers to assess whether belief updating about racial gaps in worker productivity and explicit racial attitudes change conditional on which researcher is shown.
External Link(s)

Registration Citation

Opoku-Agyeman, Anna and Emma Rackstraw. 2024. "Sticky Stereotypes: Inaccurate Beliefs and Observability." AEA RCT Registry. April 11.
Sponsors & Partners

There is information in this trial unavailable to the public.
Experimental Details


Hiring managers will be randomly assigned to see information about the race/ethnicity of the PI supervising their task.
Intervention Start Date
Intervention End Date

Primary Outcomes

Primary Outcomes (end points)
Wage offers, reported beliefs, ranked importance of resume characteristics
Primary Outcomes (explanation)
We will first collect responses to the Implicit Association Test (IAT) embedded in the survey that includes the hiring task; this provides a baseline measure of each hiring manager's implicit bias prior to completing the hiring task. Individuals are then shown half of the set of resumes they are asked to review before seeing an information intervention that includes 1) a photo of one of the supervisors along with a prompt and 2) a noisy information signal that provides the average productivity measure for different demographic characteristics.

1) We will measure the effect of revealing the supervisor's racial identity on hiring managers' posterior beliefs and wage offers:

PosteriorRacialGapBelief_j = beta_0 + beta_1 WhiteSupervisor_j + beta_2 BlackSupervisor_j

PosteriorRacialGapBelief_j measures hiring manager j's posterior belief about the racial gap in scores. WhiteSupervisor_j is a binary indicator equal to one when hiring manager j is assigned a white supervisor, and BlackSupervisor_j is a binary indicator equal to one when hiring manager j is assigned a Black supervisor; the reference group for both is the condition in which the supervisor's racial identity is not shown.
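The posterior-belief specification above can be sketched in statsmodels. All column names and the simulated data below are illustrative assumptions, not the study's actual dataset; the no-photo condition is the omitted reference group, as in the registration.

```python
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

rng = np.random.default_rng(0)
n = 3000  # planned total sample size

# Random assignment to one of the three supervisor conditions
arm = rng.choice(["white", "black", "none"], size=n)
df = pd.DataFrame({
    "white_supervisor": (arm == "white").astype(int),
    "black_supervisor": (arm == "black").astype(int),
    # Simulated posterior belief about the racial gap in scores
    "posterior_gap_belief": rng.normal(size=n),
})

# "none" (no photo shown) is the omitted reference group
model = smf.ols(
    "posterior_gap_belief ~ white_supervisor + black_supervisor", data=df
).fit()
print(model.params)
```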

2) We then consider the effect of the supervisor's race being made salient on the wage offers made to Black job seekers and to White job seekers. To minimize social desirability bias in making differential wage offers, each hiring manager views only one racial identity in their resume pool. The comparisons will be made across subjects:

BlackWageOffer_ji = beta_0 + beta_1 WhiteSupervisor_ji + beta_2 BlackSupervisor_ji + alpha_j + lambda_i
WhiteWageOffer_ji = beta_0 + beta_1 WhiteSupervisor_ji + beta_2 BlackSupervisor_ji + alpha_j + lambda_i

where BlackWageOffer_ji and WhiteWageOffer_ji represent hiring manager j's wage offer to a Black or White job seeker i, respectively. alpha_j are hiring manager fixed effects, and lambda_i are worker productivity level fixed effects.
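A minimal estimation sketch for the wage-offer specification, assuming a long-format dataset (one row per manager-resume pair) with hypothetical column names. Note one practical adjustment in the sketch: because the supervisor treatment varies only at the hiring-manager level, manager fixed effects would absorb the treatment indicators, so this version includes worker-productivity fixed effects and clusters standard errors by manager instead.

```python
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

rng = np.random.default_rng(1)
n_managers, n_resumes = 300, 8  # 8 resumes per manager, per the design
manager = np.repeat(np.arange(n_managers), n_resumes)
arm = rng.choice(["white", "black", "none"], size=n_managers)[manager]

df = pd.DataFrame({
    "manager": manager,
    "white_supervisor": (arm == "white").astype(int),
    "black_supervisor": (arm == "black").astype(int),
    # Illustrative worker productivity levels and wage offers
    "productivity": rng.integers(1, 5, size=n_managers * n_resumes),
    "wage_offer": rng.normal(15, 3, size=n_managers * n_resumes),
})

# Productivity-level fixed effects via C(); cluster SEs by manager
model = smf.ols(
    "wage_offer ~ white_supervisor + black_supervisor + C(productivity)",
    data=df,
).fit(cov_type="cluster", cov_kwds={"groups": df["manager"]})
print(model.params[["white_supervisor", "black_supervisor"]])
```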

Secondary Outcomes

Secondary Outcomes (end points)
Secondary Outcomes (explanation)
3) Resume Features and Explicit Bias
We then consider the effect of the supervisor's race being made salient on 1) the relative rank of resume features and 2) willingness to express explicit biases against Black workers:
CharacteristicRank_j = beta_0 + beta_1 WhiteSupervisor_ji + beta_2 BlackSupervisor_ji + alpha_j + lambda_i

ExplicitBias_j = beta_0 + beta_1 WhiteSupervisor_ji + beta_2 BlackSupervisor_ji + alpha_j + lambda_i

CharacteristicRank measures the importance ranking of each resume characteristic as reported by hiring managers at the end of the survey. ExplicitBias_B is an index of average z-scores across Likert-scale measures of explicit bias, where a higher number indicates more explicit bias against group B (Black workers).
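The explicit-bias index described above (average of z-scored Likert items) can be sketched as follows; the number of items and their names are illustrative assumptions.

```python
import numpy as np
import pandas as pd

rng = np.random.default_rng(2)
# Four hypothetical 7-point Likert items on explicit attitudes
likert = pd.DataFrame(
    rng.integers(1, 8, size=(500, 4)),
    columns=["item_1", "item_2", "item_3", "item_4"],
)

# z-score each item across respondents, then average across items
# per respondent; higher values indicate more explicit bias
z = (likert - likert.mean()) / likert.std(ddof=0)
explicit_bias = z.mean(axis=1)
```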

4) Relationship to Implicit bias:

We additionally examine the previous relationships interacted with implicit bias, as measured by the Implicit Association Test (IAT). ImplicitBias_B is the IAT score measured in standard deviations, where a higher number indicates more implicit bias against group B (Black workers).

Experimental Design

Experimental Design
Our experiment seeks to understand how hiring managers behave when the race of their supervisor is made known to them. We measure how the salience of a supervisor's racial identity impacts the belief-updating process and explicit racial attitudes of hiring managers. We ask Prolific participants to complete a hiring task based on information gathered from 250 Amazon Mechanical Turk workers. Each participant acts as a hiring manager and is randomly assigned 8 worker resumes, constructed from demographic data collected about the pool of workers during the task they completed. After the first four resumes are reviewed, the “hiring managers” are told there is a supervisor of the study and are shown either a photo of the Black supervising researcher, a photo of the White supervising researcher, or only a prompt stating that there is a supervising researcher. They are then shown new, accurate information about demographic productivity differences within the broader worker pool, and are asked to review the remaining resumes and provide their thoughts on wage strategy, resume features, and racial attitudes.
Experimental Design Details
Not available
Randomization Method
Random assignment takes place within the Qualtrics survey: randomizers assign numbers to participants at two points, determining 1) the treatment arm and 2) whether hiring managers will view Black or White employees.
Randomization Unit
Randomization occurs at the individual level and determines 1) which race of workers hiring managers will review, and 2) the treatment arm of the supervisor (Black, White, No Photo).
Was the treatment clustered?

Experiment Characteristics

Sample size: planned number of clusters
No clusters, only individual participants
Sample size: planned number of observations
3000 Prolific Participants participating in a hiring task (“hiring managers”)
Sample size (or number of clusters) by treatment arms
We plan to have 1000 “hiring managers” (Prolific participants participating in the hiring task) per arm:

Anna - Black Supervisor
Emma - White Supervisor
No Photo Shown - Unknown Race Supervisor
Minimum detectable effect size for main outcomes (accounting for sample design and clustering)
Based on pilot studies, we assume a minimum detectable effect of 0.1 in the wage offer outcome. With equal probability of assignment to each of the three conditions, an alpha level of 0.05, and a power level of 80%, we calculated a necessary overall sample size of approximately 3000.
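The sample-size calculation above can be sketched with statsmodels' power tools. The outcome standard deviation below is an illustrative assumption (the registration does not report it); the implied standardized effect size, and hence the per-arm sample size for a pairwise comparison, depends on that value.

```python
from statsmodels.stats.power import TTestIndPower

mde = 0.1            # minimum detectable effect in wage-offer units
assumed_sd = 0.8     # illustrative pilot outcome SD (assumption)
d = mde / assumed_sd # implied standardized effect size (Cohen's d)

# Per-arm n for a two-sided pairwise test at alpha=0.05, power=0.8
n_per_arm = TTestIndPower().solve_power(
    effect_size=d, alpha=0.05, power=0.8, ratio=1.0
)
print(round(n_per_arm))
```

With three equal arms, a per-arm n near 1000 implies an overall sample of roughly 3000, consistent with the planned number of observations.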
Supporting Documents and Materials


Document Name
IRB Protocol
Document Type
Document Description
IRB Protocol

MD5: 01d722a65ba12bd642a75fc4bc793e2f

SHA1: e3e3de765fc5791f7fc16ef73f75390020801cc6

Uploaded At: March 05, 2024


Institutional Review Boards (IRBs)

IRB Name
Harvard University-Area Committee on the Use of Human Subjects
IRB Approval Date
IRB Approval Number
Analysis Plan

There is information in this trial unavailable to the public.