
Reducing perceptions of discrimination

Last registered on June 26, 2022

Pre-Trial

Trial Information

General Information

Title
Reducing perceptions of discrimination
RCT ID
AEARCTR-0009592
Initial registration date
June 22, 2022

Initial registration date is when the trial was registered; it corresponds to when the registration was submitted to the Registry to be reviewed for publication.

First published
June 26, 2022, 5:25 AM EDT

First published corresponds to when the trial was first made public on the Registry after being reviewed.

Locations

There is information in this trial unavailable to the public; access may be requested through the Registry.

Primary Investigator

Hannah Ruebeck
Affiliation
MIT

Other Primary Investigator(s)

Additional Trial Information

Status
In development
Start date
2022-09-05
End date
2024-05-31
Secondary IDs
Prior work
This trial does not extend or rely on any prior RCTs.
Abstract
This randomized experiment examines how individuals perceive discrimination under three job assignment mechanisms (with varying potential to discriminate) and the effects of the two mechanisms that reduce the scope for discrimination relative to the status quo on perceived discrimination, retention, effort, performance, cooperation with and reciprocity towards managers, and future labor supply. The study design and randomization ensure that the only difference between the three groups is what workers believe about the job assignment process.
External Link(s)

Registration Citation

Citation
Ruebeck, Hannah. 2022. "Reducing perceptions of discrimination." AEA RCT Registry. June 26. https://doi.org/10.1257/rct.9592-1.0
Experimental Details

Interventions

Intervention(s)
The intervention varies what participants believe about how they were assigned to the easier, lower-paying of two tasks related to scientific communication. In the status quo arm, workers are told that managers who know worker demographics made the job assignment decisions. In two treatment arms, workers are told that the assignments were made using other mechanisms that are unable to discriminate.
Intervention Start Date
2022-10-03
Intervention End Date
2022-12-05

Primary Outcomes

Primary Outcomes (end points)
Perceived discrimination, cooperation with and reciprocity towards managers, effort, retention, performance, and future labor supply
Primary Outcomes (explanation)
Several of the above variables can be combined into indices, as described in more detail in the uploaded pre-analysis plan.

Secondary Outcomes

Secondary Outcomes (end points)
Self-efficacy in the work task and task-related skills, job satisfaction, affective well-being
Secondary Outcomes (explanation)

Experimental Design

Experimental Design
Workers will be recruited with a screening survey and then evaluated by three job assignment mechanisms. Workers who are assigned to the harder task by any of the mechanisms will be assigned to the harder task and exit the sample of interest. The remaining workers will be randomly assigned which mechanism they are told was the one responsible for assigning them to the easier, lower-paying task. They will then answer questions about their interest in future work, do the easy task, answer survey questions, and finish the experiment.
Experimental Design Details
Workers will be recruited with a “screening survey,” common on Prolific, that qualifies the worker to participate in future high-paying surveys. In this screening survey, their first session, workers will complete three screening tasks and answer questions about their demographics and job history. After all workers have completed the screening survey, their scores and demographics will be aggregated into worker profiles. Workers will be grouped into sets of approximately 120 workers with similar education levels and average quiz scores. Quiz scores are shown on each worker profile as 1-5 stars corresponding to approximate quintiles of the score distribution.
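
For concreteness, a minimal sketch of how raw quiz scores could be binned into 1-5 star ratings by approximate quintile (using pandas; the function name, sample scores, and binning call are illustrative, not the study's actual code):

    # Hypothetical sketch: map raw screening-quiz scores to 1-5 stars by quintile.
    import pandas as pd

    def star_rating(quiz_scores: pd.Series) -> pd.Series:
        # pd.qcut splits the score distribution into five roughly equal-sized bins;
        # labels=False returns bin indices 0-4, so adding 1 yields stars 1-5.
        return pd.qcut(quiz_scores, q=5, labels=False) + 1

    scores = pd.Series([12, 18, 7, 25, 30, 22, 15, 9, 27, 20])
    print(star_rating(scores))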

Each worker profile will be viewed by two managers: one sees a profile that includes screening scores and education, and another sees a profile that also includes race, gender, and age. Managers will each assign ten of the workers that they evaluate (8 percent) to the harder task. An algorithm will also be used to determine which ten workers in each group of 120 are likely to do the best job on the harder proofreading task given their scores on the screening tasks. Thus, each worker will be evaluated by all three job assignment mechanisms.
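
The registration does not specify how the algorithm predicts performance on the harder task; the sketch below assumes, purely for illustration, that it ranks workers by total screening score and flags the top ten in each group:

    # Hypothetical sketch of the algorithmic mechanism: flag the ten workers with
    # the highest total screening score (an illustrative proxy for predicted
    # performance on the harder task, not the study's actual prediction rule).
    from typing import Dict, List

    def algorithm_assigns_hard(screening_scores: Dict[str, List[float]],
                               n_hard: int = 10) -> set:
        totals = {wid: sum(s) for wid, s in screening_scores.items()}
        ranked = sorted(totals, key=totals.get, reverse=True)
        return set(ranked[:n_hard])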

After all workers have been evaluated (2-3 weeks after workers complete the screening task), any worker who is assigned to the harder task by any of the three mechanisms will be assigned to the harder task. They will do the harder task and finish the experiment (expected to be 15-20 percent of workers, capped at 24 percent by construction). Primarily, this removes any concern about selection in the remaining sample, since workers in the remaining sample of interest were all assigned to the easier task by all three mechanisms.
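
The resulting filter is simply the union of the three mechanisms' harder-task assignments; a minimal sketch with hypothetical worker IDs:

    # Hypothetical sketch of the sample-of-interest filter: a worker exits the
    # sample if *any* of the three mechanisms assigned them to the harder task.
    def sample_of_interest(all_ids, hard_by_manager_demog, hard_by_manager_blind,
                           hard_by_algorithm):
        hard = set(hard_by_manager_demog) | set(hard_by_manager_blind) | set(hard_by_algorithm)
        easy = [wid for wid in all_ids if wid not in hard]
        return easy, sorted(hard)

    easy, hard = sample_of_interest(
        all_ids=range(120),
        hard_by_manager_demog={3, 7, 11},
        hard_by_manager_blind={7, 50, 61},
        hard_by_algorithm={11, 61, 99},
    )
    print(len(hard), "workers exit;", len(easy), "remain in the sample of interest")

The harder-task share is bounded by the total number of harder-task slots divided by the group size, and is smaller whenever the mechanisms' choices overlap.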

Among this sample of interest, after agreeing to take the follow-up survey, workers will be randomly assigned which mechanism they are told was the one responsible for assigning them to the easier, lower-paying task. This revelation will be subtle: workers will be shown the profile of information that the manager or algorithm had available about them when making the assignment decision, along with the profiles of several of their coworkers, one of whom was assigned to the harder task by their manager and four of whom were also assigned to the easy task. Forty percent of the sample will be told they were assigned by a manager with access to demographics, forty percent will be told they were assigned by a manager without access to demographics, and twenty percent will be told they were assigned by an algorithm. This split is required to ensure that the two-stage least squares estimates (which use only the sample assigned to one of the manager arms) are well-powered. The design is also well-powered to detect intent-to-treat estimates of the effect of the algorithm. Randomization will be stratified by race and gender to ensure the feasibility of estimating heterogeneous treatment effects, described below.
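
The registration states that the randomization is actually carried out in Stata (see Randomization Method below); the following is a rough Python analogue of a 40/40/20 split stratified by race and gender, with illustrative arm labels and field names:

    # Hypothetical analogue of the stratified randomization: within each
    # race-by-gender stratum, shuffle workers and split them 40/40/20 across
    # the status quo, blind-manager, and algorithm information arms.
    import random

    def assign_arms(workers, seed=0):
        """workers: list of dicts with 'id', 'race', 'gender'; returns {id: arm}."""
        rng = random.Random(seed)
        strata, arms = {}, {}
        for w in workers:
            strata.setdefault((w["race"], w["gender"]), []).append(w["id"])
        for ids in strata.values():
            rng.shuffle(ids)
            cut1, cut2 = int(0.4 * len(ids)), int(0.8 * len(ids))
            for wid in ids[:cut1]:
                arms[wid] = "status_quo"     # told: manager who saw demographics
            for wid in ids[cut1:cut2]:
                arms[wid] = "blind_manager"  # told: manager without demographics
            for wid in ids[cut2:]:
                arms[wid] = "algorithm"      # told: algorithm
            # int() truncation leaves remainders in the algorithm arm; an actual
            # implementation would handle rounding within strata more carefully.
        return arms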

After they are told how they were assigned, workers will be asked how many stars they think they would have needed to score on the screening quizzes in order to be assigned to the harder task by their manager. After they answer this question, they will be asked to imagine that they are a worker with a different (fictitious) profile with randomly assigned characteristics and asked how many stars that worker would have needed to score on the screening quizzes in order to be assigned to the harder task. Differences between these answers for fictitious workers of different races and genders provide a measure of implicit perceived discrimination.
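
A minimal sketch of how such an implicit gap could be computed from these responses (the field names and group labels are illustrative, not the study's actual variables):

    # Hypothetical sketch: the implicit perceived-discrimination gap is the mean
    # star threshold reported for one fictitious group minus that for another.
    def implicit_gap(responses, group_a, group_b):
        """responses: list of dicts with 'fictitious_group' and 'stars_needed'."""
        def mean_for(group):
            vals = [r["stars_needed"] for r in responses if r["fictitious_group"] == group]
            return sum(vals) / len(vals)
        return mean_for(group_a) - mean_for(group_b)

    responses = [
        {"fictitious_group": "white_man", "stars_needed": 4},
        {"fictitious_group": "black_woman", "stars_needed": 5},
    ]
    print(implicit_gap(responses, "black_woman", "white_man"))  # gap of +1 star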

Then, workers will do the easier proofreading task. Workers will know that they have to proofread at least six paragraphs to receive their completion payment and that they are able to proofread up to eighteen paragraphs (each for a bonus). If they proofread all eighteen paragraphs, they will be eligible to be evaluated again to do the harder task for a higher wage in a future survey (though they could also be assigned again to the easier task). After finishing the easier proofreading task, workers will be asked whether they would like to be evaluated again and assigned to a future proofreading survey. Ten percent of workers will be randomly selected and their choices implemented.
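
The easy-task incentive structure can be summarized in a short sketch; the payment amounts are placeholders, since the registration specifies the structure but not the dollar values, and whether the first six paragraphs also earn per-paragraph bonuses is not stated:

    # Hypothetical sketch of the easy-task incentives: a completion payment after
    # 6 paragraphs, per-paragraph bonuses up to 18, and eligibility for
    # re-evaluation at 18. Dollar amounts are placeholders.
    def easy_task_payout(n_paragraphs, completion_pay=1.00, bonus_per_paragraph=0.10):
        completed = n_paragraphs >= 6                        # completion threshold
        bonus = bonus_per_paragraph * min(n_paragraphs, 18)  # assumes every paragraph earns a bonus
        eligible_for_reevaluation = n_paragraphs >= 18       # finished all 18 paragraphs
        return (completion_pay if completed else 0.0) + bonus, eligible_for_reevaluation

    print(easy_task_payout(6))    # minimum to earn the completion payment
    print(easy_task_payout(18))   # maximum effort; eligible for re-evaluation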

Next, workers in a manager arm will be asked at what wage they would want to work together with their manager on a similar task in the future, how much they would be willing to give up in wages to be able to choose their own manager (instead of a default of working with the same manager who assigned them in the main experiment), and how they would share a thank-you bonus with their manager. Each of these choices will be implemented for a randomly selected subset of participants.

Then, workers will answer questions about their self-efficacy to do the easier or harder job, job satisfaction, affective well-being, complaints about the promotion process, and whether they think they would have been assigned to the harder task if they had been evaluated by each of the two other mechanisms or if they had been assigned by the same mechanism but had a different race or gender (explicit measures of perceived discrimination).
Randomization Method
Randomization is done offline in Stata, and treatment assignments are uploaded to Qualtrics for each participant when they return for the follow-up (experimental) survey. This allows stratification by race and gender, which is not possible when randomizing in Qualtrics directly.
Randomization Unit
Individual
Was the treatment clustered?
No

Experiment Characteristics

Sample size: planned number of clusters
3,500 workers will be initially recruited. 2,100 are expected to be assigned to the easier task and return for the experimental session.
Sample size: planned number of observations
3,500 workers will be initially recruited. 2,100 are expected to be assigned to the easier task and return for the experimental session.
Sample size (or number of clusters) by treatment arms
840 workers in the status quo arm, 840 workers in the manager treatment arm, 420 workers in the algorithm treatment arm
Minimum detectable effect size for main outcomes (accounting for sample design and clustering)
All calculations assume 80% power and a 5% significance level. Regressions to test whether the manager treatment (algorithm treatment) reduces perceived discrimination relative to the status quo are powered to detect effects larger than 10pp (14pp). The share of the population perceiving discrimination in the status quo group is assumed to match the rate of perceived discrimination in each race*gender cell in a pilot study describing the main experiment as a hypothetical scenario -- see the uploaded pre-analysis plan for details. Given the results from the pilot study, the effect sizes are expected to be larger than these MDEs.

The study is also powered to detect a difference between the effects of the two treatments of 14pp or more, assuming the algorithm treatment reduces perceived discrimination by up to 4pp and the manager treatment is more effective, and to detect that the effect of the manager treatment (algorithm treatment) differs for whites and non-whites or men and women by 16pp (20pp).

Regressions to test whether the manager treatment increases labor supply (whether a worker completes all eighteen paragraphs and whether they opt in to future work), or any continuous outcome that is standardized to be zero in the control group (e.g., reservation wages for working more closely with their manager, willingness to pay to choose one's own manager, job satisfaction, effort, proofreading quality), are powered to detect effects of at least 16pp and 0.06-0.12sd, respectively (where control group means and distributions are predicted from the pilot data where possible, including that 60 percent of workers in the status quo group are assumed to complete all 18 paragraphs).
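
The binary-outcome MDEs can be sanity-checked with a standard two-sample proportions power calculation; the sketch below (using statsmodels) assumes a placeholder 50% baseline rate, so it will not reproduce the registered 10pp and 14pp figures exactly, which rely on pilot-based baselines by race*gender cell and the regression specifications in the pre-analysis plan:

    # Rough check of minimum detectable effects for a binary outcome given the
    # planned arm sizes. The 50% baseline rate is a placeholder, not the pilot value.
    import numpy as np
    from statsmodels.stats.power import NormalIndPower

    alpha, power, baseline = 0.05, 0.80, 0.50

    def mde_pp(n1, n2, p0):
        # Solve for the minimum detectable effect size (Cohen's h) given the two
        # arm sizes, then convert it back into a percentage-point drop from p0.
        h = NormalIndPower().solve_power(effect_size=None, nobs1=n1, alpha=alpha,
                                         power=power, ratio=n2 / n1)
        p1 = np.sin(np.arcsin(np.sqrt(p0)) - h / 2) ** 2
        return 100 * (p0 - p1)

    print(f"status quo vs. blind manager (840 vs. 840): {mde_pp(840, 840, baseline):.1f} pp")
    print(f"status quo vs. algorithm (840 vs. 420): {mde_pp(840, 420, baseline):.1f} pp")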
IRB

Institutional Review Boards (IRBs)

IRB Name
MIT Committee on the Use of Humans as Experimental Subjects
IRB Approval Date
2022-06-22
IRB Approval Number
2201000547
Analysis Plan

There is information in this trial unavailable to the public; access may be requested through the Registry.

Post-Trial

Post Trial Information

Study Withdrawal

There is information in this trial unavailable to the public; access may be requested through the Registry.

Intervention

Is the intervention completed?
No
Data Collection Complete
Data Publication

Data Publication

Is public data available?
No

Program Files

Program Files
Reports, Papers & Other Materials

Relevant Paper(s)

Reports & Other Materials