When is Discrimination Unfair?

Last registered on July 04, 2022

Pre-Trial

Trial Information

General Information

Title
When is Discrimination Unfair?
RCT ID
AEARCTR-0006409
Initial registration date
September 21, 2020

First published
September 22, 2020, 7:46 AM EDT

Last updated
July 04, 2022, 1:26 PM EDT

Locations

Region

Primary Investigator

Affiliation
University of California, Santa Barbara

Other Primary Investigator(s)

PI Affiliation
UC Santa Barbara

Additional Trial Information

Status
Completed
Start date
2020-09-22
End date
2022-07-04
Secondary IDs
Prior work
This trial does not extend or rely on any prior RCTs.
Abstract
We conduct a vignette-based survey experiment to assess the perceived fairness of considering race in a hiring decision. Our vignettes illustrate two canonical forms of discrimination studied by economists, taste-based and statistical discrimination, plus two sub-types of each. In addition, we will randomly reverse the races of the discriminator and discriminatee. These interventions will allow us to estimate how the type of discriminatory action and the races of the persons involved affect the perceived fairness of discriminatory actions. They will also allow us to assess the appropriateness of three broad models of perceived fairness in this context: utilitarian social preferences, in-group bias, and rules-based ethics. To our knowledge, our study will be the first to assess the conditions under which a large sample of respondents perceives discrimination as more versus less unfair.
External Link(s)

Registration Citation

Citation
Kuhn, Peter and Trevor Osaki. 2022. "When is Discrimination Unfair?." AEA RCT Registry. July 04. https://doi.org/10.1257/rct.6409-2.1
Experimental Details

Interventions

Intervention(s)
The interventions we administer are four fictitious vignettes describing incidents of discrimination. With equal probability, they describe either White-on-Black or Black-on-White discrimination.
Intervention Start Date
2020-09-22
Intervention End Date
2020-10-07

Primary Outcomes

Primary Outcomes (end points)
The outcome is the subject’s assessment of the fairness of the incident described in a vignette, on a seven-point scale.
Primary Outcomes (explanation)
In our main hypothesis tests, we plan to use standardized versions of the fairness ratings (with mean 0 and standard deviation 1).
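The standardization step described above can be sketched in a few lines of Python (the function name is illustrative, not from the registration; z-scores are computed with the population standard deviation):

```python
def standardize(ratings):
    """Convert raw 1-7 fairness ratings to z-scores (mean 0, SD 1)."""
    n = len(ratings)
    mean = sum(ratings) / n
    sd = (sum((r - mean) ** 2 for r in ratings) / n) ** 0.5
    return [(r - mean) / sd for r in ratings]
```

A treatment effect estimated on the standardized ratings is then read directly in standard-deviation units, matching the minimum-detectable-effect figures reported below in this registration.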

Secondary Outcomes

Secondary Outcomes (end points)
Secondary Outcomes (explanation)

Experimental Design

Experimental Design
We randomly expose subjects to vignettes describing two broad types of discrimination (taste-based and statistical) and two sub-types of each. In addition, we randomly vary the races of the discriminator and discriminatee. Treatments will vary both within and between subjects.
Experimental Design Details
Our survey will be posted as a task on Amazon’s Mechanical Turk (MTurk) for registered MTurk workers. Once a worker has accepted our task, they will be prompted to read and react to four vignettes, each of which illustrates an incident of discrimination. Each vignette is accompanied by a question that asks respondents to assess the fairness of the illustrated discriminatory action on a 1-7 scale. These four vignettes will be presented in two stages, each comprising two scenarios.

In stage one, respondents will be assigned to one of four treatments: SB, TB, SW and TW, where S and T denote Statistical and Taste-based discrimination, and W and B indicate that the person who is discriminated against is either White or Black. (The discriminator is always White when the discriminatee is Black, and vice versa.) The two scenarios in stage one are distinguished by the sub-type of discrimination described (E versus C or L versus H) and are administered in random order.

In stage two, each respondent is re-assigned to one of the three treatments they did not experience in stage one, and again encounters two scenarios representing sub-types of either statistical or taste-based discrimination.

After assessing the four vignettes, subjects will answer two follow-up questions. The first will ask respondents to provide a written rationale for their fairness assessment of the last vignette they encountered. The second asks all respondents to assess the relative availability of economic opportunities between Black and White people in the U.S. using a rating scale similar to the vignette questions.

Finally, respondents will answer six background questions on their race, gender, age, education, party preference, and political leaning (e.g. liberal, moderate, conservative). These follow-up and background questions will be the same for all respondents to the survey.
Randomization Method
Randomization is performed by the Qualtrics survey platform.
Randomization Unit
The primary randomization unit is the individual survey respondent. Since each respondent will be exposed to multiple treatments, the order in which the treatments are received is randomly assigned as well.

In more detail, in stage one, respondents will be assigned with equal probability (1/4) to one of the four treatments: SB, TB, SW and TW. In stage two, each respondent is re-assigned with equal probability (1/3) to one of the three treatments they did not experience in stage one. Within each stage, the two scenarios are administered in random order.
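The assignment logic above is handled by Qualtrics; a minimal Python sketch of the same procedure (treatment labels are from the registration, the sub-type placeholders are illustrative):

```python
import random

# S/T = statistical/taste-based; B/W = race of the discriminatee
TREATMENTS = ["SB", "TB", "SW", "TW"]

def assign(rng: random.Random):
    """Two-stage assignment: stage one draws one of the four treatments
    uniformly (p = 1/4); stage two draws uniformly (p = 1/3) from the
    three treatments not seen in stage one. Scenario order within each
    stage is also randomized."""
    stage_one = rng.choice(TREATMENTS)
    stage_two = rng.choice([t for t in TREATMENTS if t != stage_one])
    scenario_order = [rng.sample(["sub-type 1", "sub-type 2"], 2)
                      for _ in range(2)]
    return stage_one, stage_two, scenario_order
```

Because stage two excludes the stage-one treatment, every respondent sees two distinct treatments, which is what makes the within-subject comparisons possible.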
Was the treatment clustered?
No

Experiment Characteristics

Sample size: planned number of clusters
no clustering
Sample size: planned number of observations
We have funds to pay 714 subjects. Our power calculations assume 600 survey respondents to allow for unforeseen technical problems.
Sample size (or number of clusters) by treatment arms
150 respondents will be assigned to each of the SB, TB, SW and TW treatments in the first stage of the survey. In the second stage, each respondent will be assigned to one of the treatments they did not encounter in the first stage.
Minimum detectable effect size for main outcomes (accounting for sample design and clustering)
Depending on the amount of within-subject error correlation, we expect to be able to detect a perceived fairness differential of between 0.114 and 0.229 standard deviations between taste-based and statistical discrimination. For differences between sub-types of discrimination, and for the effects of discriminatee race within respondent racial groups, this range is from 0.162 to 0.323 standard deviations.
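These ranges are consistent with the textbook minimum-detectable-effect formula for a two-group mean comparison at 5% size and 80% power, MDE = (1.96 + 0.84) * sqrt(2/n), evaluated at effective sample sizes that shrink as the within-subject error correlation grows. A minimal sketch (the effective n values below are back-calculated for illustration, not stated in the registration):

```python
from math import sqrt

def mde(n_effective, z_alpha=1.96, z_power=0.84):
    """Minimum detectable effect in SD units for a two-group mean
    comparison: (z_{1-a/2} + z_{1-beta}) * sqrt(2 / n_effective)."""
    return (z_alpha + z_power) * sqrt(2.0 / n_effective)

# Back-calculated effective sample sizes reproduce the registered ranges:
print(round(mde(1200), 3), round(mde(300), 3))  # 0.114 0.229
print(round(mde(600), 3), round(mde(150), 3))   # 0.162 0.323
```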
IRB

Institutional Review Boards (IRBs)

IRB Name
UCSB HUMAN SUBJECTS COMMITTEE
IRB Approval Date
2020-05-29
IRB Approval Number
16-20-0381
Analysis Plan

Analysis Plan Documents

When Is Discrimination Unfair? Pre-Analysis Plan

MD5: 8953ab294e2cb842fdb3e52c33bea20a

SHA1: 53e8b5f4a4834b0f5947e4ec4d056ccdbb12c044

Uploaded At: September 21, 2020

Post-Trial

Post Trial Information

Study Withdrawal

There is information in this trial unavailable to the public.

Intervention

Is the intervention completed?
Yes
Intervention Completion Date
October 06, 2020, 12:00 +00:00
Data Collection Complete
Yes
Data Collection Completion Date
October 06, 2020, 12:00 +00:00
Final Sample Size: Number of Clusters (Unit of Randomization)
8
Was attrition correlated with treatment status?
No
Final Sample Size: Total Number of Observations
779 responses received; 642 retained for analysis
Final Sample Size (or Number of Clusters) by Treatment Arms
Data Publication

Data Publication

Is public data available?
No

Program Files

Program Files
Reports, Papers & Other Materials

Relevant Paper(s)

Abstract
Using a vignette-based survey experiment on Amazon’s Mechanical Turk, we measure how people’s assessments of the fairness of race-based hiring decisions vary with the motivation and circumstances surrounding the discriminatory act and the races of the parties involved. Regardless of their political leaning, our subjects react in very similar ways to the employer’s motivations for the action, such as the quality of information on which statistical discrimination is based. Compared to conservatives, moderates and liberals are much less accepting of discriminatory actions, and consider the discriminatee’s race when making their fairness assessments. We describe four pre-registered models of fairness – (simple) utilitarianism, race-blind rules (RBRs), racial in-group bias, and belief-based utilitarianism (BBU) – and show that the latter two are inconsistent with major aggregate patterns in our data. Instead, we argue that a two-group framework, in which one group (mostly self-described conservatives) values employers’ decision rights and the remaining respondents value utilitarian concerns, explains our main findings well. In this model, both groups also value applying a consistent set of fairness rules in a race-blind manner.
Citation
Kuhn, Peter and Trevor Osaki. 2023. "When is Discrimination Unfair?" NBER Working Paper No. 30236.

Reports & Other Materials