Could affirmative action backfire?

Last registered on May 11, 2022

Pre-Trial

Trial Information

General Information

Title
Could affirmative action backfire?
RCT ID
AEARCTR-0007383
Initial registration date
March 18, 2021

Initial registration date is when the trial was registered.

It corresponds to when the registration was submitted to the Registry to be reviewed for publication.

First published
March 22, 2021, 1:17 PM EDT

First published corresponds to when the trial was first made public on the Registry after being reviewed.

Last updated
May 11, 2022, 1:02 AM EDT

Last updated is the most recent time when changes to the trial's registration were published.

Locations

There are documents in this trial unavailable to the public.

Primary Investigator

Affiliation
Queensland University of Technology

Other Primary Investigator(s)

PI Affiliation
Queensland University of Technology
PI Affiliation
Queensland University of Technology

Additional Trial Information

Status
In development
Start date
2022-01-17
End date
2022-12-31
Secondary IDs
Prior work
This trial does not extend or rely on any prior RCTs.
Abstract
The aim of affirmative action (AA) policies is to increase the representation of minorities in candidate pools for hiring and/or promotion. In this study, we use the controlled setting of a lab experiment to measure the size and understand the nature of the spillover effects of a soft AA policy on employer discrimination. This setting allows us to determine 1) whether the effect is predominantly positive or negative, and 2) whether it is primarily driven by behavioural preferences (taste-based discrimination) or rational choice (statistical discrimination). We do this by separating hiring decisions from output-estimation decisions, and by comparing an AA policy for an ethnic minority group with one for a random “priority” group that has no distinct characteristics. Our findings aim to provide evidence on the mechanisms behind the spillover effects of soft AA policies in the labour market.
External Link(s)

Registration Citation

Citation
Hu, Hairong, Changxia Ke and Gregory Kubitz. 2022. "Could affirmative action backfire?." AEA RCT Registry. May 11. https://doi.org/10.1257/rct.7383
Experimental Details

Interventions

Intervention(s)


Intervention Start Date
2022-01-18
Intervention End Date
2022-12-01

Primary Outcomes

Primary Outcomes (end points)
Overall effect: The main variable of interest is how a soft affirmative action (soft AA) policy affects the percentage of hired candidates who belong to the group targeted by the policy.





Primary Outcomes (explanation)
We will compare the percentage (%) of hired candidates who are minorities between the Baseline type (2) and Soft AA minority (3) interventions. Similarly, we will compare the percentage of hired candidates who are “lucky” between the Baseline (1) and Soft AA lucky (4) treatments. The difference between these two treatment effects (the change in the share of minority hires from (2) to (3) versus the change in the share of “lucky” hires from (1) to (4)) identifies the role that minority status plays in the impact of a soft AA policy.
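The comparison above is a difference-in-differences of hire shares across the four treatments. As a minimal sketch (not part of the registration), the hired-candidate lists, candidate IDs, and treatment labels below are hypothetical placeholders used only to illustrate the computation:

```python
def share(hires, targeted):
    """Fraction of hired candidates belonging to the targeted group."""
    return sum(1 for h in hires if h in targeted) / len(hires)

# Hypothetical candidate IDs for the two targeted groups.
minority_ids = {"m1", "m2", "m3"}
lucky_ids = {"l1", "l2", "l3"}

# Hypothetical hired-candidate lists per treatment.
hired = {
    "baseline": ["a1", "l1", "a2", "a3"],          # (1) Baseline
    "baseline_type": ["a1", "m1", "a2", "a3"],     # (2) Baseline type
    "soft_aa_minority": ["m1", "m2", "a1", "a2"],  # (3) Soft AA minority
    "soft_aa_lucky": ["l1", "a1", "a2", "a3"],     # (4) Soft AA lucky
}

# Treatment effect on minority hires: (3) vs. (2).
minority_effect = (share(hired["soft_aa_minority"], minority_ids)
                   - share(hired["baseline_type"], minority_ids))
# Treatment effect on "lucky" hires: (4) vs. (1).
lucky_effect = (share(hired["soft_aa_lucky"], lucky_ids)
                - share(hired["baseline"], lucky_ids))

# The difference identifies the role of minority status per se.
did = minority_effect - lucky_effect
print(minority_effect, lucky_effect, did)  # 0.25 0.0 0.25
```

With these made-up lists, the soft AA policy raises the minority hire share by 25 percentage points but leaves the “lucky” hire share unchanged, so the whole effect would be attributed to minority status.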

Secondary Outcomes

Secondary Outcomes (end points)
1) Exposure Effect: How the soft AA policy increases the number of candidates of the targeted type in the candidate pool.
2) Signal Effect: How the soft AA policy impacts the estimation of scores of the targeted type as compared to the untargeted type.
3) Fairness Effect: How the soft AA policy impacts the willingness to hire targeted vs. non-targeted groups, controlling for expected scores.
4) Token Effect: How the soft AA policy impacts the likelihood that a member of the targeted group is hired given they are selected from the candidate pool to be interviewed.

Hypothesis (estimated outcomes):
1) Hypothesis 1: Behavioural story (large unfairness impact – targeted candidates are hired less but estimated the same).
- In the hiring decision: The negative spillover effect is much larger in the Soft AA minority (3) treatment than in the Soft AA lucky (4) treatment.
- In the estimation decision: We expect no difference between the Soft AA minority (3) and Soft AA lucky (4) treatments.

2) Hypothesis 2: Rational story (small unfairness impact – targeted candidates are hired more, or no differently, but estimated lower) within Soft AA minority (3).
- In the hiring decision: Exposure and frequency effects (positive spillover) dominate the signal effects (negative spillover).
- In the estimation decision: Signal effects (negative spillover) dominate the exposure effects (positive spillover).




Secondary Outcomes (explanation)
1) Exposure effects:
- Hiring: The percent of advantaged candidates in each treatment. We expect the percent of minority candidates to be higher in the AA minority treatment than in the Baseline type treatment, and the percent of lucky candidates to be higher in the AA lucky treatment than in the Baseline treatment.

2) Signal effects:
- The difference in average Task A scores between the advantaged and disadvantaged groups. We expect this difference to be positive and significant in the AA minority treatments (majority – minority) and in the AA lucky treatments (unlucky – lucky), while it should be close to zero in the Baseline and Baseline type treatments.
- The difference in estimated scores between the disadvantaged and advantaged groups should be significantly greater in the AA minority and AA lucky treatments.
- The contribution of Task B's score to the estimated scores should be smaller in the AA minority and AA lucky treatments, because Task B's score is less informative in the context of an affirmative action policy.

3) Fairness effects:
- The answer to the first question in the post-experimental survey. We expect participants to perceive the pre-screen as less fair in the AA minority and AA lucky treatments than in the Baseline and Baseline type treatments.
- We will compare the percent of hired candidates who are minorities between the Baseline type (2) and Soft AA minority (3) interventions. Similarly, we will compare the percent of hired candidates who are “lucky” between the Baseline (1) and Soft AA lucky (4) treatments. If fairness effects are significant, we expect the percent of minority hires to be higher in the AA minority treatment than in the Baseline type treatment, and the percent of lucky hires to be higher in the AA lucky treatment than in the Baseline treatment.

4) Token Effects: We will compare the percentage of minority candidates selected for interview by the pre-screen process who are then hired, between the Baseline type (2) and Soft AA minority (3) interventions. Similarly, we will compare the percentage of “lucky” candidates selected for interview by the pre-screen process who are then hired, between the Baseline (1) and Soft AA lucky (4) interventions.
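The token-effect measure is a conditional hiring rate: among targeted-group candidates who pass the pre-screen (are “interviewed”), the fraction subsequently hired. A minimal sketch under hypothetical session records (the candidate IDs and session data below are invented for illustration):

```python
def token_rate(sessions, targeted):
    """P(hired | interviewed) for targeted-group candidates."""
    interviewed = hired = 0
    for s in sessions:
        for cand in s["candidates"]:  # the 4 pre-screened profiles
            if cand in targeted:
                interviewed += 1
                if cand == s["hired"]:
                    hired += 1
    return hired / interviewed if interviewed else float("nan")

minority = {"m1", "m2"}

# Hypothetical sessions for the Baseline type (2) treatment.
baseline_type = [
    {"candidates": ["m1", "a1", "a2", "a3"], "hired": "m1"},
    {"candidates": ["m2", "a1", "a2", "a3"], "hired": "a1"},
]
# Hypothetical sessions for the Soft AA minority (3) treatment.
soft_aa_minority = [
    {"candidates": ["m1", "m2", "a1", "a2"], "hired": "a1"},
    {"candidates": ["m1", "a1", "a2", "a3"], "hired": "m1"},
]

# Token effect: change in P(hired | interviewed) from (2) to (3).
token_effect = (token_rate(soft_aa_minority, minority)
                - token_rate(baseline_type, minority))
```

In this made-up example the conditional hiring rate falls from 1/2 to 1/3, i.e. a negative token effect: interviewed minority candidates would be hired less often under the soft AA policy.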


Experimental Design

Experimental Design
In this experiment, we have two phases: 1) a preliminary phase, in which we aim to recruit 100 participants to complete a series of tasks; and 2) a second phase, in which we aim to recruit 100 participants for each of four treatments to complete a hiring game consisting of one hiring decision and four estimation decisions.

The preliminary phase is designed to generate actual candidate profiles for use in the second-phase hiring game. The benefit of using actual profiles is that discriminatory behaviour carries a real cost, which lets us capture the actual level of employer discrimination (Hedegaard & Tyran, 2018). During this phase, we will ask participants to complete an individual experiment consisting of five individual tasks of two minutes each.

The second phase is a hiring game with four different treatments: a soft AA policy for an ethnic minority group, a soft AA policy for a randomly selected group, and baselines both with and without information about ethnicity. In this phase, we will recruit only majority-group participants.

Prior to the hiring decision, all profiles go through a “pre-screen process” in which the computer randomly selects one of the five tasks completed by the individuals behind the profiles during the preliminary phase and ranks all 12 profiles by that task's score. Only 4 profiles are selected as candidates for the hiring decision. During the hiring decision, participants receive the profiles of the four candidates, including the score of another randomly drawn task (different from the task used in the pre-screen process) and the candidate's age.
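As a minimal sketch of the pre-screen step just described, assuming score-based ranking with higher scores ranked first (the profile data, score ranges, and tie-breaking are illustrative assumptions, not registered details):

```python
import random

def pre_screen(profiles, rng):
    """Rank all 12 profiles on one randomly drawn task; keep the top 4."""
    task = rng.randrange(5)  # one of the five preliminary-phase tasks
    ranked = sorted(profiles, key=lambda p: p["scores"][task], reverse=True)
    return task, ranked[:4]

# 12 hypothetical profiles, each with five task scores.
rng = random.Random(0)
profiles = [{"id": i, "scores": [rng.randint(0, 20) for _ in range(5)]}
            for i in range(12)]

task, candidates = pre_screen(profiles, rng=random.Random(1))
# The hiring decision would then show each candidate's score on a
# *different* randomly drawn task, plus age.
```

The treatments would differ in how this pre-screen ranks targeted candidates and in what profile information is shown, per the interventions.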


The pre-screen process and the information given in the profiles vary from treatment to treatment (see Interventions).
Experimental Design Details
Not available
Randomization Method
Randomisation was done by a computer through oTree.
Randomization Unit
Individual participant
Was the treatment clustered?
No

Experiment Characteristics

Sample size: planned number of clusters
100 individual participants per treatment (500 in total, 400 for the main experiment)
Sample size: planned number of observations
12 profiles per session. 100 sessions per treatment. Total observations are 12*100*4 = 4800.
Sample size (or number of clusters) by treatment arms
100 sessions per treatment (total are 400 sessions)
Minimum detectable effect size for main outcomes (accounting for sample design and clustering)
IRB

Institutional Review Boards (IRBs)

IRB Name
University Human Research Ethics Committee
IRB Approval Date
2021-12-08
IRB Approval Number
4631 - HE09
Analysis Plan

Analysis Plan Documents

Analysis plan

MD5: 105468d440fe0e02aff4b0946aa0eab7

SHA1: 2be53f3459776823edda61b1806ecac79d95c37b

Uploaded At: January 25, 2022