The Impacts of Soft Affirmative Action: Experimental Evidence

Last registered on June 12, 2023

Pre-Trial

Trial Information

General Information

Title
The Impacts of Soft Affirmative Action: Experimental Evidence
RCT ID
AEARCTR-0007383
Initial registration date
March 18, 2021

First published
March 22, 2021, 1:17 PM EDT

Last updated
June 12, 2023, 3:09 AM EDT

Locations

Region

Primary Investigator

Affiliation
Queensland University of Technology

Other Primary Investigator(s)

PI Affiliation
Queensland University of Technology
PI Affiliation
Queensland University of Technology

Additional Trial Information

Status
Completed
Start date
2022-09-26
End date
2022-10-31
Secondary IDs
Prior work
This trial does not extend or rely on any prior RCTs.
Abstract
Affirmative action (AA) policies are used to increase the representation of minorities in candidate pools for hiring and/or promotions. In this study, we plan to use the controlled setting of a lab experiment to understand the true size and nature of the spillover effect of a soft AA (SAA) policy on employers’ discrimination. This allows us to determine 1) whether this effect is predominantly positive or negative, and 2) whether it is primarily driven by behavioural preferences (i.e., taste-based discrimination) or rational choices (i.e., statistical discrimination). We do this by comparing a soft AA policy based on minority ethnicity status with one for a random “priority” group that has no distinct characteristics, and by separating hiring decisions from performance estimations. Our findings will provide insights into the mechanisms of the spillover effects of soft AA policies in the labour market.
External Link(s)

Registration Citation

Citation
Hu, Hairong, Changxia Ke and Gregory Kubitz. 2023. "The Impacts of Soft Affirmative Action: Experimental Evidence." AEA RCT Registry. June 12. https://doi.org/10.1257/rct.7383-4.6
Experimental Details

Interventions

Intervention(s)
1) Baseline_colour (Treatment 1): A treatment with a randomly assigned colour (red or green) for each candidate, but with neither a soft AA policy nor any information about candidates’ ethnicities.
2) Baseline_minority (Treatment 2): A treatment without a soft AA policy but with revealed information on candidates’ ethnicities (White or East Asian).
3) SAA_lucky (Treatment 3): A treatment with a soft AA policy for a randomly selected “lucky” group, namely the candidates assigned the red colour.
4) SAA_minority (Treatment 4): A treatment with revealed candidate ethnicities (White or East Asian) and a soft AA policy for the ethnic minority group.
Intervention Start Date
2022-09-30
Intervention End Date
2022-10-31

Primary Outcomes

Primary Outcomes (end points)
Overall Effect: The main outcome of interest is how a soft affirmative action (SAA) policy affects the percentage of hired candidates who belong to the group targeted by the policy.





Primary Outcomes (explanation)
We will compare the percentage of hired candidates who are minorities in Treatment 2 (Baseline_minority) and Treatment 4 (SAA_minority). Similarly, we will compare the percentage of hired candidates who are “lucky” in Treatment 1 (Baseline_colour) and Treatment 3 (SAA_lucky). The difference between these two treatment effects (the Treatment 4 vs. Treatment 2 difference in the percentage of hired candidates who are minorities, minus the Treatment 3 vs. Treatment 1 difference in the percentage of hired candidates who are “lucky”) identifies the role that minority status plays in the impact of an SAA policy, as formalised below.
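In our own notation (not part of the original registration), let s^T_g denote the share of hired candidates belonging to group g in treatment T. The difference-in-differences described above is then

\[
\Delta \;=\; \big(s^{T4}_{\text{minority}} - s^{T2}_{\text{minority}}\big) \;-\; \big(s^{T3}_{\text{lucky}} - s^{T1}_{\text{lucky}}\big),
\]

where the first bracket is the effect of the SAA policy on the targeted group’s hiring share when the target is the ethnic minority, and the second bracket is the corresponding effect when the target is the randomly selected “lucky” group.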

Secondary Outcomes

Secondary Outcomes (end points)
The primary outcome (Overall Effect) can be decomposed into a Frequency Effect (which is mechanically generated by the SAA policy) and a Token Effect* (which is potentially confounded by several factors); a notational sketch of this decomposition follows the footnote below. The Frequency Effect measures how the SAA policy increases the number of candidates of the targeted type in the candidate pool, whereas the Token Effect measures how the SAA policy impacts the likelihood that a member of the targeted group is hired given that they are selected by the pre-screening process. Our design aims to further disentangle the confounding factors of the Token Effect.



* Tokenism is the practice of doing something purely symbolic in order to appear inclusive. A (numerically) positive Token Effect is a reduction in the probability that a targeted candidate is hired in the presence of an SAA policy, conditional on being in the candidate pool. In this case, additional representation in the candidate pool does not translate fully into improved hiring outcomes for the targeted candidates.
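A minimal sketch of the decomposition in our own notation (the registration defines these effects only verbally): let q be the probability that a targeted candidate reaches the pre-screened pool and h the probability that a targeted candidate is hired conditional on being in the pool, so that the probability that a given targeted candidate is hired is q·h. Writing subscripts 0 for the baseline and SAA for the policy condition,

\[
\underbrace{q_{SAA}h_{SAA} - q_{0}h_{0}}_{\text{Overall Effect}}
\;=\;
\underbrace{(q_{SAA}-q_{0})\,h_{0}}_{\text{frequency component}}
\;+\;
\underbrace{q_{SAA}\,(h_{SAA}-h_{0})}_{\text{conditional-hiring component}}.
\]

Under the sign convention in the footnote above, a positive Token Effect corresponds to h_SAA < h_0, i.e. the second component being negative.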
Secondary Outcomes (explanation)
We measure both the base-level belief bias (i.e., the Stereotype Effect) and the base-level preference bias (i.e., the Taste-based Discrimination Effect). We also measure the two effects that impact the Token Effect: the Backfire Effect and the Perception Effect. Our definitions of these effects are the following (a notational sketch follows the list):
(1) Stereotype Effect (base-level belief bias): whether, and by how much, managers perceive the minority group as having lower expected employment performance due to existing stereotypical beliefs.
(2) Taste-based Discrimination Effect (base-level preference bias): whether, and by how much, managers (who are majorities) are less willing to hire minority candidates given the same expected employment performance.
(3) Perception Effect (policy-induced belief bias): how the SAA policy affects managers’ beliefs about the expected performance of the targeted group vs. the non-targeted group, through changes in the pre-selection process and through additional exposure to the performances of candidates from the targeted group.
(4) Backfire Effect (policy-induced preference bias): how the SAA policy affects managers’ willingness to hire targeted vs. non-targeted candidates, controlling for expected employment-performance scores.
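One way to express the two belief-side effects as contrasts in the performance estimation task (our notation, not the registration’s; the hypotheses below mirror these contrasts):

\[
\text{Stereotype Effect} \;\approx\; \big(\bar{e}_{\text{min}} - \bar{e}_{\text{maj}}\big)\big|_{T2} \;-\; \big(\bar{e}_{\text{red}} - \bar{e}_{\text{green}}\big)\big|_{T1},
\]
\[
\text{Perception Effect (colour)} \;\approx\; \big(\bar{e}_{\text{red}} - \bar{e}_{\text{green}}\big)\big|_{T3} \;-\; \big(\bar{e}_{\text{red}} - \bar{e}_{\text{green}}\big)\big|_{T1},
\]

where \bar{e}_g|_T is the average estimated performance of group g in treatment T, and the minority analogue of the Perception Effect compares \bar{e}_{min} across Treatments 4 and 2. The Taste-based Discrimination and Backfire Effects are the analogous contrasts in hiring rates after controlling for these performance estimates.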

Experimental Design

Experimental Design
This experiment consists of two phases: 1) the preliminary phase, in which we aim to recruit 150 participants to complete a series of anagram tasks; and 2) the main phase, in which we aim to recruit 100 participants for each of the four treatments and each of the two task conditions (hiring decisions or performance estimations), i.e., 800 participants in total.

In the preliminary phase, participants will be asked to complete five 2-minute anagram tasks individually and will be paid a piece rate based on performance. This phase is designed to generate actual profiles of candidates to be used in the main phase of the experiment. The benefit of using actual profiles is that it introduces real consequences for discriminatory behaviour and therefore captures the actual level of employer discrimination (Hedegaard & Tyran, 2018). To construct a balanced candidate pool for the second phase, 75 participants will be recruited from an ethnic minority group (i.e., East Asians) and the remaining 75 from the ethnic majority group (i.e., Whites). Out of the five task scores, we drop the lowest and the highest to form the final candidate profiles. For each candidate profile, we randomly label the three remaining scores as the pre-screening score, the interview score and the employment score.
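A minimal sketch of this profile-construction step (illustrative only; the function and field names are ours, not taken from the experiment code):

    import random

    def build_profile(scores):
        # Build one candidate profile from the five anagram-task scores:
        # drop the single lowest and single highest score, then randomly
        # label the remaining three as the pre-screening, interview and
        # employment scores.
        assert len(scores) == 5
        remaining = sorted(scores)[1:4]  # drop lowest and highest
        random.shuffle(remaining)
        pre_screening, interview, employment = remaining
        return {
            "pre_screening_score": pre_screening,
            "interview_score": interview,
            "employment_score": employment,
        }

    # Example: a candidate who solved 3, 7, 5, 9 and 4 anagrams across the five tasks
    print(build_profile([3, 7, 5, 9, 4]))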

In the main phase of the experiment, participants will be asked either to make hiring decisions or to make performance estimations, given sets of pre-screened candidate profiles drawn from the data collected in the preliminary phase. Each participant in this phase will be assigned to one of the four experimental treatments: Baseline_colour (Treatment 1), with a randomly assigned colour (red or green) for each candidate but with neither a soft AA policy nor information about candidates’ ethnicities; Baseline_minority (Treatment 2), without a soft AA policy but with revealed information on candidates’ ethnicities (White or East Asian); SAA_lucky (Treatment 3), with a soft AA policy for a randomly selected “lucky” group, namely the candidates assigned the red colour; and SAA_minority (Treatment 4), with a soft AA policy for the ethnic minority group. Note that we will only recruit participants from the ethnic majority group as “employers/managers”. Each set of four candidate profiles is pre-screened from 12 randomly selected preliminary-phase participants (with similar average performances) through a treatment-specific pre-screening process based on the candidates’ pre-screening scores. The employer/manager is given the interview performances of all four candidates and is asked either to select one candidate to hire (in the hiring task condition) or to estimate the actual employment performance of all four candidates (in the estimation task condition). In addition to the interview performances and the randomly assigned group colour (Treatments 1 and 3) or ethnicity information reflected through surnames (Treatments 2 and 4), the employer/manager will also be given each candidate’s age, Prolific ID and priority status (Treatments 3 and 4).
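The registration does not spell out the treatment-specific pre-screening rule, so the sketch below is purely illustrative: it assumes that, without an SAA policy, the four highest pre-screening scores out of the 12 are shortlisted, and that the SAA policy reserves one slot for the best-scoring candidate from the targeted group if none would otherwise be shortlisted. All names and the reserved-slot rule are our assumptions, not the registered design.

    def pre_screen(pool, n_slots=4, saa_target=None):
        # `pool` is a list of 12 candidate dicts with "pre_screening_score"
        # and "group" keys (illustrative structure). Returns the shortlist
        # shown to the employer/manager.
        ranked = sorted(pool, key=lambda c: c["pre_screening_score"], reverse=True)
        shortlist = ranked[:n_slots]
        if saa_target is not None and all(c["group"] != saa_target for c in shortlist):
            targeted = [c for c in ranked if c["group"] == saa_target]
            if targeted:
                # Assumed "soft" rule: swap the lowest-ranked shortlisted
                # candidate for the best-scoring targeted candidate.
                shortlist = shortlist[:-1] + [targeted[0]]
        return shortlist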

Experimental Design Details
Hypotheses (on the Performance Estimation Task):
(1) Baseline_minority (Treatment 2) vs. Baseline_colour (Treatment 1): On average, minority candidates receive lower performance estimations than majority candidates in Treatment 2 due to the existing Stereotype Effect, whereas estimations are similar for red and green candidates in Treatment 1.
(2) SAA_lucky (Treatment 3) vs. Baseline_colour (Treatment 1): Red candidates on average receive lower performance estimations than green candidates in Treatment 3 due to a negative Perception Effect of the SAA policy, whereas this difference does not exist in Treatment 1.
(3) SAA_minority (Treatment 4) vs. Baseline_minority (Treatment 2): Minority candidates in Treatment 4 on average receive lower performance estimations than minority candidates in Treatment 2, due to a negative Perception Effect of the SAA policy.

Hypotheses (on the Hiring Decision):
(1) Baseline_minority (Treatment 2) vs. Baseline_colour (Treatment 1): Minority candidates are less likely to be hired than majority candidates in Treatment 2, potentially due to both the existing Stereotype Effect and the Taste-based Discrimination Effect, whereas these effects do not exist in Treatment 1.
(2) SAA_lucky (Treatment 3) vs. Baseline_colour (Treatment 1): Red candidates are less likely to be hired than green candidates in Treatment 3, due to a combination of the negative Perception Effect and the Backfire Effect.
(3) SAA_minority (Treatment 4) vs. Baseline_minority (Treatment 2): Minority candidates in Treatment 4 are less likely to be hired than minority candidates in Treatment 2, due to a combination of the negative Perception Effect and the Backfire Effect.
Randomization Method
Randomisation was done by a computer through oTree.
Randomization Unit
Individual participant
Was the treatment clustered?
No

Experiment Characteristics

Sample size: planned number of clusters
150 participants in the preliminary phase and 100 participants per treatment per task in the second phase (950 participants in total, of whom 800 are in the main experiment in phase 2)
Sample size: planned number of observations
4 × 100 × 4 = 1,600 performance estimation decisions and 100 × 4 = 400 hiring decisions.
Sample size (or number of clusters) by treatment arms
150 participants in the preliminary phase, and 100 participants per treatment per task (800 clusters in total) in the main experiment.
Minimum detectable effect size for main outcomes (accounting for sample design and clustering)
IRB

Institutional Review Boards (IRBs)

IRB Name
University Human Research Ethics Committee
IRB Approval Date
2021-12-08
IRB Approval Number
4631 - HE09
Analysis Plan

Analysis Plan Documents

Analysis plan for pre-registration.docx

MD5: c1b53715f319bbc094c0a7008314d871

SHA1: 96aef33ba9a88226146a8933ba7685e11a4b5aae

Uploaded At: September 26, 2022

Post-Trial

Post Trial Information

Study Withdrawal

Intervention

Is the intervention completed?
No
Data Collection Complete
Data Publication

Data Publication

Is public data available?
No

Program Files

Program Files
Reports, Papers & Other Materials

Relevant Paper(s)

Reports & Other Materials