Racial Animus and Support for Labor Market Policies

Last registered on June 24, 2024

Pre-Trial

Trial Information

General Information

Title
Racial Animus and Support for Labor Market Policies
RCT ID
AEARCTR-0013774
Initial registration date
June 06, 2024


First published
June 24, 2024, 12:19 PM EDT


Locations

Primary Investigator

Affiliation
Middlebury College

Other Primary Investigator(s)

Additional Trial Information

Status
In development
Start date
2024-06-15
End date
2025-06-30
Secondary IDs
Prior work
This trial does not extend or rely on any prior RCTs.
Abstract
Once African Americans gained access to public provisions such as swimming pools, parks, and desegregated education, support for these provisions declined. A nascent literature argues that racism and a backlash against the civil rights movement also contributed to the decline of the U.S. social safety net. Using two parallel experiments, we study the causal effects of providing information about the number of Black people receiving welfare and unemployment insurance benefits on support for these policies. We use participants' confidence in their pre-treatment beliefs to measure treatment intensity, and we study the interaction of the treatment with individual-level racial bias, both explicit and implicit. Decreased program support following information that there are more Black recipients than expected (compared to a control) would indicate a causal relationship between racism and policy preferences.
External Link(s)

Registration Citation

Citation
Carpenter, Jeffrey. 2024. "Racial Animus and Support for Labor Market Policies." AEA RCT Registry. June 24. https://doi.org/10.1257/rct.13774-1.0
Experimental Details

Interventions

Intervention(s)
Randomly providing true information about the racial composition of TANF and Unemployment Insurance recipients to a subset of participants and observing subsequent support for these programs.
Intervention (Hidden)
Using two parallel experiments, we intend to study the causal effects of providing information about the number of Black people receiving benefits from two labor market policies (Temporary Assistance for Needy Families (sometimes referred to as welfare) and Unemployment Insurance) on support for these policies.

Each experiment will be conducted online (using Connect or Prolific) and will consist of three parts, each separated by two weeks. In the first part (lasting about 6 minutes), we will elicit basic demographic information, whether participants have previously benefited from either policy, political preferences, and both implicit (measured using an Implicit Association Test) and explicit attitudes towards race (measured through a stated relative preference for Black versus White people). Respondents will be paid $1.25 for their participation ($15 per hour).

In the second part (lasting 5 minutes and paying a base wage of $1), participants will be randomly sorted into one of four conditions comprising two parallel experiments. In the welfare policy experiment, participants will be asked to estimate the fraction of welfare recipients in 2021 who were Black and how confident they are in this estimate, and then they will be asked about their support for welfare. Before reporting their support, half of these participants (selected at random) will be shown the correct answer, allowing us to assess the causal effect of providing this information on their welfare support while controlling for their priors. The unemployment insurance experiment will be similar; however, instead of eliciting beliefs about welfare recipients, we will ask participants to state their beliefs about how many Black people used unemployment insurance in 2021 and how confident they are in this estimate, and then randomly provide half of them with the correct number before eliciting their support, in order to estimate the causal effect of this information on support for unemployment insurance.

In each experiment, participants are incentivized to provide the correct belief, with a bonus of $1 (in a five-minute experiment) for providing a belief that is within two percentage points on either side of the true number.

The third part of the experiment will test the persistence of both the intervention and the treatment effects found in the second part. We will elicit incentivized beliefs about, and support for, both policies, which will allow us, in exploratory analysis, to consider the role of belief spillovers. Returning participants will be paid another $1 in base compensation, with the opportunity to earn another $1 in incentives. All belief bonus payments will be paid after the third part of the study is completed so that bonus payments cannot signal information about the correct racial composition to participants in the control conditions.
Intervention Start Date
2024-06-15
Intervention End Date
2024-12-31

Primary Outcomes

Primary Outcomes (end points)
For each experiment, there are two primary outcomes, measured at the end of the second part of the experiment. The first is (unincentivized) policy support: "Compared to current levels, you think that welfare (unemployment) benefits should be ...", with responses collected using a slider ranging from -100% (end the benefits) to +100% (double them). The second is intended to reduce possible experimenter demand effects and to provide a real-stakes signal of support for welfare (unemployment) programs: participants will be able to donate some fraction of a second $1 bonus to another randomly selected participant who has received cash assistance sometime during the last five years (in the welfare experiment) or who has collected unemployment benefits sometime during the last five years (in the unemployment insurance experiment).
Primary Outcomes (explanation)

Secondary Outcomes

Secondary Outcomes (end points)
Secondary Outcomes (explanation)

Experimental Design

Experimental Design
After first eliciting explicit and implicit measures of discrimination, online participants will be randomly assigned to either 1) receive information about the racial composition of recipients of TANF and unemployment insurance or 2) receive no information. Following this treatment, we elicit their support for TANF and unemployment insurance.
Experimental Design Details
Using two parallel experiments, we intend to study the causal effects of providing information about the number of Black people receiving benefits from two labor market policies (Temporary Assistance for Needy Families (sometimes referred to as welfare) and Unemployment Insurance) on support for these policies.

Each experiment will be conducted online (using Connect or Prolific) and will consist of three parts, each separated by two weeks. In the first part (lasting about 6 minutes), we will elicit basic demographic information, whether participants have previously benefited from either policy, political preferences, and both implicit (measured using an Implicit Association Test) and explicit attitudes towards race (measured through a stated relative preference for Black versus White people). Respondents will be paid $1.25 for their participation ($15 per hour).

In the second part (lasting 5 minutes and paying a base wage of $1), participants will be randomly sorted into one of four conditions comprising two parallel experiments. In the welfare policy experiment, participants will be asked to estimate the fraction of welfare recipients in 2021 who were Black and how confident they are in this estimate, and then they will be asked about their support for welfare. Before reporting their support, half of these participants (selected at random) will be shown the correct answer, allowing us to assess the causal effect of providing this information on their welfare support while controlling for their priors. The unemployment insurance experiment will be similar; however, instead of eliciting beliefs about welfare recipients, we will ask participants to state their beliefs about how many Black people used unemployment insurance in 2021 and how confident they are in this estimate, and then randomly provide half of them with the correct number before eliciting their support, in order to estimate the causal effect of this information on support for unemployment insurance.

In each experiment, participants are incentivized to provide the correct belief, with a bonus of $1 (in a five-minute experiment) for providing a belief that is within two percentage points on either side of the true number.

The third part of the experiment will test the persistence of both the intervention and the treatment effects found in the second part. We will elicit incentivized beliefs about, and support for, both policies, which will allow us, in exploratory analysis, to consider the role of belief spillovers. Returning participants will be paid another $1 in base compensation, with the opportunity to earn another $1 in incentives. All belief bonus payments will be paid after the third part of the study is completed so that bonus payments cannot signal information about the correct racial composition to participants in the control conditions.
Randomization Method
Randomization into treatment arms is performed by Qualtrics.
Randomization Unit
Individual
Was the treatment clustered?
No

Experiment Characteristics

Sample size: planned number of clusters
Approximately 2800 individuals.
Sample size: planned number of observations
Approximately 2800 individuals.
Sample size (or number of clusters) by treatment arms
Between 600 and 700 observations for each of the 4 experimental arms (TANF treatment; TANF control; Unemployment Insurance treatment; Unemployment Insurance control)
Minimum detectable effect size for main outcomes (accounting for sample design and clustering)
With the approval of the Middlebury College IRB (Protocol 125, Policy views and beliefs about racial composition), we conducted a pilot experiment in December 2023 with a total of 380 participants who completed parts one and two of the experiment. Our power calculations, based on the results of this pilot, are consistent with the rule of thumb offered in Haaland et al. (2023), who suggest gathering enough observations to detect a 0.15 standard deviation effect (with power = 0.8, significance = 0.05). Using this benchmark, our pilot indicates that we should gather between 600 and 700 observations per experimental arm, for a total of approximately 2,800.
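The stated benchmark can be reproduced with a standard two-sample power calculation. A minimal sketch in Python using the statsmodels package (our choice of tool here; the registration does not specify software):

```python
from statsmodels.stats.power import TTestIndPower

# Rule of thumb from Haaland et al. (2023): sample size needed to detect
# a 0.15 SD effect with 80% power at a 5% two-sided significance level,
# assuming equal-sized treatment and control arms.
n_per_arm = TTestIndPower().solve_power(
    effect_size=0.15,        # minimum detectable effect in SD units
    alpha=0.05,              # two-sided significance level
    power=0.8,               # desired statistical power
    ratio=1.0,               # equal allocation across arms
    alternative="two-sided",
)
print(round(n_per_arm))  # roughly 699 participants per arm
```

With four experimental arms, this implies roughly 2,800 participants in total, consistent with the 600-700 observations per arm stated above.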
IRB

Institutional Review Boards (IRBs)

IRB Name
Middlebury College
IRB Approval Date
2024-04-29
IRB Approval Number
Protocol #125
Analysis Plan

There is information in this trial unavailable to the public.

Post-Trial

Post Trial Information

Study Withdrawal

There is information in this trial unavailable to the public.

Intervention

Is the intervention completed?
No
Data Collection Complete
Data Publication

Data Publication

Is public data available?
No

Program Files

Program Files
Reports, Papers & Other Materials

Relevant Paper(s)

Reports & Other Materials