The impact of narratives and conceptual frames on economic and government attitudes: Evidence regarding negative externality

Last registered on June 06, 2022

Pre-Trial

Trial Information

General Information

Title
The impact of narratives and conceptual frames on economic and government attitudes: Evidence regarding negative externality
RCT ID
AEARCTR-0008002
Initial registration date
July 25, 2021

First published
July 28, 2021, 5:50 PM EDT

Last updated
June 06, 2022, 3:22 PM EDT

Locations

Region

Primary Investigator

Affiliation
Bates College

Other Primary Investigator(s)

PI Affiliation
Santa Clara University

Additional Trial Information

Status
Completed
Start date
2021-07-26
End date
2021-12-31
Secondary IDs
Prior work
This trial does not extend or rely on any prior RCTs.
Abstract
This work involves the online implementation of a survey experiment with random assignment. Each participant is presented with five different vignettes in which firms or consumers generate negative externalities and the government intervenes to address the external costs. Participants are asked to rate the fairness of the externality, the government intervention, and the situation that exists once the intervention is in place. The vignettes, inspired by the work of Kahneman, Knetsch, and Thaler (1986), are designed to test individuals’ responses to the framing and characteristics of the negative externality and the government response to the externality.
External Link(s)

Registration Citation

Citation
Goff, Sandra and John Ifcher. 2022. "The impact of narratives and conceptual frames on economic and government attitudes: Evidence regarding negative externality." AEA RCT Registry. June 06. https://doi.org/10.1257/rct.8002-3.0
Experimental Details

Interventions

Intervention(s)
Our survey presents participants with five vignettes involving negative externalities. Each vignette describes a different hypothetical scenario in which the details of a negative externality are varied through random assignment. Although many details of the externality are randomly varied, allowing us to understand the relationship between these details and fairness ratings, we are specifically interested in the effects of three framing interventions.
Intervention Start Date
2021-07-26
Intervention End Date
2021-08-27

Primary Outcomes

Primary Outcomes (end points)
Fairness ratings: Entity, Government, Situation
Ratings are expressed on a scale from 1 = Very Unfair to 7 = Very Fair. In some models, fairness is also recoded as a binary indicator: 1 = Fair, 0 = Not Fair (Unfair or Neutral).
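As a concrete illustration of this recoding, here is a minimal sketch (not the study's analysis code). It assumes the responses sit in a pandas DataFrame and that a rating of 5 or higher counts as Fair, i.e., that 4 is treated as the neutral midpoint; the column name fairness_entity is hypothetical.

```python
import pandas as pd

# Illustrative ratings on the 1 = Very Unfair ... 7 = Very Fair scale
df = pd.DataFrame({"fairness_entity": [1, 3, 4, 5, 7]})

# Binary recoding: 1 = Fair (assumed to be ratings 5-7),
# 0 = Not Fair (unfair or neutral, assumed to be ratings 1-4)
df["fairness_entity_fair"] = (df["fairness_entity"] >= 5).astype(int)
print(df)
```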
Primary Outcomes (explanation)

Secondary Outcomes

Secondary Outcomes (end points)
Secondary Outcomes (explanation)

Experimental Design

Experimental Design
Step 1: Recruit participants via MTurk.
Step 2: Present participants with five vignettes via an online survey hosted by Qualtrics, Inc. Each vignette details a scenario in which there is a negative externality. Aspects of the externalities are randomly varied.
Step 3: Participants are asked to respond to a set of questions regarding their attitudes and beliefs.
Step 4: Participants answer a set of sociodemographic questions.
Experimental Design Details
STEP 1: Recruitment
Participants are recruited through Amazon Mechanical Turk (MTurk) using the online service Cloud Research. A brief description of the task and compensation offered will be provided to the MTurk Worker. If a Worker chooses to participate, they will accept the task and click on the survey link.

STEP 2: Informed Consent
Participants who accept the task on MTurk will be directed to a survey experiment implemented using Qualtrics. Participants will first encounter the informed consent document. Participants who would like to proceed with the study will choose the appropriate option at the bottom of the screen. Those who do not wish to participate can choose to exit the study at this time. When they exit, they are reminded that they will not receive compensation and are asked to return the assignment (referred to on MTurk as a Human Intelligence Task, or HIT).

STEP 3: Context-Free Scenario
All participants first receive a generic scenario in which they are asked to rate the fairness of the actions of an externality-producing entity and of a government entity attempting to address the externality, on a scale from 1 (Very Unfair) to 7 (Very Fair). Phrases within the scenario are randomized to control for a set of characteristics and to test the effects of the framing of the externality. For example, we randomize (i) the entity creating the externality (a consumer, or a producer: a local business, multinational corporation, or public utility), (ii) the scale of the external costs (how many bystanders are affected and how far-reaching the damages are), and (iii) the ability of the entity to avoid creating the externality (the cost of producing a clean good and the availability of close substitutes).

To determine the effects of framing on fairness attitudes we vary the description of the externality in three ways. First, the externality-producing entity is described as (i) stealing bystanders’ well-being, (ii) expropriating bystanders’ well-being, (iii) reducing the bystanders’ well-being as a byproduct, and (iv) reducing the bystanders’ well-being while making the externality-producing entity better off. Second, the entity is described as either (i) neglecting to consider the external costs of their actions, or (ii) not considering the external costs of their actions. Finally, the effect of government intervention (randomly chosen from (i) implementation of a tax, (ii) implementation of a technological requirement, or (iii) prohibition of the externality-causing activity) is described in one of two ways: (i) after the government intervention, the social optimum is reached and the benefits to society as a whole are maximized, or (ii) after the government intervention, the social optimum is reached and the benefits to society as a whole are maximized, but some external costs and their consequences still exist.
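The sketch below shows one way the independent draws over these framing dimensions (and the intervention type) could be represented. It is an illustration of the factorial assignment, not the actual Qualtrics Survey Flow, and the level labels are paraphrases of the conditions described above.

```python
import random

# Paraphrased labels for the randomized framing dimensions and the intervention type
EXTERNALITY_FRAMES = ["stealing", "expropriating", "byproduct", "byproduct_better_off"]
CONSIDERATION_FRAMES = ["neglects_to_consider", "does_not_consider"]
OUTCOME_FRAMES = ["optimum_reached", "optimum_reached_costs_remain"]
INTERVENTIONS = ["tax", "technology_requirement", "prohibition"]

def draw_conditions(rng: random.Random) -> dict:
    """Draw one participant's vignette conditions, each dimension drawn independently."""
    return {
        "externality_frame": rng.choice(EXTERNALITY_FRAMES),
        "consideration_frame": rng.choice(CONSIDERATION_FRAMES),
        "outcome_frame": rng.choice(OUTCOME_FRAMES),
        "intervention": rng.choice(INTERVENTIONS),
    }

rng = random.Random(2021)  # seeded only so this sketch is reproducible
print(draw_conditions(rng))
```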

A comprehension question is asked alongside the presentation of the scenario and participants are reminded that their bonus payment will be determined by these comprehension checks.

STEP 4: Real-Life Scenarios
Next, participants receive four additional vignettes that describe real-life scenarios: (i) the dumping of pesticides into a river in a developing country, (ii) the dumping of pesticides into a river in Wisconsin, (iii) air pollution created by an electricity producer in Ohio, and (iv) the pollution of an estuary by consumers using phosphorus-containing detergents. In each scenario, the government intervenes to address the externality. As in the context-free scenario, participants receive a random set of conditions within their vignette.

A comprehension question is asked alongside the presentation of each of these scenarios and participants are again reminded that their bonus payment will be determined by these comprehension checks.

STEP 5: Market and Environmental Attitudes (To control for order effects, half of the participants receive these blocks of questions prior to STEP 3)
During this step, participants answer 12 questions about their market attitudes and 8 questions about their environmental attitudes. The order of the market-attitudes and environmental-attitudes blocks is randomized, and the question order within each block is also randomized.

STEP 6: Socio-Demographic Information
Next, participants are asked to answer some questions about themselves, including educational attainment, race/ethnicity, gender, income, political ideology, religiosity, etc.

STEP 7: End of Study Message & Procedures
Finally, participants receive an end-of-study message containing a confirmation code that they are asked to enter into MTurk to receive payment for their task. At this time, participants are also informed about the amount of their bonus earned from answering the comprehension questions that were asked with each of the five scenarios.
Randomization Method
Randomization is performed by Qualtrics within the Survey Flow
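One simple post hoc check on this assignment, sketched below with a hypothetical framing_cell column, is a chi-square goodness-of-fit test that the Qualtrics randomization produced roughly uniform cell counts; this is an illustration, not part of the registered procedure.

```python
import pandas as pd
from scipy.stats import chisquare

# One row per participant; "framing_cell" is a hypothetical label for the assigned combination
df = pd.DataFrame({"framing_cell": ["A", "B", "C", "A", "B", "C", "A", "B", "C", "A"]})

counts = df["framing_cell"].value_counts()
stat, pvalue = chisquare(counts)  # null hypothesis: equal expected counts in every cell
print(counts.to_dict(), round(pvalue, 3))
```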
Randomization Unit
Individual
Was the treatment clustered?
No

Experiment Characteristics

Sample size: planned number of clusters
200 in Stage 1, 1,000 in Stage 2
Sample size: planned number of observations
1,200
Sample size (or number of clusters) by treatment arms
Crossing the three framing interventions yields 18 possible combinations of framing conditions. There will be approximately 66 participants per unique combination of framing conditions.
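(1,200 planned participants / 18 framing cells ≈ 66.7 participants per cell.)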
Minimum detectable effect size for main outcomes (accounting for sample design and clustering)
For a multiple regression model with approximately 22 predictors, a minimum sample size of 1,102 participants is required to detect an anticipated effect size of 0.02 at α = 0.05 with power of 0.80. Recruiting 1,200 participants increases the chance that we will remain well powered after exclusions.
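A power calculation in this spirit can be sketched with the noncentral F distribution, assuming the stated effect size of 0.02 refers to Cohen's f² for the overall F-test of the regression; that interpretation is assumed here for illustration and is not stated in the registration.

```python
from scipy.stats import f as f_dist, ncf

def regression_power(n, n_predictors=22, f2=0.02, alpha=0.05):
    """Power of the overall regression F-test, with noncentrality lambda = f2 * n."""
    df_num = n_predictors
    df_den = n - n_predictors - 1
    crit = f_dist.ppf(1 - alpha, df_num, df_den)
    return 1 - ncf.cdf(crit, df_num, df_den, f2 * n)

# Smallest n that reaches 80% power under these assumptions
n = 100
while regression_power(n) < 0.80:
    n += 1
print(n, round(regression_power(n), 3))
```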
IRB

Institutional Review Boards (IRBs)

IRB Name
Skidmore College Institutional Review Board
IRB Approval Date
2021-07-16
IRB Approval Number
2107-972
Analysis Plan

There is information in this trial unavailable to the public.

Post-Trial

Post Trial Information

Study Withdrawal

There is information in this trial unavailable to the public.

Intervention

Is the intervention completed?
No
Data Collection Complete
Data Publication

Data Publication

Is public data available?
No

Program Files

Program Files
Reports, Papers & Other Materials

Relevant Paper(s)

Reports & Other Materials