Panic and Preferences

Last registered on July 03, 2020

Pre-Trial

Trial Information

General Information

Title
Panic and Preferences
RCT ID
AEARCTR-0005639
Initial registration date
April 02, 2020


First published
April 03, 2020, 9:11 AM EDT


Last updated
July 03, 2020, 10:08 PM EDT


Locations

Region

Primary Investigator

Affiliation
MIT

Other Primary Investigator(s)

Additional Trial Information

Status
Completed
Start date
2020-03-19
End date
2020-04-04
Secondary IDs
Abstract
We perform two experiments to measure how government declarations and actions affect mental states and behavior. To manipulate the (true) information that participants receive, we will randomize exposure to news about government actions. We will then analyze how the nature of this information affects mental states and behavior.
External Link(s)

Registration Citation

Citation
Rafkin, Charlie. 2020. "Panic and Preferences." AEA RCT Registry. July 03. https://doi.org/10.1257/rct.5639-2.0
Experimental Details

Interventions

Intervention(s)
We run two separate experiments, with distinct samples, simultaneously. Both experiments manipulate exposure to information about the government.
Intervention Start Date
2020-04-02
Intervention End Date
2020-04-04

Primary Outcomes

Primary Outcomes (end points)
Anxiety measured via the Beck Anxiety Inventory, aggregated into one index
Update in predictions about how severe COVID-19 will be (log predicted number of deaths)
Demand for information
Performance on a data entry task
Willingness to pay as a measure of hoarding behavior
Heterogeneity by:
- Baseline Trump support (three groups: oppose Trump, support Trump, undecided)
- Baseline belief that the government has or acts on own information
- Baseline belief about severity of COVID-19 (predicted number of deaths, predicted death rate).

Additional primary outcomes for Strong vs. Weak:
- Update in predictions about how severe COVID-19 will be (death rate)

Additional primary outcomes for Steady vs. Changing:
- Update in support for Donald Trump



Primary Outcomes (explanation)
The anxiety measure is a standard questionnaire commonly used in psychiatry, which we adapted to focus on today.
Support for Donald Trump is measured as the confidence, on a scale from 0 to 10, that the participant will vote for Trump in the 2020 Presidential election.
Predictions about how severe COVID-19 will be (total number of deaths in the US, rate of death among younger and older people infected) are elicited before and after information provision. These predictions are incentivized for accuracy.
Willingness to pay is elicited for a notification service about when hand sanitizer, N95 masks, pasta, and sunscreen (as a placebo) become available again on Amazon.
Our primary measure of willingness to pay uses a log or sinh transform to handle outliers.
Demand for information: we promise to show participants a link to an article at the end of the study. The article can be on four subjects: it can provide: (i) cute animal pictures, (ii) information about COVID cases and deaths in the United States, (iii) information about the effect of the Senate CARES bill on health insurance coverage, and (iv) information about wellness and stress-reduction. Participants choose the article they most want to receive.
Data entry: In a first task, participants are given a list of metropolitan areas’ populations. We ask participants to sort the metro areas by population from largest to smallest. In a second task, participants are given a list of positive COVID-19 tests in a list of states. We ask participants to sort the states by positive COVID-19 tests, from largest to smallest. For both tasks, we measure accuracy and speed, as well as a combined index. Participants are randomized to complete one of the two data entry tasks.
We study heterogeneous treatment effects by: (i) baseline Trump support, (ii) belief that the government acts on its own private information, and (iii) baseline beliefs about COVID severity. We bucket Trump support into three groups: opposed to Trump, undecided, and supports Trump.
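The log/sinh transform mentioned above for willingness to pay can be sketched as follows. This is a minimal illustration with hypothetical dollar values, assuming the inverse hyperbolic sine (asinh) as the "sinh transform"; the registration does not specify the exact implementation.

```python
import numpy as np

def transform_wtp(wtp, method="asinh"):
    """Compress right-skewed willingness-to-pay values to limit outlier influence.

    asinh(x) = log(x + sqrt(x^2 + 1)) behaves like log(2x) for large x
    but, unlike log, is defined at zero.
    """
    wtp = np.asarray(wtp, dtype=float)
    if method == "asinh":
        return np.arcsinh(wtp)
    elif method == "log":
        return np.log1p(wtp)  # log(1 + x), also defined at zero
    raise ValueError(f"unknown method: {method}")

# Hypothetical responses in dollars, including a zero and an outlier.
print(transform_wtp([0.0, 1.0, 5.0, 200.0]))
```

Either variant keeps zero bids in the sample, which a plain log transform would drop.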

Secondary Outcomes

Secondary Outcomes (end points)
Updates in perception of how the government is handling the crisis
Change in width of confidence intervals about predicting the number of deaths from coronavirus in the US
Update in predictions about the stock market
Change in self-reported uncertainty about the stock market
Anxiety decomposition
Risk aversion
An exploratory empirical strategy will use our treatments as instrumental variables for anxiety in order to explore the causal effects of anxiety on economic preferences, as measured by the Global Preference Survey and a dictator game. Because the exclusion restriction seems imperfectly satisfied, this approach will be secondary.
The number of self-reported people the person plans to meet with outside the household
Heterogeneity by the same sources as in primary outcomes:
- Baseline Trump support (three groups: oppose Trump, support Trump, undecided)
- Baseline belief that the government has or acts on own information
- Baseline belief about severity of COVID-19 (predicted number of deaths, predicted death rate)
Additional heterogeneity of primary and secondary outcomes by the baseline amount of news the participant has consumed about COVID-19.

Additional secondary outcomes for Strong vs. Weak:
- Update in support for Donald Trump and the government
Additional secondary outcomes for Steady vs. Changing:
- Update in predictions about how severe COVID-19 will be (death rate)
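The exploratory instrumental-variables strategy listed above, which uses random treatment assignment as an instrument for anxiety, amounts to two-stage least squares. The sketch below uses simulated, hypothetical data and a hand-rolled 2SLS with NumPy; it is an illustration of the estimator, not the study's actual estimation code.

```python
import numpy as np

rng = np.random.default_rng(0)
n = 2000

# Hypothetical data: z is the random treatment (the instrument), anxiety is
# partly moved by treatment, and the preference outcome depends on anxiety
# plus an unobserved confounder u that biases naive OLS.
z = rng.integers(0, 2, n).astype(float)
u = rng.normal(size=n)
anxiety = 0.5 * z + u + rng.normal(size=n)
outcome = -0.3 * anxiety + u + rng.normal(size=n)

def two_sls(y, x, z):
    """Two-stage least squares: one endogenous regressor, one instrument."""
    Z = np.column_stack([np.ones_like(z), z])
    # First stage: project the endogenous regressor onto the instrument.
    x_hat = Z @ np.linalg.lstsq(Z, x, rcond=None)[0]
    # Second stage: regress the outcome on the fitted values.
    X_hat = np.column_stack([np.ones_like(x_hat), x_hat])
    return np.linalg.lstsq(X_hat, y, rcond=None)[0]

beta = two_sls(outcome, anxiety, z)
print(beta[1])  # should be near the true effect of -0.3
```

The exclusion-restriction concern flagged in the registration corresponds to the assumption here that z affects the outcome only through anxiety.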

Secondary Outcomes (explanation)
We ask participants about how well the government is managing the crisis (e.g., overreacting vs. taking appropriate action)
Anxiety decomposition: We ask participants to self-report whether they are anxious because of: economic fallout from COVID (effect on themselves and others), consequences of quarantines, the health effects of COVID (becoming sick themselves or others becoming sick), political consequences of COVID, or general COVID-related chaos.
We elicit predictions about the value of the Dow Jones Industrial Average in October 2020. We reward predictions for accuracy.
Risk aversion: participants face a binary choice between $15 for sure and a 50-50 gamble paying $0 or $30.
We ask participants to report the number of people they plan to meet for social purposes outside the household. Our objective is to study how the treatment affects how people adhere to self-reported social distancing guidelines.
We add additional heterogeneity by the self-reported amount of news the participant has consumed about COVID-19; our information may be more surprising to people who have consumed less news.

Experimental Design

Experimental Design
We randomize (truthful) information about statements given and actions taken by the government.
Experimental Design Details
We will recruit participants through luc.id, an online platform which gathers a representative sample of the US population.
We assign participants to treatments using stratified random sampling. We define 18 strata, based on age (3 levels), political party (3 levels), and sex (2 levels). Within each stratum we perform a 3-step randomization of participants into one of two experiments, further into one of two treatment arms, and finally into one of several treatment variations.
In both experiments, the randomized portion of treatment consists of four distinct messages delivered in two doses of two messages. We deliver one dose of two messages and measure a first set of outcomes; then we deliver a second dose of messages and measure further outcomes. We randomize the order of these messages between one of 6 orderings in the Strong vs. Weak experiment and one of 2 orderings in the Steady vs. Changing experiment.
Within a stratum, each step of randomization is sequential based on arrival time and nested within the previous step. Among the first two participants to arrive in a stratum, exactly one will enter each experiment. Then, among the first two within each experiment, exactly one will enter each treatment arm. Finally, among the first six (in Strong/Weak) and first two (in Steady/Changing) in each treatment arm, exactly one will see each ordering of the treatments.
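The nested, arrival-order randomization described above can be sketched as permuted-block assignment within strata: each level (experiment, arm, ordering) draws from a shuffled block containing every option once, refilled when exhausted. Names and bookkeeping here are hypothetical; the actual assignment was implemented in Qualtrics.

```python
import random
from collections import defaultdict

random.seed(0)

# Treatment structure from the design: 2 experiments, 2 arms each, and
# 6 message orderings (Strong/Weak) or 2 orderings (Steady/Changing).
ORDERINGS = {"strong_weak": 6, "steady_changing": 2}

# One queue of pre-shuffled assignments per (level, stratum) key; each block
# contains every option exactly once, so consecutive arrivals are balanced.
queues = defaultdict(list)

def draw(key, options):
    if not queues[key]:  # refill with a freshly shuffled block
        block = list(options)
        random.shuffle(block)
        queues[key] = block
    return queues[key].pop()

def assign(age_group, party, sex):
    """Stratum -> experiment -> arm -> message ordering, nested as in the design."""
    stratum = (age_group, party, sex)
    experiment = draw(("exp", stratum), ["strong_weak", "steady_changing"])
    arm = draw(("arm", stratum, experiment), [0, 1])
    ordering = draw(("ord", stratum, experiment, arm), range(ORDERINGS[experiment]))
    return experiment, arm, ordering

# The first two arrivals in a stratum land in different experiments.
first, second = assign("18-34", "dem", "f"), assign("18-34", "dem", "f")
print(first, second)
```

The nesting mirrors the registration's guarantee that among the first two arrivals in a stratum exactly one enters each experiment, and so on down each level.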
In the Strong vs. Weak experiment, we randomize participants to an information treatment about the extent of the federal government’s response to COVID-19 (stronger or weaker). For example, a participant in the strong response may be told that the Trump administration may use wartime powers to procure emergency medical supplies. We have four pieces of information in the “weak response” treatment and four pieces of information in the “strong response” treatment. In the Steady vs. Changing experiment, we randomize participants between information that does or does not highlight changes in government policy. In the Steady treatment arm, we provide participants with a piece of information that the federal government is taking a strong action against COVID-19. In the Changing arm, we provide the same information, but we also tell participants about the federal government’s prior response which differed from this strong action. Last, in both arms, we provide participants with information about the federal government’s prediction that there will be 100,000–240,000 deaths from COVID-19.
In both experiments, we solicit baseline knowledge and beliefs about the severity of the COVID-19 crisis in the US (in terms of health and economic effects) and the federal government’s response.
After the first dose of treatment, we record participants’ anxiety using a clinical anxiety measure (the Beck Anxiety Inventory). To measure belief updating, we re-prompt participants to provide their knowledge and beliefs about the severity of the COVID crisis (in terms of health and economic effects). Next, we elicit demand for information by informing participants that we will provide them a link to an article on one of four subjects at the end of the survey: (i) cute animal pictures, (ii) information about COVID cases and deaths in the United States, (iii) information about the effect of the Senate CARES bill on health insurance coverage, and (iv) information about wellness and stress-reduction.

After eliciting these outcomes, we provide the remaining statements within the treatment group. In the Strong vs. Weak experiment, we provide another set of statements detailing strong (or weak) government actions. In the Steady vs. Changing experiment, we provide another example of Steady or Changing government action.

Next, we elicit participants’ choice between a 50-50 lottery (paying either $0 or $30) and $15 for sure. Participants then complete a data entry task: in one task, they sort metropolitan areas by population, and in the other, they sort states by the number of positive COVID-19 tests as of March 26. Participants are randomized into which data entry task they complete. We ask participants to provide self-reports of possible concerns they may have about the coronavirus crisis (e.g., that it may cause them to get sick); the purpose of this exercise is to decompose why they may feel anxious. Finally, we conclude the survey by asking participants how many people they plan to see socially outside the household, in order to determine whether they plan to adhere to social distancing guidelines.
We also elicit updates about participants’ views about the government reaction to the crisis. In the Weak vs. Strong experiment, we elicit these at the end of the study. In the Steady vs. Changing experiment, we elicit these after the second treatment dose. We elicit these directly after treatment in Steady vs. Changing because it is a primary outcome in that experiment.
Randomization Method
Randomization in Qualtrics
Randomization Unit
Individual
Was the treatment clustered?
No

Experiment Characteristics

Sample size: planned number of clusters
2,000 per experiment
Sample size: planned number of observations
2,000 per experiment
Sample size (or number of clusters) by treatment arms
1,000
Minimum detectable effect size for main outcomes (accounting for sample design and clustering)
IRB

Institutional Review Boards (IRBs)

IRB Name
Massachusetts Institute of Technology Committee on the Use of Humans as Experimental Subjects
IRB Approval Date
2020-03-11
IRB Approval Number
E-2040

Post-Trial

Post Trial Information

Study Withdrawal


Intervention

Is the intervention completed?
No
Data Collection Complete
Data Publication

Data Publication

Is public data available?
No

Program Files

Program Files
Reports, Papers & Other Materials

Relevant Paper(s)

Reports & Other Materials