Preferences and Political Incentives
Last registered on March 19, 2020

Pre-Trial

Trial Information
General Information
Title
Preferences and Political Incentives
RCT ID
AEARCTR-0005575
Initial registration date
March 19, 2020
Last updated
March 19, 2020 12:16 PM EDT
Location(s)
Region
Primary Investigator
Affiliation
MIT
Other Primary Investigator(s)
Additional Trial Information
Status
In development
Start date
2020-03-19
End date
2020-03-26
Secondary IDs
Abstract
We will perform a survey experiment to measure how government declarations and actions affect mental states and social preferences. To manipulate the (true) information that participants receive, we will randomize exposure to news about government declarations and actions. We will then analyze how the nature of this information affects mental states, social preferences, and behavior.
External Link(s)
Registration Citation
Citation
Rafkin, Charlie. 2020. "Preferences and Political Incentives." AEA RCT Registry. March 19. https://doi.org/10.1257/rct.5575-1.0.
Experimental Details
Interventions
Intervention(s)
We cross-randomize participants between two binary treatments, resulting in four treatment arms.
Intervention Start Date
2020-03-19
Intervention End Date
2020-03-22
Primary Outcomes
Primary Outcomes (end points)
- Anxiety measured via GAD-7 and Beck Anxiety Inventory, collected as two sub-indices and one aggregate anxiety index.
- Willingness to pay for a notification service about when hand sanitizer and N95 masks become available again on Amazon
- We also ask about willingness to pay for notifications regarding availability of instant coffee. In our primary specification we will drop participants who state a willingness to pay for coffee notifications higher than 10 dollars.
- Update in support for Donald Trump and the government
- Update in predictions about how severe COVID-19 will be
- Heterogeneity by baseline Trump support and by belief that the government has, or acts on, its own private information
- Ability to memorize and correctly recall information about contagion risk mitigation and demand for further such information
- We include a criterion to drop survey participants who appear not to be paying reasonable attention to the survey. We will drop participants who complete the entire survey in less than 2 minutes.
- We will drop participants who fail both of two simple attention checks:
Check 1:
The color test is simple, when asked for your favorite color you must enter the word puce in the text box below.
Based on the text you read above, what color have you been asked to enter?
Check 2: It’s important that you pay attention to this study. Please select 7.
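The sample exclusions above (sub-2-minute completions, failing both attention checks, and the coffee-WTP screen from the primary specification) can be sketched in code. This is a minimal illustration: the column names (`duration_sec`, `check1_response`, `check2_response`, `wtp_coffee`) are hypothetical placeholders, not the actual survey export fields.

```python
import pandas as pd

def apply_exclusions(df: pd.DataFrame) -> pd.DataFrame:
    """Apply the pre-registered sample exclusions (illustrative column names)."""
    # Drop participants who completed the entire survey in less than 2 minutes.
    df = df[df["duration_sec"] >= 120]
    # Drop participants who fail BOTH attention checks (failing only one is allowed).
    fail_check1 = df["check1_response"].str.strip().str.lower() != "puce"
    fail_check2 = df["check2_response"] != 7
    df = df[~(fail_check1 & fail_check2)]
    # Primary specification: drop WTP for coffee notifications above 10 dollars.
    df = df[df["wtp_coffee"] <= 10]
    return df
```

Note that a respondent who fails only one of the two checks is retained, matching the "fail both" rule stated above.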
Primary Outcomes (explanation)
- Anxiety measures are standard questionnaires commonly used in psychiatry
- Support for Donald Trump is measured as the confidence, on a scale from 0 to 10, that the participant will vote for Trump in the 2020 Presidential election
- Predictions about how severe COVID-19 will be (total number of deaths in the US, rate of death among people infected) are elicited before and after information provision. These predictions are incentivized for accuracy.
- Heterogeneous treatment effects by baseline Trump support and by belief that the government acts on its own private information
- Propensity to memorize and correctly recall information about contagion risk mitigation. Information from the CDC website is shown in the middle of the survey. Participants are asked to recall specific numbers from these messages at the end of the survey.

Secondary Outcomes
Secondary Outcomes (end points)
- Updates in perception of how the government is handling the crisis
- Change in the width of confidence intervals when predicting the number of deaths from coronavirus in the US
- Global Preference Survey questions on risk aversion, time preferences, negative reciprocity, positive reciprocity, altruism
- Dictator game
- Heterogeneity by baseline beliefs about COVID-19 severity
- Heterogeneity by baseline uncertainty about COVID-19 severity, including width of confidence intervals and how much participants report following the news or government measures to combat COVID-19
- Willingness to pay for a notification service about when pasta (panic good, high availability) and instant coffee (panic neutral, high availability) become available on Amazon.
- An exploratory empirical strategy will use our treatments as instrumental variables for anxiety in order to explore the causal effects of anxiety on economic preferences as measured by the global preference survey and dictator game. Because the exclusion restriction is far from perfect, this approach will be secondary.
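The exploratory IV strategy above can be sketched as a manual two-stage least squares, instrumenting the anxiety index with the randomized treatment indicators. The simulation and variable names below are illustrative assumptions, not the registered specification; the point is only the mechanics of the two stages.

```python
import numpy as np

def two_stage_ls(y, x_endog, Z):
    """Manual 2SLS (illustrative): instrument the endogenous regressor
    (e.g., an anxiety index) with the treatment indicators in Z, then
    regress the outcome on the fitted values.
    Returns second-stage coefficients [constant, effect of x]."""
    n = len(y)
    # First stage: endogenous regressor on instruments plus a constant.
    Z1 = np.column_stack([np.ones(n), Z])
    x_hat = Z1 @ np.linalg.lstsq(Z1, x_endog, rcond=None)[0]
    # Second stage: outcome (e.g., a preference measure) on fitted values.
    X1 = np.column_stack([np.ones(n), x_hat])
    return np.linalg.lstsq(X1, y, rcond=None)[0]
```

Because the structural error is correlated with anxiety by construction in such a setting, naive OLS is biased while the instrumented estimate is consistent — subject, as noted above, to the exclusion restriction holding.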
Secondary Outcomes (explanation)
- Global Preference Survey questions are directly taken from Falk et al. (2018) but exclude numerical tradeoffs for time and risk preferences
- Dictator game: participants can choose how much of their potential earnings to allocate to the Red Cross or their preferred charity
Experimental Design
Experimental Design
We cross-randomize (truthful) information about a future event and actions taken by the government.
Experimental Design Details
We will recruit participants through luc.id, an online platform that gathers a representative sample of the US population. After soliciting baseline knowledge and beliefs about the severity of the COVID-19 crisis in the US and the federal government’s response, we will cross-randomize participants across two information treatments. The first concerns the severity of the crisis (higher or lower) and the second the extent of the federal government’s response (stronger or weaker). For example, a participant in the (lower risk, weaker response) cell will be told that COVID-19 has killed relatively few people in the US to date and that President Trump has stated no interest in a national quarantine; a participant in the (higher risk, stronger response) cell will be told that projected deaths exceed 1 million and that President Trump has recommended restricting social gatherings to fewer than 10 people.
Directly after treatment exposure, we record participants’ anxiety using two clinical anxiety measures (GAD-7 and Beck Anxiety Inventory). To measure belief updating, we re-prompt participants for their knowledge and beliefs about the severity of the COVID-19 crisis and the government’s response. We then elicit their willingness to pay for a service that will notify them as soon as hand sanitizer and masks become available on an online retail platform. Next, we inform participants about several CDC recommendations and offer them the opportunity to see one additional recommendation. We measure participants’ general preferences (e.g., risk aversion, reciprocity) through a series of self-reports and a brief dictator game. We also elicit their intention to vote for Donald Trump in the presidential election. Finally, we test participants for recall of the previously shown CDC recommendations.
We conclude by providing participants a link to the CDC’s public health recommendations.
We incentivize all questions about knowledge, beliefs, and predictions by telling participants they may be eligible for an Amazon gift card whose value depends on the accuracy of their answers. Each participant is entered into a lottery for the gift card offer, and we expect one participant to receive it. To determine the value of the gift card, we will randomly select one of that participant’s responses and assign higher rewards for answers closer to the truth. Where we ask for subjective confidence intervals, we also reward respondents for providing narrower intervals. We tell participants that we will compare their predictions to reality 6 months after the survey.
Our “policy information” treatment comprises two arms: “Strong policy reaction” and “Weak policy reaction”. Within each arm, participants will be exposed to two statements about the government’s reaction to the COVID-19 crisis, randomly chosen from a set of four. This design identifies the effect of a strong vs. weak policy reaction without relying on a single statement in either case. For the analysis, we pool participants within each of the two treatment arms, without treating specific messages as individual treatments.
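The assignment scheme described above can be sketched as follows. This is a minimal illustration assuming independent individual-level randomization (as implemented in Qualtrics); the arm labels and statement texts are placeholders, not the actual messages shown to participants.

```python
import random

# Two cross-randomized binary treatments, yielding four treatment cells.
SEVERITY_ARMS = ["higher_risk", "lower_risk"]
POLICY_ARMS = ["strong_reaction", "weak_reaction"]

# Each policy arm has a pool of 4 statements; each participant sees 2,
# drawn at random, so no single statement drives the arm-level effect.
POLICY_STATEMENTS = {
    "strong_reaction": [f"strong_statement_{i}" for i in range(1, 5)],
    "weak_reaction": [f"weak_statement_{i}" for i in range(1, 5)],
}

def assign(rng: random.Random) -> dict:
    """Assign one participant to a (severity, policy) cell and draw messages."""
    severity = rng.choice(SEVERITY_ARMS)
    policy = rng.choice(POLICY_ARMS)
    shown = rng.sample(POLICY_STATEMENTS[policy], k=2)
    return {"severity": severity, "policy": policy, "statements": shown}
```

For analysis, participants are pooled by arm, so the specific pair of statements a participant saw is not treated as a separate treatment.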
Randomization Method
Randomization done by Qualtrics
Randomization Unit
Individual
Was the treatment clustered?
No
Experiment Characteristics
Sample size: planned number of clusters
2,000
Sample size: planned number of observations
2,000
Sample size (or number of clusters) by treatment arms
500 per treatment arm (4 arms)
Minimum detectable effect size for main outcomes (accounting for sample design and clustering)
IRB
INSTITUTIONAL REVIEW BOARDS (IRBs)
IRB Name
Massachusetts Institute of Technology Committee on the Use of Humans as Experimental Subjects
IRB Approval Date
2020-03-11
IRB Approval Number
E-2040
Post-Trial
Post Trial Information
Study Withdrawal
Intervention
Is the intervention completed?
No
Is data collection complete?
Data Publication
Data Publication
Is public data available?
No
Program Files
Program Files
Reports, Papers & Other Materials
Relevant Paper(s)