How Do Policymakers Update?
Trial Information
General Information
Title
How Do Policymakers Update?
RCT ID
AEARCTR-0001237
Initial registration date
May 18, 2016
Last updated
January 01, 2019 10:13 PM EST
Location(s)

Not available to the public.
Primary Investigator
Eva Vivalt
Affiliation
Australian National University
Other Primary Investigator(s)
Additional Trial Information
Status
In development
Start date
2016-05-19
End date
2019-12-31
Secondary IDs
Abstract
In order for policymakers to make evidence-based decisions, three things must happen. First, there must be evidence on which they can base their decisions. Second, they must accurately update their beliefs based on that evidence. Finally, they must have the incentive to make decisions according to what the evidence shows. Most research focuses on generating new evidence or on the last step; we focus on the second. It is well known that there are a number of behavioural reasons why people may not accurately update their beliefs. We posit that some of these may be particularly relevant to policymakers, either because policymakers tend to have somewhat different biases than other populations or because people are especially susceptible to certain biases when presented with the kind of information that policymakers typically receive. We design an experiment to explore whether these updating biases exist and, if they do, whether the type of information provided can reduce their impact. We also consider other factors that may affect how policymakers update in response to study results.
External Link(s)
Registration Citation
Citation
Vivalt, Eva. 2019. "How Do Policymakers Update?" AEA RCT Registry. January 01. https://doi.org/10.1257/rct.1237-4.0.
Former Citation
Vivalt, Eva. 2019. "How Do Policymakers Update?" AEA RCT Registry. January 01. https://www.socialscienceregistry.org/trials/1237/history/39830.
Experimental Details
Interventions
Intervention(s)
Intervention Start Date
2016-05-19
Intervention End Date
2018-12-31
Primary Outcomes
Primary Outcomes (end points)
- k and mu' in the updating model
- real-world allocations
- estimated coefficients in the discrete choice model

Further details are provided in the pre-analysis plan; a hypothetical parameterization of k and mu' is sketched below.
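As an illustration only (the registered definitions of k and mu' are in the pre-analysis plan, not on this page), one standard normal-learning parameterization of belief updating is

\[ \mu' = (1 - k)\,\mu + k\,s, \qquad k = \frac{\sigma_0^2}{\sigma_0^2 + \sigma_s^2}, \]

where \mu is the elicited prior mean, s is the presented study result, \mu' is the elicited posterior mean, \sigma_0^2 is the prior variance, and \sigma_s^2 is the sampling variance of the result. Under Bayesian updating, k equals the variance ratio above; estimates of k systematically above or below that benchmark would indicate over- or underweighting of new evidence.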
Primary Outcomes (explanation)
Secondary Outcomes
Secondary Outcomes (end points)
Secondary Outcomes (explanation)
Experimental Design
Experimental Design
Policymakers, practitioners and researchers are surveyed at conferences and by e-mail. We elicit priors, present data, and elicit posteriors. Two information treatments are provided to help mitigate the biases we expect to observe. Participants are asked to make real-world allocations, and we also conduct a discrete choice experiment to estimate how they weight various types of evidence (e.g. evidence from RCTs vs. quasi-experimental studies, evidence from their own country vs. others, etc.). We also elicit respondents' willingness to pay for information. An MTurk sample serves as a comparison group for estimating the behavioural biases and their mitigation.
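To make the estimation step concrete, here is a minimal Python sketch of recovering an updating weight k from elicited priors and posteriors. The variable names, data layout, and OLS estimator are illustrative assumptions, not the registered analysis.

import numpy as np

def estimate_k(prior, signal, posterior):
    """Estimate k in the model posterior = (1 - k) * prior + k * signal.

    Rearranging gives (posterior - prior) = k * (signal - prior) + error,
    so k is the through-the-origin regression slope of belief revisions
    on prior-to-signal gaps.
    """
    x = np.asarray(signal, dtype=float) - np.asarray(prior, dtype=float)
    y = np.asarray(posterior, dtype=float) - np.asarray(prior, dtype=float)
    return float(x @ y / (x @ x))

# Toy usage: simulated respondents who move 60% of the way toward each
# signal (all numbers are made up for illustration).
rng = np.random.default_rng(0)
prior = rng.normal(0.5, 0.1, size=200)
signal = rng.normal(0.5, 0.2, size=200)
posterior = prior + 0.6 * (signal - prior) + rng.normal(0.0, 0.01, size=200)
print(round(estimate_k(prior, signal, posterior), 2))  # approximately 0.60

A discrete choice component like the one described would typically be estimated with a conditional logit over evidence attributes, but that specification is likewise left to the pre-analysis plan.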
Experimental Design Details
Not available
Randomization Method
Randomization by computer as the respondent participates.
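A minimal sketch of what computer randomization at participation time could look like; the arm labels and uniform assignment probabilities are illustrative assumptions, as the registered arms are not named on this page.

import secrets

# Hypothetical arm labels for illustration only.
ARMS = ["information_treatment_1", "information_treatment_2", "control"]

def assign_arm() -> str:
    """Draw a treatment arm uniformly at random when a respondent begins the survey."""
    return secrets.choice(ARMS)

print(assign_arm())  # e.g. "control"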
Randomization Unit
Individual.
Was the treatment clustered?
No
Experiment Characteristics
Sample size: planned number of clusters
400 policymakers, practitioners and researchers
1,600 MTurk participants
Sample size: planned number of observations
2,000
Sample size (or number of clusters) by treatment arms
200 policymakers, practitioners and researchers and 800 MTurk participants in each of two information treatment arms
300 policymakers, practitioners and researchers and 1,200 MTurk participants, and separately 100 policymakers, practitioners and researchers and 400 MTurk participants, in two other information treatment arms
Minimum detectable effect size for main outcomes (accounting for sample design and clustering)
IRB
IRB Name
Stanford University
IRB Approval Date
2016-04-29
IRB Approval Number
37371