How Do Policymakers Update?
Last registered on May 18, 2016


Trial Information
General Information
How Do Policymakers Update?
Initial registration date
May 18, 2016
Last updated
May 18, 2016 1:53 PM EDT
Primary Investigator
Eva Vivalt, University of Toronto
Other Primary Investigator(s)
Additional Trial Information
In development
Start date
End date
Secondary IDs
For policymakers to make evidence-based decisions, three things must happen. First, there must be evidence on which they can base their decisions. Second, they must accurately update their beliefs based on that evidence. Finally, they must have the incentive to make decisions according to what the evidence shows. Most research focuses on generating new evidence or on the final step; we focus on the second. It is well known that there are a number of behavioural reasons why people may not accurately update their beliefs. We posit that some of these may be especially relevant to policymakers, either because policymakers tend to have somewhat different biases than other populations or because people are particularly susceptible to certain biases when presented with the kind of information that policymakers typically receive. We design an experiment to explore whether updating biases exist and whether the type of information provided can reduce their impact if they do. We also consider other factors that may affect how policymakers update in response to study results.
External Link(s)
Registration Citation
Vivalt, Eva. 2016. "How Do Policymakers Update?." AEA RCT Registry. May 18. https://doi.org/10.1257/rct.1237-1.0.
Former Citation
Vivalt, Eva. 2016. "How Do Policymakers Update?." AEA RCT Registry. May 18. http://www.socialscienceregistry.org/trials/1237/history/8283.
Experimental Details
Intervention Start Date
Intervention End Date
Primary Outcomes
Primary Outcomes (end points)
- k and mu' in the updating model
- real-world allocations
- estimated coefficients in the discrete choice model

Further details provided in pre-analysis plan.
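To make the first end point concrete, one common way to parameterize belief updating is a linear model in which the posterior mean mu' is a weighted average of the prior mean and the new signal, with k the weight placed on new information. The sketch below is purely illustrative (the registered model is specified in the pre-analysis plan, and the data here are synthetic); it shows how k could in principle be recovered from elicited priors, signals, and posteriors.

```python
# Illustrative sketch with synthetic data, NOT the registered analysis.
# Assumed linear updating model:  mu' = (1 - k) * mu + k * s,
# where mu is the prior mean, s the signal, and mu' the posterior mean.
# Rearranging gives (mu' - mu) = k * (s - mu), so k is recoverable by
# no-intercept OLS of the belief change on the surprise.
import numpy as np

rng = np.random.default_rng(0)
n = 500
prior = rng.normal(5.0, 2.0, n)      # hypothetical elicited prior means
signal = rng.normal(5.0, 2.0, n)     # hypothetical point estimates shown
true_k = 0.6                         # weight on new information (simulated)
posterior = (1 - true_k) * prior + true_k * signal + rng.normal(0, 0.1, n)

# No-intercept OLS: k_hat = (x'y) / (x'x)
x = signal - prior
y = posterior - prior
k_hat = (x @ y) / (x @ x)
print(k_hat)
```

Under this parameterization, k = 1 would mean respondents fully adopt the signal, k = 0 that they ignore it, and intermediate values index partial (possibly biased) updating.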
Primary Outcomes (explanation)
Secondary Outcomes
Secondary Outcomes (end points)
Secondary Outcomes (explanation)
Experimental Design
Experimental Design
Policymakers, practitioners, and researchers are surveyed at conferences and by e-mail. We elicit priors, present data, and elicit posteriors. Two information treatments are provided to help mitigate the biases we expect to observe. Participants are asked to make real-world allocations, and we also conduct a discrete choice experiment to estimate how they weight various types of evidence (e.g., evidence from RCTs vs. quasi-experimental studies, or evidence from their own country vs. others). Respondents' willingness to pay for information is also elicited. An MTurk sample serves as a comparison group and is used primarily to estimate the behavioural biases and their mitigation.
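The discrete choice component can be illustrated with a simple binary logit on attribute differences. The sketch below uses synthetic data and hypothetical attributes (an RCT indicator and a same-country indicator, which are assumptions for illustration, not the registered attribute list); respondents choose between two hypothetical studies, and the estimated coefficients index the weight placed on each evidence characteristic.

```python
# Hedged sketch with synthetic data, NOT the registered design or analysis.
# Respondents pick one of two hypothetical studies; a binary logit on the
# attribute differences between the options recovers their taste weights.
import numpy as np

rng = np.random.default_rng(1)
n = 2000
# Attribute differences (option A minus option B); attributes are illustrative.
d_rct = rng.integers(-1, 2, n)       # difference in "is an RCT" (-1, 0, or 1)
d_country = rng.integers(-1, 2, n)   # difference in "from own country"
X = np.column_stack([d_rct, d_country]).astype(float)

beta_true = np.array([1.0, 0.5])     # hypothetical taste weights
p = 1 / (1 + np.exp(-(X @ beta_true)))
choice = (rng.random(n) < p).astype(float)   # 1 = chose option A

# Fit the logit by Newton's method
beta = np.zeros(2)
for _ in range(25):
    mu = 1 / (1 + np.exp(-(X @ beta)))
    grad = X.T @ (choice - mu)                 # score
    W = mu * (1 - mu)
    hess = X.T @ (X * W[:, None])              # observed information
    beta += np.linalg.solve(hess, grad)

print(beta)
```

A positive coefficient on the RCT-difference attribute, for example, would indicate that respondents are more likely to choose the study that is an RCT, all else equal; relative coefficient magnitudes give the implied trade-offs between evidence characteristics.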
Experimental Design Details
Not available
Randomization Method
Randomization by computer as the respondent participates.
Randomization Unit
Was the treatment clustered?
Experiment Characteristics
Sample size: planned number of clusters
400 policymakers, practitioners and researchers
1,600 MTurk participants
Sample size: planned number of observations
Sample size (or number of clusters) by treatment arms
200 policymakers, practitioners, and researchers and 800 MTurk participants in each of two information treatment arms
For the two other information treatment arms: 300 policymakers, practitioners, and researchers and 1,200 MTurk participants in one, and 100 policymakers, practitioners, and researchers and 400 MTurk participants in the other
Minimum detectable effect size for main outcomes (accounting for sample design and clustering)
IRB Name
Stanford University
IRB Approval Date
IRB Approval Number