Evidence-Based Policymaking in Education

Last registered on April 19, 2022


Trial Information

General Information

Evidence-Based Policymaking in Education
Initial registration date
October 07, 2020

Initial registration date is when the trial was registered.

It corresponds to when the registration was submitted to the Registry to be reviewed for publication.

First published
October 07, 2020, 9:48 AM EDT

First published corresponds to when the trial was first made public on the Registry after being reviewed.

Last updated
April 19, 2022, 1:58 PM EDT

Last updated is the most recent time when changes to the trial's registration were published.


There is information in this trial unavailable to the public. Use the button below to request access.

Request Information

Primary Investigator

Harvard University

Other Primary Investigator(s)

Additional Trial Information

Ongoing
Start date
End date
Secondary IDs
Prior work
This trial does not extend or rely on any prior RCTs.
Decisions made by education policymakers determine how schools and teachers are organized and how students learn. In the era of evidence-based policymaking, education policymakers face pressure to use research to inform their decisions. This paper explores the mental models that policymakers use when integrating research evidence into their policy decisions. I conduct survey experiments with education policymakers. First, I examine policymakers’ preferences for research evidence. Using a discrete-choice experiment, I present policymakers with a series of research studies that vary along attributes of internal and external validity. They are asked about their preference between pairs of research studies as they make a hypothetical policy decision, requiring them to make trade-offs between different study attributes. Second, I explore what policymakers believe, what information they seek out, and how they update their beliefs about the effectiveness of education policies. I elicit policymakers’ predictions for the effect of an education policy in a particular setting. Then, I conduct an information experiment to study how policymakers update their beliefs in response to new information from researchers and from peers. Together, the results of my study will help us better understand how education policymakers make evidence-based decisions.
External Link(s)

Registration Citation

Nakajima, Nozomi. 2022. "Evidence-Based Policymaking in Education." AEA RCT Registry. April 19. https://doi.org/10.1257/rct.6563-1.2
Experimental Details


Intervention Start Date
Intervention End Date

Primary Outcomes

Primary Outcomes (end points)
For the discrete choice experiment, the primary outcomes of interest are: study choice (forced choice) and percent weight for each study. For the information experiment, the primary outcomes of interest are: posterior beliefs about the effectiveness of education policies (measured at the end of the survey and at the follow-up survey) and policy recommendations.
Primary Outcomes (explanation)

Secondary Outcomes

Secondary Outcomes (end points)
For the information experiment, a secondary outcome of interest is the respondents' qualitative beliefs about what informs their policy views.
Secondary Outcomes (explanation)

Experimental Design

Experimental Design
Discrete-choice experiment: Policymakers are presented with hypothetical scenarios in which they evaluate different research evidence to help guide policy decisions in their own local setting. Each scenario contains two potential research studies, which randomly vary along aspects of internal and external validity with the intent of creating realistic variation of study attributes.
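The randomized pairing of study profiles described above can be sketched as follows. This is a minimal illustration only: the attribute names and levels below are hypothetical stand-ins, since the actual attributes in the registered design are not public.

```python
import random

# Hypothetical attributes of internal and external validity; the real
# design's attributes and levels are not disclosed in the registration.
ATTRIBUTES = {
    "sample_size": ["small", "medium", "large"],
    "research_design": ["randomized trial", "matching", "correlational"],
    "setting": ["same district", "same state", "different state"],
}

def draw_profile(rng):
    """Draw one hypothetical study by sampling a level for each attribute."""
    return {attr: rng.choice(levels) for attr, levels in ATTRIBUTES.items()}

def draw_choice_task(rng):
    """One scenario: a pair of independently drawn study profiles."""
    return draw_profile(rng), draw_profile(rng)

rng = random.Random(6563)  # fixed seed for reproducibility
study_a, study_b = draw_choice_task(rng)
```

Each respondent would see several such scenarios, with the forced choice between `study_a` and `study_b` (and the percent weight placed on each) recorded as the outcomes.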

Information experiment: The experimental design has four stages. In the first stage, I elicit policymakers’ prior beliefs by asking them to forecast the effect of an education policy in a specific setting. In the second stage of the experiment, policymakers are asked to rank their choices between different pieces of information that could be useful for making their own forecast: (1) a forecast made by researchers (split between a basic forecast and a forecast with explanations about research design), (2) a forecast made by peer policymakers, or (3) no information. In the third stage, policymakers are randomly assigned to receive the researcher forecast, peer forecast, or no additional information. In the fourth stage, I re-elicit policymakers’ beliefs about the policy effect asked in the first stage.
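The four-stage flow can be sketched per respondent as below. Everything here is a hypothetical rendering: the arm labels, function names, and the equal-probability assignment are my own assumptions (the registration plans fixed quotas of 500 per arm), and `elicit` is a placeholder for the survey interface.

```python
import random

# Four arms implied by the design: two researcher-forecast variants,
# a peer forecast, and a no-information control. Labels are hypothetical.
TREATMENTS = ["researcher_basic", "researcher_explained", "peer", "control"]

def assign_treatment(rng):
    # Equal-probability draw across arms (an assumption; exact per-arm
    # quotas would be needed to hit the planned 500 per arm).
    return rng.choice(TREATMENTS)

def run_respondent(respondent_id, elicit, rng):
    """Hypothetical four-stage survey flow for one respondent; `elicit`
    stands in for the survey interface and returns the answer given."""
    record = {"id": respondent_id}
    record["prior"] = elicit("forecast the policy effect")            # stage 1
    record["ranking"] = elicit("rank the information sources")        # stage 2
    record["arm"] = assign_treatment(rng)                             # stage 3
    record["posterior"] = elicit("forecast the policy effect again")  # stage 4
    return record

record = run_respondent(1, lambda prompt: 0.0, random.Random(1))
```

Belief updating would then be analyzed by comparing `posterior` to `prior` across arms.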
Experimental Design Details
Not available
Randomization Method
Randomization will be done in office by a computer.
Randomization Unit
Levels of study attributes (for the discrete-choice experiment) and individuals (for the information experiment).
Was the treatment clustered?

Experiment Characteristics

Sample size: planned number of clusters
Sample size: planned number of observations
2,000 policymakers
Sample size (or number of clusters) by treatment arms
1,000 individuals receive researcher forecast (500 with basic forecast, 500 with forecast plus explanation about research design), 500 individuals receive peer forecast, and 500 receive no information (control).
Minimum detectable effect size for main outcomes (accounting for sample design and clustering)

Institutional Review Boards (IRBs)

IRB Name
Harvard University
IRB Approval Date
IRB Approval Number
Analysis Plan