Evidence-Based Policymaking in Education
Last registered on May 28, 2021

Pre-Trial

Trial Information
General Information
Title
Evidence-Based Policymaking in Education
RCT ID
AEARCTR-0006563
Initial registration date
October 07, 2020
Last updated
May 28, 2021 2:06 PM EDT
Location(s)

This section is unavailable to the public.
Primary Investigator
Affiliation
Harvard University
Other Primary Investigator(s)
Additional Trial Information
Status
Ongoing
Start date
2020-10-26
End date
2021-08-31
Secondary IDs
Abstract
Decisions made by education policymakers determine how schools and teachers are organized, and how students learn. In the era of evidence-based policymaking, education policymakers face pressure to use research to inform their decisions. This paper explores the mental models that policymakers use when integrating research evidence in their policy decisions. I conduct survey experiments on education policymakers. First, I examine policymakers’ preferences for research evidence. Using a discrete-choice experiment, I present policymakers with a series of research studies that vary along attributes of internal and external validity. They are asked about their preference between pairs of research studies as they make a hypothetical policy decision, requiring them to make trade-offs between different study attributes. Second, I explore what policymakers believe, what information they seek out, and how they update their beliefs about the effectiveness of education policies. I elicit policymakers’ predictions for the effect of an education policy in a particular setting. Then, I conduct an information experiment to study how policymakers update their beliefs in response to new information from researchers and from peers. Together, the results of my study will help us better understand how evidence-based decisions are made by education policymakers.
External Link(s)
Registration Citation
Citation
Nakajima, Nozomi. 2021. "Evidence-Based Policymaking in Education." AEA RCT Registry. May 28. https://doi.org/10.1257/rct.6563-1.1.
Experimental Details
Interventions
Intervention(s)
Intervention Start Date
2020-10-26
Intervention End Date
2021-08-31
Primary Outcomes
Primary Outcomes (end points)
For the discrete choice experiment, the primary outcomes of interest are: study choice (forced choice) and percent weight for each study. For the information experiment, the primary outcomes of interest are: posterior beliefs about the effectiveness of education policies (measured at the end of the survey and at the follow-up survey) and policy recommendations.
Primary Outcomes (explanation)
Secondary Outcomes
Secondary Outcomes (end points)
For the information experiment, a secondary outcome of interest is respondents' qualitative beliefs about what informs their policy views.
Secondary Outcomes (explanation)
Experimental Design
Experimental Design
Discrete-choice experiment: Policymakers are presented with hypothetical scenarios in which they evaluate different research evidence to help guide policy decisions in their own local setting. Each scenario contains two potential research studies, which randomly vary along aspects of internal and external validity with the intent of creating realistic variation in study attributes.
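The profile randomization described above can be sketched as follows. This is a minimal illustration, not the registered instrument: the attribute names and levels below are hypothetical placeholders, since the registry does not list the actual attributes of internal and external validity used in the survey.

```python
import random

# Hypothetical attributes and levels for illustration only; the
# registered design uses its own (non-public) attribute list.
ATTRIBUTES = {
    "research_design": ["randomized experiment", "matching study"],
    "sample_size": ["200 students", "2,000 students"],
    "study_location": ["same district", "different state"],
}

def draw_profile(rng):
    """Draw one study profile by sampling a level for each attribute."""
    return {attr: rng.choice(levels) for attr, levels in ATTRIBUTES.items()}

def draw_choice_task(rng):
    """Return a pair of independently drawn study profiles for one scenario."""
    return draw_profile(rng), draw_profile(rng)

rng = random.Random(0)
study_a, study_b = draw_choice_task(rng)
```

Independently drawing each attribute level, as sketched here, is the standard way to generate conjoint-style choice tasks in which respondents must trade off one study attribute against another.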

Information experiment: The experimental design has four stages. In the first stage, I elicit policymakers’ prior beliefs by asking them to forecast the effect of an education policy in a specific setting. In the second stage, policymakers rank their choices among different pieces of information that could be useful for making their own forecast: (1) a forecast made by researchers (split between a basic forecast and a forecast with explanations about research design), (2) a forecast made by peer policymakers, or (3) no information. In the third stage, policymakers are randomly assigned to receive the researcher forecast, the peer forecast, or no additional information. In the fourth stage, I re-elicit policymakers’ beliefs about the policy effect asked about in the first stage.
Experimental Design Details
Not available
Randomization Method
Randomization will be done in office by a computer.
Randomization Unit
Levels of study attributes (for the discrete-choice experiment) and individuals (for the information experiment).
Was the treatment clustered?
No
Experiment Characteristics
Sample size: planned number of clusters
N/A
Sample size: planned number of observations
2,000 policymakers
Sample size (or number of clusters) by treatment arms
1000 individuals receive researcher forecast (500 with basic forecast, 500 with forecast plus explanation about research design), 500 individuals receive peer forecast, and 500 receive no information (control).
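The individual-level assignment into the four arms above can be sketched as a complete randomization with fixed arm sizes. This is only an illustrative sketch, assuming simple shuffling of a fixed arm list; the arm labels are placeholders and the registered study's actual randomization code is not public.

```python
import random

# Arm sizes as registered: 500 researcher basic forecast, 500 researcher
# forecast plus research-design explanation, 500 peer forecast, 500 control.
ARMS = (
    ["researcher_basic"] * 500
    + ["researcher_explained"] * 500
    + ["peer"] * 500
    + ["control"] * 500
)

def assign_arms(ids, seed=0):
    """Shuffle the fixed arm list and pair each respondent id with an arm."""
    arms = ARMS.copy()
    random.Random(seed).shuffle(arms)
    return dict(zip(ids, arms))

assignment = assign_arms(range(2000))
```

Fixing the arm counts before shuffling guarantees the exact registered sample sizes per arm (unlike independent Bernoulli draws, which only hit them in expectation).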
Minimum detectable effect size for main outcomes (accounting for sample design and clustering)
IRB
INSTITUTIONAL REVIEW BOARDS (IRBs)
IRB Name
Harvard University
IRB Approval Date
2019-10-10
IRB Approval Number
IRB19-1623
Analysis Plan

There are documents in this trial unavailable to the public.