Evidence-Based Policymaking in Education

Last registered on October 07, 2020

Pre-Trial

Trial Information

General Information

Title
Evidence-Based Policymaking in Education
RCT ID
AEARCTR-0006563
Initial registration date
October 07, 2020

First published
October 07, 2020, 9:48 AM EDT

Locations

Region

Primary Investigator

Affiliation
Harvard University

Other Primary Investigator(s)

Additional Trial Information

Status
In development
Start date
2020-10-26
End date
2021-08-31
Secondary IDs
Abstract
Decisions made by education policymakers determine how schools and teachers are organized, and how students learn. In the era of evidence-based policymaking, education policymakers face pressure to use research to inform their decisions. This paper explores the mental models that policymakers use when integrating research evidence in their policy decisions. I conduct survey experiments on education policymakers. First, I examine policymakers’ preferences for research evidence. Using a discrete-choice experiment, I present policymakers with a series of research studies that vary along attributes of internal and external validity. They are asked about their preference between pairs of research studies as they make a hypothetical policy decision, requiring them to make trade-offs between different study attributes. Second, I explore what policymakers believe, what information they seek out, and how they update their beliefs about the effectiveness of education policies. I elicit policymakers’ predictions for the effect of an education policy in a particular setting. Then, I conduct an information experiment to study how policymakers update their beliefs in response to new information from researchers and from peers. Together, the results of my study will help us better understand how evidence-based decisions are made by education policymakers.
External Link(s)

Registration Citation

Citation
Nakajima, Nozomi. 2020. "Evidence-Based Policymaking in Education." AEA RCT Registry. October 07. https://doi.org/10.1257/rct.6563-1.0
Experimental Details

Interventions

Intervention(s)
Intervention Start Date
2020-10-26
Intervention End Date
2021-08-31

Primary Outcomes

Primary Outcomes (end points)
Selection of research studies; Posterior beliefs about the effectiveness of education policies.
Primary Outcomes (explanation)

Secondary Outcomes

Secondary Outcomes (end points)
Secondary Outcomes (explanation)

Experimental Design

Experimental Design
Discrete-choice experiment: Policymakers are presented with hypothetical scenarios in which they evaluate different research evidence to help guide policy decisions in their own local setting. Each scenario contains two potential research studies whose attributes vary randomly along dimensions of internal and external validity, with the intent of creating realistic variation in study attributes.
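The registration does not list the specific study attributes or their levels, so the following is a minimal sketch of how profiles for such a discrete-choice (conjoint) task could be drawn, using hypothetical internal- and external-validity attributes as stand-ins:

```python
import random

# Hypothetical attribute levels for illustration only; the actual
# attributes and levels are not specified in the registration.
ATTRIBUTES = {
    "research_design": ["randomized experiment", "observational study"],
    "sample_size": ["200 students", "2,000 students"],
    "study_location": ["same state as policymaker", "different state"],
    "student_population": ["similar to district", "different from district"],
}

def draw_profile(rng):
    """Draw one study profile by independently sampling a level for each attribute."""
    return {attr: rng.choice(levels) for attr, levels in ATTRIBUTES.items()}

def draw_choice_task(rng):
    """Draw a pair of study profiles for one choice scenario."""
    return draw_profile(rng), draw_profile(rng)

rng = random.Random(6563)  # fixed seed so the draw is reproducible
study_a, study_b = draw_choice_task(rng)
```

Independently randomizing each attribute level across the paired profiles is what forces respondents to trade off one study feature (e.g., a randomized design) against another (e.g., a more similar setting).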

Information experiment: The experimental design has four stages. In the first stage, I elicit policymakers' prior beliefs by asking them to forecast the effect of an education policy in a specific setting. In the second stage, policymakers are asked to rank their choices between different pieces of information that could be useful for making their own forecast: (1) a forecast made by researchers, (2) a forecast made by peer policymakers, or (3) no information. In the third stage, policymakers are randomly assigned to receive the researcher forecast, the peer forecast, or no additional information. In the fourth stage, I re-elicit policymakers' beliefs about the policy effect asked in the first stage.
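The third-stage assignment, combined with the planned 667 / 667 / 666 split reported under sample sizes, can be sketched as a simple complete randomization. The arm labels below are illustrative; the registration does not specify the implementation:

```python
import random

def assign_arms(n, rng):
    """Randomly assign n individuals to the three information arms
    using the planned 667 / 667 / 666 split."""
    arms = (["researcher_forecast"] * 667
            + ["peer_forecast"] * 667
            + ["control"] * 666)[:n]
    rng.shuffle(arms)  # complete randomization: fixed arm sizes, random order
    return arms

rng = random.Random(2020)  # fixed seed so the assignment is reproducible
assignment = assign_arms(2000, rng)
```

Shuffling a fixed list of arm labels (rather than drawing each individual's arm independently) guarantees the planned arm sizes exactly.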
Experimental Design Details
Not available
Randomization Method
Randomization will be done in office by a computer.
Randomization Unit
Levels of study attributes (for the discrete-choice experiment) and individuals (for the information experiment).
Was the treatment clustered?
No

Experiment Characteristics

Sample size: planned number of clusters
N/A
Sample size: planned number of observations
2,000 policymakers
Sample size (or number of clusters) by treatment arms
667 individuals receive the researcher forecast, 667 individuals receive the peer forecast, and 666 receive no information (control).
Minimum detectable effect size for main outcomes (accounting for sample design and clustering)
IRB

Institutional Review Boards (IRBs)

IRB Name
Harvard University
IRB Approval Date
2019-10-10
IRB Approval Number
IRB19-1623