Understanding and Improving Policymakers' Sensitivity to Program Impact
Last registered on May 21, 2021

Pre-Trial

Trial Information
General Information
Title
Understanding and Improving Policymakers' Sensitivity to Program Impact
RCT ID
AEARCTR-0007659
Initial registration date
May 21, 2021
Last updated
May 21, 2021 4:20 PM EDT
Location(s)

This section is unavailable to the public.
Primary Investigator
Affiliation
Harvard University
Other Primary Investigator(s)
PI Affiliation
Florida State University
Additional Trial Information
Status
In development
Start date
2021-05-24
End date
2021-09-01
Secondary IDs
Abstract
When deciding which programs to implement, policymakers must assess impact while facing bounded-rationality constraints on processing the relevant information. These constraints may produce “under-sensitivity” to impact-relevant information about evidence-based programs, ultimately leading to support for less impactful interventions relative to a utilitarian benchmark. This study uses a lab-in-the-field experiment among federal employees of the US government to estimate, and seek to improve, this sensitivity. We will first document any under-sensitivity in government decision making across three key program attributes relevant to impact: scope, outcome type, and effect persistence. The primary goal of this study is to test modes of presenting program information that increase sensitivity across these domains, in turn identifying techniques that researchers and evaluators can use to disseminate program evaluation results more effectively.
External Link(s)
Registration Citation
Citation
Bell, Elizabeth and Mattie Toma. 2021. "Understanding and Improving Policymakers' Sensitivity to Program Impact." AEA RCT Registry. May 21. https://doi.org/10.1257/rct.7659-1.0.
Sponsors & Partners

There are documents in this trial unavailable to the public.
Experimental Details
Interventions
Intervention(s)
This project will leverage a lab-in-the-field survey experiment among federal employees in the US government. All survey respondents will be presented with descriptions of hypothetical programs and evaluation results and will estimate the maximum cost at which they would be willing to fund each program. We will vary the mode of presenting the information across programs. Specifically, respondents may see a program description presented with no additional framing ("baseline"); a description with a calculator that translates total program costs into the cost per person affected per year ("impact calculator"); or two similar descriptions presented together on one page ("side-by-side").
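The registry does not spell out the calculator's formula. A minimal sketch of the arithmetic it plausibly performs, translating a total program cost into the cost per person affected per year (the function name and figures are hypothetical):

```python
def cost_per_person_per_year(total_cost, people_affected, years):
    """Translate a total program cost into the cost per person
    affected per year, as the impact calculator displays it."""
    return total_cost / (people_affected * years)

# Hypothetical example: a $1,000,000 program reaching 2,000 people,
# with effects persisting for 5 years, costs $100 per person per year.
print(cost_per_person_per_year(1_000_000, 2_000, 5))  # 100.0
```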
Intervention Start Date
2021-05-24
Intervention End Date
2021-09-01
Primary Outcomes
Primary Outcomes (end points)
The primary outcome of interest is respondents' perceived program value, defined as the maximum cost at which the respondent would be willing to fund the program, as elicited in the experiment.
Primary Outcomes (explanation)
Secondary Outcomes
Secondary Outcomes (end points)
The secondary outcome of interest is self-reported confidence in one's valuation assessments, as a proxy for cognitive uncertainty.
Secondary Outcomes (explanation)
Experimental Design
Experimental Design
Different modes of presenting program information will be randomly assigned within respondents. That is, each respondent will see two program descriptions in each of the three conditions ("baseline", "impact calculator", and "side-by-side"), presented in random order. Which programs appear in each condition, as well as the calculated impact shown for each program, will be randomly varied across respondents (a sketch of this assignment logic follows).
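Qualtrics handles this randomization natively (see Randomization Method below); purely as a sketch of the logic, assuming six hypothetical programs per respondent:

```python
import random

def assign_pages(program_ids, rng=random):
    """Per-respondent randomization sketch: shuffle which programs land
    in which presentation mode, then shuffle the page order. Assumes
    six programs: two "baseline" pages, two "impact calculator" pages,
    and one "side-by-side" page showing two descriptions together."""
    programs = list(program_ids)
    rng.shuffle(programs)  # random program-to-condition assignment
    pages = [
        ("baseline", [programs[0]]),
        ("baseline", [programs[1]]),
        ("impact_calculator", [programs[2]]),
        ("impact_calculator", [programs[3]]),
        ("side_by_side", programs[4:6]),  # two descriptions on one page
    ]
    rng.shuffle(pages)  # random presentation order
    return pages

print(assign_pages(["A", "B", "C", "D", "E", "F"]))
```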
Experimental Design Details
Not available
Randomization Method
The randomization will be implemented in Qualtrics.
Randomization Unit
Conditions will be randomized within respondent.
Was the treatment clustered?
Yes
Experiment Characteristics
Sample size: planned number of clusters
500
Sample size: planned number of observations
3,000
Sample size (or number of clusters) by treatment arms
500 (within-subject design: all 500 respondents see each of the three conditions)
Minimum detectable effect size for main outcomes (accounting for sample design and clustering)
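Not reported here. For illustration only, an MDE for 500 clusters of six observations each could be approximated as follows, where the ICC (0.2), alpha (0.05), and power (0.80) are assumed values rather than figures from the study, and the within-subject design is simplified to a two-arm comparison:

```python
from statistics import NormalDist

def mde_clustered(n_clusters=500, obs_per_cluster=6, icc=0.2,
                  alpha=0.05, power=0.80):
    """Approximate MDE (in standard-deviation units) for a two-arm
    mean comparison, deflating the sample size by the design effect
    1 + (m - 1) * icc to account for within-respondent clustering."""
    z = NormalDist()
    n_eff = n_clusters * obs_per_cluster / (1 + (obs_per_cluster - 1) * icc)
    se_diff = (2 / (n_eff / 2)) ** 0.5  # SE of a difference in means, sd = 1
    return (z.inv_cdf(1 - alpha / 2) + z.inv_cdf(power)) * se_diff

print(round(mde_clustered(), 2))  # ~0.14 sd under these assumed parameters
```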
IRB
INSTITUTIONAL REVIEW BOARDS (IRBs)
IRB Name
Harvard University-Area Committee on the Use of Human Subjects
IRB Approval Date
2021-02-01
IRB Approval Number
IRB21-0002 (Note: this study was deemed Not Human Subjects Research)
Analysis Plan
Analysis Plan Documents
Analysis Plan

MD5: c35e36a3579bcd0fc874a4358cd5eb26
SHA1: a14d219620b321601d14b4965b5dd3964d2cbaf9
Uploaded At: May 21, 2021