Understanding and Improving Policymakers' Sensitivity to Program Impact

Last registered on November 07, 2021

Pre-Trial

Trial Information

General Information

Title
Understanding and Improving Policymakers' Sensitivity to Program Impact
RCT ID
AEARCTR-0007659
Initial registration date
May 21, 2021

Initial registration date is when the trial was registered; it corresponds to when the registration was submitted to the Registry to be reviewed for publication.

First published
May 21, 2021, 4:20 PM EDT

First published corresponds to when the trial was first made public on the Registry after being reviewed.

Last updated
November 07, 2021, 10:24 AM EST

Last updated is the most recent time when changes to the trial's registration were published.

Locations

Region

Primary Investigator

Affiliation
University of Warwick

Other Primary Investigator(s)

PI Affiliation
Florida State University

Additional Trial Information

Status
Completed
Start date
2021-05-24
End date
2021-11-05
Secondary IDs
Prior work
This trial does not extend or rely on any prior RCTs.
Abstract
When making decisions about which programs to implement, policymakers must assess impact in the face of bounded rationality constraints when processing relevant information. These constraints may result in “under-sensitivity” to impact-relevant information about evidence-based programs, ultimately leading to support for less impactful interventions relative to a utilitarian benchmark. This study uses a lab-in-the-field experiment among federal employees of the US government to estimate policymakers' sensitivity to impact and to test ways of improving it. We will first document any under-sensitivities in government decision making across three key program attributes relevant to impact: scope, outcome type, and effect persistence. The primary goal of the study is to test modes of presenting program information that increase sensitivity across these domains, in turn identifying techniques that researchers and evaluators can use to disseminate program evaluation results more effectively.
External Link(s)

Registration Citation

Citation
Bell, Elizabeth and Mattie Toma. 2021. "Understanding and Improving Policymakers' Sensitivity to Program Impact." AEA RCT Registry. November 07. https://doi.org/10.1257/rct.7659-3.3
Sponsors & Partners

There is information in this trial unavailable to the public.
Experimental Details

Interventions

Intervention(s)
This project will leverage a lab-in-the-field survey experiment among federal employees in the US government. All survey respondents will be presented with descriptions of hypothetical programs and evaluation results and will estimate the maximum cost at which they would be willing to fund each program. We will vary the mode of presenting the information across programs. Specifically, respondents may see a program description presented with no additional framing ("baseline"); a description with a calculator that translates total program costs into the cost per person affected per year ("impact calculator"); or two similar descriptions presented together on one page ("side-by-side").
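As a rough illustration of the arithmetic the "impact calculator" framing performs (a minimal sketch only; the function name, inputs, and example figures below are assumptions, not the calculator actually shown to respondents):

```python
def cost_per_person_per_year(total_cost, people_affected, years):
    """Translate a program's total cost into an annual cost per person affected.

    Illustrative sketch: the registration describes the calculator as converting
    total program cost into the cost per person affected per year; the inputs
    shown to respondents may differ.
    """
    return total_cost / (people_affected * years)


# Hypothetical example: a $3,000,000 program reaching 10,000 people over 5 years
print(cost_per_person_per_year(3_000_000, 10_000, 5))  # -> 60.0 dollars per person per year
```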
Intervention Start Date
2021-05-24
Intervention End Date
2021-10-25

Primary Outcomes

Primary Outcomes (end points)
The primary outcome of interest is respondents' perceived program value, which is defined as the maximum cost at which the respondent would be willing to fund the program, as identified in the experiment.
Primary Outcomes (explanation)

Secondary Outcomes

Secondary Outcomes (end points)
The secondary outcome of interest is self-reported confidence in one's valuation assessments, as a proxy for cognitive uncertainty.
Secondary Outcomes (explanation)

Experimental Design

Experimental Design
Different modes of presenting program information will be randomly assigned within respondents. That is, each respondent will see two program descriptions under each of the three conditions ("baseline," "impact calculator," and "side-by-side"), presented in random order. The programs shown under each condition, as well as the calculated impact shown for each program, will be randomly varied across respondents.
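A minimal sketch of the within-respondent assignment described above, assuming a pool of six programs per respondent (the actual randomization is implemented in Qualtrics, so the details below are illustrative):

```python
import random

CONDITIONS = ["baseline", "impact calculator", "side-by-side"]


def assign_conditions(program_ids, rng=random):
    """Assign six programs to the three presentation conditions, two per
    condition, and return (program, condition) pairs in random order.

    Illustrative only: the program pool, the calculated impacts shown, and the
    ordering details are handled in the actual Qualtrics implementation.
    """
    assert len(program_ids) == 6, "design calls for six programs per respondent"
    programs = list(program_ids)
    rng.shuffle(programs)
    conditions = CONDITIONS * 2  # two programs per condition
    rng.shuffle(conditions)
    return list(zip(programs, conditions))


# Hypothetical example for one respondent
print(assign_conditions(["P1", "P2", "P3", "P4", "P5", "P6"]))
```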
Experimental Design Details
Randomization Method
The randomization will be implemented in Qualtrics.
Randomization Unit
Conditions will be randomized within respondent.
Was the treatment clustered?
Yes

Experiment Characteristics

Sample size: planned number of clusters
The originally specified sample was 500 participants. Recruitment has been more challenging than expected, so we are committing to a date-based cutoff rather than a sample-size cutoff: the final sample will be the N collected as of October 25, 2021. At the time of this update, N = 121.
Sample size: planned number of observations
N × 6 (each respondent completes six program assessments: two under each of the three conditions)
Sample size (or number of clusters) by treatment arms
N (within-subject design: every respondent experiences all three conditions)
Minimum detectable effect size for main outcomes (accounting for sample design and clustering)
IRB

Institutional Review Boards (IRBs)

IRB Name
Harvard University-Area Committee on the Use of Human Subjects
IRB Approval Date
2021-02-01
IRB Approval Number
IRB21-0002 (Note: this study was deemed Not Human Subjects Research)
Analysis Plan

Analysis Plan Documents

Analysis Plan

MD5: c35e36a3579bcd0fc874a4358cd5eb26

SHA1: a14d219620b321601d14b4965b5dd3964d2cbaf9

Uploaded At: May 21, 2021

Post-Trial

Post Trial Information

Study Withdrawal

There is information in this trial unavailable to the public.

Intervention

Is the intervention completed?
Yes
Intervention Completion Date
October 25, 2021, 12:00 +00:00
Data Collection Complete
Yes
Data Collection Completion Date
October 25, 2021, 12:00 +00:00
Final Sample Size: Number of Clusters (Unit of Randomization)
192 policymakers
Was attrition correlated with treatment status?
No
Final Sample Size: Total Number of Observations
1136 program assessments
Final Sample Size (or Number of Clusters) by Treatment Arms
192 policymakers (within-subject design)
Data Publication

Data Publication

Is public data available?
No

Program Files

Program Files
No
Reports, Papers & Other Materials

Relevant Paper(s)

Abstract
Policymakers routinely make high-stakes decisions about which programs to fund. Assessing the value of a program is difficult and may be affected by bounded rationality. In an experiment with policymakers in the U.S. government, we find that respondents’ valuations of programs are inelastic with respect to the program’s impact. A complementary experiment among a representative sample of the general public reveals even more pronounced inelasticity in a population less familiar with making program funding decisions. We design and test two portable decision aids, one which presents two alternative programs side-by-side rather than in isolation and another which translates total program cost into an annual cost per person impacted. The decision aids increase elasticity by 0.20 on a base of 0.33 among policymakers and by 0.21 on a base of 0.21 among the general public. We provide evidence that cognitive noise—noisy assessments of complex inputs—is a mechanism that can help explain the observed inelasticity of program valuation with respect to impact.
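The registration does not reproduce the paper's estimating equation; as an illustration only, an elasticity of program valuation with respect to impact can be read off as the slope of a log-log regression, sketched here on simulated data (all variable names and figures below are assumptions, not the paper's specification):

```python
import numpy as np

rng = np.random.default_rng(0)

# Simulated data: program impact (e.g., people affected per year) and each
# respondent's valuation (maximum cost at which they would fund the program).
impact = rng.uniform(100, 10_000, size=500)
valuation = impact ** 0.3 * np.exp(rng.normal(0, 0.5, size=500))  # true elasticity of 0.3 by construction

# Elasticity = slope of log(valuation) on log(impact).
slope, _ = np.polyfit(np.log(impact), np.log(valuation), 1)
print(f"Estimated elasticity: {slope:.2f}")  # roughly 0.3 for this simulated sample
```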
Citation
Toma, M. and E. Bell. 2021. "Understanding and Improving Policymakers' Sensitivity to Program Impact," Working Paper.

Reports & Other Materials