Abstract
When deciding which programs to implement, policymakers must assess impact while facing bounded-rationality constraints on how they process relevant information. These constraints may produce “under-sensitivity” to impact-relevant information about evidence-based programs, ultimately leading to support for less impactful interventions relative to a utilitarian benchmark. This study uses a lab-in-the-field experiment among federal employees of the US government to estimate sensitivity and seek to improve it. We will first document any under-sensitivities in government decision making across three key program attributes relevant to impact: scope, outcome type, and effect persistence. The primary goal of this study is to test modes of presenting program information that increase sensitivity across these domains, in turn identifying techniques that researchers and evaluators can use to disseminate the results of program evaluations more effectively.