Benchmarking development programs: a preference-based approach
Initial registration date
June 23, 2016
Last updated
April 12, 2018 4:16 AM EDT
Primary Investigator
Jeremy Shapiro, Busara Center for Behavioral Economics
Other Primary Investigator(s)
Additional Trial Information
Hundreds of billions of dollars are spent each year on aid programs. In 2014, for example, OECD countries provided USD 135 billion in official development assistance (OECD, 2014), and US charitable giving to international programs exceeded USD 20 billion (Reuters, 2012). Beyond this, developing country governments allocate substantial sums to programs intended to benefit the poor and spur development. These billions are spread across a wide variety of programs, such as infrastructure, education, health, agriculture and direct assistance (e.g., subsidized goods, food aid, livestock transfers and cash transfers). A fundamental problem, one that affects the hundreds of millions of individuals reached by aid, is how best to allocate spending across these programs, yet that allocation decision is extremely difficult to make. An important input into the decision, among others, is how much recipients value particular forms of aid relative to the cost of providing that aid, including both the value of the goods and services beneficiaries receive and the overhead cost of delivering them.
A central aim of this study is to develop a replicable methodology to rapidly and efficiently estimate the value of different types of aid to recipients. This information can be used to determine whether a particular form of aid is valued more highly than its cost and to assess the relative value of alternative uses of aid funding.
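To illustrate how such estimates could feed an allocation decision, the Python sketch below ranks programs by the ratio of stated valuation to delivery cost. All program names and figures are hypothetical placeholders, not data or results from this study.

```python
# A minimal sketch of a preference-based benchmark: compare what
# recipients say a program is worth (in cash) to what it costs to
# deliver. All names and figures are hypothetical placeholders,
# not data from this study.

# program -> (median stated valuation in USD, delivery cost in USD)
programs = {
    "cash transfer": (95, 100),
    "livestock transfer": (70, 150),
    "food aid": (55, 80),
}

# Rank programs by value-to-cost ratio; a ratio above 1 means
# recipients value the program above its delivery cost.
for name, (value, cost) in sorted(
    programs.items(), key=lambda kv: kv[1][0] / kv[1][1], reverse=True
):
    ratio = value / cost
    verdict = "valued above cost" if ratio > 1 else "valued below cost"
    print(f"{name}: value/cost = {ratio:.2f} ({verdict})")
```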
Intervention Start Date
Intervention End Date
Primary Outcomes (end points)
Values of aid programs relative to costs. Consistency of choice over time.
Primary Outcomes (explanation)
Secondary Outcomes (end points)
Secondary Outcomes (explanation)
Experimental Design Details
Was the treatment clustered?
Sample size: planned number of clusters
Sample size: planned number of observations
Sample size (or number of clusters) by treatment arms
Minimum detectable effect size for main outcomes (accounting for sample design and clustering)
INSTITUTIONAL REVIEW BOARDS (IRBs)
IRB Approval Date
Post Trial Information
Is the intervention completed?
Is data collection complete?
Data Collection Completion Date
Final Sample Size: Number of Clusters (Unit of Randomization)
Was attrition correlated with treatment status?
Final Sample Size: Total Number of Observations
Final Sample Size (or Number of Clusters) by Treatment Arms
Reports, Papers & Other Materials
This study proposes a preference-based benchmarking approach to assess trade-offs between alternative uses of aid dollars. We ask ~800 low-income Kenyans their valuation (in cash) for common aid and development programs. We compare recipients' stated …
Shapiro, Jeremy. "Benchmarking development programs: a preference-based approach," October 01, 2017.
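The abstract above mentions eliciting valuations in cash, but the registration does not specify the elicitation mechanism. Purely as an illustration of one standard option, the sketch below brackets a respondent's valuation from their switching point on a multiple price list (choices between the program and ascending cash amounts); the ladder values and function are assumptions, not the study's instrument.

```python
# Hypothetical illustration only: a multiple price list is one common
# way to elicit a cash valuation. The registration does not state the
# mechanism actually used; the ladder amounts below are made up.

# Ascending cash amounts (USD) offered as alternatives to the program.
CASH_LADDER = [10, 25, 50, 75, 100, 150, 200]

def bracket_valuation(chose_cash):
    """Return (lower, upper) bounds on a respondent's valuation, given
    for each ladder rung whether they chose cash over the program."""
    for i, (amount, took_cash) in enumerate(zip(CASH_LADDER, chose_cash)):
        if took_cash:
            # First cash amount accepted: the valuation lies between
            # the previous rung (or 0) and this amount.
            lower = CASH_LADDER[i - 1] if i > 0 else 0
            return (float(lower), float(amount))
    # Never switched to cash: valuation exceeds the top of the ladder.
    return (float(CASH_LADDER[-1]), float("inf"))

# Example: prefers the program up to USD 50, switches to cash at USD 75.
print(bracket_valuation([False, False, False, True, True, True, True]))
# -> (50.0, 75.0)
```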