Messaging to Improve Phone Survey Response Rates
Last registered on December 10, 2020


Trial Information
General Information
Messaging to Improve Phone Survey Response Rates
Initial registration date
July 08, 2020
Last updated
December 10, 2020 11:50 AM EST

This section is unavailable to the public.
Primary Investigator
Innovations for Poverty Action
Other Primary Investigator(s)
PI Affiliation
Innovations for Poverty Action
PI Affiliation
Innovations for Poverty Action
PI Affiliation
Northwestern University
PI Affiliation
Northwestern University
Additional Trial Information
Ongoing
Start date
End date
Secondary IDs
Much of the literature on survey response rates focuses on framing and on appeals to altruism as a motivation for participating, with methods such as pre-survey postcards and letters used to encourage cooperation; the evidence comes primarily from the U.S. and Europe. More recently, interest has grown in mobile phone surveys in low- and middle-income countries, where the efficacy of methods for improving response rates is less well established. This study randomizes the use of pre-survey text messages: whether to send them and which type of appeal to make. It also randomizes the messaging used in the consent script. The first round of experiments appeals alternatively to a "researcher" or the "government" as the motivating authority; later experiments appeal to the efficacy of participation and to self-interest, with reminders about monetary compensation. The experiment is conducted in 11 random-digit dial (RDD) surveys in 10 countries, with follow-up surveys in 5 of those surveys.
External Link(s)
Registration Citation
Dillon, Andrew et al. 2020. "Messaging to Improve Phone Survey Response Rates." AEA RCT Registry. December 10. https://doi.org/10.1257/rct.6111-2.0.
Experimental Details
The experiment varied two factors.

Factor 1 is an SMS text message sent to the respondent prior to the CATI interview call, with 3 possible levels:
S0 = no SMS
SG = SMS with an appeal to "government"
SR = SMS with an appeal to "researcher"

Factor 2 is the appeal made in the consent script, with 3 levels:
G = consent appeals to "government"
R = consent appeals to "researcher"
P = consent appeals to "policymaker"

In Colombia and Mexico, the design was a 2x2 (no cases assigned to S0 or P).
In the other Wave 1 countries it was a 3x1 (S, G, or P).
"Mixed message" cells (SG-R, SG-P, SR-G, SR-P, etc.) are not populated.
In Spanish-speaking countries, "policymaker" appeals are omitted because the terminology is difficult to distinguish from "government".

In Wave 2 surveys, the treatment factor structure is altered and different messaging contrasts are planned. The pre-survey SMS message uses the following appeal type in each treatment arm:
A1 Placebo/short message: no specific appeal
A2 General Learning (Food access): says the first wave of the survey was informative about food access
A3 General Learning (Household finances): says the first wave of the survey related to household finances
A4 Specific Learning (Food access): same as A2, but shares a statistic about food access
A5 Specific Learning (Household finances): same as A3, but shares a statistic about household finances
B1 Self Interest: reminds respondents about the monetary incentive
B2 General Learning (Food access) + Self Interest: same as A2, but includes a message about the monetary incentive
B3 General Learning (Household finances) + Self Interest: same as A3, but includes a message about the monetary incentive
B4 Specific Learning (Food access) + Self Interest: A4 plus the monetary incentive message
B5 Specific Learning (Household finances) + Self Interest: A5 plus the monetary incentive message
Intervention Start Date
Intervention End Date
Primary Outcomes
Primary Outcomes (end points)
Survey consents and completions
Primary Outcomes (explanation)
Consent and survey completion are binary outcomes that are straightforward to code.
Secondary Outcomes
Secondary Outcomes (end points)
Contact rates (for SMS arms)
Respondent attention
Response distributions for key survey items: mask-wearing behavior and hand-washing behavior
Secondary Outcomes (explanation)
Respondent attention is measured by interviewer-coded items that are not read aloud: interviewers rate respondent attention at two points during the survey on a four-point scale ("Very attentive", "Somewhat attentive", "Somewhat distracted", "Very distracted"). Interviewer training materials spell out how to define and operationalize these concepts. In the analysis stage, these responses are collapsed into a binary attentive/distracted measure.
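The collapse to a binary measure can be sketched as below. The label strings follow the four-point scale above; grouping the two "attentive" levels together is one natural reading of the binary attentive/distracted coding, not a coding rule stated in the registry.

```python
# Hypothetical coding sketch: map the four-point interviewer rating
# to a binary attentive (1) / distracted (0) indicator.
ATTENTIVE_LEVELS = {"Very attentive", "Somewhat attentive"}

def collapse_attention(rating: str) -> int:
    """Collapse a four-point attention rating to 1 (attentive) or 0 (distracted)."""
    return 1 if rating in ATTENTIVE_LEVELS else 0
```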
Experimental Design
Experimental Design
Randomization is built into the SurveyCTO case management system. Respondents are randomized into treatment arms and are then sent either an SMS with the assigned messaging or no SMS; if they are contacted, they are read a consent script with one of the randomly assigned appeals. The study uses paradata collected as part of the survey as well as survey responses.
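In the field this draw is made server-side by SurveyCTO's case management system. As a rough illustration only, a deterministic per-case assignment over the populated Wave 1 cells might look like the sketch below; the arm labels combine the factor codes above (under the reading that only matched, non-"mixed" cells are populated), and the equal weights are an assumption for illustration, not the registered design.

```python
import random

# Hypothetical arm labels: matched SMS/consent cells from the Wave 1
# factor codes (the registry does not publish per-cell probabilities).
ARMS = ["S0-G", "S0-R", "S0-P", "SG-G", "SR-R"]

def assign_case(phone_number: str, seed: int = 0) -> str:
    """Assign one case (a phone number) to a treatment arm.

    Seeding a per-case RNG with the phone number makes the draw
    reproducible: the same case always gets the same arm.
    """
    rng = random.Random(f"{phone_number}-{seed}")
    return rng.choice(ARMS)
```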
Experimental Design Details
Not available
Randomization Method
Randomization programmed in SurveyCTO.
Randomization Unit
Individual cases. Each case is a phone number for an individual.
Was the treatment clustered?
Experiment Characteristics
Sample size: planned number of clusters
The survey is RDD, so the number of working numbers cannot be known ahead of time with certainty. We estimate approximately 50,000 cases across countries/surveys (countries are strata, not clusters).
Sample size: planned number of observations
50,000 cases
Sample size (or number of clusters) by treatment arms
Varies by country/survey, but approximately 12,000 cases in each treatment arm for the Wave 1 experiment.
The Wave 2 experiment will be conducted with follow-ups from the RDD survey. This sample will be substantially smaller: approximately 6,000 total cases spread over 10 treatment arms, with unequal cell sizes ranging from 8.3% (~500 cases) to 16.7% (~1,000 cases).
Minimum detectable effect size for main outcomes (accounting for sample design and clustering)
Binary outcome (response rate): approximately 2.7 percentage points with a Dunn-Bonferroni adjustment, assuming 4 tests for the Wave 1 experiment. For the Wave 2 experiment, we estimate that the minimum detectable effect size for the contrast with the smallest sample will be 0.104, assuming a binary outcome with a control-group mean of 0.50 and using a Dunn-Bonferroni correction for multiple hypothesis tests.
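The underlying power calculation can be sketched as follows. The function assumes a two-sided test at alpha = 0.05, 80% power, and equal cell sizes; the registry does not state all of these parameters explicitly, so treat this as an illustration of the Bonferroni-adjusted MDE formula rather than a reproduction of the registered numbers.

```python
from statistics import NormalDist

def mde_binary(n1: int, n2: int, p0: float,
               alpha: float = 0.05, n_tests: int = 1,
               power: float = 0.80) -> float:
    """Minimum detectable effect for a two-sided test comparing a binary
    outcome across two groups, with a Dunn-Bonferroni correction that
    divides alpha by the number of tests."""
    z = NormalDist()
    z_alpha = z.inv_cdf(1 - (alpha / n_tests) / 2)  # adjusted critical value
    z_power = z.inv_cdf(power)
    se = (p0 * (1 - p0) * (1 / n1 + 1 / n2)) ** 0.5  # SE of a difference in proportions
    return (z_alpha + z_power) * se

# With ~500 cases per cell and p0 = 0.50, the MDE is on the order of 0.10,
# broadly consistent with the 0.104 figure above.
mde_binary(500, 500, 0.5, n_tests=4)
```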
IRB Name
IRB Approval Date
IRB Approval Number
N/A. The board chair approved blanket language to be incorporated into several separate IRB submissions, covering methods experiments that imposed no new burden on respondents.
Analysis Plan

There are documents in this trial unavailable to the public.