Messaging to Improve Phone Survey Response Rates
Initial registration date
July 08, 2020
Last updated: July 13, 2020 3:51 PM EDT
This section is unavailable to the public.
Innovations for Poverty Action
Other Primary Investigator(s)
Innovations for Poverty Action
Additional Trial Information
A substantial literature on survey response rates focuses on framing and on appeals to altruism as a motivation for participating, using methods such as pre-survey postcards and letters to encourage cooperation; the evidence comes primarily from the U.S. and Europe. More recently, there has been interest in mobile phone surveys in low- and middle-income countries, where the efficacy of methods for improving response rates is less well established. This study randomizes the use of pre-survey text messages: whether to send them and which type of appeal to make. The study also randomizes the messaging used in the consent script, appealing alternately to a "researcher" or the "government" as the motivating authority. The experiment is conducted in random-digit-dial (RDD) surveys in up to 12 countries in Latin America, Africa, and Asia.
The experiment varies two factors:
Factor 1 is an SMS text message sent to the respondent prior to the CATI interview call, with three possible levels:
S0 = no SMS
SG = SMS, appeal to "government"
SR = SMS, appeal to "researcher"
Factor 2 is the appeal in the consent script, with three levels:
G = consent appeals to "government"
R = consent appeals to "researcher"
P = consent appeals to "policymaker"
In Colombia and Mexico, the design was a 2x2 (no cases assigned to S0 or P).
In other countries it was a 3x1 (S, G, or P).
"Mixed message" cells (SG-R, SG-P, SR-G, SR-P, etc.) are not populated.
In Spanish-speaking countries, "policymaker" appeals are omitted because the terminology is difficult to distinguish from "government."
Intervention Start Date
Intervention End Date
Primary Outcomes (end points)
Survey consents and completions
Primary Outcomes (explanation)
Consent and survey completion are binary outcomes and are straightforward to code.
Secondary Outcomes (end points)
Contact rates (for SMS arms)
Response distributions for key survey items: mask-wearing behavior and hand-washing behavior
Secondary Outcomes (explanation)
Respondent attention is measured by interviewer-coded items that are not read aloud: interviewers rate the respondent's attention at two points during the survey on a four-point scale ("Very attentive", "Somewhat attentive", "Somewhat distracted", "Very distracted"). Interviewer training materials spell out how to define and operationalize these concepts. In the analysis stage these responses are collapsed into a binary attentive/distracted measure.
Randomization is built into the SurveyCTO case management system. Respondents are randomized into treatment arms and are then sent either an SMS with the assigned messaging or no SMS; if they are contacted, they are read a consent script containing one of the randomly assigned appeals. The study uses paradata collected as part of the survey as well as survey responses.
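Case-level randomization of the kind described above can be sketched as follows. This is an illustrative reconstruction, not the study's actual SurveyCTO implementation: the arm labels below are hypothetical examples drawn from the factor levels, and the hash-based assignment is just one common way to make case-level randomization deterministic and reproducible.

```python
import hashlib

# Hypothetical arm labels built from the factor levels above; the
# actual set of populated cells varies by country, per the design notes.
ARMS = ["S0-G", "S0-R", "SG-G", "SR-R"]

def assign_arm(phone_number: str, seed: str = "trial-seed") -> str:
    """Deterministically assign a case (a phone number) to a treatment arm.

    Hashing the phone number with a fixed seed makes the assignment
    reproducible across re-runs and independent of case ordering.
    """
    digest = hashlib.sha256((seed + phone_number).encode()).hexdigest()
    return ARMS[int(digest, 16) % len(ARMS)]
```

Because the assignment depends only on the phone number and seed, a case keeps the same arm even if the case list is re-imported or re-ordered.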
Experimental Design Details
Randomization programmed in SurveyCTO.
Individual cases. Each case is a phone number for an individual.
Was the treatment clustered?
Sample size: planned number of clusters
The survey is RDD, so the number of working numbers cannot be known with certainty ahead of time. We estimate roughly 50,000 cases across countries/surveys (countries are strata, not clusters).
Sample size: planned number of observations
Sample size (or number of clusters) by treatment arms
Varies by country/survey, but approximately 12,000 cases in each treatment arm.
Minimum detectable effect size for main outcomes (accounting for sample design and clustering)
Binary outcome (response rate): approximately 2.7 percentage points with a Dunn-Bonferroni adjustment, assuming 4 tests.
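An MDE of this form can be checked with a standard two-proportion power formula. The sketch below is illustrative only: the baseline rate (0.5, the most conservative choice), 80% power, and roughly 12,000 cases per arm are our assumptions, not necessarily the authors', so the result need not match the registered 2.7 percentage points exactly.

```python
from statistics import NormalDist

def mde_binary(n_per_arm, p0=0.5, alpha=0.05, power=0.80, n_tests=1):
    """Minimum detectable effect (in proportion points) for a two-arm
    comparison of a binary outcome, with a Bonferroni-style correction
    that divides alpha across n_tests. Uses the conservative variance
    p0 * (1 - p0) in both arms."""
    z = NormalDist()
    alpha_adj = alpha / n_tests              # Dunn-Bonferroni adjustment
    z_alpha = z.inv_cdf(1 - alpha_adj / 2)   # two-sided critical value
    z_beta = z.inv_cdf(power)
    se = (2 * p0 * (1 - p0) / n_per_arm) ** 0.5
    return (z_alpha + z_beta) * se

# Illustrative: ~12,000 cases per arm, 4 tests, worst-case baseline rate
print(mde_binary(12_000, n_tests=4))
```

Dividing alpha by the number of tests raises the critical value, which widens the MDE relative to a single unadjusted comparison.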
INSTITUTIONAL REVIEW BOARDS (IRBs)
IRB Approval Date
IRB Approval Number
N/A. The board chair approved blanket language to be incorporated into several separate IRB submissions, covering methods experiments that imposed no new burden on respondents.