Messaging to Improve Phone Survey Response Rates
Last registered on July 13, 2020

Pre-Trial

Trial Information
General Information
Title
Messaging to Improve Phone Survey Response Rates
RCT ID
AEARCTR-0006111
Initial registration date
July 08, 2020
Last updated
July 13, 2020 3:51 PM EDT
Location(s)

This section is unavailable to the public.
Primary Investigator
Affiliation
Innovations for Poverty Action
Other Primary Investigator(s)
PI Affiliation
Innovations for Poverty Action
PI Affiliation
Northwestern University
PI Affiliation
Northwestern University
Additional Trial Information
Status
Ongoing
Start date
2020-06-15
End date
2020-09-30
Secondary IDs
Abstract
The substantial literature on survey response rates focuses on framing and appeals to altruism as motivations for participation, using methods such as pre-survey postcards and letters to encourage cooperation; the evidence comes primarily from the U.S. and Europe. More recently, there has been growing interest in mobile phone surveys in low- and middle-income countries, where the efficacy of methods for improving response rates is less well established. This study randomizes the use of pre-survey text messages: whether to send them and which type of appeal to make. The study also randomizes the messaging used in the consent script, appealing alternatively to a "researcher" or "government" as the motivating authority. The experiment is conducted in random-digit-dial (RDD) surveys in up to 12 countries in Latin America, Africa, and Asia.
External Link(s)
Registration Citation
Citation
Dillon, Andrew et al. 2020. "Messaging to Improve Phone Survey Response Rates." AEA RCT Registry. July 13. https://doi.org/10.1257/rct.6111-1.0.
Experimental Details
Interventions
Intervention(s)
The experiment varies two factors:

Factor 1 is an SMS text message sent to the respondent prior to the CATI (computer-assisted telephone interviewing) call, with three possible levels:
S0 = No SMS
SG = SMS, appeal to "government"
SR = SMS, appeal to "researcher"


Factor 2 is the appeal in the consent script, with three levels:
G = consent appeals to "government"
R = consent appeals to "researcher"
P = consent appeals to "policymaker"

In Colombia and Mexico, the design is a 2x2 (no cases assigned to S0 or P).
In other countries it is a 3x1 (S, G, or P).
"Mixed message" cells (SG-R, SG-P, SR-G, SR-P, etc.) are not populated: an SMS appeal, when sent, matches the consent appeal (see the sketch below).
In Spanish-speaking countries, "policymaker" appeals are omitted because the terminology is hard to distinguish from "government."
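
A minimal sketch of the resulting cell structure, in Python (the enumeration, names, and filtering logic are our illustration, not the study's code; the country-specific exclusions described above are noted in comments but not applied):

    from itertools import product

    # Treatment levels as described in the registration.
    sms_levels = {"S0": None, "SG": "government", "SR": "researcher"}
    consent_levels = {"G": "government", "R": "researcher", "P": "policymaker"}

    # "Mixed message" cells are not populated: if an SMS appeal is sent,
    # it must match the consent-script appeal.
    cells = [
        (sms, consent)
        for (sms, sms_appeal), (consent, consent_appeal)
        in product(sms_levels.items(), consent_levels.items())
        if sms_appeal is None or sms_appeal == consent_appeal
    ]
    print(cells)
    # [('S0', 'G'), ('S0', 'R'), ('S0', 'P'), ('SG', 'G'), ('SR', 'R')]
    # Per the notes above, P cells are dropped in Spanish-speaking countries,
    # and S0 cells are additionally dropped in Colombia and Mexico.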
Intervention Start Date
2020-06-15
Intervention End Date
2020-09-30
Primary Outcomes
Primary Outcomes (end points)
Survey consents and completions
Primary Outcomes (explanation)
Consent and survey completion are binary outcomes that are straightforward to code.
Secondary Outcomes
Secondary Outcomes (end points)
Contact rates (for SMS arms)
Breakoffs
Respondent attention
Response distributions for key survey items: mask-wearing behavior and hand-washing behavior
Secondary Outcomes (explanation)
Respondent attention is measured by interviewer-coded items that are not read aloud, which ask the interviewer to rate respondent attention at two points during the survey on a four-point scale: "Very attentive", "Somewhat attentive", "Somewhat distracted", and "Very distracted". Interviewer training materials spell out the details to help interviewers define and operationalize these concepts. In the analysis stage, these responses are collapsed into a binary attentive/distracted measure (see the sketch below).
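
A minimal sketch of that collapse, in Python (the category labels come from the registration; the assumption that both "attentive" labels map to the attentive side of the binary measure is ours):

    # Interviewer-coded four-point scale, collapsed to binary.
    ATTENTIVE = {"Very attentive", "Somewhat attentive"}

    def is_attentive(rating: str) -> bool:
        """True if the interviewer rated the respondent as attentive."""
        return rating in ATTENTIVE

    assert is_attentive("Somewhat attentive")
    assert not is_attentive("Very distracted")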
Experimental Design
Experimental Design
Randomization is built into the SurveyCTO case management system. Respondents are randomized into treatment arms and are then sent either an SMS with the assigned messaging or no SMS; if they are subsequently contacted, they are read a consent script containing one of the randomly assigned appeals. The study uses paradata collected as part of the survey as well as survey responses.
Experimental Design Details
Not available
Randomization Method
Randomization programmed in SurveyCTO.
Randomization Unit
Individual cases. Each case is a phone number for an individual.
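
The study implements this inside SurveyCTO's case management system; as a rough stand-in, case-level assignment could look like the following Python sketch (the equal-probability draw and the cell list are assumptions for illustration, not the study's code):

    import random

    # Feasible cells from the intervention description above.
    CELLS = ["S0-G", "S0-R", "S0-P", "SG-G", "SR-R"]

    def assign_cases(phone_numbers, seed=6111):
        """Independently assign each phone number (one case) to a cell."""
        rng = random.Random(seed)  # fixed seed so the sketch is reproducible
        return {number: rng.choice(CELLS) for number in phone_numbers}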
Was the treatment clustered?
No
Experiment Characteristics
Sample size: planned number of clusters
The survey is RDD, and the number of working numbers cannot be known ahead of time with certainty. We estimate about 50,000 cases across countries/surveys (countries are strata, not clusters).
Sample size: planned number of observations
50,000 cases
Sample size (or number of clusters) by treatment arms
Varies by country/survey, but approximately 12,000 cases in each treatment arm
Minimum detectable effect size for main outcomes (accounting for sample design and clustering)
Binary outcome (response rate): approximately 2.7 percentage points with a Dunn-Bonferroni adjustment, assuming four tests
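
As a back-of-the-envelope check, the MDE for a two-sided two-proportion comparison can be computed as follows in Python (the 80% power target and the 50% baseline rate are our assumptions, not from the registration, so the result differs somewhat from the registered 2.7 points):

    from statistics import NormalDist

    alpha = 0.05 / 4   # Dunn-Bonferroni adjustment across 4 tests (two-sided)
    power = 0.80       # assumed target power
    p0 = 0.5           # assumed baseline rate (the most conservative value)
    n = 12_000         # approximate cases per arm, from above

    z_alpha = NormalDist().inv_cdf(1 - alpha / 2)
    z_power = NormalDist().inv_cdf(power)
    mde = (z_alpha + z_power) * (2 * p0 * (1 - p0) / n) ** 0.5
    print(f"MDE ~ {mde:.4f}")  # ~ 0.0216, i.e. about 2.2 percentage points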
IRB
INSTITUTIONAL REVIEW BOARDS (IRBs)
IRB Name
IPA IRB
IRB Approval Date
2020-05-05
IRB Approval Number
N/A. The board chair approved blanket language to be incorporated into several separate IRB submissions, covering methods experiments that impose no new burden on respondents.
Analysis Plan

There are documents in this trial unavailable to the public.