Optimizing Protocols for Random Digit Dial Surveys
Initial registration date
December 29, 2020
January 04, 2021 9:10 AM EST
Innovations for Poverty Action
Other Primary Investigator(s)
Additional Trial Information
In empirical development economics, it is customary to collect data via face-to-face household surveys. However, the COVID-19 pandemic and other factors have encouraged a shift to phone surveys for this work, yet the research literature on mobile phone surveys in low- and middle-income countries is thin. Much of that literature is also out of date, given rapid growth in mobile phone penetration, even among poor households, as digital financial tools are increasingly used for social protection interventions.
In this study, IPA takes advantage of phone surveys launched in nine countries to embed experiments on the best times and days to initiate phone surveys and the optimal number of attempts for high response rates and data quality. We use random digit dial surveys, which make it straightforward to randomly assign cases (initial attempts) to time of day and day of week. By definition, most other variables in call protocols are also randomized because cases (phone numbers to attempt) are dialed in random order.
Vary time of day, day of week, and maximum number of attempts
Time of day is recorded in precise units with a timestamp, but for the analysis we group calls into morning (before 12:00 PM), mid-day (12:00 PM to 3:59 PM), and evening (4:00 PM to 10:59 PM)
For day of week, each day is extracted from the timestamp data, but Saturday and Sunday are combined into a single weekend indicator
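The grouping described above can be sketched as follows; this is an illustrative helper, not the study's actual analysis code, and the bin boundaries are taken from the text (morning before 12:00 PM, mid-day until 3:59 PM, evening from 4:00 PM):

```python
from datetime import datetime

def time_of_day_bin(ts: datetime) -> str:
    """Group a call timestamp into the three analysis bins."""
    if ts.hour < 12:
        return "morning"      # before 12:00 PM
    elif ts.hour < 16:
        return "mid-day"      # 12:00 PM to 3:59 PM
    else:
        return "evening"      # 4:00 PM onward

def day_of_week_bin(ts: datetime) -> str:
    """Extract day of week, collapsing Saturday and Sunday into 'weekend'."""
    day = ts.strftime("%A")
    return "weekend" if day in ("Saturday", "Sunday") else day

# Example: a call attempt on Saturday, December 5, 2020 at 9:30 AM
ts = datetime(2020, 12, 5, 9, 30)
print(time_of_day_bin(ts), day_of_week_bin(ts))  # morning weekend
```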
Intervention Start Date
Intervention End Date
Primary Outcomes (end points)
Pickup -- did the respondent answer the phone?
Survey completion -- did the respondent complete the interview?
Sample composition (demographic characteristics of respondents)
Primary Outcomes (explanation)
Demographic composition is conditional on having completed the survey.
Age is continuous (years)
Education is collapsed into an indicator for secondary education completed or not.
Gender is self-reported.
Secondary Outcomes (end points)
Secondary Outcomes (explanation)
Units of analysis are individual cases. A case is a valid phone number (registered SIM card) that could result in a completed phone interview.
Interviewers are assigned cases at random from a list provided by the mobile network operator or a third-party sample provider such as Sample Solutions, which pre-pulses the numbers to verify that they are working, active numbers.
The survey dialer randomly assigns cases over a work week (5, 6, or 7 days), and paradata are recorded for each call attempt. These include time of day, day of week, interviewer ID, and attempt number. We will analyze only the first attempt for the time-of-day and day-of-week analyses, because all subsequent attempts are influenced by call center staff behavior and depend on respondent behavior (e.g., a second attempt occurs only if someone ignored the call or refused to cooperate on the first attempt).
Analysis is straightforward comparison of pickup and completion rates by day and time.
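The comparison described above can be sketched as a simple group-by of first-attempt outcomes; the record layout and field ordering here are hypothetical, for illustration only:

```python
from collections import defaultdict

# Hypothetical first-attempt records: (time_of_day, day_bin, picked_up, completed)
attempts = [
    ("morning", "Monday",  True,  True),
    ("morning", "Monday",  False, False),
    ("evening", "weekend", True,  False),
    ("evening", "weekend", True,  True),
]

def rates_by(key_index: int, records) -> dict:
    """Pickup and completion rates for each level of the grouping variable."""
    counts = defaultdict(lambda: [0, 0, 0])  # [attempts, pickups, completions]
    for rec in records:
        c = counts[rec[key_index]]
        c[0] += 1
        c[1] += rec[2]
        c[2] += rec[3]
    return {k: {"pickup_rate": p / n, "completion_rate": s / n}
            for k, (n, p, s) in counts.items()}

print(rates_by(0, attempts))  # rates by time of day
print(rates_by(1, attempts))  # rates by day of week
```

In practice the same comparison would be run as a regression of the outcome on bin indicators, but the estimand is just these group means.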
Experimental Design Details
Randomization is done with software, by the dialer program within SurveyCTO.
Units are "cases", where each case is a pre-pulsed phone number.
Was the treatment clustered?
Sample size: planned number of clusters
Sample size: planned number of observations
Conditional analyses will have smaller sample sizes.
Analysis of completions conditional on pickup, N = 27,945
Analysis of respondent demographics is based on respondents who completed the survey, N = 7,953
Sample size (or number of clusters) by treatment arms
Time of day: Morning 16,230; Mid-day 27,885; Evening 18,548
Day of week: Ranges from 8,200 on weekends to 10,970 on Mondays, with around 10,000 on each other weekday
Minimum detectable effect size for main outcomes (accounting for sample design and clustering)
INSTITUTIONAL REVIEW BOARDS (IRBs)
IRB Approval Date
IRB Approval Number
N/A. The board chair approved blanket language to be incorporated into several separate IRB submissions, covering methods experiments that imposed no new burden on respondents.