The Effects of Phone-Based Surveys on Measurement Quality: When and Why Does Modality Matter?

Last registered on June 28, 2023

Pre-Trial

Trial Information

General Information

Title
The Effects of Phone-Based Surveys on Measurement Quality: When and Why Does Modality Matter?
RCT ID
AEARCTR-0010053
Initial registration date
September 19, 2022

First published
September 19, 2022, 3:08 PM EDT

Last updated
June 28, 2023, 10:22 AM EDT

Locations

Region

Primary Investigator

Affiliation
University of Rochester

Other Primary Investigator(s)

PI Affiliation
University of Rochester
PI Affiliation
University of Rochester
PI Affiliation
Center for Global Development

Additional Trial Information

Status
Ongoing
Start date
2019-11-22
End date
2023-12-31
Secondary IDs
Prior work
This trial is based on or builds upon one or more prior RCTs.
Abstract
Applied research relies heavily on data quality. As more researchers collect their own data through surveys, ensuring the quality of survey data has become a central concern. In this project, we study the impact of survey modality on data quality. Specifically, we introduce three treatments in a field experiment with 900 micro-entrepreneurs in Uganda: 1) whether the survey is conducted in person or over the phone, 2) whether enumerator-respondent pairings are fixed across survey rounds, and 3) whether a trust-building activity is included prior to the survey. We cross-cut the three treatments, which allows us to examine how they interact. We will assess how measurement varies across 1) simple, objective questions, 2) complex, objective questions, 3) subjective, sensitive questions, and 4) subjective, non-sensitive questions. We also measure and bound experimenter demand effects (EDEs) by telling respondents the results we expect in a donation allocation task, and we study how EDEs vary with each of our treatments.
External Link(s)

Registration Citation

Citation
Baseler, Travis et al. 2023. "The Effects of Phone-Based Surveys on Measurement Quality: When and Why Does Modality Matter?" AEA RCT Registry. June 28. https://doi.org/10.1257/rct.10053-3.0
Experimental Details

Interventions

Intervention(s)
Intervention (Hidden)
Intervention Start Date
2022-11-01
Intervention End Date
2022-12-15

Primary Outcomes

Primary Outcomes (end points)
We are interested in assessing measurement differences in 1) simple, objective questions, such as basic household and business information, 2) complex, objective questions, such as business profit, 3) subjective, sensitive questions, such as social and political attitudes toward controversial topics, and 4) subjective, non-sensitive questions, such as subjective well-being. We are also interested in how experimenter demand effects, defined as the impact of priming on the donation task, vary with each of our treatments.
Primary Outcomes (explanation)
Please refer to the PAP for details on index construction.
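
The registered PAP governs the actual index construction; purely as a hedged sketch of one common approach (a Kling-Liebman-Katz style z-score index), the snippet below standardizes each component against a benchmark group and averages the z-scores. All variable names and the choice of benchmark group are assumptions, not details taken from the PAP.

import pandas as pd

def z_score_index(df, components, benchmark):
    """Average of component z-scores, standardized on a benchmark group.

    components: list of column names; benchmark: boolean Series marking
    the group used for standardization (e.g., the in-person arm).
    """
    zs = [(df[c] - df.loc[benchmark, c].mean()) / df.loc[benchmark, c].std()
          for c in components]
    return pd.concat(zs, axis=1).mean(axis=1)

# Hypothetical usage for a subjective well-being index:
# df["swb_index"] = z_score_index(df, ["life_satisfaction", "happiness"],
#                                 benchmark=df["phone_survey"].eq(0))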

Secondary Outcomes

Secondary Outcomes (end points)
Secondary Outcomes (explanation)

Experimental Design

Experimental Design
For a survey of 900 micro-entrepreneurs in Uganda, we randomize 1) whether the survey is conducted in person or over the phone, 2) whether enumerator-respondent pairings are fixed across survey rounds, and 3) whether a trust-building activity is included prior to the survey. We cross-cut these three treatments and will study how measurement differs across them.

We also randomize respondents into one of two groups that receive different priming scripts prior to a donation game, in order to measure experimenter demand effects.
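
As an illustration of the assignment mechanism (not the study's actual code), the sketch below implements a balanced, individual-level cross-cut randomization of the three treatments plus the priming arm in Python. The seed and the variable names (phone_survey, familiar_enumerator, trust_building, ede_priming) are hypothetical.

import numpy as np
import pandas as pd
from itertools import product

rng = np.random.default_rng(seed=20221101)  # illustrative seed

n = 900  # planned sample size
# The 2 x 2 x 2 cross-cut of the three binary treatments: 8 arms.
arms = pd.DataFrame(list(product([0, 1], repeat=3)),
                    columns=["phone_survey", "familiar_enumerator",
                             "trust_building"])

# Balanced assignment at the individual level (the registered
# randomization unit): cycle through the 8 arms, then shuffle.
idx = rng.permutation(np.resize(np.arange(len(arms)), n))
df = arms.iloc[idx].reset_index(drop=True)
df.insert(0, "respondent_id", np.arange(1, n + 1))

# Separate cross-randomization of the two priming scripts read
# before the donation game (to measure experimenter demand effects).
df["ede_priming"] = rng.permutation(np.resize([0, 1], n))

# Arm sizes: 900 / 8 yields roughly 112-113 respondents per arm.
print(df.groupby(list(arms.columns)).size())

A balanced (shuffled) assignment rather than independent coin flips keeps cell sizes close to equal across the eight arms.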
Experimental Design Details
Randomization Method
Randomization is done by computer.
Randomization Unit
The unit of randomization is the individual.
Was the treatment clustered?
No

Experiment Characteristics

Sample size: planned number of clusters
900 individuals
Sample size: planned number of observations
900 individuals
Sample size (or number of clusters) by treatment arms
There are three cross-cut binary treatments, yielding 8 treatment arms in total (the trust-building exercise, when present, takes place at the beginning of the survey):

1) In-person survey, familiar enumerator, trust-building exercise
2) In-person survey, new enumerator, trust-building exercise
3) In-person survey, familiar enumerator, no trust-building exercise
4) In-person survey, new enumerator, no trust-building exercise
5) Phone survey, familiar enumerator, trust-building exercise
6) Phone survey, new enumerator, trust-building exercise
7) Phone survey, familiar enumerator, no trust-building exercise
8) Phone survey, new enumerator, no trust-building exercise

Each treatment arm has around 115 individuals.
Minimum detectable effect size for main outcomes (accounting for sample design and clustering)
IRB

Institutional Review Boards (IRBs)

IRB Name
Harvard University-Area Committee on the Use of Human Subjects
IRB Approval Date
2019-12-23
IRB Approval Number
IRB00000109
Analysis Plan

Analysis Plan Documents

Measurement_PAP_Nov22.pdf

MD5: 247565b4116457335e3d8716572c3845

SHA1: e43410de43422c2a9fa421695e4630e953e573cd

Uploaded At: November 17, 2022

Post-Trial

Post Trial Information

Study Withdrawal

Intervention

Is the intervention completed?
No
Data Collection Complete
Data Publication

Data Publication

Is public data available?
No

Program Files

Program Files
Reports, Papers & Other Materials

Relevant Paper(s)

Reports & Other Materials