
Fields Changed

Registration

Abstract
Before: Mobile phone-based surveys are an increasingly popular tool allowing researchers to collect data more frequently and cheaply than traditional, face-to-face surveys. However, phone surveys can potentially introduce measurement error by exacerbating respondent attrition or reporting error. We conduct a field experiment with 900 micro-entrepreneurs in Uganda in which we randomly assign participants to face-to-face or phone surveys. Data collected prior to this experiment will allow us to characterize any attrition bias introduced by differential tracking rates. We will assess how measurement varies for 1) simple, objective questions, such as age and experience, 2) complex, objective questions, such as business profit and labor supply, 3) subjective, sensitive questions, such as social and political attitudes on controversial topics, and 4) subjective, non-sensitive questions, such as subjective well-being. We will test whether modality effects can be mitigated by fixing enumerator-respondent pairings across survey rounds or by including a trust-building activity prior to the survey.
After: Applied research relies heavily on data quality. As more and more researchers collect their own data through surveys, ensuring the quality of survey data has become a central concern. In this project, we plan to study the impact of survey modality on data quality. Specifically, we introduce three treatments in a field experiment with 900 micro-entrepreneurs in Uganda: 1) whether the survey is conducted in person or over the phone, 2) whether enumerator-respondent pairings are fixed across survey rounds, and 3) whether a trust-building activity is included prior to the survey. We cross-cut the three treatments, which allows us to study how they interact. We will assess how measurement varies for 1) simple, objective questions, 2) complex, objective questions, 3) subjective, sensitive questions, and 4) subjective, non-sensitive questions. We also measure and bound experimenter demand effects (EDE) by telling respondents the results we expect in a donation allocation task, and we plan to study how EDE vary with each of our treatments.
JEL Code(s): C42, C81
Last Published
Before: September 19, 2022 03:08 PM
After: November 17, 2022 09:32 PM
Intervention Start Date
Before: September 24, 2022
After: November 01, 2022
Intervention End Date
Before: November 25, 2022
After: December 15, 2022
Primary Outcomes (End Points)
Before: We are primarily interested in assessing measurement differences in profit, revenue, expenses, and social and political attitudes.
After: We are interested in assessing measurement differences in 1) simple, objective questions, such as basic household and business information, 2) complex, objective questions, such as business profit, 3) subjective, sensitive questions, such as social and political attitudes toward controversial topics, and 4) subjective, non-sensitive questions, such as subjective well-being. We are also interested in how experimenter demand effects, defined as the impact of priming on the donation task, vary with each of our treatments.
Primary Outcomes (Explanation): Please refer to the PAP for details on index construction.
Experimental Design (Public)
Before: We will randomize the modality of a survey conducted with 900 Ugandan micro-entrepreneurs. These entrepreneurs were enrolled in a separate study on social and political views about refugees, giving us rich panel information on their business characteristics and social and political attitudes. In addition to the modality, we will cross-cut two mitigation strategies that can easily be implemented by other researchers. We will randomize whether the survey is conducted by the same enumerator who conducted the previous survey.
After: For a survey conducted with 900 micro-entrepreneurs in Uganda, we randomize 1) whether the survey is conducted in person or over the phone, 2) whether enumerator-respondent pairings are fixed across survey rounds, and 3) whether a trust-building activity is included prior to the survey. We cross-cut these three treatments and study how measurement differs across them. We also randomize respondents into one of two groups that receive different priming scripts prior to a donation game, in order to measure experimenter demand effects.
Randomization Method
Before: Randomization will be done by computer.
After: Randomization is done by computer (an illustrative sketch of one possible cross-cut assignment appears after this field list).
Randomization Unit
Before: We are randomizing individuals to be assigned to either a phone or an in-person survey.
After: The unit of randomization is the individual.
Sample size (or number of clusters) by treatment arms
Before: 450 control individuals and 450 treatment individuals
After: There are three treatments, and we cross-cut them, so there are eight treatment arms in total: 1) in-person survey, familiar enumerator, trust-building exercise at the beginning of the survey; 2) in-person survey, new enumerator, trust-building exercise; 3) in-person survey, familiar enumerator, no trust-building exercise; 4) in-person survey, new enumerator, no trust-building exercise; 5) phone survey, familiar enumerator, trust-building exercise; 6) phone survey, new enumerator, trust-building exercise; 7) phone survey, familiar enumerator, no trust-building exercise; 8) phone survey, new enumerator, no trust-building exercise. Each treatment arm has around 125 individuals.
Additional Keyword(s): survey design, enumerator effects, experimenter demand effects
Public analysis plan
Before: No
After: Yes
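The registration states only that randomization is done by computer and that the three binary treatments are cross-cut into eight arms. The snippet below is a minimal sketch, in Python, of one way such a 2 × 2 × 2 cross-cut assignment could be generated; the respondent IDs, seed, and arm labels are illustrative assumptions and are not taken from the study's actual randomization code.

```python
import random
from collections import Counter

# Illustrative sketch only: a simple 2 x 2 x 2 cross-cut assignment.
# Respondent IDs, the seed, and arm labels are assumptions made for illustration.
random.seed(20221101)

respondents = [f"resp_{i:03d}" for i in range(1, 901)]  # roughly 900 micro-entrepreneurs
random.shuffle(respondents)

modalities = ["in-person", "phone"]
enumerators = ["familiar", "new"]
trust_building = ["trust-building", "no trust-building"]

# The eight cross-cut arms are all combinations of the three binary treatments.
arms = [(m, e, t) for m in modalities for e in enumerators for t in trust_building]

# Deal the shuffled respondents out across the arms in roughly equal numbers.
assignment = {r: arms[i % len(arms)] for i, r in enumerate(respondents)}

print(Counter(assignment.values()))  # respondents per arm
```

In practice the study team may stratify on baseline characteristics or implement the draw in other software; this sketch only illustrates the cross-cut structure described in the fields above.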

Analysis Plans

Document: Measurement_PAP_Nov22.pdf
MD5: 247565b4116457335e3d8716572c3845
SHA1: e43410de43422c2a9fa421695e4630e953e573cd
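Assuming the file Measurement_PAP_Nov22.pdf has been downloaded locally (the path below is an assumption), the published checksums can be verified with a short Python script:

```python
import hashlib

# Illustrative check of the published checksums; the local file path is an assumption.
path = "Measurement_PAP_Nov22.pdf"

with open(path, "rb") as f:
    data = f.read()

print("MD5: ", hashlib.md5(data).hexdigest())   # expected: 247565b4116457335e3d8716572c3845
print("SHA1:", hashlib.sha1(data).hexdigest())  # expected: e43410de43422c2a9fa421695e4630e953e573cd
```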