What Drives Bias in Responses to Survey Questions about Sensitive Traits and Behaviors?

Last registered on March 20, 2017

Pre-Trial

Trial Information

General Information

Title
What Drives Bias in Responses to Survey Questions about Sensitive Traits and Behaviors?
RCT ID
AEARCTR-0002109
Initial registration date
March 17, 2017


First published
March 20, 2017, 11:24 AM EDT


Locations

Region

Primary Investigator

Affiliation
ITAM

Other Primary Investigator(s)

Additional Trial Information

Status
In development
Start date
2017-04-03
End date
2017-05-31
Secondary IDs
Abstract
This project tests empirical implications of a theory about the factors driving bias in responses to sensitive questions in surveys.
External Link(s)

Registration Citation

Citation
Simpser, Alberto. 2017. "What Drives Bias in Responses to Survey Questions about Sensitive Traits and Behaviors?." AEA RCT Registry. March 20. https://doi.org/10.1257/rct.2109-1.0
Former Citation
Simpser, Alberto. 2017. "What Drives Bias in Responses to Survey Questions about Sensitive Traits and Behaviors?." AEA RCT Registry. March 20. https://www.socialscienceregistry.org/trials/2109/history/15207
Sponsors & Partners

There is information in this trial unavailable to the public.
Experimental Details

Interventions

Intervention(s)
The randomized interventions are different question structures (a direct question, the randomized response technique, and a list experiment) applied in two substantive domains (voting and the reporting of a coin-toss outcome).
Intervention Start Date
2017-04-03
Intervention End Date
2017-05-31

Primary Outcomes

Primary Outcomes (end points)
In Simpser (2017), I propose a model where the probability of truthfully answering a question about having engaged in a sensitive behavior (or exhibiting a sensitive trait) is a function of the following (an illustrative sketch of how these elements might combine appears after the list of measures below):
• A shame aversion parameter
• A lying aversion parameter
• A Bayesian probability calculation based on second-order beliefs about the interviewer’s priors (about whether the respondent engaged in the sensitive behavior)

I operationalize these concepts through survey questions about the following:

A. Subjective sense of privacy after being asked a question on the sensitive item (variable reveal).
B. Shame aversion (through a battery of survey questions inspired by or drawn from SHArQ) (variables shame1, shame2, shame3, [shame4])
C. Degree to which lack of privacy would translate into shame (via the variable shame, only measured conditional on reveal).
D. Lying aversion (through a battery of questions drawn from the psychology literature) (variables lie1, [lie2], lie3, lie4, lie5)
E. Respondent second-order beliefs about the interviewer’s priors (variable expect)
F. Respondent second-order beliefs about how much the interviewer cares that the respondent might have engaged in the sensitive behavior (variable reaction)
G. The same second-order beliefs as in F, proxied by a single agree/disagree question on the normative desirability of the sensitive behavior included in the relevant battery (shame4 for voting, lie2 for the coin flip).
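
The registration does not reproduce the functional form of the model, so the following is only a minimal notational sketch of one way the three elements listed above could enter a truthful-reporting decision. The symbols lambda (lying aversion), sigma (shame aversion), and mu (the second-order belief about the interviewer's prior) are illustrative labels, not notation from Simpser (2017).

% Illustrative sketch only; the notation is assumed, not taken from Simpser (2017).
% A respondent who engaged in the sensitive behavior answers truthfully when the
% cost of lying outweighs the shame cost of a revealing admission:
\[
  \Pr(\text{truthful answer}) \;=\; \Pr\bigl[\, \lambda \;\ge\; \sigma \,(1 - \mu) \,\bigr],
\]
% where \(\lambda\) is lying aversion, \(\sigma\) is shame aversion, and \(\mu\) is the
% respondent's second-order belief about the interviewer's prior that the respondent
% engaged in the behavior, so \(1 - \mu\) scales how much a truthful "yes" would
% shift the interviewer's belief.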
Primary Outcomes (explanation)

Secondary Outcomes

Secondary Outcomes (end points)
Secondary Outcomes (explanation)

Experimental Design

Experimental Design
The treatments are assigned randomly to individuals according to a pre-specified probability distribution. The estimands are the statistical associations between the variables representing model parameters, on the one hand, and treatment effect signs and magnitudes, on the other.
Experimental Design Details
Question structure is randomly and independently assigned to each respondent for each question topic. Respondents who receive the direct question do not subsequently receive any additional questions on that topic. However, all respondents assigned to the randomized response technique (RRT) or list experiment (LE) questions subsequently receive the direct question on the same topic.

Everyone is asked at least one question about voting. In the coin-toss section, only those who report having obtained three heads in three tosses are asked a question about it; all others are not exposed to the coin-toss questions.
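
As an illustration of this branching, the sketch below assigns a question structure per topic and applies the follow-up and screening rules just described. All function and variable names are hypothetical (the registration does not specify implementation details), and the arm probabilities are those listed under "Sample size (or number of clusters) by treatment arms" below.

import random

# Hypothetical sketch of the per-respondent survey flow described above.
# Arm labels and probabilities follow the registration; all names are illustrative.
ARMS = ["direct", "rrt", "list_no_sensitive", "list_sensitive"]
PROBS = [0.15, 0.31, 0.22, 0.32]  # direct, RRT, LE without item, LE with item

def assign_structure(rng: random.Random) -> str:
    """Draw a question structure for one topic, independently of other topics."""
    return rng.choices(ARMS, weights=PROBS, k=1)[0]

def survey_flow(rng: random.Random, got_three_heads: bool) -> dict:
    """Return the question modules a single respondent sees, per the design details."""
    modules = {}

    # Voting topic: everyone gets at least one voting question.
    voting_arm = assign_structure(rng)
    modules["voting"] = [voting_arm]
    if voting_arm != "direct":
        # RRT and list-experiment arms also receive the direct question afterwards.
        modules["voting"].append("direct_followup")

    # Coin-toss topic: only respondents reporting 3 heads in 3 tosses are asked.
    if got_three_heads:
        coin_arm = assign_structure(rng)
        modules["coin_toss"] = [coin_arm]
        if coin_arm != "direct":
            modules["coin_toss"].append("direct_followup")

    return modules

# Example: one simulated respondent who reported three heads in three tosses.
print(survey_flow(random.Random(0), got_three_heads=True))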
Randomization Method
Randomization of treatments to individuals is done by a computer program.
Randomization Unit
The unit is the individual respondent.
Was the treatment clustered?
No

Experiment Characteristics

Sample size: planned number of clusters
The randomization is not clustered.
Sample size: planned number of observations
1000 individual respondents.
Sample size (or number of clusters) by treatment arms
The randomization procedure defines the probability that any unit will receive each of the treatments.
15% will receive the direct question, 31% the RRT, 22% the list experiment without the sensitive item, and 32% the list experiment with the sensitive item. These percentages are applied separately to each of the two substantive categories of sensitive questions used. That is, each category of question is randomized independently. This procedure is followed until the planned number of observations is reached.
Minimum detectable effect size for main outcomes (accounting for sample design and clustering)
Supporting Documents and Materials

There is information in this trial unavailable to the public.
IRB

Institutional Review Boards (IRBs)

IRB Name
ITAM Institutional Review Board
IRB Approval Date
2017-03-07
IRB Approval Number
No approval number; Dr. Andrei Gomberg is the Chair of the IRB, ITAM-CIE.
Analysis Plan

There is information in this trial unavailable to the public.

Post-Trial

Post Trial Information

Study Withdrawal

There is information in this trial unavailable to the public.

Intervention

Is the intervention completed?
No
Data Collection Complete
Data Publication

Data Publication

Is public data available?
No

Program Files

Program Files
Reports, Papers & Other Materials

Relevant Paper(s)

Reports & Other Materials