IPV Measurement: Application and Validation of the List Experiment

Last registered on November 15, 2021

Pre-Trial

Trial Information

General Information

Title
IPV Measurement: Application and Validation of the List Experiment
RCT ID
AEARCTR-0008558
Initial registration date
November 12, 2021
Last updated
November 15, 2021, 11:49 AM EST

Locations

Region

Primary Investigator

Affiliation
Busara Center for Behavioral Economics

Other Primary Investigator(s)

PI Affiliation
University of Chicago Booth School of Business
PI Affiliation
Busara Center for Behavioral Economics

Additional Trial Information

Status
In development
Start date
2022-01-15
End date
2022-03-15
Secondary IDs
Prior work
This trial does not extend or rely on any prior RCTs.
Abstract
Obtaining accurate information on behaviors and attitudes considered sensitive has long posed a challenge for social science researchers. Directly asking sensitive questions can lead to misreporting or nonresponse, as respondents may misrepresent their true answers for fear of repercussions, to conform to social norms, or to avoid shame. These inaccurate responses introduce systematic measurement error and bias into estimates of sensitive behaviors and the inferences drawn from them. To mitigate this bias, indirect response (IR) techniques, such as the List Experiment (LE), have been used in surveys to better estimate sensitive behavior. We will apply the List Experiment to measure and characterize inconsistencies in the reporting of Intimate Partner Violence (IPV) in a Kenyan sample. We will further test the validity of the technique by applying novel methods in a Global South context, though the approach is generalizable to other settings.
External Link(s)

Registration Citation

Citation
Mughogho, Winnie, Nicholas Owsley and Dhwani Yanganaram. 2021. "IPV Measurement: Application and Validation of the List Experiment." AEA RCT Registry. November 15. https://doi.org/10.1257/rct.8558-1.0
Experimental Details

Interventions

Intervention(s)
Intervention Start Date
2022-01-15
Intervention End Date
2022-03-15

Primary Outcomes

Primary Outcomes (end points)
Inconsistency in IPV reporting, defined as the difference between the estimates obtained from the direct method and the indirect method.
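As a worked illustration of this endpoint, a minimal sketch with assumed, hypothetical estimates (neither value comes from the study):

```python
# Hypothetical numbers illustrating the primary outcome; neither
# value is from the registered study.
direct_estimate = 0.18    # assumed prevalence from direct questioning
indirect_estimate = 0.30  # assumed list-experiment prevalence estimate

# The endpoint: the gap between the indirect and direct estimates.
inconsistency = round(indirect_estimate - direct_estimate, 2)
print(inconsistency)  # 0.12 under these assumed values
```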
Primary Outcomes (explanation)

Secondary Outcomes

Secondary Outcomes (end points)
- Inconsistency in reporting by subgroup
- Inconsistency in reporting across modes
- Validity of measures
Secondary Outcomes (explanation)

Experimental Design

Experimental Design
To obtain prevalence estimates of IPV within our sample, we will apply the Double List Experiment and also collect direct reports. We will randomly assign respondents to two groups, where each group alternates as treatment and control across the two lists.
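The double-list design above can be sketched in a short simulation. All specifics here (true prevalence, number of control items, group sizes) are assumed for illustration only; the registration does not specify list contents:

```python
# Illustrative sketch of the double list experiment (DLE) estimator.
# All numbers (prevalence, item counts, group sizes) are assumptions
# for simulation, not parameters of the registered study.
import random

random.seed(42)
TRUE_PREVALENCE = 0.30  # assumed true rate of the sensitive behavior
N_PER_GROUP = 500

def item_count(sensitive_item_included):
    """Reported count for one list of 4 control items, plus the
    sensitive item when the respondent sees the 'long' list."""
    count = sum(random.random() < 0.5 for _ in range(4))
    if sensitive_item_included and random.random() < TRUE_PREVALENCE:
        count += 1
    return count

# Group 1 sees list A with the sensitive item and list B without;
# Group 2 sees the reverse, so each group serves as treatment for
# one list and control for the other.
g1_A = [item_count(True) for _ in range(N_PER_GROUP)]   # treated on A
g2_A = [item_count(False) for _ in range(N_PER_GROUP)]  # control on A
g2_B = [item_count(True) for _ in range(N_PER_GROUP)]   # treated on B
g1_B = [item_count(False) for _ in range(N_PER_GROUP)]  # control on B

mean = lambda xs: sum(xs) / len(xs)
diff_A = mean(g1_A) - mean(g2_A)  # single-list estimate from list A
diff_B = mean(g2_B) - mean(g1_B)  # single-list estimate from list B

# The DLE prevalence estimate averages the two difference-in-means,
# roughly halving the variance of a single-list estimate.
dle_estimate = (diff_A + diff_B) / 2
print(f"estimated prevalence: {dle_estimate:.3f}")
```

Averaging the two per-list difference-in-means is what distinguishes the double list design from a single list experiment: every respondent contributes to both a treatment and a control mean.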
Experimental Design Details
Not available
Randomization Method
Computer
Randomization Unit
Individual
Was the treatment clustered?
No

Experiment Characteristics

Sample size: planned number of clusters
0
Sample size: planned number of observations
1000 respondents
Sample size (or number of clusters) by treatment arms
500 respondents per mode of data collection, and 250 per arm under each mode
Minimum detectable effect size for main outcomes (accounting for sample design and clustering)
IRB

Institutional Review Boards (IRBs)

IRB Name
IRB Approval Date
IRB Approval Number
Analysis Plan

Analysis Plan Documents