IPV Measurement: Application and Validation of the List Experiment

Last registered on November 15, 2021


Trial Information

General Information

IPV Measurement: Application and Validation of the List Experiment
Initial registration date
November 12, 2021


First published
November 15, 2021, 11:49 AM EST




Primary Investigator

Busara Center for Behavioral Economics

Other Primary Investigator(s)

PI Affiliation
University of Chicago Booth School of Business
PI Affiliation
Busara Center for Behavioral Economics

Additional Trial Information

In development
Start date
End date
Secondary IDs
Prior work
This trial does not extend or rely on any prior RCTs.
Obtaining accurate information on behaviors and attitudes considered sensitive has long posed a challenge for social science researchers. Directly asking sensitive questions can lead to misreporting or nonresponse, as respondents may misrepresent their true answers for fear of repercussions, to conform to social norms, or to avoid shame. These inaccurate responses introduce systematic measurement error and bias into estimates of sensitive behaviors and the inferences drawn from them. To mitigate this bias, indirect response (IR) techniques, such as the List Experiment (LE), have been used in surveys to better identify estimates of sensitive behavior. We will apply the List Experiment to measure and characterize inconsistencies in the reporting of Intimate Partner Violence (IPV) in a Kenyan sample. We will further test the validity of the technique, applying novel methods in a Global South context that are nonetheless generalizable to other settings.
External Link(s)

Registration Citation

Mughogho, Winnie, Nicholas Owsley and Dhwani Yanganaram. 2021. "IPV Measurement: Application and Validation of the List Experiment." AEA RCT Registry. November 15. https://doi.org/10.1257/rct.8558-1.0
Experimental Details


Intervention Start Date
Intervention End Date

Primary Outcomes

Primary Outcomes (end points)
Inconsistency of IPV reporting, defined as the difference between estimates from the direct method and the indirect (list experiment) method.
Primary Outcomes (explanation)

Secondary Outcomes

Secondary Outcomes (end points)
- Inconsistency in reporting by subgroup
- Inconsistency in reporting across modes
- Validity of measures
Secondary Outcomes (explanation)

Experimental Design

Experimental Design
To obtain prevalence levels of IPV within our sample, we will apply the Double List Experiment and also obtain information from direct report. We will randomly assign respondents to two groups, where each group will alternate as treatment and control for the two lists.
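The double list experiment estimator described above can be sketched as follows. This is an illustrative sketch only: the variable names and item counts are hypothetical, not drawn from the registration, and the analysis itself is not specified here.

```python
# Illustrative sketch of the double list experiment (DLE) estimator.
# In a DLE, each respondent answers two lists (A and B); group 1 sees the
# sensitive item appended to list A, group 2 sees it appended to list B,
# so every respondent serves as treatment for one list and control for the other.
# All data below are made up for illustration.

def list_experiment_estimate(treat_counts, control_counts):
    """Difference-in-means estimate of sensitive-item prevalence from one list."""
    mean = lambda xs: sum(xs) / len(xs)
    return mean(treat_counts) - mean(control_counts)

# Hypothetical item counts reported by each group on each list.
group1_listA = [3, 2, 3, 4, 2]  # group 1: list A includes the sensitive item
group2_listA = [2, 2, 3, 3, 2]  # group 2: list A is the control list
group2_listB = [4, 3, 3, 4, 3]  # group 2: list B includes the sensitive item
group1_listB = [3, 3, 2, 3, 3]  # group 1: list B is the control list

est_A = list_experiment_estimate(group1_listA, group2_listA)
est_B = list_experiment_estimate(group2_listB, group1_listB)

# The DLE estimate averages the two single-list estimates, which reduces
# variance relative to a single list experiment of the same sample size.
dle_estimate = (est_A + est_B) / 2
print(round(dle_estimate, 2))  # 0.5
```

The difference between this indirect estimate and the direct-report prevalence would then give the reporting inconsistency named as the primary outcome.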
Experimental Design Details
Randomization Method
Randomization Unit
Was the treatment clustered?

Experiment Characteristics

Sample size: planned number of clusters
Sample size: planned number of observations
1000 respondents
Sample size (or number of clusters) by treatment arms
500 respondents per mode of data collection, and 250 per arm under each mode
Minimum detectable effect size for main outcomes (accounting for sample design and clustering)

Institutional Review Boards (IRBs)

IRB Name
IRB Approval Date
IRB Approval Number
Analysis Plan

Analysis Plan Documents


Post Trial Information

Study Withdrawal



Is the intervention completed?
Data Collection Complete
Data Publication

Data Publication

Is public data available?

Program Files

Program Files
Reports, Papers & Other Materials

Relevant Paper(s)

Reports & Other Materials