Using Response Times in Surveys

Last registered on February 24, 2022

Pre-Trial

Trial Information

General Information

Title
Using Response Times in Surveys
RCT ID
AEARCTR-0009022
Initial registration date
February 24, 2022

First published
February 24, 2022, 1:51 PM EST

Locations

Region

Primary Investigator

Affiliation
University of Melbourne

Other Primary Investigator(s)

PI Affiliation
University of Zurich

Additional Trial Information

Status
In development
Start date
2022-03-03
End date
2022-03-11
Secondary IDs
Prior work
This trial does not extend or rely on any prior RCTs.
Abstract
We develop and empirically verify a method that uses response times to improve the quality of inferences from surveys.
External Link(s)

Registration Citation

Citation
Alós-Ferrer, Carlos and Michele Garagnani. 2022. "Using Response Times in Surveys." AEA RCT Registry. February 24. https://doi.org/10.1257/rct.9022-1.0
Experimental Details

Interventions

Intervention(s)
Participants are recruited online using Prolific, where we will request a UK sample representative in terms of gender, age, and ethnicity. In exchange for a flat payment, participants are asked to give binary answers to several survey questions covering a wide range of topics (see the list of questions below). We will also measure the response time for each choice. We plan to compare how groups of participants differ in their preferences over these topics; the groups are defined based on gender, political attitude, income, and age.
Intervention Start Date
2022-03-03
Intervention End Date
2022-03-11

Primary Outcomes

Primary Outcomes (end points)
Response times and binary choices in multiple survey questions.
Primary Outcomes (explanation)
For each question, we compare the proportion of respondents supporting one of the two options between the two pre-specified groups relevant to that question.
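
As an illustration only (not the registered analysis code), this per-question comparison can be run as a standard two-sample z-test of proportions. The Python sketch below uses statsmodels; all counts are hypothetical placeholders.

    # Sketch of a per-question two-sample test of proportions between two
    # pre-specified groups. All counts are hypothetical placeholders.
    from statsmodels.stats.proportion import proportions_ztest

    support = [230, 150]  # respondents choosing option 1 in group A, group B
    totals = [400, 350]   # respondents answering the question in each group

    z_stat, p_value = proportions_ztest(count=support, nobs=totals)
    print(f"difference in proportions: {support[0]/totals[0] - support[1]/totals[1]:.3f}")
    print(f"z = {z_stat:.3f}, p = {p_value:.4f}")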

Secondary Outcomes

Secondary Outcomes (end points)
Demographic characteristics (e.g., age, gender, income, political affiliation)
Secondary Outcomes (explanation)

Experimental Design

Experimental Design
As described under Intervention(s) above: participants recruited via Prolific give binary answers to survey questions on a wide range of topics while we measure response times, and we compare preferences across groups defined by gender, political attitude, income, and age.
Experimental Design Details
Randomization Method
Participants see the same list of questions in a randomized order.
Randomization Unit
Individuals
Was the treatment clustered?
No

Experiment Characteristics

Sample size: planned number of clusters
1
Sample size: planned number of observations
N = 1,200 individuals
Sample size (or number of clusters) by treatment arms
N = 1,200 individuals
Minimum detectable effect size for main outcomes (accounting for sample design and clustering)
The sample size and power analysis are based on tests of proportions for comparisons across groups. Given that participants might not be equally distributed between groups (e.g., by political attitude or income), we conservatively set the sample size to allow for a 0.25 allocation ratio (80% vs. 20%). We further require power of 0.8 to detect a 0.1 difference between groups in the proportion of people supporting one option over the other. The resulting sample size fulfilling these conditions is N = 1,008. During the survey we will implement control questions for attention and will not consider subjects who fail them. We will recruit participants who successfully pass the control questions until we reach the required sample size; due to simultaneous online recruitment, the target might be exceeded, and we will keep any excess subjects in the analysis. Participants who do not successfully complete the questionnaire will be excluded from the analysis. Since this exclusion cannot be made during data collection, and previous experience indicates it could affect up to 10% of the data, we round the sample size up to N = 1,200.
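
For reference, the sketch below reproduces this kind of calculation in Python under stated assumptions: a two-sided two-sample test of proportions at alpha = 0.05 with a baseline proportion of 0.5. The baseline is our assumption, not given in the registration, so the result need not match the registered N = 1,008 exactly.

    # Hedged sketch of the stated power analysis: power 0.8, a 0.1 difference
    # in proportions, and an 80%/20% split (allocation ratio 0.25). The
    # baseline proportion of 0.5 is an assumption, so totals are illustrative.
    from statsmodels.stats.power import NormalIndPower
    from statsmodels.stats.proportion import proportion_effectsize

    effect = proportion_effectsize(0.5, 0.4)  # Cohen's h for a 0.1 difference
    n_large = NormalIndPower().solve_power(
        effect_size=effect,
        alpha=0.05,
        power=0.8,
        ratio=0.25,  # smaller group's size relative to the larger group
    )
    total = n_large * (1 + 0.25)  # both groups combined
    print(f"larger group: {n_large:.0f}, total N: {total:.0f}")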
Supporting Documents and Materials

There is information in this trial unavailable to the public.
IRB

Institutional Review Boards (IRBs)

IRB Name
Human Subjects Committee of the Faculty of Economics, Business Administration, and Information Technology, University of Zurich
IRB Approval Date
2022-02-09
IRB Approval Number
OEC IRB # 2022-012
Analysis Plan

There is information in this trial unavailable to the public.

Post-Trial

Post Trial Information

Study Withdrawal

There is information in this trial unavailable to the public.

Intervention

Is the intervention completed?
No
Data Collection Complete
Data Publication

Is public data available?
No

Program Files

Reports, Papers & Other Materials

Relevant Paper(s)

Reports & Other Materials