Using Response Times in Surveys, Study 2

Last registered on January 07, 2024

Pre-Trial

Trial Information

General Information

Title
Using Response Times in Surveys, Study 2
RCT ID
AEARCTR-0012668
Initial registration date
December 11, 2023

Initial registration date is when the trial was registered.

It corresponds to when the registration was submitted to the Registry to be reviewed for publication.

First published
December 20, 2023, 9:37 AM EST

First published corresponds to when the trial was first made public on the Registry after being reviewed.

Last updated
January 07, 2024, 5:31 PM EST

Last updated is the most recent time when changes to the trial's registration were published.

Locations

Region

Primary Investigator

Affiliation
University of Melbourne

Other Primary Investigator(s)

PI Affiliation
University of Zurich

Additional Trial Information

Status
In development
Start date
2024-01-08
End date
2024-01-31
Secondary IDs
Prior work
This trial does not extend or rely on any prior RCTs.
Abstract
Participants are recruited online using Prolific. They are asked to perform a series of decision tasks in exchange for a flat payment plus additional decision-dependent incentives for some of the tasks. In particular, participants are asked to rate 20 charities and organizations according to how much they favor them, using 9-point Likert scales (see the supporting document). Two additional questions after the ratings ask for the most- and least-favorite organization from the list. Participants are then asked to give binary answers to several survey questions (see the supporting document). The order of ratings and questions will be randomized for each subject (within blocks of similar questions). We will further measure response times for each binary choice. We plan to compare how the answers to these questions differ between groups of participants (see the supporting document for how we pre-define the groups).
External Link(s)

Registration Citation

Citation
Alos-Ferrer, Carlos and Michele Garagnani. 2024. "Using Response Times in Surveys, Study 2." AEA RCT Registry. January 07. https://doi.org/10.1257/rct.12668-1.1
Experimental Details

Interventions

Intervention(s)
Participants are recruited online using Prolific. They are asked to perform a series of decision tasks in exchange for a flat payment plus additional decision-dependent incentives for some of the tasks. In particular, participants are asked to rate 20 charities and organizations according to how much they favor them, using 9-point Likert scales (see the supporting document). Two additional questions after the ratings ask for the most- and least-favorite organization from the list. Participants are then asked to give binary answers to several survey questions (see the supporting document). The order of ratings and questions will be randomized for each subject (within blocks of similar questions). We will further measure response times for each binary choice. We plan to compare how the answers to these questions differ between groups of participants (see the supporting document for how we pre-define the groups).
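As a concrete illustration of the within-block order randomization and the response-time measurement, here is a minimal Python sketch. The block names, item texts, and console prompt are illustrative stand-ins, not the actual survey items or interface (those are in the supporting document).

```python
import random
import time

# Illustrative block structure; the actual item texts are in the supporting document.
blocks = {
    "ratings": [f"Rate organization {i}" for i in range(1, 21)],
    "donations": [f"Keep 4 pounds or split 50-50 with charity {i}?" for i in range(1, 21)],
    "lines": [f"Is the upper line longer? (pair {i})" for i in range(1, 11)],
}

def randomized_order(blocks, rng):
    """Shuffle items within each block; the block order itself stays fixed."""
    order = []
    for name, items in blocks.items():
        shuffled = items[:]
        rng.shuffle(shuffled)
        order.extend((name, item) for item in shuffled)
    return order

def ask_binary(prompt, get_answer=input):
    """Present one binary question and record its response time in seconds."""
    start = time.monotonic()
    answer = get_answer(f"{prompt} [y/n] ")  # stand-in for the survey front end
    return answer.strip().lower() == "y", time.monotonic() - start

rng = random.Random()  # one RNG per participant session
for block, item in randomized_order(blocks, rng):
    if block != "ratings":  # response times are recorded for the binary choices
        choice, rt = ask_binary(item)
```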
Intervention Start Date
2024-01-08
Intervention End Date
2024-01-31

Primary Outcomes

Primary Outcomes (end points)
Ratings of the organizations, binary answers to several survey questions (e.g., whether they want to donate to a charity or not), response times
Primary Outcomes (explanation)

Secondary Outcomes

Secondary Outcomes (end points)
Gender, age, political preferences, preference for cats or dogs, most favorite charity, least favorite charity.
Secondary Outcomes (explanation)
Demographic information used to create groups for comparisons.

Experimental Design

Experimental Design
Charities to be rated are selected because they are either popular or unpopular according to yougov.uk data for Q3 2023. The popular charities we include had approval rates between 73% and 86% (the overall maximum) at the time the study was designed (November 2023), and a “Fame” of 90% or above: British Heart Foundation; Macmillan Cancer Support; Samaritans; WWF; British Red Cross; Guide Dogs; RSPB (Royal Society for the Protection of Birds). We aimed for a variety of themes, e.g., avoiding two different cancer charities on the list. Less popular charities had approval rates between 24% and 60%: UNICEF; Fairtrade Foundation; Amnesty International; Cats Protection; Christian Aid; Greenpeace; Unite to End Violence Against Women; Black Lives Matter; PETA; Campaign for Nuclear Disarmament; Vegetarian Society.
We also include the UK's two main political parties, the Conservative Party and the Labour Party.

We also ask 38 (20 + 10 + 4 + 4) binary questions of three different kinds:
Type 1. We ask participants to decide how they want to split an amount of money (4 pounds) between themselves and a charity. In particular, the question asks participants to decide whether they would prefer to keep the entire amount or split it evenly (50%-50%) with the charity. We ask this question for all 20 charities and organizations participants rated.
For each participant, one question of type 1 is randomly selected and paid at the end of the survey; that is, the participant's decision is actually implemented for the selected question (including making the donation to the corresponding organization). Randomization will be made at the charity level, i.e., we will pick one of the 20 questions at random and realize the corresponding question for all participants.
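A minimal sketch of this charity-level payment randomization, assuming the draw happens once after data collection; the charity list is abbreviated, and the 50-50 split of the 4-pound stake follows the question wording above.

```python
import random

# The 20 rated organizations; list abbreviated here for illustration.
charities = [
    "British Heart Foundation", "Macmillan Cancer Support", "Samaritans",
    # ... the remaining rated organizations ...
]

# One Type 1 question is drawn once, at the charity level, and the
# corresponding decision is then realized for every participant.
paid_charity = random.choice(charities)

def type1_payoff(keeps_all: bool) -> tuple[float, float]:
    """Return (bonus to participant, donation to charity) in pounds."""
    return (4.0, 0.0) if keeps_all else (2.0, 2.0)
```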

Type 2. We present participants with two lines of different lengths, one above the other, and ask whether the upper line is longer than the lower one. We repeat this question 10 times with different lengths. For each participant, one question of type 2 is randomly selected and paid at the end of the survey: if the participant gave the correct answer, he or she earns an additional pound.

Type 3. We ask participants to answer four framing and invariance questions analogous to those proposed by Tversky and Kahneman (1981, 1986). Participants are randomly assigned to see only one of the two versions of the questions (e.g., either the gain frame or the loss frame).
Questions of type 3 are not incentivized.
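A sketch of the between-subject assignment for the Type 3 questions, assuming each participant receives a single version covering all four items; the question texts are placeholders for the Tversky-Kahneman style items in the supporting document.

```python
import random

# Placeholder texts; each of the four items exists in two versions.
VERSIONS = {
    "A": ["Q1 (gain frame)", "Q2 (gain frame)", "Q3 (version A)", "Q4 (version A)"],
    "B": ["Q1 (loss frame)", "Q2 (loss frame)", "Q3 (version B)", "Q4 (version B)"],
}

def framing_questions(rng: random.Random) -> list[str]:
    """Assign each participant exactly one of the two versions at random."""
    return VERSIONS[rng.choice("AB")]
```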
Experimental Design Details
Randomization Method
Participants are randomly assigned to one of the two versions of the framing and invariance questions.
Randomization Unit
Individual
Was the treatment clustered?
No

Experiment Characteristics

Sample size: planned number of clusters
1
Sample size: planned number of observations
1100
Sample size (or number of clusters) by treatment arms
1100
Minimum detectable effect size for main outcomes (accounting for sample design and clustering)
We require a power of 0.8 to detect a difference of 0.1 between groups in the proportion of people supporting one option over the other. The sample size fulfilling these conditions is N = 1008, which we round up to N = 1100.
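For reference, a two-proportion power calculation of this kind can be sketched with statsmodels. The baseline proportion (0.5 here), the two-sided alpha of 0.05, and equal allocation are assumptions not stated in the registration, so this sketch need not reproduce the registered N = 1008, which also depends on the group definitions in the supporting document.

```python
from statsmodels.stats.power import NormalIndPower
from statsmodels.stats.proportion import proportion_effectsize

# Assumed inputs (not stated in the registration): baseline proportion 0.5,
# two-sided alpha = 0.05, equal allocation across the two compared groups.
effect = proportion_effectsize(0.5, 0.6)  # Cohen's h for a 0.1 difference

n_per_group = NormalIndPower().solve_power(
    effect_size=effect, power=0.8, alpha=0.05, ratio=1.0,
    alternative="two-sided",
)
print(round(n_per_group))  # required participants per compared group
```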
Supporting Documents and Materials

Documents

Document Name
Supporting document
Document Type
proposal
Document Description
File
Supporting document

MD5: cf4a75c405ad96f89f8b9006461e975b

SHA1: 30f58bff84d2ca8f457e43ad8afb49565657a2a2

Uploaded At: December 11, 2023

IRB

Institutional Review Boards (IRBs)

IRB Name
Human Subjects Committee of the Faculty of Economics, Business Administration, and Information Technology
IRB Approval Date
2023-12-06
IRB Approval Number
OEC IRB # 2023-110
Analysis Plan

Analysis Plan Documents

Preregistration_TWT_Survey_Study_2.pdf

MD5: cf4a75c405ad96f89f8b9006461e975b

SHA1: 30f58bff84d2ca8f457e43ad8afb49565657a2a2

Uploaded At: December 11, 2023

Post-Trial

Post Trial Information

Study Withdrawal

There is information in this trial unavailable to the public.

Intervention

Is the intervention completed?
No
Data Collection Complete
Data Publication

Data Publication

Is public data available?
No

Program Files

Program Files
Reports, Papers & Other Materials

Relevant Paper(s)

Reports & Other Materials