Preferences for mixing in multiple-choice questions
Last registered on November 13, 2019

Pre-Trial

Trial Information
General Information
Title
Preferences for mixing in multiple-choice questions
RCT ID
AEARCTR-0005034
Initial registration date
November 13, 2019
Last updated
November 13, 2019 11:09 AM EST
Location(s)

This section is unavailable to the public.
Primary Investigator
Affiliation
Sungkyunkwan University
Other Primary Investigator(s)
PI Affiliation
National University of Singapore
PI Affiliation
National University of Singapore
Additional Trial Information
Status
In development
Start date
2019-11-14
End date
2020-11-14
Secondary IDs
Abstract
Using a large-scale IQ test, this experiment investigates whether and how performance on multiple-choice questions is affected by the opportunity to mix answers. Prior research shows that, when facing difficult choices over similarly attractive options, people tend to choose the default option or to procrastinate; allowing them to mix answers might therefore enhance performance. Each participant is randomly assigned a scoring rule. Under the standard scoring rule, participants can choose only one option and receive the maximum score for the question if the correct option is chosen, and zero otherwise. Under two nonstandard scoring rules, participants can choose any number k of options, and the score is determined either by probability mixing or by outcome mixing.
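To make the rules concrete, here is a minimal sketch of one plausible reading of the three scoring schemes. The function names, the uniform draw under probability mixing, and the even split under outcome mixing are our assumptions; the registration does not spell out the payoff formulas.

    import random

    def score_standard(chosen, correct, max_score=1.0):
        # Standard rule: exactly one option is chosen; full marks if it
        # is the correct one, zero otherwise.
        assert len(chosen) == 1
        return max_score if correct in chosen else 0.0

    def score_probability_mixing(chosen, correct, max_score=1.0):
        # Probability mixing (assumed reading): one of the k chosen
        # options is drawn uniformly at random and graded as in Standard.
        return max_score if random.choice(chosen) == correct else 0.0

    def score_outcome_mixing(chosen, correct, max_score=1.0):
        # Outcome mixing (assumed reading): the maximum score is split
        # evenly across the k chosen options.
        return max_score / len(chosen) if correct in chosen else 0.0

    score_outcome_mixing(['A', 'B'], 'A')  # -> 0.5

Under this reading, the two nonstandard rules have the same expected score for any choice set containing the correct option; they differ only in whether the mixing happens over probabilities or over realized points.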
External Link(s)
Registration Citation
Citation
Fu, Jingcheng, Xing Zhang and Songfa Zhong. 2019. "Preferences for mixing in multiple-choice questions." AEA RCT Registry. November 13. https://doi.org/10.1257/rct.5034-1.0.
Experimental Details
Interventions
Intervention(s)
Intervention Start Date
2019-11-14
Intervention End Date
2020-11-14
Primary Outcomes
Primary Outcomes (end points)
Scores from the IQ test (both total score and score by question).
Primary Outcomes (explanation)
Secondary Outcomes
Secondary Outcomes (end points)
Secondary Outcomes (explanation)
Experimental Design
Experimental Design
The experiment will be carried out with university students of different majors. Participation is incentivized by a random draw: a small number of randomly selected participants will receive a fixed payment plus a bonus that depends on their score in the IQ test. Participants will complete the experiment on their mobile devices in a classroom. For the IQ test we employ a factorial design with three scoring rules (Standard, Randomization, and Split) and two time constraints (Tight vs. Loose), yielding six treatments. Each participant will be randomly assigned to one of the six treatments. Before the test, participants will complete some practice items and a series of comprehension questions on the experimental instructions. After the IQ test, participants will fill in a questionnaire with the following items:
1. Hypothetical lottery choice questions (Global Preference Survey, Falk et al. 2018);
2. Big Five Personality Test: a brief, 10-question version (Rammstedt and John 2007);
3. Maximization and Regret Scales (Schwartz et al. 2002);
4. Self-assessed willingness to take risk (Global Preference Survey, Falk et al. 2018);
5. Questions on preference for randomization (Rubinstein 2002);
6. Task-related questions, e.g., opting in to receive feedback on relative performance.
Experimental Design Details
Not available
Randomization Method
Treatment group assignment will be randomized by the oTree software.
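As a sketch, assignment of this kind could be implemented in an oTree creating_session hook along the following lines. The treatment labels and participant.vars keys are illustrative, and the approximately balanced shuffling is our assumption rather than a documented detail of the study.

    import itertools
    import random

    TREATMENTS = [
        (rule, time)
        for rule in ('Standard', 'Randomization', 'Split')
        for time in ('Tight', 'Loose')
    ]

    def creating_session(subsession):
        players = subsession.get_players()
        # Repeat the six cells to cover all players, then shuffle so the
        # assignment is random but approximately balanced across cells.
        cells = list(itertools.islice(itertools.cycle(TREATMENTS), len(players)))
        random.shuffle(cells)
        for player, (rule, time) in zip(players, cells):
            player.participant.vars['scoring_rule'] = rule
            player.participant.vars['time_constraint'] = time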
Randomization Unit
Individual participant.
Was the treatment clustered?
No
Experiment Characteristics
Sample size: planned number of clusters
Around 3000 participants.
Sample size: planned number of observations
Around 3000 participants.
Sample size (or number of clusters) by treatment arms
Around 500 participants per treatment arm.
Minimum detectable effect size for main outcomes (accounting for sample design and clustering)
We are interested in an effect size of 0.15 to 0.2 standard deviations of the outcome variable. The planned sample size allows us to detect an effect size of 0.1774 standard deviations between any two treatments with power of 0.8 and alpha of 0.05, assuming equal variances.
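The 0.1774 figure is consistent with a two-sided, two-sample t-test at roughly 500 participants per arm; below is a sketch of that calculation using statsmodels. The use of TTestIndPower and the per-arm n of 500 are our assumptions about how the figure was derived, not details from the registration.

    from statsmodels.stats.power import TTestIndPower

    # Solve for the minimum detectable effect size (in SD units) of a
    # two-sided, two-sample t-test with 500 participants per arm,
    # power 0.8, and alpha 0.05.
    mde = TTestIndPower().solve_power(
        effect_size=None, nobs1=500, alpha=0.05, power=0.8,
        ratio=1.0, alternative='two-sided',
    )
    print(round(mde, 4))  # ~0.1774 standard deviations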
IRB
IRB Name
Institutional Review Board at National University of Singapore
IRB Approval Date
2018-06-08
IRB Approval Number
S-18-181