Strategic Curiosity
Last registered on July 18, 2019

Pre-Trial

Trial Information
General Information
Title
Strategic Curiosity
RCT ID
AEARCTR-0004402
Initial registration date
July 01, 2019
Last updated
July 18, 2019 11:38 AM EDT
Location(s)
Region
Primary Investigator
Affiliation
NHH - Norges Handelshøyskolen
Other Primary Investigator(s)
PI Affiliation
NHH - Norges Handelshøyskolen
PI Affiliation
NHH - Norges Handelshøyskolen
Additional Trial Information
Status
In development
Start date
2019-07-19
End date
2019-08-02
Secondary IDs
Abstract
In this study, we investigate whether people use curiosity in a strategic manner to justify dishonest behavior. Specifically, we propose that individuals experiencing a want-should conflict will be motivated to acquire information that can serve as a potential justification to act in line with their temptations. Just as people can be strategically ignorant, we propose that people also have a tendency to acquire non-instrumental information for the sake of justifying their own selfishness; we call this "strategic curiosity". As such, we conjecture that people are not merely passive receivers of information but that they shape their information environment to serve their self-interest. To test our predictions, we conduct a digital version of the die-under-the-cup task, where subjects roll a virtual die and report the outcome for monetary rewards. In this controlled setting, we experimentally manipulate the availability of superfluous information and whether this information has the potential to justify dishonesty. Our manipulations vary mainly two dimensions: how many times the die can be rolled and the outcome of the die. This study can have a wide range of practical implications and can open up a strand of research that focuses on understanding how people actively shape their information environment to serve their own self-interest.
External Link(s)
Registration Citation
Citation
Ay, Fehime Ceren, Katrine Berg Nødvedt and Joel Berge. 2019. "Strategic Curiosity." AEA RCT Registry. July 18. https://doi.org/10.1257/rct.4402-2.0.
Former Citation
Ay, Fehime Ceren, Katrine Berg Nødvedt and Joel Berge. 2019. "Strategic Curiosity." AEA RCT Registry. July 18. https://www.socialscienceregistry.org/trials/4402/history/50267.
Experimental Details
Interventions
Intervention(s)
The intervention is an online experiment conducted on Amazon Mechanical Turk.
The complete task takes about 7 minutes; treatments are built on the outcome of a die-rolling task and how many times the die can be rolled.
Intervention Start Date
2019-07-19
Intervention End Date
2019-08-02
Primary Outcomes
Primary Outcomes (end points)
Information-seeking behavior and dishonesty
Primary Outcomes (explanation)
How the reported values change based on the outcome of the first roll and the number of rolls. In addition, dishonesty may change when additional rolls generate only unrelated information.
Secondary Outcomes
Secondary Outcomes (end points)
Secondary Outcomes (explanation)
Experimental Design
Experimental Design
We employ a digital version of the die-under-the-cup task (Shalvi et al., 2011) in which subjects roll a virtual die and report the outcome for monetary rewards. We choose the die-rolling setting as it allows us to experimentally vary the demand for justifications. Before rolling, all participants will be shown the payoff structure for their report, i.e. higher reported numbers will result in higher payoffs. The digital die will be programmed to be fair: it will randomly display the numbers 1 through 6. The outcomes of the die are recorded, which means that this is an observed game similar to games used in recent studies on dishonesty (Gneezy et al., 2018; Pittarello et al., 2015). This design enables us to investigate whether the outcome of the first roll affects the probability that participants would want to roll more than once. Thus, we can test directly whether the distance between the observed outcome and the wealth-maximizing outcome predicts information acquisition.
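The fair digital die and the recorded outcomes described above can be sketched in a few lines of Python. This is a minimal illustration, not the actual oTree implementation; all names here are ours.

```python
import random

def roll_fair_die(rng: random.Random) -> int:
    """Uniformly draw one of the faces 1-6, as on a fair physical die."""
    return rng.randint(1, 6)

# Every roll is recorded, so reported values can later be compared with
# observed ones, as in the observed-game designs cited above.
rng = random.Random(42)
observed_rolls = [roll_fair_die(rng) for _ in range(10)]
first_roll = observed_rolls[0]

# The distance between the observed first roll and the wealth-maximizing
# outcome (a 6) is the quantity conjectured to predict information acquisition.
distance_to_max = 6 - first_roll
```

Because the outcome of each roll is stored server-side, misreporting is detectable at the individual level, which is what makes this an "observed" game.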
Experimental Design Details
Participants in our experimental set-up are allocated to four conditions:
C (roll-once): Participants roll the digital die once, then report their result on a subsequent page.
T1 (three-rolls): Participants first roll the die once, then roll it two more times. They then report the outcome of the first roll on a subsequent page.
T2 (roll freely, justification potential): Participants roll the die once, and may then roll it as many times as they would like. They then continue to a subsequent page and report the outcome of the first roll.
T3 (roll freely, no justification potential): Participants roll the die once, and may then roll a different type of die as many times as they would like. This different die displays only unordered, non-numeric symbols. Participants then continue to a subsequent page and report the outcome of the first roll.
Participants self-report their roll, which provides an opportunity to cheat (misreport), and participants who receive a number lower than 6 have a monetary incentive to do so. We will pay participants according to their report, as stated in the instructions. All participants have an equal chance (1/6) of receiving a high number on the first roll, and therefore equal chances to earn the maximum bonus without cheating. In practice, any participant can claim the maximum bonus simply by reporting the number 6. After reporting their outcomes, participants will be asked questions about the experiment and other demographic questions (experiment materials can be found in the submitted pre-analysis plan).
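The four conditions differ only in how many extra rolls are allowed after the mandatory first roll and whether the extra die shows numeric faces. A hedged sketch of that condition table (the labels and field names are ours, not the registry's):

```python
# Extra-roll allowance per condition; None means "as many as the participant likes".
# numeric_extra_die is None where no extra rolls exist at all.
CONDITIONS = {
    "C":  {"extra_rolls": 0,    "numeric_extra_die": None},   # roll once
    "T1": {"extra_rolls": 2,    "numeric_extra_die": True},   # three rolls total
    "T2": {"extra_rolls": None, "numeric_extra_die": True},   # roll freely
    "T3": {"extra_rolls": None, "numeric_extra_die": False},  # roll freely, symbols only
}

def may_roll_again(condition: str, rolls_so_far: int) -> bool:
    """After the mandatory first roll, may the participant roll again?"""
    limit = CONDITIONS[condition]["extra_rolls"]
    return limit is None or rolls_so_far - 1 < limit
```

In all four cells the reported (and paid) value is the first roll, so any later rolls are payoff-irrelevant information by construction.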
Randomization Method
Participants will be randomized by oTree, the Python-based platform we use to program the experiment. Each condition will have the same number of participants.
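Equal cell sizes of the kind described can be achieved by shuffling a list that contains each condition label the same number of times. This is only a sketch of the idea under that assumption, not oTree's API:

```python
import random

def balanced_assignment(n_participants: int, conditions: list, seed: int = 0) -> list:
    """Assign participants to conditions in exactly equal numbers by
    shuffling a list holding each condition n/k times."""
    k = len(conditions)
    assert n_participants % k == 0, "equal cell sizes require n divisible by k"
    slots = conditions * (n_participants // k)
    random.Random(seed).shuffle(slots)
    return slots

# Planned design: 1600 participants across the 4 conditions, 400 per cell.
arms = balanced_assignment(1600, ["C", "T1", "T2", "T3"])
```

Unlike independent per-participant coin flips, this blocked approach guarantees the planned 400-per-cell split exactly.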
Randomization Unit
Individual level randomization.
Was the treatment clustered?
No
Experiment Characteristics
Sample size: planned number of clusters
4 conditions
Sample size: planned number of observations
1600 participants
Sample size (or number of clusters) by treatment arms
400 participants in each of the 4 conditions: control, roll 3 times, roll freely, roll freely with figures.
Minimum detectable effect size for main outcomes (accounting for sample design and clustering)
We aim to detect an effect size of 0.3 with a minimum power of 0.8.
Supporting Documents and Materials

There are documents in this trial unavailable to the public.
IRB
INSTITUTIONAL REVIEW BOARDS (IRBs)
IRB Name
NHH IRB
IRB Approval Date
2019-06-06
IRB Approval Number
NHH IRB 07/19
Analysis Plan

There are documents in this trial unavailable to the public.
Post-Trial
Post Trial Information
Study Withdrawal
Intervention
Is the intervention completed?
No
Is data collection complete?
Data Publication
Data Publication
Is public data available?
No
Program Files
Program Files
Reports and Papers
Preliminary Reports
Relevant Papers