Motivated Information Choice
Last registered on July 26, 2019


Trial Information
General Information
Motivated Information Choice
Initial registration date
July 25, 2019
Last updated
July 26, 2019 4:36 PM EDT
Primary Investigator
Other Primary Investigator(s)
PI Affiliation
Harvard University
PI Affiliation
Harvard University
Additional Trial Information
In development
Start date
End date
Secondary IDs
Previous work has shown that people (1) neglect selection when they learn from a sample, (2) avoid information that they expect to be negative or misaligned with their preferences, and (3) update asymmetrically and conservatively (compared to the Bayesian benchmark) when receiving information that is ego relevant. Although each of these results has been observed in isolated studies, our study presents a simple experiment that allows us to investigate how these effects may interact and further compound biased beliefs.
External Link(s)
Registration Citation
Kwon, Spencer, William Murdock III and Pierre-Luc Vautrey. 2019. "Motivated Information Choice." AEA RCT Registry. July 26. https://doi.org/10.1257/rct.4486-1.0.
Former Citation
Kwon, Spencer et al. 2019. "Motivated Information Choice." AEA RCT Registry. July 26. http://www.socialscienceregistry.org/trials/4486/history/50737.
Sponsors & Partners

There are documents in this trial unavailable to the public.
Experimental Details
No Intervention. See Experimental Design.
Intervention Start Date
Intervention End Date
Primary Outcomes
Primary Outcomes (end points)
Information choice (Yay vs. Nay); Final Bias; Updating Biases
Primary Outcomes (explanation)
See the attached analysis plan.
Secondary Outcomes
Secondary Outcomes (end points)
Attribution Biases
Secondary Outcomes (explanation)
See the attached analysis plan.
Experimental Design
Experimental Design
This experiment aims to jointly measure an individual's information choice, subsequent belief formation, as well as their metacognition (understanding of their own biases) in a motivated setting. The basic experiment follows a 2 by 2 design, with treatment variations across 2 dimensions:
Ego Relevance: Belief elicitations are about the participant's performance (Ego-Relevant) or a random number (Ego-Neutral).
Information Choice: Participants select the bias of the source themselves from two options (Chosen-Bias) or are automatically assigned one of the two biases (Forced-Bias).
Experimental Design Details
In the ego treatment, subjects first complete logic puzzles presented as parts of an IQ test. For the logic puzzles, we use Raven's Progressive Matrices, a widely administered test which was originally developed as a method to gauge general intelligence in a non-verbal setting.
Subjects' performance on these puzzles is evaluated as a mixture of speed and accuracy, and they are incentivized to perform as well as they can.

For the non-ego treatment, subjects do not complete IQ puzzles. Instead, they report their beliefs (described below) about a random integer drawn uniformly from 1 to 100.

During the course of the experiment, participants report their belief about either performing above a given rank in a pool of other subjects at IQ puzzles (ego treatment) or about whether the random number is above a given value (non-ego treatment). In both cases, the elicited belief regards a binary event and is reported as a probability. It is incentivized using the lottery method.
They report these beliefs multiple times before and after receiving feedback, the structure of which is detailed below.
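One common implementation of the lottery (crossover) method referenced above can be sketched as follows. The registry does not specify the payment parameters, so the mechanics below (a uniform crossover draw and a binary prize) are an assumption for illustration; the function name is ours:

```python
import random

def lottery_method_payoff(reported_prob, event_occurred, rng=random.random):
    """Sketch of the lottery (crossover) method for eliciting a probability.

    Draw x ~ U(0,1). If x exceeds the reported probability, the subject is
    paid via a lottery that wins with probability x; otherwise payment
    depends on whether the event occurred. Reporting one's true P(event)
    maximizes expected payoff under this rule.
    """
    x = rng()
    if x > reported_prob:
        return 1 if rng() < x else 0   # paid by the lottery with win prob x
    return 1 if event_occurred else 0  # paid by the bet on the event itself
```

The key incentive property is that the subject cannot gain by misreporting: overstating the belief forfeits favorable lotteries, and understating it accepts unfavorable ones.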

For all treatments, the participants first receive noisy feedback regarding the binary outcome (whether they outrank a given threshold in the ego treatment and whether the random number is above a given threshold in the non-ego treatment). For clearer comprehension, the participants are told that there are 6 truth-telling gremlins and 2 lying gremlins. They are shown a visualization of drawing the gremlins with replacement, and receive feedback from a gremlin of unknown type. Participants receive three signals from this set of gremlins, reporting their beliefs after each signal. Although not described to participants as such, this unbiased noisy feedback round can be considered a "practice round" -- it was designed with the intent to slowly introduce participants to various components of the experiment before introducing them to the biases.
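Under this unbiased urn (6 truth-tellers, 2 liars, drawn with replacement), each signal is truthful with probability 6/8, so the Bayesian benchmark after each signal can be sketched as follows (a hypothetical helper, not the study's code):

```python
def posterior(prior, signal_positive, p_truth=6/8):
    """Bayesian posterior that the binary event is true after one gremlin
    signal, with gremlins drawn with replacement: 6 truth-tellers, 2 liars,
    so a signal matches the truth with probability 6/8."""
    like_true = p_truth if signal_positive else 1 - p_truth
    like_false = (1 - p_truth) if signal_positive else p_truth
    num = like_true * prior
    return num / (num + like_false * (1 - prior))
```

Starting from a 50% prior, three consecutive positive signals each multiply the odds by 3, taking the Bayesian belief from 1/2 to 27/28.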

The participants are then introduced to potential biases. They are told that in addition to the truth-telling and lying gremlins, there are two additional types of gremlins. First, the Yay Sayer gremlin delivers positive news regardless of the state of the world. In contrast, the Nay Sayer gremlin always delivers negative news.
Along with the 6 truth-tellers and 2 liars, there will be either 2 Yay Sayers or 2 Nay Sayers in the urn to be drawn with replacement. As described above, the exact procedure by which these biased gremlins are added depends on the treatment arm.

To ensure that the participants understand the nature of the biases, we add a comprehension check: we ensure that they understand that adding a positive bias will increase the likelihood of the positive signal, yet reduce the meaningfulness of the signal, and similarly for a negative bias.

In the chosen-bias treatment arm, the participants are given the choice between Yay Sayer and Nay Sayer biases for each round. In contrast, in the forced-bias treatment arm, the bias is randomly selected at the beginning of each round and disclosed to the participant.

Elicitations about Signal Attribution
In all treatment arms, if an agent receives a signal that is concurrent with the bias (i.e., a "yes" signal under Yay Sayers or a "no" signal under Nay Sayers), they are asked what they think is the probability the signal came from the uninformative biased gremlin. This elicitation will take place after agents report their new belief about their rank.
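For reference, the Bayesian answer to this attribution question follows from the urn composition described above (6 truth-tellers, 2 liars, 2 biased gremlins, drawn with replacement); the helper below is a sketch of ours, not the study's code:

```python
def p_signal_from_bias(belief, n_truth=6, n_liar=2, n_bias=2, positive_bias=True):
    """Probability that a bias-concurrent signal came from the biased gremlin.

    A biased gremlin sends its signal with probability 1; truth-tellers send
    it only when the state matches, liars only when it does not. `belief` is
    the agent's current probability that the binary event is true.
    """
    n = n_truth + n_liar + n_bias
    if positive_bias:   # a "yes" signal in the Yay-Sayer urn
        p_signal = (belief * n_truth + (1 - belief) * n_liar + n_bias) / n
    else:               # a "no" signal in the Nay-Sayer urn
        p_signal = ((1 - belief) * n_truth + belief * n_liar + n_bias) / n
    return (n_bias / n) / p_signal
```

For example, an agent who is certain the event is true should attribute a "yes" signal to the Yay Sayer with probability 0.2/0.8 = 0.25, whereas an agent certain the event is false should attribute it with probability 0.2/0.4 = 0.5.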

Metacognition (About Self and Others)
After participants finish the belief elicitation portion of the study described above, subjects are asked, in an incentive-compatible manner, what they believe the effect of each bias was on the final beliefs they just reported and on the final beliefs of other participants who previously took the study. In particular, they are asked how the belief biases (relative to a Bayesian benchmark) depend on the bias selection. Individuals in the chosen-bias treatment arm are also incentivized to report their beliefs about bias-selection behavior. These questions are asked separately about other MTurkers and about the participant themselves.

We conclude the experiment with standard demographic questions to check that our samples are balanced on major demographics. We collect gender, age, education, income, occupation type, and media usage.
Randomization Method
Randomization done by a computer for the deployment on MTurk.
Randomization Unit
Was the treatment clustered?
Experiment Characteristics
Sample size: planned number of clusters
900 individuals
Sample size: planned number of observations
900 individuals
Sample size (or number of clusters) by treatment arms
300 Ego-Choice, 300 Ego-Forced, 150 Non-Ego-Choice, 150 Non-Ego-Forced
Minimum detectable effect size for main outcomes (accounting for sample design and clustering)
Based on a conservative estimate of the intraclass correlation of 0.1, we expect to detect a minimum effect size of 0.1 on logit biases: the difference in log odds between the belief elicitation and the Bayesian benchmark. The standard deviation we observed in pilots was 0.7. More details are in the attached plan.
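The logit-bias outcome defined here is a log-odds difference, which can be sketched in a few lines (the helper name is ours; the actual analysis code is in the attached plan):

```python
import math

def logit_bias(reported_belief, bayesian_belief):
    """Logit bias: log odds of the elicited belief minus log odds of the
    Bayesian benchmark. Positive values mean the reported belief overshoots
    the Bayesian posterior; negative values mean it undershoots."""
    logit = lambda p: math.log(p / (1 - p))
    return logit(reported_belief) - logit(bayesian_belief)
```

For instance, reporting 75% when the Bayesian benchmark is 50% yields a logit bias of log 3 ≈ 1.10, well above the 0.1 minimum detectable effect stated above.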
IRB Name
IRB Approval Date
IRB Approval Number
IRB Name
Harvard University Committee on the Use of Human Subjects
IRB Approval Date
IRB Approval Number
Analysis Plan

There are documents in this trial unavailable to the public.
Post Trial Information
Study Withdrawal
Is the intervention completed?
Is data collection complete?
Data Publication
Data Publication
Is public data available?
Program Files
Program Files
Reports, Papers & Other Materials
Relevant Paper(s)