Learning: Retractions versus Novel Information

Last registered on June 14, 2020


Trial Information

General Information

Learning: Retractions versus Novel Information
Initial registration date
February 18, 2019
Last updated
June 14, 2020, 5:41 AM EDT



Primary Investigator

Columbia University

Other Primary Investigator(s)

PI Affiliation
University of Southern California
PI Affiliation
Columbia University

Additional Trial Information

In development
Start date
End date
Secondary IDs
This project aims to understand why people may fail to unlearn, i.e., to disregard information once it has been shown to be irrelevant. It is motivated by the many examples of people continuing to hold incorrect beliefs long after those beliefs have been widely discredited, for example the belief that vaccines cause autism. We study whether people are able to "unlearn" effectively by comparing the process of unlearning irrelevant information, through retractions of earlier signals, with the process of learning from new information, through receiving new signals. By focusing on an abstract lab setting, we can ensure that the informational content of a retraction is the same as that of a new signal. Our design also allows us to test whether acting on previous information affects the ability to unlearn it, one possible mechanism, as well as whether receiving a retraction affects learning from subsequent information. We will also use the opportunity of observing beliefs at high frequency to add to existing evidence on how learning deviates from a Bayesian benchmark more generally.
External Link(s)

Registration Citation

Goncalves, Duarte, Jonathan Libgober and Jack Willis. 2020. "Learning: Retractions versus Novel Information." AEA RCT Registry. June 14. https://doi.org/10.1257/rct.3820-1.1
Former Citation
Goncalves, Duarte et al. 2020. "Learning: Retractions versus Novel Information." AEA RCT Registry. June 14. https://www.socialscienceregistry.org/trials/3820/history/70376
Experimental Details


Intervention Start Date
Intervention End Date

Primary Outcomes

Primary Outcomes (end points)
Outcomes are beliefs which are elicited at various points in the experiment. Our analysis will test how these beliefs vary across different histories / treatments. Our primary hypotheses are:

1. Retractions induce mistakes, more so than informationally equivalent new signals:
a. Individuals do not completely "delete" retracted signals, i.e. when a signal is retracted, individuals do not "go back" to the beliefs they held before receiving that signal
b. Individuals have more difficulty interpreting retractions than equivalent direct information (new signals)
c. In turn, retractions accentuate standard biases in updating, compared to informationally equivalent signals

2. Mechanisms / why retractions fail:
a. Does acting on information hinder subsequent updating from retractions of the information? (comparing effects of retractions when beliefs have already been elicited vs when they have not)
b. Do retractions of recent signals work differently from retractions of earlier signals?
c. Do retractions become less effective as evidence becomes stronger in one direction?

Primary Outcomes (explanation)

Secondary Outcomes

Secondary Outcomes (end points)
1. Do retractions change subsequent updating?
a. Are individuals less reactive to new information after a retraction? (comparing updating in period 4 with and without a retraction in period 3)

2. Replicate standard findings on belief updating from direct information
- Grether-style analysis (log-odds regressions): base-rate neglect (coefficient on the log-odds prior), under-reaction to signals (coefficient on the signal's sign scaled by the signal's informational value), confirmation bias (coefficient on an indicator for whether the signal's sign matches the prior, e.g. signal is +/- and prior is >/< .5)
- Order dependence
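The Grether-style specification above can be illustrated with a short simulation. This is a sketch on hypothetical data, not the registered estimation code: variable names and parameter values are illustrative, and the reported posterior is generated by a Bayesian, so the regression should recover a coefficient of one on both the log-odds prior and the signal term, and zero on the confirmation indicator.

```python
import numpy as np

def logit(p):
    return np.log(p / (1 - p))

# Hypothetical stand-in data: a prior belief, a signal in {-1, +1} with
# log-likelihood ratio lam, and a reported posterior (here exactly Bayesian).
rng = np.random.default_rng(0)
n = 500
prior = rng.uniform(0.2, 0.8, n)
signal = rng.choice([-1, 1], n)
lam = np.log(3 / 2)  # LLR of one draw in a 3:2 signal structure (assumption)
post = 1 / (1 + np.exp(-(logit(prior) + signal * lam)))

# Grether regression: logit(post) = a*logit(prior) + b*(signal*lam) + c*match
# a < 1 would indicate base-rate neglect, b < 1 under-reaction to signals,
# c > 0 confirmation bias (signal sign agrees with the prior's direction).
match = (signal == np.where(prior > 0.5, 1, -1)).astype(float)
X = np.column_stack([logit(prior), signal * lam, match, np.ones(n)])
coef, *_ = np.linalg.lstsq(X, logit(post), rcond=None)
# A Bayesian reporter yields a = 1, b = 1, c = 0.
```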
Secondary Outcomes (explanation)

Experimental Design

Experimental Design
The experiment will be run on MTurk. See the supplementary doc for the MTurk instructions, which explain the design.
Experimental Design Details
As detailed in the MTurk instructions in the supplementary files, subjects are trying to guess the color of one ball (the "truth" ball) in a bag of five balls (the truth ball and four "noise" balls), each of which is yellow or blue (2 yellow noise balls, 2 blue noise balls).

We measure belief updating (using a scoring rule) as subjects are shown either new signals (a draw of a ball from the bag) or retractions of earlier signals (telling subjects that an earlier draw was a noise ball), and we compare updating under the two.
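The Bayesian benchmark for this game can be sketched as follows. This is a minimal illustration, assuming draws with replacement, a uniform prior over the truth ball's color, and the five-ball bag described above (the function name and interface are ours, not the experiment's code):

```python
from fractions import Fraction

# Probability of drawing a yellow ball from the bag (1 truth ball + 2 yellow
# noise balls + 2 blue noise balls), conditional on the truth ball's color,
# assuming draws with replacement.
P_YELLOW_DRAW = {"yellow": Fraction(3, 5),  # truth ball is yellow
                 "blue": Fraction(2, 5)}    # truth ball is blue

def posterior_truth_yellow(signals):
    """Bayesian posterior that the truth ball is yellow, given a list of
    draw colors, starting from a uniform prior."""
    like_yellow = Fraction(1, 2)
    like_blue = Fraction(1, 2)
    for s in signals:
        like_yellow *= P_YELLOW_DRAW["yellow"] if s == "yellow" else 1 - P_YELLOW_DRAW["yellow"]
        like_blue *= P_YELLOW_DRAW["blue"] if s == "yellow" else 1 - P_YELLOW_DRAW["blue"]
    return like_yellow / (like_yellow + like_blue)

# Two yellow draws push beliefs toward yellow ...
print(posterior_truth_yellow(["yellow", "yellow"]))  # 9/13
# ... and a retraction (that draw was a noise ball) simply deletes the signal:
print(posterior_truth_yellow(["yellow"]))            # 3/5
```

Note that for a Bayesian, retracting a yellow draw moves beliefs exactly as a new blue draw would: starting from 9/13, a blue draw gives (9·2)/(9·2 + 4·3) = 3/5. This is the informational equivalence between retractions and new signals that the design exploits.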

There is across-subject randomization: in group 1 there are four rounds - two rounds of new signals, then two rounds each of which is (randomly) either a new signal or a retraction, with beliefs elicited each round; in group 2 there are three rounds - two new signals, then one new signal or retraction, with beliefs elicited only at the end. Comparing across these groups enables us to test the effect of belief elicitation (after rounds 1 and 2) on subsequent updating (in round 3).

There is also within-subject randomization: in each round, the signal (new ball draw) or retraction (revelation of an earlier ball draw as either a noise ball or the truth ball) is drawn randomly, and in rounds 3 and 4 whether there is a signal or a retraction is drawn randomly. Comparisons across such histories allow us to test the other hypotheses.

Each subject plays the game 32 times during the experiment, with all of the within-subject variation drawn randomly each time. The across-subject variation is assigned randomly once at the start of the experiment.
Randomization Method
Randomization is done by oTree, the software on which the experiment is run.
Randomization Unit
The unit of randomization for retractions versus novel information (and the specific ball draws) is the round-individual level (each person faces 32 rounds). The unit of randomization for the effect of previous belief elicitation is the individual level.

The unit of randomization for the motivated vs unmotivated treatment is at the individual level - each individual faces either the motivated or unmotivated treatment for all 16 rounds.
Was the treatment clustered?

Experiment Characteristics

Sample size: planned number of clusters
Approximately 400 individuals
Sample size: planned number of observations
Approximately 400 individuals, each of whom will play the game 32 times.
Sample size (or number of clusters) by treatment arms
Approximately 400 individuals, each of whom will play the game 32 times. Elicitation each round vs elicitation at the end is randomized at the level of the individual. One half of individuals will face elicitation each round, one half elicitation at the end.
Minimum detectable effect size for main outcomes (accounting for sample design and clustering)
Supporting Documents and Materials

There are documents in this trial that are unavailable to the public; access can be requested through the registry.

Institutional Review Boards (IRBs)

IRB Name
Columbia University
IRB Approval Date
IRB Approval Number


Post Trial Information

Study Withdrawal



Is the intervention completed?
Data Collection Complete
Data Publication

Data Publication

Is public data available?

Program Files

Program Files
Reports, Papers & Other Materials

Relevant Paper(s)

Reports & Other Materials