Belief Movements under Fact Checking

Last registered on May 10, 2021

Pre-Trial

Trial Information

General Information

Title
Belief Movements under Fact Checking
RCT ID
AEARCTR-0006106
Initial registration date
May 09, 2021

Initial registration date is when the trial was registered.

It corresponds to when the registration was submitted to the Registry to be reviewed for publication.

First published
May 10, 2021, 11:43 AM EDT

First published corresponds to when the trial was first made public on the Registry after being reviewed.

Locations

Region

Primary Investigator

Affiliation
University of Southern California

Other Primary Investigator(s)

PI Affiliation
Columbia University
PI Affiliation
Columbia University

Additional Trial Information

Status
In development
Start date
2021-05-12
End date
2021-06-15
Secondary IDs
Abstract
Most news is not absolute and involves residual uncertainty. How do beliefs update when information received relates to the validity of past information? Building on our previous experimental work, we test whether belief movements depend on the nature of fact-checking (relative to a rational benchmark). We consider an abstract design, where subjects receive information suggestive of an underlying state. We then study how beliefs over states update when signals are revealed to be truly reflective of the state or not. We dub this process "fact-checking." A fact-check may either relate to the credibility of information, or suggest that a past signal was instead reflective of a different state. Our design also allows us to study how fact-checking one piece of information influences subjects' beliefs over the validity of other pieces of information. We consider a rich comparison of modes of fact-checking, which allows us to study whether bias in the form of fact-checking leads subjects to either disregard or overinterpret the meaning of a fact-check.
External Link(s)

Registration Citation

Citation
Goncalves, Duarte, Jonathan Libgober and Jack Willis. 2021. "Belief Movements under Fact Checking." AEA RCT Registry. May 10. https://doi.org/10.1257/rct.6106-1.0
Experimental Details

Interventions

Intervention(s)
Intervention Start Date
2021-05-12
Intervention End Date
2021-06-15

Primary Outcomes

Primary Outcomes (end points)
Our outcomes are beliefs reported over the course of the experiment, as subjects are presented with information about the color composition of a particular box in the first set of rounds (32 total), and about the source of one of their prior draws in the second set of rounds (12 total).

1. Test whether belief reports under verification and falsification are different. (See "Experimental Design" for descriptions of these treatments.)
Bayesian updating differs under these two treatments, but it is ex ante unclear whether people react to the difference.
Our null hypothesis is that these distributions are not different, informed by previous evidence on problems like Monty Hall and on the difficulty of conditional reasoning.
If reactions to these treatments instead differ, we will investigate how they differ (i.e., whether beliefs are more biased in one treatment, whether subjects appear more confused by one relative to the other, or whether subjects react less to one than to the other). A sketch of one possible distributional comparison appears after this list.

2. Test whether fact-checks are more credible when they may relate to signals of either color, relative to when they are restricted to a particular color.
Our null hypothesis is again that there is no difference, for similar reasons; if we do detect a difference, we will characterize its nature and direction.


3. Test whether the reaction to a fact-check differs depending on whether the fact-check indicates that a previous draw was not reflective of the state, or whether it suggests that the previous draw was in fact reflective of the opposite state.
    . Compare findings under the "switched color" fact-check with the truth/noise setup on belief updating from direct information: comparisons net of the Bayesian benchmark, Grether-style regressions, tests, etc.
    . Our hypotheses are that beliefs (i) are more biased, (ii) exhibit greater variance, and (iii) react more to signals; note that posteriors for direct evidence should be the same as under the previous setup.
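
As an illustration of the first comparison, here is a minimal sketch of a nonparametric test of whether belief reports differ across the verification and falsification treatments. The registration does not commit to a particular test, and the function and variable names below are hypothetical:

```python
from scipy.stats import ks_2samp, mannwhitneyu

def compare_belief_distributions(beliefs_a, beliefs_b):
    """Nonparametric comparison of belief reports across two treatments.

    beliefs_a, beliefs_b: arrays with one value per subject (e.g.,
    subject-level mean reports), since round-level reports within a
    subject are correlated and should not be treated as independent.
    """
    ks = ks_2samp(beliefs_a, beliefs_b)      # differences anywhere in the distribution
    mw = mannwhitneyu(beliefs_a, beliefs_b)  # location shift
    return {"ks_p": ks.pvalue, "mann_whitney_p": mw.pvalue}
```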
Primary Outcomes (explanation)

Secondary Outcomes

Secondary Outcomes (end points)
(1) Testing whether any bias is exacerbated or lessened, depending on whether beliefs relate to the composition of the box or the validity of a given past draw (i.e., whether it was from the truth box, or whether it was swapped). Our hypothesis is that the overall effect should be similar across treatments, taking into account added difficulty with reasoning about the source of the draws.

(2) In addition, we hope to replicate other findings from the literature and the key findings from our previous experiment.
- Grether-style analysis (log-odds regressions; see the sketch after this list): base-rate neglect (coefficient on the log prior odds), under-reaction to signals (coefficient on the signal's sign scaled by the signal's informational value), confirmation bias (coefficient on the signal interacted with whether the signal's sign matches the prior, e.g., the signal is +/- and the prior is >/< .5)
- Order dependence
We also hope to replicate our previous experiment's results, which demonstrated that retractions induce more mistakes than informationally equivalent signals:
- Individuals do not "delete" retracted signals, i.e. when the last signal observed is retracted, individuals do not "go back" to the same beliefs as before the signal
- Individuals have more difficulty in interpreting retractions than direct information:
    . Larger fraction of individuals updating in the wrong direction
    . Larger bias on average
    . Higher variance in posterior beliefs (?)
- Retractions accentuate biases in updating compared to informationally equivalent signals
    . In Grether-style log-odds analysis: compare coefficients for informationally equivalent signals, case by case to correct for proportionality issues
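
For concreteness, here is a minimal sketch of the Grether-style specification in Python using statsmodels. The column names (`belief`, `prior`, `llr`, `subject_id`) are hypothetical, and boundary reports of 0 or 1 would need to be trimmed or winsorized before taking log odds:

```python
import numpy as np
import statsmodels.formula.api as smf

def logit(p):
    """Log odds of a probability in (0, 1)."""
    return np.log(p / (1 - p))

def grether_fit(df):
    """Grether-style log-odds regression with subject-clustered errors.

    Bayesian updating implies slope coefficients of 1 on both the log
    prior odds and the signal's log-likelihood ratio (llr). A prior-odds
    coefficient below 1 indicates base-rate neglect; an llr coefficient
    below 1 indicates under-reaction to signals; a positive coefficient
    on the llr-by-confirmation interaction indicates confirmation bias.
    """
    df = df.assign(post_lo=logit(df["belief"]), prior_lo=logit(df["prior"]))
    df["confirms"] = (np.sign(df["llr"]) == np.sign(df["prior_lo"])).astype(float)
    return smf.ols("post_lo ~ prior_lo + llr + llr:confirms", data=df).fit(
        cov_type="cluster", cov_kwds={"groups": df["subject_id"]}
    )
```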
Secondary Outcomes (explanation)

Experimental Design

Experimental Design
The experiment is conducted on MTurk. See the supplementary document for the MTurk instructions, which explain the design.

We present subjects with a particular box, which is equally likely to contain either 9 blue balls and 1 yellow ball, or 1 blue ball and 9 yellow balls. We then present subjects with information about its composition. This information is generated in one of two ways, adjusted so that the two yield comparable belief updates (see the sketch after this list):

(1) Showing subjects balls which may instead come from an alternative "noise box" containing 5 blue balls and 5 yellow balls, or
(2) Drawing a ball from the box and switching the observation's color (we dub this "swapping").
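
To illustrate why the two processes can be calibrated to be informationally comparable, here is a minimal sketch. The noise and swap probabilities below are placeholders, as the exact values are part of the (non-public) experimental design details:

```python
# Placeholder probabilities: p is the chance a displayed ball comes from
# the noise box, q the chance a drawn ball's color is flipped. The two
# designs induce identical likelihoods (hence identical Bayesian updates)
# whenever q = p / 2.

P_BLUE_MAJ = 0.9    # P(blue | 9-blue/1-yellow box)
P_BLUE_NOISE = 0.5  # P(blue | 5-blue/5-yellow noise box)

def lik_blue_noise(p, p_blue_state):
    """P(observe blue | state) when the ball is from the noise box w.p. p."""
    return (1 - p) * p_blue_state + p * P_BLUE_NOISE

def lik_blue_swap(q, p_blue_state):
    """P(observe blue | state) when the drawn ball's color is flipped w.p. q."""
    return (1 - q) * p_blue_state + q * (1 - p_blue_state)

p = 0.5  # hypothetical noise probability
for p_blue in (P_BLUE_MAJ, 1 - P_BLUE_MAJ):  # the two boxes are symmetric
    assert abs(lik_blue_noise(p, p_blue) - lik_blue_swap(p / 2, p_blue)) < 1e-12
```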

We then compare beliefs when subjects are provided information about their past draws. Specifically, for all subjects and in all rounds, we present two draws from the box, as described above. We subsequently provide either another signal (with probability .5) or a "fact-check" (with probability .5).

In case (1), a fact-check informs the subject of whether or not a (randomly chosen) past draw was from the noise box. There are four kinds of fact-checks, which we vary across subjects; they differ in whether (a) all balls can be fact-checked, or only draws from the noise box, and (b) all colors can be fact-checked, or only particular colors.

In case (2), a fact-check informs the subject of whether or not the previous ball was swapped. Here, there are two kinds of fact-checks, which we vary across subjects and which differ in whether all colors can be fact-checked, or only particular colors.

The first 32 rounds involve subjects reporting beliefs on the composition of the box. The next 12 rounds involve subjects reporting beliefs about whether a fixed previous draw was accurate. For these rounds, we default to asking subjects about the first observation they see; if, however, a fact-check provides information on this draw, we instead ask about the second observation.

Of note, relative to our previous design, this experiment introduces (2), and it additionally allows us to study how beliefs update when information suggests that past observations were in fact more reflective of the state than initially indicated, not only less so.
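
As an illustration of the Bayesian benchmark against which reports can be compared, here is a minimal enumeration sketch for the noise treatments. The noise probability and the rule for targeting fact-checks (uniform among eligible draws) are assumptions made for this sketch, not a statement of the actual design:

```python
from itertools import product

P_NOISE = 0.5                  # placeholder P(a draw comes from the noise box)
P_BLUE = {"B": 0.9, "Y": 0.1}  # P(blue | true box), by state
P_BLUE_NOISE = 0.5             # P(blue | noise box)

def posterior_after_fact_check(colors, checked, was_noise,
                               checkable_color=None, noise_only=False):
    """P(state = blue-majority | observed colors, fact-check outcome).

    colors          : observed colors, e.g. ("blue", "yellow")
    checked         : index of the fact-checked draw
    was_noise       : True if the fact-check revealed a noise-box draw
    checkable_color : if set, only draws of that color can be targeted
    noise_only      : if True, only noise-box draws can be targeted
    The target is assumed uniform among eligible draws; this is where
    the Monty Hall-like conditioning enters under restricted checking.
    """
    joint = {"B": 0.0, "Y": 0.0}
    for state in ("B", "Y"):
        for sources in product((True, False), repeat=len(colors)):  # True = noise box
            pr = 0.5  # uniform prior over the two states
            for color, from_noise in zip(colors, sources):
                p_blue = P_BLUE_NOISE if from_noise else P_BLUE[state]
                pr *= (P_NOISE if from_noise else 1 - P_NOISE)
                pr *= p_blue if color == "blue" else 1 - p_blue
            eligible = [i for i, (c, s) in enumerate(zip(colors, sources))
                        if (checkable_color is None or c == checkable_color)
                        and (not noise_only or s)]
            if checked not in eligible or sources[checked] != was_noise:
                continue
            joint[state] += pr / len(eligible)  # uniform targeting rule
    return joint["B"] / (joint["B"] + joint["Y"])

# Example: posterior that the box is blue-majority after two blue draws,
# when the fact-check reveals that the first draw came from the noise box:
print(posterior_after_fact_check(("blue", "blue"), checked=0, was_noise=True))
```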
Experimental Design Details
Randomization Method
Randomization is done by oTree, the software on which the experiment is run.
Randomization Unit
The unit of randomization for fact-checking versus new draws is the individual-round level (each person faces 32 rounds where beliefs relate to the composition of the box and 12 rounds where beliefs relate to the validity of past draws; for all subjects, the first set of rounds precedes the second set). The unit of randomization for the effect of the type of fact-checking is the individual level. A minimal sketch of this two-level assignment appears below.
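
For illustration only (the actual assignment is implemented in oTree; the arm labels here are descriptive shorthand, not the experiment's internal names):

```python
import random

# The 6 between-subject arms described in the sample-size section:
# four "noise" arms crossing (all balls vs. noise-only checkable) with
# (all colors vs. one color checkable), and two "swap" arms varying
# whether only one color is checkable.
TREATMENTS = [
    "noise/all-balls/all-colors", "noise/all-balls/one-color",
    "noise/noise-only/all-colors", "noise/noise-only/one-color",
    "swap/all-colors", "swap/one-color",
]

def assign_subject(rng=random):
    """Individual-level randomization of the fact-checking treatment."""
    return rng.choice(TREATMENTS)

def assign_round_event(rng=random):
    """Round-level randomization: after the initial two draws, the subject
    receives either a new signal or a fact-check with equal probability."""
    return "fact_check" if rng.random() < 0.5 else "new_draw"
```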
Was the treatment clustered?
Yes

Experiment Characteristics

Sample size: planned number of clusters
Our target is to have 900 subjects total; ultimate numbers may be higher or lower, depending on budget availability.
Sample size: planned number of observations
Our target is to have 150 subjects per treatment; ultimate numbers may be higher or lower, depending on budget availability. Each subject completes 44 rounds: 32 in which subjects are asked about the composition of the box, and 12 in which they are asked about the validity of their past draws (i.e., whether a draw is from the noise box or swapped, depending on the treatment).
Sample size (or number of clusters) by treatment arms
Our target is to have 150 subjects per treatment, with 6 treatments: for the "noise" treatments, we vary whether or not exclusively noise balls are checked and whether or not a particular color is checked; for the "swapping" treatments, we vary whether or not a particular color is checked. There is uncertainty due to sampling and the available budget (the latter because awards are stochastic).
Minimum detectable effect size for main outcomes (accounting for sample design and clustering)
Supporting Documents and Materials

There is information in this trial unavailable to the public.
IRB

Institutional Review Boards (IRBs)

IRB Name
Columbia University
IRB Approval Date
2018-11-13
IRB Approval Number
AAAS1633

Post-Trial

Post Trial Information

Study Withdrawal

There is information in this trial unavailable to the public.

Intervention

Is the intervention completed?
No
Data Collection Complete
Data Publication

Data Publication

Is public data available?
No

Program Files

Program Files
Reports, Papers & Other Materials

Relevant Paper(s)

Reports & Other Materials