Willingness to pay for fact-checking
Last registered on November 23, 2020


Trial Information
General Information
Willingness to pay for fact-checking
Initial registration date
November 20, 2020
Last updated
November 23, 2020 10:33 AM EST

This section is unavailable to the public.
Primary Investigator
Center for Growth and Opportunity at Utah State University
Other Primary Investigator(s)
PI Affiliation
Utah State University
Additional Trial Information
In development
Start date
End date
Secondary IDs
Misinformation is at the center of recent political discourse. In this study, we aim to identify individuals' willingness to pay for fact-checking of misinformation, and to analyze the cognitive and non-cognitive factors that motivate that willingness. Given the prevalence of misinformation online, it is important to determine which factors can motivate individuals to make costly investments of time and energy to ascertain the truth. The experiment studies which cognitive and non-cognitive attributes affect the decision to pay for fact-checking of posts shared on social media platforms and of statements made by politicians and news anchors. We use a set of 30 statements, each rated either True or False by the PolitiFact website.
External Link(s)
Registration Citation
Mukherjee, Prithvijit and Lucas Rentschler. 2020. "Willingness to pay for fact-checking." AEA RCT Registry. November 23. https://doi.org/10.1257/rct.6789-1.0.
Experimental Details
We give participants the option to pay for fact-checking. We elicit their willingness to pay using a multiple price list, and one of the prices is randomly selected for implementation.
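The registration does not specify the prices in the list. As an illustrative sketch (the price values and function names here are hypothetical, not taken from the study), the random-price implementation of a multiple price list could look like:

```python
import random

def resolve_mpl(willing_at_price, prices):
    """Multiple price list: the participant states, for each price,
    whether they would pay it to see a fact-check. One price is then
    drawn at random and only that decision is implemented, which makes
    truthful responses at every price incentive-compatible.
    """
    drawn = random.choice(prices)
    return drawn, willing_at_price[drawn]

# Hypothetical example: willing to pay up to 10 cents, but not 25.
prices = [5, 10, 25]
willing = {5: True, 10: True, 25: False}
price, sees_fact_check = resolve_mpl(willing, prices)
```

Because only one randomly drawn row counts, a participant cannot gain by misreporting at any individual price, which is the standard rationale for this elicitation method.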
Intervention Start Date
Intervention End Date
Primary Outcomes
Primary Outcomes (end points)
Factors that determine an individual's willingness to pay.
Primary Outcomes (explanation)
Secondary Outcomes
Secondary Outcomes (end points)
Secondary Outcomes (explanation)
Experimental Design
Experimental Design
In the first decision task, individuals face 30 statements that have been rated either True or False by an independent fact-checking organization. Individuals report the probability with which they believe each statement is rated True, and these reports are incentivized with a quadratic scoring rule. After reporting their belief, they indicate, for each cost in a list, whether they are willing to pay that cost to see the fact-check. The computer then randomly selects one of these costs. If the selected cost is one they agreed to pay, their reported belief is automatically updated to the result of the fact-check; if it is one they declined to pay, their initially reported belief stands as their final answer. One of the 30 statements is randomly selected for payment in this decision task.
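The registration does not state the scoring rule's parameters. A minimal sketch of a standard quadratic scoring rule, assuming a hypothetical maximum stake (the `max_payoff` value is illustrative, not from the study), is:

```python
def qsr_payoff(reported_prob, statement_is_true, max_payoff=1.0):
    """Quadratic scoring rule: pay the maximum minus the squared
    distance between the reported probability and the realized
    outcome (1 if the statement is rated True, 0 otherwise).
    Expected payoff is maximized by reporting one's true belief.
    """
    outcome = 1.0 if statement_is_true else 0.0
    return max_payoff * (1.0 - (reported_prob - outcome) ** 2)
```

Under this rule, a report of 1.0 on a True-rated statement earns the full stake, while a report of 0.5 earns 75% of it regardless of the rating, which is why risk-neutral participants report their actual beliefs.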

In the second decision task, individuals decide how many of 100 boxes to collect, one of which contains a bomb. Each collected box earns 1 cent, but the bomb destroys everything collected. Individuals do not know where the bomb is located; they only know it is equally likely to be in any box. If an individual collects the box containing the bomb, it explodes and they earn zero. If they stop before collecting the bomb, they earn the amount accumulated up to that point.
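Under the uniform-bomb assumption stated above, the expected payoff from collecting k of 100 boxes can be sketched as follows (the function name is illustrative; the study's payment details beyond 1 cent per box are as registered):

```python
from fractions import Fraction

def bret_expected_payoff(boxes_collected, total_boxes=100, cents_per_box=1):
    """Expected earnings (in cents) from collecting k boxes when one of
    `total_boxes` boxes, uniformly at random, contains a bomb that
    zeroes earnings: P(no bomb among k) * k * cents_per_box.
    """
    p_safe = Fraction(total_boxes - boxes_collected, total_boxes)
    return p_safe * boxes_collected * cents_per_box
```

Expected earnings, (100 - k) * k / 100 cents, peak at k = 50 (25 cents), so a risk-neutral participant collects 50 boxes; choices below or above 50 indicate risk aversion or risk seeking, respectively.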

Following the two decision tasks, individuals answer a questionnaire eliciting their political preferences, cognitive reflection, Big Five and Dark Triad personality traits, and socio-demographic characteristics. At the end of the experiment, they may choose to donate $0, $1, $2, $3, $4, or $5 to a fact-checking organization.
Experimental Design Details
Not available
Randomization Method
Randomization is done by the computer.
Randomization Unit
Was the treatment clustered?
Experiment Characteristics
Sample size: planned number of clusters
Sample size: planned number of observations
Sample size (or number of clusters) by treatment arms
Minimum detectable effect size for main outcomes (accounting for sample design and clustering)
Supporting Documents and Materials

There are documents in this trial unavailable to the public.
IRB Name
Utah State University IRB
IRB Approval Date
IRB Approval Number