Replication: Belief elicitation and behavioral incentive compatibility

Last registered on November 14, 2025

Pre-Trial

Trial Information

General Information

Title
Replication: Belief elicitation and behavioral incentive compatibility
RCT ID
AEARCTR-0016853
Initial registration date
September 23, 2025

First published
September 26, 2025, 8:39 AM EDT

Last updated
November 14, 2025, 3:07 AM EST

Locations

Region

Primary Investigator

Affiliation
Technical University Bergakademie Freiberg

Other Primary Investigator(s)

Additional Trial Information

Status
Completed
Start date
2025-10-03
End date
2025-10-19
Secondary IDs
Prior work
This trial is based on or builds upon one or more prior RCTs.
Abstract
Danz, Vesterlund, and Wilson (2022) demonstrate in their article "Belief elicitation and behavioral incentive compatibility" that the binarized scoring rule, a widely used state-of-the-art mechanism for belief elicitation, leads to systematically distorted reports. We replicate their experiment to validate this effect and collect supplementary data to probe additional explanations for the bias. Furthermore, our study assesses the robustness and generalizability of their findings in broader settings.
External Link(s)

Registration Citation

Citation
Lehmann, Niklas Valentin. 2025. "Replication: Belief elicitation and behavioral incentive compatibility." AEA RCT Registry. November 14. https://doi.org/10.1257/rct.16853-2.0
Experimental Details

Interventions

Intervention(s)
Intervention (Hidden)
We ask participants to provide five predictions of outcomes in the UK National Lottery. Because the 6-out-of-59 lottery game used by the UK National Lottery has clear, objective probabilities, we can directly compare reported beliefs with true probabilities. Additionally, we include two auxiliary tasks to directly examine the incentive compatibility of the binarized scoring rule (“incentives only” conditions). In these tasks, participants make choices between lottery outcomes generated according to the binarized scoring rule; for each task, participants are informed of the probability that a lottery ticket is valid, ensuring that a dominant strategy exists. Finally, we elicit participants’ self-assessed understanding of the mechanism and their perceived honesty in reporting.

We randomize the information that participants receive about the scoring rule, which corresponds to the "information" and "no information" treatment in the study by Danz, Vesterlund, and Wilson (2022).
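For context, the incentive logic of the binarized scoring rule and the objective benchmark probabilities of the 6-out-of-59 lottery can be sketched as follows. This is a minimal illustration under our reading of the mechanism, not the study's code; all function names are ours.

```python
import math

def expected_win_probability(report: float, p_true: float) -> float:
    """Win probability for the fixed prize under the binarized scoring rule:
    the quadratic loss (report - outcome)^2 is converted into a lottery, so
    the prize is won with probability 1 - loss."""
    win_if_event = 1 - (report - 1) ** 2   # outcome = 1
    win_if_no_event = 1 - report ** 2      # outcome = 0
    return p_true * win_if_event + (1 - p_true) * win_if_no_event

def prob_exact_match(k: int) -> float:
    """Chance that a fixed 6-number ticket matches exactly k of the 6 balls
    drawn from 59 (hypergeometric) -- the objective probabilities against
    which reported beliefs can be compared."""
    return math.comb(6, k) * math.comb(53, 6 - k) / math.comb(59, 6)

# Truthful reporting maximizes the chance of winning: with a true event
# probability of 0.3, the grid search below peaks at report = 0.3.
best_report = max((r / 100 for r in range(101)),
                  key=lambda r: expected_win_probability(r, 0.3))
```

Because the payoff is binary (prize or no prize), maximizing the win probability is optimal regardless of risk preferences, which is what makes truthful reporting a dominant strategy in theory.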
Intervention Start Date
2025-10-03
Intervention End Date
2025-10-11

Primary Outcomes

Primary Outcomes (end points)
We will measure the outcomes reported in the original study by Danz et al. (2022), which primarily include the rate of false reports and the proportions of reports that pull to center, to the distant extreme, and to the near extreme. These metrics will be calculated based on participants’ main five predictions.

Additionally, we will record participants’ choices in the “incentives only” tasks, specifically whether each choice is compatible with the dominant strategy and, among incompatible choices, whether responses pull to center, near extreme, or distant extreme. We will report the fraction of incompatible choices in each of these categories.

Furthermore, diverging from the original study, we will also assess the accuracy, calibration, and precision of participants’ predictions.

Primary outcomes will be compared across treatment groups.
Primary Outcomes (explanation)

Secondary Outcomes

Secondary Outcomes (end points)
We will measure participants’ self-assessed understanding of the tasks and perceived honesty in reporting. We will examine whether confusion or lack of understanding affects primary outcomes, and assess whether the experimental treatments influence understanding and honesty.

Additionally, we will record the time participants spend on each task as a proxy for effort.
Secondary Outcomes (explanation)

Experimental Design

Experimental Design
This study investigates how participants report their probabilistic beliefs. We examine the accuracy and honesty of belief reporting, as well as participants' understanding of the tasks.
Experimental Design Details
Randomization Method
Randomization is performed server-side. Participants are assigned to treatment or control according to whether their assigned participant ID is odd or even.
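A minimal sketch of this assignment rule (the mapping of odd/even IDs to specific arms is our assumption; the actual server code is not public):

```python
def assign_arm(participant_id: int) -> str:
    """Deterministic server-side assignment: odd IDs to one arm, even IDs
    to the other. Which parity receives 'information' is assumed here."""
    return "information" if participant_id % 2 == 1 else "no information"
```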
Randomization Unit
We randomize on the participant level.
Was the treatment clustered?
No

Experiment Characteristics

Sample size: planned number of clusters
We attempt to recruit 2×70 participants.
Sample size: planned number of observations
The planned number of observations is 140. However, we note that we recruit participants during a conference (a closed pool). We therefore do not continue sampling until the planned number is reached, and the actual number of observations may differ substantially.
Sample size (or number of clusters) by treatment arms
We attempt to recruit 70 participants each in two treatment arms.
Minimum detectable effect size for main outcomes (accounting for sample design and clustering)
IRB

Institutional Review Boards (IRBs)

IRB Name
IRB Approval Date
IRB Approval Number

Post-Trial

Post Trial Information

Study Withdrawal

There is information in this trial unavailable to the public.

Intervention

Is the intervention completed?
Yes
Intervention Completion Date
October 12, 2025, 12:00 +00:00
Data Collection Complete
Yes
Data Collection Completion Date
October 12, 2025, 12:00 +00:00
Final Sample Size: Number of Clusters (Unit of Randomization)
8
Was attrition correlated with treatment status?
No
Final Sample Size: Total Number of Observations
8
Final Sample Size (or Number of Clusters) by Treatment Arms
Data Publication

Data Publication

Is public data available?
No

Program Files

Program Files
Reports, Papers & Other Materials

Relevant Paper(s)

Reports & Other Materials