Underconfidence or lottery design? – A replication of Hoelzl & Rustichini (2005)

Last registered on May 13, 2024

Pre-Trial

Trial Information

General Information

Title
Underconfidence or lottery design? – A replication of Hoelzl & Rustichini (2005)
RCT ID
AEARCTR-0013070
Initial registration date
May 08, 2024


First published
May 13, 2024, 12:28 PM EDT


Locations

There is information in this trial unavailable to the public.

Primary Investigator

Affiliation
Universität Paderborn

Other Primary Investigator(s)

Additional Trial Information

Status
In development
Start date
2024-06-01
End date
2024-09-30
Secondary IDs
Prior work
This trial does not extend or rely on any prior RCTs.
Abstract
The 2005 paper “Overconfident: Do you put your money on it?” by Erik Hoelzl and Aldo Rustichini introduces a novel approach to studying overconfidence relative to others. By letting participants vote for a performance test or a lottery as the basis for a bonus payment, they present a behavioral measure of relative overconfidence that goes beyond verbal performance self-assessments. They observe a tendency toward underconfident assessments under monetary incentives with hard test items, i.e., individuals increasingly opt for the luck-based payment scheme (a die roll) over the performance-based one (a knowledge test), presumably because they perceive their chances of winning to be higher in a fair 50/50 lottery than in a (difficult) performance test.
More recent literature has noted that the observed voting patterns do not necessarily constitute underconfidence but may instead reflect aversion to the risk or ambiguity associated with the test (Grieco & Hogarth 2009; Blavatskyy 2008; Owens et al. 2014). We further argue that the two payment schemes are not perfectly comparable in terms of perceived winning probability: the test implies a fixed winner share of 50% (i.e., no variance in the number of winners), while the lottery yields a probabilistic winner distribution with positive variance. This inherent variance may bias voters’ perception of their probability of winning and may even induce altruistic voting in the sense that “everybody has a chance to win” on an individual die roll, where one’s chance of winning does not depend on the other participants’ performance. For these reasons, we suspect that the voting share in favor of the lottery that is attributed to underconfidence may be overestimated.
Therefore, in the planned experiment, a fixed outcome distribution lottery (50/50) will be compared against the probabilistic 50/50 lottery from Hoelzl & Rustichini (2005) in a between-subject design to evaluate the impact of the lottery mechanism’s design on individual voting behavior. Additional questionnaire measures will be implemented to gain a more complete understanding of the driving motive(s) behind individual voting behavior, in order to rebut or affirm alternative explanations besides underconfidence.
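The variance argument above can be illustrated with a short simulation (an illustrative sketch only, not part of the registered protocol; the function names are our own): under a fixed outcome distribution, exactly half of the n participants win, whereas independent 50/50 die rolls yield a binomially distributed number of winners with standard deviation sqrt(n * 0.25), i.e. about 5 winners for n = 100.

```python
import random
import statistics

def fixed_lottery_winners(n):
    """Fixed outcome distribution: exactly half of n participants win."""
    return n // 2

def probabilistic_lottery_winners(n, p=0.5, rng=random):
    """Probabilistic lottery: each participant wins an independent 50/50 draw."""
    return sum(rng.random() < p for _ in range(n))

random.seed(1)  # reproducible illustration
n = 100
draws = [probabilistic_lottery_winners(n) for _ in range(10_000)]

print(fixed_lottery_winners(n))    # always exactly 50 winners
print(statistics.mean(draws))      # close to 50 on average
print(statistics.pstdev(draws))    # close to the binomial sd of 5
```

Both schemes give each participant a 50% winning chance ex ante, but only the probabilistic lottery lets the realized number of winners deviate from 50; this asymmetry is what the treatment condition removes.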
External Link(s)

Registration Citation

Citation
Protte, Marius. 2024. "Underconfidence or lottery design? – A replication of Hoelzl & Rustichini (2005)." AEA RCT Registry. May 13. https://doi.org/10.1257/rct.13070-1.0
Experimental Details

Interventions

Intervention(s)
In the planned experiment, a fixed outcome distribution lottery with a 50% win probability will be compared against the probabilistic 50% win probability lottery from Hoelzl & Rustichini (2005) in a between-subject design to evaluate the impact of the lottery mechanism’s design on individual voting behavior.
Additional questionnaire measures will be implemented to gain a more complete understanding of the driving motive(s) behind individual voting behavior to either rebut or affirm alternative explanations besides underconfidence.
Intervention Start Date
2024-06-01
Intervention End Date
2024-07-31

Primary Outcomes

Primary Outcomes (end points)
Individual voting behavior compared between probabilistic distribution and fixed outcome distribution lottery schemes
Primary Outcomes (explanation)
Overconfidence/Underconfidence will be measured by participants' voting behavior, i.e., whether they vote for the performance-based (better performing half of participants in a test receive a bonus payment) or luck-based (50/50 lottery to determine which participants receive a bonus payment) payoff scheme

Secondary Outcomes

Secondary Outcomes (end points)
Secondary Outcomes (explanation)

Experimental Design

Experimental Design
The experiment by Hoelzl & Rustichini (2005) will be replicated in an online experiment on Prolific. Subjects will vote on a performance-based (test) or luck-based (lottery) payoff scheme for a potential bonus payment; this vote serves as an indicator of accurate self-assessment relative to others. Which payoff mechanism is eventually implemented is determined by the majority vote among the 100 participants in each condition. Independent of the voting outcome, all subjects will play both the test and the lottery afterwards to prevent effort avoidance. The performance test will be based on vocabulary knowledge and consists of 20 items. The lottery mechanism will differ between the replication condition (probabilistic outcome distribution lottery) and the treatment condition (fixed outcome distribution lottery). Before and after the performance test, participants will be asked the same questions on their pre-test expectations and post-test reflections as in the original study by Hoelzl and Rustichini. Finally, subjects will answer a multi-page questionnaire on attitudes toward risk, ambiguity, altruism, social comparison, and self-efficacy to elicit alternative voting motives.
Experimental Design Details
Not available
Randomization Method
Public study on Prolific.com
Randomization Unit
Not applicable
Was the treatment clustered?
No

Experiment Characteristics

Sample size: planned number of clusters
200 participants from Prolific.com
Sample size: planned number of observations
200 participants from Prolific.com
Sample size (or number of clusters) by treatment arms
100 participants in the replication (control) group, 100 participants in the adaptation (treatment) group
Minimum detectable effect size for main outcomes (accounting for sample design and clustering)
IRB

Institutional Review Boards (IRBs)

IRB Name
Gesellschaft für experimentelle Wirtschaftsforschung e.V. (GfeW)
IRB Approval Date
2024-02-20
IRB Approval Number
4dqGpmS5