Elicitation of beliefs in light of dynamic new information: a laboratory experiment
Last registered on June 18, 2020


Trial Information
General Information
Elicitation of beliefs in light of dynamic new information: a laboratory experiment
Initial registration date
February 28, 2020
Last updated
June 18, 2020 10:03 AM EDT
Primary Investigator
University of Glasgow, Adam Smith Business School
Other Primary Investigator(s)
PI Affiliation
Heidelberg University
PI Affiliation
Heidelberg University
PI Affiliation
Heidelberg University
PI Affiliation
Warwick Business School
Additional Trial Information
Start date
End date
Secondary IDs
Bayesian updating is the dominant theory of learning in economics and other disciplines. According to this theory, decision-makers hold prior beliefs which they update according to Bayes' rule after receiving new information. The theory is silent, however, on how individuals react to receiving information they have never observed previously and, hence, may deem impossible. Recent theoretical literature has put forth a possible mechanism, called “reverse Bayesianism”, by which decision-makers may react to unforeseen events. We began to test reverse Bayesianism experimentally in a previous experiment (AEARCTR-0003815). Those findings show that participants behave consistently with the theory, while they do not appear to hold an expectation of the unknown.

Motivated by these findings, our goal in this project is to understand when individuals cease to expect something new. Additionally, we aim to test reverse Bayesianism in a more dynamic context where more than one unforeseen event might occur. A summary of our design follows. Participants will be asked to repeatedly draw coloured balls from a virtual urn. After every draw, participants declare their perceived proportion of each colour drawn up to that point, and of any other colour not yet drawn or observed. There will be a total of 30 draws, and this will be repeated for 4 different virtual urns. Beliefs about the proportions of the different colours will be incentivised using the Karni (2009) method.

We have two treatments. In the “2-outcome” treatment, participants first make draws out of a 2-outcome urn, while in the “4-outcome” treatment they first make draws out of a 4-outcome urn. After the first urn, the three subsequent urns are the same for both treatments: 3-outcome, 2-outcome and 4-outcome. In the supporting documents, we attach a diagram detailing this sequence for each treatment.

This design will allow us to explore whether participants hold an expectation of the unknown and, additionally, when this expectation diminishes. Contrasting our two treatments will allow us to consider whether individuals who have experienced a broader environment differ from those who have experienced a narrower one; specifically, how experience might cause an individual’s expectation of the unknown to diminish differently. Furthermore, we will be able to consider multiple instances of new events occurring and thus test reverse Bayesianism in a more dynamic fashion. In particular, we will address the following questions: Do individuals naturally expect events that were previously considered impossible? Does experience affect an individual’s expectation of unknown events? How do individuals update their beliefs about the urn composition after an unexpected event takes place?
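The ratio-preservation property at the heart of reverse Bayesianism can be illustrated in a few lines. The sketch below is our own illustration of the general idea (not the authors' code or their exact model): when a previously unforeseen colour appears, the decision-maker gives it positive probability while shrinking beliefs over the already-known colours proportionally, so that the ratios between them are preserved.

```python
# Hedged illustration of reverse-Bayesian belief revision: the new colour
# receives some mass, and existing beliefs are rescaled proportionally,
# leaving all pairwise ratios among known colours unchanged.

def reverse_bayesian_update(beliefs, new_colour, new_mass):
    """Rescale existing beliefs by (1 - new_mass) and assign `new_mass`
    to the newly discovered colour. `beliefs` maps colour -> probability."""
    scale = 1.0 - new_mass
    updated = {colour: p * scale for colour, p in beliefs.items()}
    updated[new_colour] = new_mass
    return updated

beliefs = {"red": 0.6, "blue": 0.4}
updated = reverse_bayesian_update(beliefs, "green", 0.1)
# red becomes 0.54, blue becomes 0.36, green is 0.10;
# the red/blue ratio stays at 1.5, as the theory requires.
```

This is exactly the quantity behind primary outcome (ii): under reverse Bayesianism, the ratio between the subjective probabilities of two already-observed prizes should be unaffected by the arrival of an unforeseen one.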
External Link(s)
Registration Citation
Becker, Christoph et al. 2020. "Elicitation of beliefs in light of dynamic new information: a laboratory experiment." AEA RCT Registry. June 18. https://doi.org/10.1257/rct.5499-1.1.
Experimental Details
Intervention Start Date
Intervention End Date
Primary Outcomes
Primary Outcomes (end points)
(i) The subjective probability of an unobserved event (or prize).
(ii) The ratio between subjective probabilities of the two prizes already observed respectively.
Primary Outcomes (explanation)
(i) This will be elicited multiple times. There will be one such entry after each of the 30 draws of each of the 4 urns that will be observed.
(ii) This ratio will be between the pairs of observed prizes up until each draw stage.
Secondary Outcomes
Secondary Outcomes (end points)
(i) Result of the Raven test
Secondary Outcomes (explanation)
Experimental Design
Experimental Design
We design an experiment in which subjects are asked to report subjective probabilities of prizes after observing multiple draws from virtual urns. These elicitations are incentivised using a method similar to Becker-DeGroot-Marschak (BDM) (see the enclosed documents for more details).

There are two treatments:

2-outcome treatment:
Participants make 30 draws out of a virtual urn, and face 4 different urns in sequence. After every draw they are asked to report their subjective probabilities for each of the coloured balls they have observed up to that point, and additionally for any other colour not yet observed. These inputs must always sum to 100; participants are prompted to revise their inputs if the subjective probabilities do not sum to exactly 100. In this treatment, the first virtual urn contains two outcomes, the second three outcomes, the third two outcomes and the fourth four outcomes. One randomly chosen subjective probability report from each of the four urns will be paid according to the Karni (2009) method. Details on this method are offered in the attached document.

4-outcome treatment:
The only difference from the 2-outcome treatment is that the first virtual urn contains four outcomes. Otherwise, the two treatments are identical.
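The Karni-style payment can be sketched as follows. This is a hedged reconstruction of one common BDM-style probability-elicitation rule often attributed to Karni (2009), not the authors' exact implementation; the registration's attached document contains the precise rules used in the sessions.

```python
import random

def karni_payment(reported_p, event_occurred, rng=random.random):
    """One round of a BDM-style probability elicitation (a sketch under
    our reading of the Karni 2009 mechanism, not the authors' code).
    Draw x ~ U[0,1]. If the report is at least x, pay contingent on the
    event itself; otherwise pay via a lottery that wins with probability x.
    Under this rule, truthfully reporting one's subjective probability
    maximises expected payoff."""
    x = rng()
    if reported_p >= x:
        return 1 if event_occurred else 0
    return 1 if rng() < x else 0
```

The incentive logic: overstating the probability trades a favourable lottery for a bet on the event, and understating does the reverse, so a subjective-expected-utility maximiser reports honestly.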

Each session ends with a short incentivised Raven test and some general demographic questions.

Our specific hypotheses to be tested are included in the attached document.
Experimental Design Details
Randomization Method
Done by a computer and virtual random draws from an urn.
Randomization Unit
Was the treatment clustered?
Experiment Characteristics
Sample size: planned number of clusters
300 laboratory participants
To account for no-shows we will invite more participants. Hence, sample size may exceed 300.
Sample size: planned number of observations
300 laboratory participants
Sample size (or number of clusters) by treatment arms
150 participants: 2-outcome treatment
150 participants: 4-outcome treatment
Minimum detectable effect size for main outcomes (accounting for sample design and clustering)
Test whether the subjective probability of an unobserved event differs between treatments (2-outcome vs. 4-outcome): the minimum detectable effect size is one standard deviation between the two treatments. The test is a two-sample t-test, with data clustered at the individual level. To identify this difference, at least 64 participants per treatment are needed. Test whether the ratio increases after a previously unobserved event is drawn, i.e. whether participants update to put more weight on the subjectively more likely event: according to a power analysis, with 300 participants we can identify a 55-to-45 split of participants increasing their ratio at an α-level of 0.05 (sign test).
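A sign-test power calculation of the kind described above can be reproduced with an exact binomial computation. The sketch below is our own reconstruction, not the authors' power script, and the registration does not state the power level they targeted; the function simply returns the power of a one-sided sign test for a given sample size and true proportion.

```python
from math import comb

def sign_test_power(n, p1, alpha=0.05):
    """Exact power of a one-sided sign test with n participants.
    Find the smallest critical count k with P(X >= k | p = 0.5) <= alpha,
    then return power = P(X >= k | p = p1)."""
    def upper_tail(p, k):
        # P(X >= k) for X ~ Binomial(n, p)
        return sum(comb(n, i) * p**i * (1 - p)**(n - i)
                   for i in range(k, n + 1))
    k = next(k for k in range(n + 1) if upper_tail(0.5, k) <= alpha)
    return upper_tail(p1, k)

# e.g. sign_test_power(300, 0.55) gives the power to detect a 55-to-45
# split of participants increasing their ratio, at alpha = 0.05.
```

The same skeleton works for any (n, p1) pair, so it can also be used to check how sensitive the design is to no-shows reducing the realised sample.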
Supporting Documents and Materials
Document Name
Exp. Hypotheses & Instructions
Document Type
Document Description
Experimental details, hypotheses and instructions
Exp. Hypotheses & Instructions

MD5: 368d683451d516c6ef715a43daacbc8b

SHA1: 8b129a1dcd749d72261d1b0e8571f8099455487c

Uploaded At: February 28, 2020

IRB Name
Warwick University Humanities and Social Sciences Research Ethics Committee
IRB Approval Date
IRB Approval Number
HSSREC 104/19-20
Post Trial Information
Study Withdrawal
Is the intervention completed?
Intervention Completion Date
March 06, 2020, 12:00 AM +00:00
Is data collection complete?
Data Collection Completion Date
March 06, 2020, 12:00 AM +00:00
Final Sample Size: Number of Clusters (Unit of Randomization)

Due to the Covid-19 crisis we had to scrap planned sessions at Heidelberg University. The sample collected at Warwick Business School is sufficient given our power calculations.
Was attrition correlated with treatment status?
Final Sample Size: Total Number of Observations
Final Sample Size (or Number of Clusters) by Treatment Arms
2-outcome treatment: 89 participants
4-outcome treatment: 85 participants
Data Publication
Data Publication
Is public data available?
Program Files
Program Files
Reports, Papers & Other Materials
Relevant Paper(s)