Elicitation of beliefs in light of decision environment changes: A laboratory experiment

Last registered on January 19, 2024

Pre-Trial

Trial Information

General Information

Title
Elicitation of beliefs in light of decision environment changes: A laboratory experiment
RCT ID
AEARCTR-0012369
Initial registration date
October 26, 2023

First published
November 01, 2023, 3:52 PM EDT

Last updated
January 19, 2024, 5:20 AM EST

Locations

Region

Primary Investigator

Affiliation
Heidelberg University

Other Primary Investigator(s)

PI Affiliation
University of Alabama
PI Affiliation
University of Glasgow
PI Affiliation
Durham University
PI Affiliation
Heidelberg University

Additional Trial Information

Status
Completed
Start date
2023-11-01
End date
2024-01-15
Secondary IDs
Prior work
This trial does not extend or rely on any prior RCTs.
Abstract
Bayesian updating is the dominant theory of learning. However, the theory is silent about how individuals react to events that were previously unforeseeable or unforeseen. In previous experiments, we tested whether subjects update their beliefs according to “reverse Bayesianism”, under which the relative likelihoods of prior beliefs remain unchanged after an unforeseen event materializes. We found that participants do not systematically deviate from reverse Bayesianism. In this follow-up experiment, we test the robustness of this finding. Specifically, reverse Bayesianism hinges on the assumption that new events provide no information about past events, so the relative likelihoods should remain unchanged. This means that the updating expected under reverse Bayesianism is indistinguishable from a null result. In this new experiment, the underlying probability distribution of the urns can change. Participants who are reverse Bayesian, rather than merely complacent, should incorporate this possibility into their updating process.
External Link(s)

Registration Citation

Citation
Becker, Christoph et al. 2024. "Elicitation of beliefs in light of decision environment changes: A laboratory experiment." AEA RCT Registry. January 19. https://doi.org/10.1257/rct.12369-4.0
Experimental Details

Interventions

Intervention(s)
The probability distribution that participants sample from might change.
Intervention Start Date
2023-11-01
Intervention End Date
2024-01-15

Primary Outcomes

Primary Outcomes (end points)
(i) The subjective probability of an observed outcome.
(ii) The subjective probability of an unobserved outcome (any other possible outcome).
(iii) The ratio between subjective probabilities of two outcomes.
Primary Outcomes (explanation)
(i) This will be elicited multiple times. There will be one such entry for each already observed outcome after each of the 40 draws for each of the 3 urns.
(ii) Participants will be asked to state a subjective probability for any other outcome not yet observed.
(iii) This ratio will be calculated for all the pairs of observed outcomes in each draw stage where this is possible (i.e. after observing two or more outcomes).
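As an illustration of how outcome (iii) might be computed from the elicited beliefs after a given draw, here is a small sketch; the function and variable names are hypothetical and not taken from the experimental software.

```python
# Hypothetical sketch: compute outcome (iii), the pairwise ratios of subjective
# probabilities, for all colors observed so far after a given draw.
from itertools import combinations

def pairwise_ratios(beliefs: dict[str, float]) -> dict[tuple[str, str], float]:
    """`beliefs` maps each already-observed color to its stated subjective probability.
    Ratios are only defined once two or more colors have been observed."""
    return {(a, b): beliefs[a] / beliefs[b]
            for a, b in combinations(sorted(beliefs), 2)
            if beliefs[b] > 0}

# Example: after a draw in which blue and red have been observed.
print(pairwise_ratios({"blue": 0.40, "red": 0.35}))  # {('blue', 'red'): 1.142...}
```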

Secondary Outcomes

Secondary Outcomes (end points)
(i) Demographics (age, gender, field of study, risk attitude), number of correctly solved Raven matrices (out of 12), number of outcomes participants thought might be in the urn
Secondary Outcomes (explanation)
(i) Answers will be used as control variables.

Experimental Design

Experimental Design
The experiment consists of three main parts. In each part, participants take 40 samples from a virtual urn containing marbles of different colors. Participants do not know the underlying distribution of the urn. After each sample draw, they are asked to state how many marbles they think the urn contains of (i) every color sampled so far and (ii) any other possible color not yet sampled. These beliefs are incentivized using the Karni method, a variant of the Becker-DeGroot-Marschak mechanism for probabilities. During the sampling process, the underlying probability distribution might change. The main parts of the study are followed by an incentivized short version of Raven’s progressive matrices and a brief demographic questionnaire.
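To illustrate the incentive scheme mentioned above, here is a minimal sketch of a Karni-style (BDM-for-probabilities) payment rule. It is an illustration under simplifying assumptions, not the experiment's actual software: the participant reports a probability for an event, a uniform random cutoff is drawn, and the participant is paid either through a lottery that wins with the cutoff probability or through a bet on the event, which makes truthful reporting optimal.

```python
# Illustrative sketch of a Karni-style (BDM) belief-elicitation payoff rule.
# This is a simplified toy implementation, not the experiment's software.
import random

def karni_payoff(reported_prob: float, event_occurred: bool, prize: float = 1.0) -> float:
    """Draw a uniform cutoff x. If x exceeds the reported probability, pay the
    prize via a lottery that wins with probability x; otherwise pay the prize
    if the event actually occurred. Reporting one's true subjective probability
    maximizes expected payoff under this rule."""
    x = random.random()
    if x > reported_prob:
        return prize if random.random() < x else 0.0  # objective lottery branch
    return prize if event_occurred else 0.0           # bet-on-the-event branch
```

One plausible mapping to the design (an assumption, not confirmed by the registration) is that the reported probability corresponds to the stated share of marbles of a given color and the event to drawing that color.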
Experimental Design Details
The experiment consists of three main parts. In each part, participants take 40 samples from a virtual urn containing marbles of different colors. Participants do not know the underlying distribution of the urn. After each sample draw, they are asked to state how many marbles they think the urn contains of (i) every color sampled so far and (ii) any other possible color not yet sampled. These beliefs are incentivized using the Karni method, a variant of the Becker-DeGroot-Marschak mechanism for probabilities. During the sampling process, the underlying probability distribution might change: in the first part, this is possible after the 30th draw; in the second part, after the 20th draw; and in the third part, after the 10th draw. This is communicated to the participants. We use two treatment conditions: in the Announcement treatment, participants only learn that the distribution might change; in the Information treatment, they additionally learn that the new distribution will always have 5 equiprobable outcomes.

The main parts of the study are followed by an incentivized short version of Raven’s progressive matrices and a brief demographic questionnaire.

Our specific hypotheses to be tested are included in the uploaded document.
Randomization Method
Randomization of sample draws and of colors per urn is done by the experimental software; treatments are assigned per session.
Randomization Unit
Samples and colors are randomized at the individual level; treatment is the same for all participants in a session.
Was the treatment clustered?
No

Experiment Characteristics

Sample size: planned number of clusters
240 laboratory participants
Sample size: planned number of observations
240 laboratory participants
Sample size (or number of clusters) by treatment arms
120 participants per treatment: 120 Announcement, 120 Information
Minimum detectable effect size for main outcomes (accounting for sample design and clustering)
Test whether the subjective probability of the residual differs between treatments (Information vs. Announcement): Because participants sample for some time, it is not entirely clear which standard deviation to use. In our last experiment, participants sampled for 30 rounds across different tasks; after half of the rounds (when participants already had some feeling for the probabilities), the standard deviations ranged from 17 to 21. Assume the worst case of a standard deviation of 21. After the change in environment becomes possible and participants suspect that it has happened, participants in the Information treatment should converge to a residual probability of 0 (as they know the true probabilities are 0.2 for every outcome). With 120 participants per treatment, we can still detect differences of up to 8 probability points at alpha = 0.05 and beta = 0.8.

Test whether the ratio increases after observing a previously unobserved event, i.e., whether participants update to put more weight on the subjectively more likely event: According to a power analysis, with 240 participants a sign test can positively identify a 55-to-45 split of participants increasing their ratio at an alpha level below 0.05.

Test in the Information treatment whether the subjective probabilities deviate from the known value of 20: We can still identify effects of up to around 5.5 percentage points, assuming a standard deviation of 20, alpha = 0.05, and beta = 0.8.

Note that participants sample over multiple rounds, so we cannot entirely pinpoint when participants might (i) suspect that the environment changed and (ii) update their ratios. This will be taken into account in the analysis by examining how beliefs developed over time and by using multiple procedures to corroborate the results.
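The two t-test calculations above can be approximately reproduced with standard power routines. The following is a minimal sketch using statsmodels, not the original power analysis; sample sizes, standard deviations, and test choices are taken from the description above, and two-sided tests are assumed.

```python
# Minimal sketch (not the original power analysis): reproduce the approximate
# minimum detectable effects stated above using statsmodels power routines.
from statsmodels.stats.power import TTestIndPower, TTestPower

ALPHA, POWER = 0.05, 0.80

# Treatment comparison (Information vs. Announcement), 120 participants per arm,
# assumed worst-case standard deviation of 21 probability points.
d_between = TTestIndPower().solve_power(nobs1=120, ratio=1.0, alpha=ALPHA, power=POWER)
print(f"Between-treatment MDE: {d_between * 21:.1f} probability points")  # ~7.6, i.e. up to ~8

# One-sample test against the known value of 20 within the Information treatment,
# 120 participants, assumed standard deviation of 20.
d_within = TTestPower().solve_power(nobs=120, alpha=ALPHA, power=POWER)
print(f"Within-treatment MDE vs. 20: {d_within * 20:.1f} percentage points")  # ~5.2, close to the stated ~5.5
```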
Supporting Documents and Materials

There is information in this trial unavailable to the public.
IRB

Institutional Review Boards (IRBs)

IRB Name
IRB of the Faculty of Economics and Social Sciences at Heidelberg University
IRB Approval Date
2023-10-23
IRB Approval Number
FESS-HD-2023-015

Post-Trial

Post Trial Information

Study Withdrawal

There is information in this trial unavailable to the public.

Intervention

Is the intervention completed?
Yes
Intervention Completion Date
January 15, 2024, 12:00 +00:00
Data Collection Complete
Yes
Data Collection Completion Date
January 15, 2024, 12:00 +00:00
Final Sample Size: Number of Clusters (Unit of Randomization)
241 laboratory participants (clustered on the individual level): 120 participants in the Announcement treatment, 121 in the Information treatment
Was attrition correlated with treatment status?
No
Final Sample Size: Total Number of Observations
241 laboratory participants
Final Sample Size (or Number of Clusters) by Treatment Arms
120 participants in the Announcement treatment, 121 in the Information treatment
Data Publication

Data Publication

Is public data available?
No

Program Files

Program Files
No
Reports, Papers & Other Materials

Relevant Paper(s)

Reports & Other Materials