Using the Blotto game to measure beliefs

Last registered on November 17, 2022

Trial Information

General Information

Title
Using the Blotto game to measure beliefs
RCT ID
AEARCTR-0010405
Initial registration date
November 13, 2022

First published
November 17, 2022, 3:46 PM EST

Locations

Location information in this trial is not available to the public; access may be requested from the Registry.

Primary Investigator

Affiliation
University of Otago

Other Primary Investigator(s)

PI Affiliation
Vrije Universiteit Amsterdam

Additional Trial Information

Status
In development
Start date
2022-11-16
End date
2023-05-01
Secondary IDs
Prior work
This trial does not extend or rely on any prior RCTs.
Abstract
Peeters and Wolk (2019) developed and validated Blotto games to elicit expectations. In their experiments, they applied these games to elicit expectations about the termination times of a time-series process. Given that many studies elicit expectations about beliefs and norms regarding behavior, in this study we apply the most successful Blotto game of Peeters and Wolk (2019) to expectations about behavior, in the context of the Ultimatum Game. More precisely, we use the Blotto game to measure beliefs about behavior and beliefs about beliefs about behavior in this game. The former concerns measuring beliefs about verifiable events, the latter beliefs about unverifiable events. In the analysis we will focus in particular on the quality of the measured beliefs in the context of the “wisdom of the few” rather than the “wisdom of the crowd”.
External Link(s)

Registration Citation

Citation
Wolk, Leonard and Ronald Peeters. 2022. "Using the Blotto game to measure beliefs." AEA RCT Registry. November 17. https://doi.org/10.1257/rct.10405-1.0
Experimental Details

Interventions

Intervention(s)
Intervention Start Date
2022-11-16
Intervention End Date
2022-12-22

Primary Outcomes

Primary Outcomes (end points)
Assessing to what extent the Blotto game is a valid tool for measuring beliefs about behavior and beliefs about beliefs about behavior.
Primary Outcomes (explanation)

Secondary Outcomes

Secondary Outcomes (end points)
Determining the number of individuals needed to measure beliefs accurately.
Secondary Outcomes (explanation)

Experimental Design

Experimental Design
The experiment will be conducted online using Qualtrics, with participants recruited via Prolific. Each participant takes part in one of three treatments. In Treatment A, participants play a discretized version of the Ultimatum Game as in Trautmann and van de Kuilen (2015), in the role of either Proposer or Responder; Responders decide using the strategy method. In Treatment B, participants play the Blotto game in pairs, where the outcome of the game is partly determined by the outcome of Treatment A. Participants play this game in the context of either the Proposers’ decisions or the Responders’ decisions. Treatment C is comparable to Treatment B, except that average behavior within the session, rather than the outcome of Treatment A, is partly relevant for the outcome of the Blotto game.
Experimental Design Details
Not available
Randomization Method
All randomizations are computerized.
Randomization Unit
Participants are assigned to either the Proposer role or the Responder role; this randomization is at the individual level. In Treatment A, Proposers are paired with Responders (as they play the Ultimatum Game). In Treatments B and C, pairs are formed within Proposers and within Responders (as they play the Blotto game). In Treatments B and C, participant earnings are subject to chance, and these randomizations are done at the pair level.
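The role assignment and pairing described above can be sketched as follows. This is a minimal illustration of the stated design, not the authors' actual Qualtrics implementation; the function name and the seeding interface are assumptions made for the example.

```python
import random

def assign_and_pair(participant_ids, seed=None):
    """Assign roles at the individual level, then form pairs.

    Illustrative sketch of the registered design: half the participants
    become Proposers and half become Responders; Treatment A pairs a
    Proposer with a Responder, while Treatments B and C pair within role.
    """
    rng = random.Random(seed)
    ids = list(participant_ids)
    rng.shuffle(ids)
    half = len(ids) // 2
    proposers, responders = ids[:half], ids[half:]

    # Treatment A: Proposer-Responder pairs (Ultimatum Game)
    pairs_a = list(zip(proposers, responders))

    # Treatments B and C: pairs formed within each role (Blotto game)
    pairs_bc = {
        "Proposer": [tuple(proposers[i:i + 2]) for i in range(0, half - 1, 2)],
        "Responder": [tuple(responders[i:i + 2]) for i in range(0, half - 1, 2)],
    }
    return proposers, responders, pairs_a, pairs_bc
```

Any chance moves affecting pair earnings in Treatments B and C would then be drawn once per pair in `pairs_bc`, consistent with the pair-level randomization unit stated above.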
Was the treatment clustered?
No

Experiment Characteristics

Sample size: planned number of clusters
One per decision situation (two for each treatment)
Sample size: planned number of observations
600
Sample size (or number of clusters) by treatment arms
200 (100 for each role)
Minimum detectable effect size for main outcomes (accounting for sample design and clustering)
The sample sizes are chosen to detect effect sizes down to 0.5 standard deviations at the 5% significance level with 80% power in a two-sided non-parametric test.
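As a rough check on the stated power target (a sketch using the standard normal-approximation formula, not the authors' own calculation), the required group size can be computed with the Python standard library. The ARE adjustment for a rank-based test is an assumption added here for illustration.

```python
from math import ceil
from statistics import NormalDist

def n_per_group(effect_size=0.5, alpha=0.05, power=0.80, are=1.0):
    """Per-group sample size for a two-sided two-sample comparison,
    via the normal approximation: n = 2 * ((z_{1-a/2} + z_power) / d)^2.
    Dividing by an asymptotic relative efficiency (ARE) < 1 inflates n
    to account for using a non-parametric test instead of a t-test.
    """
    z = NormalDist()
    z_alpha = z.inv_cdf(1 - alpha / 2)   # ~1.96 for alpha = 0.05
    z_beta = z.inv_cdf(power)            # ~0.84 for power = 0.80
    n = 2 * ((z_alpha + z_beta) / effect_size) ** 2
    return ceil(n / are)

n_per_group()           # 63 per group (t-test approximation)
n_per_group(are=0.955)  # 66 per group (Wilcoxon rank-sum, ARE = 3/pi)
```

Both figures are comfortably below the planned 100 participants per role, consistent with the registered design.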
IRB

Institutional Review Boards (IRBs)

IRB Name
University of Otago Human Ethics Committee
IRB Approval Date
2022-11-11
IRB Approval Number
D22/313
Analysis Plan

The analysis plan is not available to the public; access may be requested from the Registry.