Testing Models of Strategic Simplicity

Last registered on December 06, 2023


Trial Information

General Information

Testing Models of Strategic Simplicity
Initial registration date
December 03, 2023

The initial registration date is when the trial was registered; it corresponds to when the registration was submitted to the Registry to be reviewed for publication.

First published
December 06, 2023, 8:53 AM EST

First published corresponds to when the trial was first made public on the Registry after being reviewed.


Some information in this trial is not available to the public.

Primary Investigator

University of Essex

Other Primary Investigator(s)

PI Affiliation
University of Essex
PI Affiliation

Additional Trial Information

In development
Start date
End date
Secondary IDs
Prior work
This trial does not extend or rely on any prior RCTs.
We compare the performance of different mechanisms corresponding to anonymous social choice functions. Since there is no efficient dominant-strategy mechanism, we aim to improve efficiency by relaxing the dominance constraint: we replace the dominant-strategy solution concept with behavioral but still relatively simple solution concepts. We show theoretically that this replacement yields efficiency gains. However, the empirical relevance of these concepts remains in question, so we test the theoretical predictions in the lab.
External Link(s)

Registration Citation

Basteck, Christian, Ahrash Dianat and Mikhail Freer. 2023. "Testing Models of Strategic Simplicity." AEA RCT Registry. December 06. https://doi.org/10.1257/rct.12555-1.0
Experimental Details


Intervention Start Date
Intervention End Date

Primary Outcomes

Primary Outcomes (end points)
The key outcomes are:

1) Efficiency of the mechanism (group level):
Observed frequency of the group outcome being Pareto efficient

2) Consistent behavior (individual level):
Frequency of the observed behavior matching the appropriate theoretical prediction
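Since Pareto efficiency of the group outcome is the main group-level endpoint, here is a minimal sketch of how one might check it for two players with strict ordinal rankings over three alternatives (function and variable names are illustrative, not taken from the study's code):

```python
def pareto_efficient(outcome, prefs):
    """Return True if no alternative makes both players strictly better off
    than `outcome`. `prefs` is a list of two rankings, best to worst,
    e.g. ['b', 'a', 'c']. With strict rankings, an outcome is Pareto
    inefficient exactly when some alternative is ranked above it by both."""
    rank = [{alt: i for i, alt in enumerate(p)} for p in prefs]  # lower = better
    return not any(
        all(r[alt] < r[outcome] for r in rank)
        for alt in prefs[0] if alt != outcome
    )
```

For example, with preference profiles bac and cba, outcome b is Pareto efficient, while outcome a is not (both players rank b above a).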
Primary Outcomes (explanation)

Secondary Outcomes

Secondary Outcomes (end points)
Secondary Outcomes (explanation)

Experimental Design

Experimental Design
We consider a social choice situation with two players and three alternatives. Players are denoted 1 and 2; alternatives are denoted a, b, and c. There are four treatments, each corresponding to a different mechanism that players participate in. Each mechanism depends on a public priority list. For simplicity of illustration, all priority lists below are assumed to be abc.

T1: Dominant Strategies. Alternative a is made the default. Players are asked whether they want to implement b instead of a. If both agree, then b is implemented. Otherwise, a is implemented.

T2: Range Dominance-1. Alternative a is made the default. Players are asked whether they want to implement b or c instead of a. If both choose b, then b is implemented. If both choose c, then c is implemented. Otherwise, a is implemented.

T3: Range Dominance-2. Players are asked whether they want to veto a, b, or c. An alternative is vetoed if at least one player vetoes it. If a is not vetoed, then a is implemented. If a is vetoed and b is not, then b is implemented. Otherwise, c is implemented. To improve parallelism across treatments, we implement this mechanism by having subjects cast two votes for the alternatives they support; the alternative a subject does not vote for is treated as vetoed.

T4: Strategically Simple. Alternative a is made the current default. Players are asked whether they want to replace the current default with b. If both agree, then b becomes the current default; otherwise, a remains the current default. In the next stage, players are asked whether they want to replace the current default with c. If both agree, then c becomes the current default; otherwise, the current default stays the same. Finally, the current default is implemented.
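The four mechanisms above can be sketched as simple outcome functions, assuming the priority list abc (the function names and vote encodings are our own illustration, not the experimental software):

```python
def t1(v1, v2):
    """T1: default a; b is implemented only if both players choose b."""
    return 'b' if v1 == v2 == 'b' else 'a'

def t2(v1, v2):
    """T2: default a; b or c is implemented only if both players choose it."""
    return v1 if v1 == v2 and v1 in ('b', 'c') else 'a'

def t3(votes1, votes2):
    """T3: each player casts two supporting votes; the alternative a player
    does not vote for counts as vetoed. The first non-vetoed alternative in
    the priority order abc is implemented; if a and b are vetoed, c wins."""
    vetoed = ({'a', 'b', 'c'} - set(votes1)) | ({'a', 'b', 'c'} - set(votes2))
    for alt in ('a', 'b'):
        if alt not in vetoed:
            return alt
    return 'c'

def t4(agree_b, agree_c):
    """T4: default a; b becomes the default if both agree in stage 1, then
    c replaces the current default if both agree in stage 2."""
    default = 'b' if all(agree_b) else 'a'
    return 'c' if all(agree_c) else default
```

For instance, in T3 a player who votes for b and c has effectively vetoed a, so if the other player votes for a and b, the vetoed set is {a, c} and b is implemented.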

Out of the six possible preference profiles, we consider four, so as to maximize the size of the effect. In particular, we remove the two profiles that rank the default alternative at the top: for the priority list abc, we remove the abc and acb profiles. Thus, players can have preferences bac, bca, cab, or cba.

Experimental Details.

Subjects play 20 periods of the experiment, with random rematching after every period. Payment is determined by randomly selecting one period once the experiment is completed. Alternatives are color-coded blue, green, and orange; the mapping between the underlying alternatives and these labels is randomized. The priority list is randomized at the period level, and each subject's type is randomized every period.

Post-experimental tasks:

Subjects participate in two post-experimental tasks. The first is a standard beauty contest game; the second is a risk elicitation task.

In the beauty contest task, subjects are asked to guess 2/3 of the average guess of the other subjects in the session. The player whose guess is closest to this target wins and receives compensation. This standard task is used to reveal subjects' cognitive sophistication.
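As an illustration of the winner determination rule (the guess values and the tie-handling are hypothetical; in the task the target is 2/3 of the average of the *other* subjects' guesses, and this sketch uses the overall average for simplicity):

```python
def beauty_contest_winner(guesses):
    """Return the index of the guess closest to 2/3 of the average guess.
    Ties are broken in favor of the earlier index, an arbitrary choice
    for this sketch."""
    target = (2 / 3) * sum(guesses) / len(guesses)
    return min(range(len(guesses)), key=lambda i: abs(guesses[i] - target))
```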

In the risk elicitation task, each subject is asked a single question: whether they prefer, for sure, the payment corresponding to their second-best alternative, or a lottery paying, with equal probability, the amounts corresponding to their first- and third-best alternatives. This test is necessary because some of the predictions may be sensitive to successfully inducing not only ordinal but also cardinal preferences.
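Under risk neutrality, this choice reduces to an expected-value comparison; a subject who takes the sure option even when the expected values are equal reveals risk aversion. A minimal sketch (the payment amounts used below are hypothetical, not those from the experiment):

```python
def prefers_sure_second_best(pay_best, pay_second, pay_worst):
    """A risk-neutral subject prefers the sure second-best payment iff it
    exceeds the expected value of the 50/50 lottery over the best and
    worst payments."""
    return pay_second > 0.5 * (pay_best + pay_worst)
```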
Experimental Design Details
Not available
Randomization Method
All randomization is computer based.
Randomization Unit
Experimental Session, Subject
Was the treatment clustered?

Experiment Characteristics

Sample size: planned number of clusters
4 sessions per treatment, 20 subjects per session, 4 treatments
Sample size: planned number of observations
240 subjects
Sample size (or number of clusters) by treatment arms
80 subjects by treatment, 4 treatments
Minimum detectable effect size for main outcomes (accounting for sample design and clustering)

Institutional Review Boards (IRBs)

IRB Name
WZB Research Ethics Committee
IRB Approval Date
IRB Approval Number
Analysis Plan

Some information in this trial is not available to the public.