Optimal (mis)coordination under uncertainty: testing information design in the laboratory

Last registered on September 17, 2021


Trial Information

General Information

Initial registration date
March 08, 2021


First published
March 09, 2021, 6:22 AM EST


Last updated
September 17, 2021, 6:28 AM EDT




Primary Investigator


Other Primary Investigator(s)

Additional Trial Information

In development
Start date
End date
Secondary IDs
Prior work
This trial does not extend or rely on any prior RCTs.
In different strategic environments, senders might want to use distinct communication strategies to persuade multiple interacting receivers. I test whether the optimal communication strategy involves private (public) signals when the strategic environment of the receivers features strategic substitutes (complements). This prediction arises in information design (Bergemann & Morris, 2019) and can guide, for example, governmental information release. I propose to measure responses to exogenously assigned information structures, focusing on the receivers' strategic interaction.
External Link(s)

Registration Citation

Ziegler, Andreas. 2021. "Optimal (mis)coordination under uncertainty: testing information design in the laboratory." AEA RCT Registry. September 17. https://doi.org/10.1257/rct.7060-1.1
Experimental Details


Intervention Start Date
Intervention End Date

Primary Outcomes

Primary Outcomes (end points)
(i) Share of participants choosing "work"; (ii) proportion of recommendations followed
Primary Outcomes (explanation)
i. In the analysis plan, I will use the framing of the investment game in Bergemann & Morris (2019), while in the experiment the game is framed as two workers deciding whether to work on easy or difficult projects. In the analysis plan, this means that the state is either good ("easy project") or bad ("difficult project"), and the firms ("worker" and "co-worker") decide whether to invest ("work") or not to invest ("not work").

ii. Recommendation following is a binary variable: a recommendation is followed iff the worker chooses the action recommended by the manager (i.e., the variable equals one if the subject works when recommended to do so, or does not work when recommended not to, and zero otherwise).
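The coding rule above can be sketched as follows (the function name is hypothetical):

```python
def follows_recommendation(recommended_work: bool, chose_work: bool) -> int:
    """1 if the worker's choice matches the manager's recommendation, else 0."""
    return int(recommended_work == chose_work)
```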

Secondary Outcomes

Secondary Outcomes (end points)
Beliefs about (i) others' behavior and (ii) the state of the world
Secondary Outcomes (explanation)
I use raw beliefs as well as average differences and average squared differences from the prediction target. For beliefs about others' behavior, the target is calculated from the most recent 40 decisions of all groups with the identical information structure, within the session and groups from earlier sessions. For beliefs about the state, the target is the Bayesian posterior implied by each information structure.
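The two prediction targets can be sketched as below; the prior and signal likelihoods in the test values are illustrative placeholders, not the experiment's parameters.

```python
def bayesian_posterior(prior_good, p_work_given_good, p_work_given_bad):
    """P(state good | signal 'work') by Bayes' rule, for a given
    information structure (signal likelihoods per state)."""
    num = prior_good * p_work_given_good
    denom = num + (1 - prior_good) * p_work_given_bad
    return num / denom

def empirical_target(decisions, window=40):
    """Share of 'work' choices among the most recent `window` decisions
    of all groups with the identical information structure."""
    recent = decisions[-window:]
    return sum(recent) / len(recent)
```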

Experimental Design

Experimental Design
Three treatment dimensions: (I) strategic complements vs. substitutes, (II) public or private information structures, (III) level of recommendations.

(I) and (II) are varied between-subject, in a 2-by-2 design.
(III) is varied within-subject, counter-balanced. This dimension changes the likelihood that the signal recommends "work" to the worker when projects are difficult. The level of recommendation refers to this likelihood; varying it changes the obedience of the information structure. Level "low" has slack obedience constraints and the lowest likelihood of receiving the signal "work". Level "optimal" has (almost) binding obedience constraints, with an intermediate likelihood. Level "high" has the highest likelihood of receiving the signal "work", and obedience constraints are not satisfied.
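As a sketch of how obedience depends on the recommendation likelihood, the following single-receiver simplification (abstracting from the co-worker's action; all payoffs, priors, and probabilities are illustrative, not the experiment's parameters) checks whether following the signal "work" is incentive compatible:

```python
def obedience_satisfied(prior_good, p_work_good, p_work_bad,
                        payoff_good, payoff_bad, outside_option=0.0):
    """Obedience constraint for the signal 'work': conditional on being
    recommended to work, working must weakly beat the outside option.
    Raising p_work_bad (the 'level' of recommendation) lowers the
    posterior on the good state, eventually violating the constraint."""
    num = prior_good * p_work_good
    posterior = num / (num + (1 - prior_good) * p_work_bad)
    expected_work = posterior * payoff_good + (1 - posterior) * payoff_bad
    return expected_work >= outside_option
```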
Experimental Design Details
Structure of the experiment:
The first 3 of 5 parts consist of the investment game (Bergemann & Morris, 2019), with 20 periods per part. Each of these 3 parts uses a different level of recommendations (within-subject treatment variation III). Part 4 elicits beliefs (see secondary outcomes). In part 5, I elicit (i) lottery-type choices, capturing behavior in an individual-decision making transformation of the investment game; (ii) risk preferences (Eckel & Grossman, 2002); and (iii) inequity aversion parameters (Fehr & Schmidt, 1999) using the method by Yang, Onderstal & Schram (2016).
I test comparative static predictions on (i) how the optimal structure depends on the receivers’ strategic environment and (ii) whether recommendations from the information structures are being followed and how this depends on their obedience.
Null hypotheses:
H1. In games of strategic substitutes, private structures lead to equal investment frequencies as public structures.
H2. In games of strategic complements, public structures lead to equal investment frequencies as private structures.
H3. Diff-in-diff of hypotheses H1 & H2: equal investment frequencies in public structures compared to private structures for complements vis-a-vis the same comparison for substitutes.
H4. The frequency of following recommendations is equal across the levels of recommendations (treatment dimension III).
H5. Beliefs (levels and correctness) about the state and others' choices are not affected by the type of information structure (private vs. public, level).
As the main test of hypotheses H1-H3, I will use the intermediate ("optimal") level of recommendation, but I will also perform the tests on the pooled data.
As the key specification, I use regressions of the dependent variables on treatment dummies, using each choice as an observation. I cluster standard errors on the matching group level. I will also estimate these models with additional controls: part fixed effects, a linear period trend within each block, subject controls (risk preferences, social preferences, gender, age), and, for the pooled data, level fixed effects.
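The key specification can be sketched as below. The hand-rolled cluster-robust (CR0) variance is a minimal stand-in for the clustered standard errors described above, and the data in the usage example are simulated placeholders:

```python
import numpy as np

def ols_cluster(y, X, clusters):
    """OLS coefficients with cluster-robust (CR0) standard errors,
    clustering on the matching-group identifier."""
    XtX_inv = np.linalg.inv(X.T @ X)
    beta = XtX_inv @ (X.T @ y)
    resid = y - X @ beta
    k = X.shape[1]
    meat = np.zeros((k, k))
    for g in np.unique(clusters):
        score = X[clusters == g].T @ resid[clusters == g]
        meat += np.outer(score, score)
    V = XtX_inv @ meat @ XtX_inv
    return beta, np.sqrt(np.diag(V))

# illustrative use: regress an investment dummy on a treatment dummy
rng = np.random.default_rng(0)
groups = np.repeat(np.arange(36), 120)        # 36 simulated matching groups
public = (groups >= 18).astype(float)         # dimension-II treatment dummy
invest = rng.binomial(1, 0.45 + 0.1 * public).astype(float)
X = np.column_stack([np.ones(invest.size), public])
beta, se = ols_cluster(invest, X, groups)
```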
Additional analyses:
- Robustness of main results using non-parametric tests (Mann-Whitney U-tests/Wilcoxon signed-rank tests) with data averaged on the matching group level.
- I will classify subjects into types based on following behavior at each level of recommendation (treatment dimension III). For example, group 1 is classified as "never follower", group 2 as "follow only low", group 3 as "follow low and optimal", etc. I will classify a subject as following a level (treatment dimension III) if at least 15 out of 20 recommendations are followed, and as weakly following if at least 12 recommendations are followed. I will study correlates of the additional elicitations (risk/inequality preferences, beliefs) with the different types.
- Testing behavioral predictions on the subject level (on risk aversion, inequality aversion).
- Structural estimation of the behavioral parameters (inequity aversion as in Fehr & Schmidt, 1999, risk aversion, and quantal response equilibrium), to be compared with the separately elicited parameters in part 5.
- To study the effect of strategic uncertainty, I compare behavior in parts 1-3 to choices in the first task of part 5.
- To study the impact of imperfect best responses, I will compare behavior to empirical best responses based on the beliefs elicited in part 4.
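The type classification in the analyses above can be sketched as follows (labels and function name hypothetical; pass `threshold=12` for the weak classification):

```python
def classify_following(follows_per_level, threshold=15):
    """Classify a subject by which recommendation levels they follow.
    follows_per_level: dict mapping level -> number of followed
    recommendations out of 20; threshold 15 is the strict cutoff,
    12 the weak one."""
    followed = [lvl for lvl in ("low", "optimal", "high")
                if follows_per_level[lvl] >= threshold]
    if not followed:
        return "never follower"
    if len(followed) == 1:
        return f"follow only {followed[0]}"
    return "follow " + " and ".join(followed)
```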
Data use:
- If participants in the online experiment drop out during the experiment, I will use all data from groups that are able to complete the experiment, including all available data from the dropped-out participants. In the experiment, participants matched with a drop-out see that their co-participant has not made an active choice and receive the maximum payment in this period.
- To account for learning effects within each part, I will do the analysis both for (i) the full dataset and (ii) dropping the first 7 periods within each part (I continue using 13 periods to ensure power in the restricted sample with experience).
Randomization Method
Computerized randomization
Randomization Unit
Treatment is randomized at the matching group level. First, each session is randomly assigned to treatment dimension I (complements vs. substitutes). Upon completion of the first set of instructions, subjects are assigned to matching groups. Each matching group is randomly assigned a treatment from dimension II (public or private) as well as a random ordering of the levels (dimension III). All treatment assignments are balanced over time.
Was the treatment clustered?

Experiment Characteristics

Sample size: planned number of clusters
72 matching groups of 6 participants
Sample size: planned number of observations
432 participants in an online experiment with the subject pool of CREED at the University of Amsterdam and of MELESSA at LMU München.
Sample size (or number of clusters) by treatment arms
18 matching groups per treatment
Minimum detectable effect size for main outcomes (accounting for sample design and clustering)
To approximate power, I simulate 100,000 datasets with the true theoretical treatment effect. On each simulated dataset, I estimate the key specification (regressing the investment choice on treatment dummies (dimension II) within game (dimension I), clustering on simulated matching groups). I code an estimate as significant if the p-value is below a significance level of 5% for a one-sided test, based on the theoretically motivated directional alternative hypotheses (for H1 to H4). This yields significant estimates in 88.1% of simulations for substitutes (H1), 98.4% for complements (H2), and 99.9% for the interaction effect (H3).
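A simplified version of such a power simulation, using matching-group averages and a normal critical value instead of the full regression specification (all parameters below are illustrative, not the registered ones), might look like:

```python
import numpy as np

def simulate_power(effect, p0=0.5, n_groups=18, n_per=120,
                   group_sd=0.05, n_sims=2000, seed=0):
    """Fraction of simulated datasets with a significant one-sided 5% test
    of the treatment effect on the investment rate, using matching-group
    averages as the unit of analysis (normal critical value 1.645)."""
    rng = np.random.default_rng(seed)
    hits = 0
    for _ in range(n_sims):
        def arm(p):
            # group-level investment rates with a random group effect
            probs = np.clip(p + rng.normal(0, group_sd, n_groups), 0, 1)
            return rng.binomial(n_per, probs) / n_per
        control, treated = arm(p0), arm(p0 + effect)
        diff = treated.mean() - control.mean()
        se = np.sqrt(treated.var(ddof=1) / n_groups
                     + control.var(ddof=1) / n_groups)
        hits += (diff / se > 1.645)
    return hits / n_sims
```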

Institutional Review Boards (IRBs)

IRB Name
Ethics Committee Economics and Business, University of Amsterdam
IRB Approval Date
IRB Approval Number


Post Trial Information

Study Withdrawal

There is information in this trial unavailable to the public.


Is the intervention completed?
Data Collection Complete
Data Publication

Data Publication

Is public data available?

Program Files

Program Files
Reports, Papers & Other Materials

Relevant Paper(s)

Reports & Other Materials