Optimal (mis)coordination under uncertainty: testing information design in the laboratory

Last registered on September 17, 2021


Trial Information

General Information

Optimal (mis)coordination under uncertainty: testing information design in the laboratory
Initial registration date
March 08, 2021
Last updated
September 17, 2021, 6:28 AM EDT



Primary Investigator


Other Primary Investigator(s)

Additional Trial Information

In development
Start date
End date
Secondary IDs
Prior work
This trial does not extend or rely on any prior RCTs.
In different strategic environments, senders might want to use distinct communication strategies to persuade multiple interacting receivers. I test whether the optimal communication strategy involves private (public) signals when the strategic environment of the receivers features strategic substitutes (complements). This prediction arises in information design (Bergemann & Morris, 2019), and can guide, e.g., governmental information release. I propose to measure responses to different exogenously varied information structures, focusing on the receivers' strategic interaction.
External Link(s)

Registration Citation

Ziegler, Andreas. 2021. "Optimal (mis)coordination under uncertainty: testing information design in the laboratory." AEA RCT Registry. September 17. https://doi.org/10.1257/rct.7060-1.1
Experimental Details


Intervention Start Date
Intervention End Date

Primary Outcomes

Primary Outcomes (end points)
(i) Share of participants choosing "work"; (ii) proportion of recommendations followed
Primary Outcomes (explanation)
i. In the analysis plan, I use the framing of the investment game in Bergemann & Morris (2019), while in the experiment the game is framed as two workers deciding whether to work on easy or difficult projects. In the analysis plan, this means that the state is either good ("easy project") or bad ("difficult project"), and the firms ("worker" and "co-worker") decide whether to invest ("work") or not to invest ("not work").

ii. Recommendation following is a binary variable, coded such that a recommendation is followed if and only if the worker chooses the action recommended by the manager (i.e., equal to one if the subject works when recommended to do so, or does not work when recommended not to, and zero otherwise).
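The binary coding described above can be sketched as follows; this is an illustrative helper, not code from the study, and the function and argument names are hypothetical.

```python
def followed_recommendation(action: str, recommendation: str) -> int:
    """Return 1 if the worker's chosen action ("work" / "not work")
    matches the manager's recommendation, and 0 otherwise."""
    return 1 if action == recommendation else 0
```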

Secondary Outcomes

Secondary Outcomes (end points)
Beliefs about (i) others' behavior and (ii) the state of the world
Secondary Outcomes (explanation)
I use raw beliefs as well as average differences and average squared differences to the prediction target. For beliefs about others' behavior, the target is calculated from the most recent 40 decisions of all groups with the identical information structure, within the session and from groups in earlier sessions. For beliefs about the state, the target is the Bayesian posterior implied by each information structure.
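The two accuracy measures (average difference and average squared difference between reported beliefs and their prediction targets) can be sketched as below; function names and the example numbers are hypothetical, not from the study's data.

```python
def belief_accuracy(beliefs, targets):
    """Return (mean difference, mean squared difference) between
    reported beliefs and their prediction targets.
    A positive mean difference indicates beliefs above the target."""
    diffs = [b - t for b, t in zip(beliefs, targets)]
    n = len(diffs)
    mean_diff = sum(diffs) / n
    mean_sq_diff = sum(d * d for d in diffs) / n
    return mean_diff, mean_sq_diff
```

The squared-difference measure penalizes large deviations more heavily, while the raw difference preserves the sign (systematic over- or under-estimation).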

Experimental Design

Experimental Design
Three treatment dimensions: (I) strategic complements vs. substitutes, (II) public or private information structures, (III) level of recommendations.

(I) and (II) are varied between-subject, in a 2-by-2 design.
(III) is varied within-subject, counter-balanced. This dimension varies the likelihood that the signal recommends "work" to the worker when the state is "difficult projects". The level of recommendation refers to this likelihood, which determines whether the obedience constraints of the information structure hold. Level low has slack obedience constraints and the lowest likelihood of receiving the signal "work". Level optimal has (almost) binding obedience constraints, with an intermediate likelihood. Level high has the highest likelihood of receiving the signal "work", and obedience constraints are not satisfied.
Experimental Design Details
Not available
Randomization Method
Computerized randomization
Randomization Unit
Treatment is randomized at the matching group level. First, each session is randomly assigned to treatment dimension I, complements vs. substitutes. Upon completion of the first set of instructions, subjects are assigned to matching groups. Each matching group is randomly assigned a treatment from dimension II, public or private, as well as a random ordering of levels (dimension III). All treatment assignments are balanced over time.
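A minimal sketch of balanced group-level assignment along the lines described above: treatment arms (dimension II) crossed with level orderings (dimension III) are cycled over shuffled matching groups, so each combination appears (nearly) equally often. All names, labels, and the seed are illustrative assumptions, not details from the registry.

```python
import itertools
import random

def assign_matching_groups(group_ids, arms, orderings, seed=0):
    """Hypothetical balanced assignment: shuffle the matching groups,
    then cycle through all arm-by-ordering combinations so that
    treatment cells stay balanced across groups."""
    rng = random.Random(seed)
    ids = list(group_ids)
    rng.shuffle(ids)
    combos = list(itertools.product(arms, orderings))
    return {g: combos[i % len(combos)] for i, g in enumerate(ids)}
```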
Was the treatment clustered?

Experiment Characteristics

Sample size: planned number of clusters
72 matching groups of 6 participants
Sample size: planned number of observations
432 participants in an online experiment with the subject pool of CREED at the University of Amsterdam and of MELESSA at LMU München.
Sample size (or number of clusters) by treatment arms
18 matching groups per treatment
Minimum detectable effect size for main outcomes (accounting for sample design and clustering)
To approximate power, I simulate 100,000 datasets with the true theoretical treatment effect. On each simulated dataset, I estimate the key specification (regressing investment choice on treatment dummies (dimension II) within each game (dimension I), clustering standard errors at the simulated matching-group level). I code an estimate as significant if the p-value is below a significance level of 5% for a one-sided test, based on the theoretically motivated directional alternative hypotheses (for H1 to H4). This yields significant estimates in 88.1% of simulations for substitutes (H1), 98.4% for complements (H2), and 99.9% for the interaction effect (H3).
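The simulation logic above can be illustrated with a highly simplified Monte Carlo sketch: the clustered regression is replaced by a one-sided comparison of matching-group means under normality, with an assumed effect size and cluster-level noise. All parameter values (effect, standard deviation, critical value) are illustrative assumptions, not the study's calibration.

```python
import math
import random
import statistics

def simulate_power(n_sims=2000, n_clusters=18, effect=0.15, sd=0.2, seed=1):
    """Monte Carlo power approximation for a one-sided two-sample
    comparison of cluster (matching-group) means; a simplified
    stand-in for the clustered regression. Returns the share of
    simulations with a significant estimate at the 5% level."""
    rng = random.Random(seed)
    hits = 0
    for _ in range(n_sims):
        control = [rng.gauss(0.5, sd) for _ in range(n_clusters)]
        treated = [rng.gauss(0.5 + effect, sd) for _ in range(n_clusters)]
        # Welch-type t statistic for the difference in cluster means
        m_t, m_c = statistics.mean(treated), statistics.mean(control)
        v_t, v_c = statistics.variance(treated), statistics.variance(control)
        se = math.sqrt(v_t / n_clusters + v_c / n_clusters)
        # one-sided test at 5%: normal critical value ~1.645
        # (an exact t critical value would be slightly larger)
        if (m_t - m_c) / se > 1.645:
            hits += 1
    return hits / n_sims
```

Increasing `n_sims` sharpens the power estimate; the actual analysis plan uses 100,000 simulations of the full regression specification.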

Institutional Review Boards (IRBs)

IRB Name
Ethics Committee Economics and Business, University of Amsterdam
IRB Approval Date
IRB Approval Number