Fostering innovation: Experimental evidence on the effectiveness of behavioral interventions

Last registered on October 20, 2022


Trial Information

General Information

Fostering innovation: Experimental evidence on the effectiveness of behavioral interventions
Initial registration date
December 27, 2019


First published
January 04, 2020, 11:38 PM EST


Last updated
October 20, 2022, 4:21 PM EDT




Primary Investigator

University of Fribourg

Other Primary Investigator(s)

PI Affiliation
University of Fribourg
PI Affiliation
University of Fribourg

Additional Trial Information

Start date
End date
Secondary IDs
Prior work
This trial does not extend or rely on any prior RCTs.
Innovation is a core component of a growing economy, yet how best to foster it is not fully understood. Using a management game in the laboratory, we study the effect of making strategies or profits salient on participants' choice to innovate. To do so, we use participant-generated reports and vary their content to be either strategy- or profit-focused, keeping everything else, including a pay-for-performance incentive scheme, fixed. We expect that participants who are asked to report their strategies will be more innovative than those who report their profits.
External Link(s)

Registration Citation

Matthewes, Elisa, Anis Nassar and Christian Zihlmann. 2022. "Fostering innovation: Experimental evidence on the effectiveness of behavioral interventions." AEA RCT Registry. October 20.
Experimental Details


The laboratory task is adapted from Ederer & Manso (2013). We employ their pay-for-performance scheme and keep it fixed. Our treatments target which aspect of the game is salient to an individual: we ask subjects to report a specific aspect of the game and expect that, through this shift of attention, they will focus on the very aspect they report. Concretely, the treatments are as follows:
1. Control
No reporting. Mimics one-to-one the pay-for-performance treatment of Ederer & Manso (2013).
2. Profit treatment
After the decisions are made, in periods 3, 6, 9, and 12, subjects are asked to report their profits of the last three periods. Along with the wording "Please report the profit of the last three periods", subjects face an entry mask in which they enter the profits of the last three periods.
3. Strategy treatment
After the decisions are made, in periods 3, 6, 9, and 12, subjects are asked to report their strategy of the last three periods.
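The reporting logic of the three treatments can be sketched as follows. This is only an illustration: the strategy-treatment prompt wording is an assumption, mirrored from the profit prompt given in the registration.

```python
# Illustrative sketch of the treatment logic: which prompt (if any) a subject
# sees at the end of a given period. Not the authors' experiment code.

REPORT_PERIODS = {3, 6, 9, 12}  # periods in which a report is requested

def report_prompt(treatment, period):
    """Return the reporting prompt for this treatment/period, or None."""
    if treatment == "control" or period not in REPORT_PERIODS:
        return None  # control never reports; other arms report only in 3, 6, 9, 12
    if treatment == "profit":
        return "Please report the profit of the last three periods"
    if treatment == "strategy":
        # Assumed wording, symmetric to the profit prompt.
        return "Please report the strategy of the last three periods"
    raise ValueError(f"unknown treatment: {treatment}")
```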

For further details, refer to the pre-analysis plan.
Intervention Start Date
Intervention End Date

Primary Outcomes

Primary Outcomes (end points)
Exploration behavior, proxied by different variables (see Ederer & Manso 2013):
- maximal per-period profit
- profit in the final period
- location in the final period
- longest duration of an exploration phase
- standard deviation of the three continuous variables
Primary Outcomes (explanation)
The exploration phase is a constructed variable. Following Ederer & Manso (2013), we classify subjects as having entered an exploratory phase as soon as they choose a location other than the default (business district). An explorative phase ends when (i) a subject switches back to the default location (business district), or (ii) a subject keeps location and color unchanged and simultaneously changes none of the three continuous variables (lemonade content, sugar content, price) by more than 0.25.
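A minimal sketch of this classification rule, assuming per-period records with illustrative field names (location, color, lemonade, sugar, price); this is not the authors' code:

```python
# Classify each period as part of an exploratory phase (True) or not (False),
# following the start/end rules described above. Field names are assumptions.

DEFAULT_LOCATION = "business district"
TOLERANCE = 0.25  # changes of at most 0.25 in a continuous variable count as "no change"

def classify_phases(periods):
    """`periods` is a list of dicts with keys: location, color, lemonade, sugar, price."""
    flags = []
    exploring = False
    prev = None
    for p in periods:
        if not exploring:
            # Rule: a phase starts as soon as the subject leaves the default location.
            exploring = p["location"] != DEFAULT_LOCATION
        else:
            back_to_default = p["location"] == DEFAULT_LOCATION
            unchanged = (
                p["location"] == prev["location"]
                and p["color"] == prev["color"]
                and all(abs(p[k] - prev[k]) <= TOLERANCE
                        for k in ("lemonade", "sugar", "price"))
            )
            if back_to_default or unchanged:
                exploring = False  # phase ends per rule (i) or (ii)
        flags.append(exploring)
        prev = p
    return flags
```

The longest exploration phase (one of the primary outcomes) is then simply the longest run of `True` values in the returned list.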

Secondary Outcomes

Secondary Outcomes (end points)
Attention (or Effort)
Secondary Outcomes (explanation)
To measure attention, we will construct a measure based on the effort sheet subjects fill out: we will compute the proportion of filled-out fields for each subject, for the strategy-choice columns as well as for the profit columns.
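As an illustration of this measure, assuming an effort sheet stored as columns of cell values (column names and example entries are hypothetical):

```python
# Sketch of the attention measure: the share of filled-out fields on a
# subject's effort sheet, computed per group of columns.

def completion_rate(sheet, columns):
    """Proportion of non-empty cells across the given columns of one sheet."""
    cells = [v for c in columns for v in sheet[c]]
    filled = sum(1 for v in cells if v not in (None, ""))
    return filled / len(cells)

# Hypothetical effort sheet for one subject (4 reporting occasions).
sheet = {
    "strategy": ["move to stadium", "", "raise price", ""],
    "profit":   [32.0, 28.5, None, 30.1],
}
strategy_attention = completion_rate(sheet, ["strategy"])  # 0.5
profit_attention = completion_rate(sheet, ["profit"])      # 0.75
```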

Experimental Design

Experimental Design
Our experimental task is adapted from Ederer & Manso (2013). Subjects solve a task in which
they are facing a trade-off between exploration and exploitation: Participants manage a virtual
lemonade stand. Over 20 experimental periods, participants decide on multiple parameters
such as the recipe of the lemonade (sugar and lemonade content as well as color), the location
of the lemonade stand and the price of a cup of lemonade. The possible combinations of these choice variables amount to 23,522,994. Participants are compensated according to the realized profits. Thus, participants' aim is to maximize the profit of the lemonade stand, and with it, their own earnings. Participants do not know the profits associated with
each of the available choices. Participants receive a default strategy, i.e. the choices and the
associated profit of a fictitious previous manager. The default strategy is not the
most profitable strategy.
After each period, participants learn the profit for the implemented choices. They also receive brief customer feedback: the computer randomly selects one of the three continuous choice variables (price, lemon or sugar content) and provides binary feedback on it. Consequently, the feedback is informative only for the location in which the subject chose to sell in the current period.
The task is characterized by an exploration-exploitation trade-off with two main behaviors:
either fine-tuning the default strategy and yielding a profit similar to the previous manager
(exploitation), or experimenting with new strategies and taking the associated risk of failure
but also the chance of success (exploration). Parameters are designed in such a way that exploration increases the chances of identifying the strategy that leads to the global maximum, while exploitation rather leads to local maxima. The parameters used to calculate the profits of the lemonade stand are adapted one-to-one from Ederer & Manso (2013).
The payoff is determined by a standard pay-for-performance incentive scheme: participants
are paid 50% of the profits they make during all the 20 periods.
Then, we implement the treatments as described in the intervention section.
After the lemonade stand task, we elicit the following individual characteristics through a survey: demographics, risk preferences (Falk et al. 2018), and the Big Five (Lang et al.).
Experimental Design Details
Randomization Method
Random cubicle/computer assignment in the laboratory through a random draw.
Randomization Unit
Was the treatment clustered?

Experiment Characteristics

Sample size: planned number of clusters
No clusters, all sessions conducted in the laboratory (Strasbourg).
Sample size: planned number of observations
We will follow a sequential analyses approach as proposed by Lakens (2014). For further details and the full specification of the sequential analyses procedure, please refer to the pre-analysis plan. Expected sample size: between 90 and 180 subjects.
Sample size (or number of clusters) by treatment arms
We will follow a sequential analyses approach as proposed by Lakens (2014).
For further details and full specification of the sequential analyses procedure, please refer to the pre-analysis plan.
Expected sample size for each treatment arm: between 30 and 60 subjects, with a maximum of 100 subjects per treatment arm.
Minimum detectable effect size for main outcomes (accounting for sample design and clustering)
All details of the sequential analyses procedure are in the pre-analysis plan. In summary, the procedure is as follows. Based on an expected effect size of Cohen's d = 0.50, a power analysis indicated that for a two-sided test with an alpha of .05, a desired statistical power of .8, and two looks using a linear spending function, a total of 180 participants are needed (60 per group). If the expected difference is significant at the first interim analysis (after 90 participants, i.e., time = .50, with an alpha boundary of .025), data collection will be terminated. Data collection will also be terminated when the observed effect size is smaller than the smallest effect size of interest, set at d = 0.3875 based on the researchers' willingness to collect at most 300 participants for this study and the fact that, with one interim analysis, 300 participants provide .8 power to detect an effect of d = 0.3875. If the interim analysis reveals an effect size larger than 0.5 but p > .025, data collection will continue until 60 participants per group have been collected. If the effect size lies between the smallest effect size of interest (d = 0.3875) and the expected effect size (d = 0.5), the planned sample size will be increased based on a conditional power analysis to achieve a power of .9 (up to a maximum of 100 participants per group, or 300 participants in total). The second analysis is performed at an alpha boundary of .0358.
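The interim decision rule can be sketched as follows. The thresholds are taken from the registration text; the function itself is only an illustration, not the authors' analysis code.

```python
# Decision rule at the first look (90 participants) of the sequential design.
# Thresholds are those stated in the registration.

ALPHA_LOOK1 = 0.025   # alpha boundary at the interim analysis (time = .50)
ALPHA_LOOK2 = 0.0358  # alpha boundary at the second (final) analysis
SESOI = 0.3875        # smallest effect size of interest (Cohen's d)
EXPECTED_D = 0.50     # expected effect size from the a priori power analysis

def interim_decision(observed_d, p_value):
    """Return the action taken after the interim analysis."""
    if p_value <= ALPHA_LOOK1:
        return "stop: effect significant"
    if abs(observed_d) < SESOI:
        return "stop: effect below smallest effect size of interest"
    if abs(observed_d) >= EXPECTED_D:
        # d > 0.5 but p > .025: collect up to the planned 60 per group.
        return "continue to 60 per group"
    # SESOI <= |d| < expected d: raise n via conditional power, capped at 100/group.
    return "continue: increase sample via conditional power (max 100 per group)"
```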

Institutional Review Boards (IRBs)

IRB Name
IRB of the Department of Psychology, University of Fribourg
IRB Approval Date
IRB Approval Number
Analysis Plan

Analysis Plan Documents


MD5: 6245b51fe0fb66bd1638c0be3e875799

SHA1: 05054ad9397eea5e141a0516033f2e4b3e840866

Uploaded At: January 19, 2020


Post Trial Information

Study Withdrawal

There is information in this trial unavailable to the public.


Is the intervention completed?
Intervention Completion Date
January 24, 2020, 12:00 +00:00
Data Collection Complete
Data Collection Completion Date
January 24, 2020, 12:00 +00:00
Final Sample Size: Number of Clusters (Unit of Randomization)
See published article
Was attrition correlated with treatment status?
Final Sample Size: Total Number of Observations
See published article
Final Sample Size (or Number of Clusters) by Treatment Arms
See published article
Data Publication

Data Publication

Is public data available?
Public Data URL

Program Files

Program Files
Program Files URL
Reports, Papers & Other Materials

Relevant Paper(s)

We experimentally investigate an intervention that ought to motivate innovative behavior by changing risk perceptions. Participants run a virtual lemonade stand and face a trade-off between exploiting a known strategy and exploring untested approaches. Innovation through testing new approaches comes along with a risk of failure because participants are compensated based on the profits generated by their virtual business. We test whether we can draw attention away from this risk by implementing a salience mechanism, which ought to focus participants on the input rather than the outcome of the innovative process. However, we find that this intervention is not effective in motivating innovative behavior—rather, it jeopardizes innovation. We discuss potential behavioral channels and encourage further research of risk salience as a tool to foster innovation. Our pre-registered study highlights the importance of evaluating interventions before implementation, as even carefully designed interventions may turn out to be ineffective or even backfire.
Matthewes E, Nassar A, Zihlmann C (2022) Fostering innovation: Experimental evidence on the effectiveness of behavioral interventions. PLoS ONE 17(10): e0276463. doi:10.1371/journal.pone.0276463

Reports & Other Materials