
Who cares who intervenes how?
Last registered on July 23, 2019

Pre-Trial

Trial Information
General Information
Title
Who cares who intervenes how?
RCT ID
AEARCTR-0001661
Initial registration date
November 28, 2016
Last updated
July 23, 2019 6:11 AM EDT
Location(s)
Region
Primary Investigator
Affiliation
University of Hamburg; International Max-Planck Research School on Earth System Modeling
Other Primary Investigator(s)
PI Affiliation
University of Hamburg
Additional Trial Information
Status
Completed
Start date
2016-11-30
End date
2017-02-28
Secondary IDs
Abstract
We conduct an online experiment with a sample representative of the German internet-using population. We endow participants with Credits they may choose to contribute to a large public good or use for private consumption. Contributions to the large public good are real carbon emission reductions. We conduct a questionnaire prior to and after the experiment.
External Link(s)
Registration Citation
Citation
Bruns, Hendrik and Grischa Perino. 2019. "Who cares who intervenes how?." AEA RCT Registry. July 23. https://doi.org/10.1257/rct.1661-4.0.
Former Citation
Bruns, Hendrik and Grischa Perino. 2019. "Who cares who intervenes how?." AEA RCT Registry. July 23. https://www.socialscienceregistry.org/trials/1661/history/50506.
Sponsors & Partners

There are documents in this trial unavailable to the public.
Experimental Details
Interventions
Intervention(s)
Intervention Start Date
2016-12-14
Intervention End Date
2017-01-17
Primary Outcomes
Primary Outcomes (end points)
Main outcomes of interest are:
(a) Participants' contribution amounts to the public good, measured in credits (continuous variable).
(b) Participants' answers to four questions from a questionnaire measuring the perceived threat to freedom due to the respective intervention (ordinal scales).
(c) Participants' answers to four questions from a questionnaire measuring the perceived anger due to the respective intervention (ordinal variables).
(d) Participants' answers to 11 questions from a questionnaire measuring state reactance prior to treatment allocation (ordinal scales).
Primary Outcomes (explanation)
From the amount contributed to the public good (a), we construct several additional outcome variables: (1) a binary variable that takes the value 1 if the participant contributed a positive amount, and zero otherwise; (2) a numeric variable measuring the distance between the contribution amount and the default value; (3) a binary variable that takes the value 1 if the participant contributed an amount equal to the pre-defined value, and zero otherwise.
In the case of a two-round design (see Experimental design (hidden) for more information), we also construct (a) as a change-score from the (un-treated) baseline contribution to the treated contribution.
From the answers given to the questionnaires measuring perceived threat to freedom (b), anger (c), and state reactance (d), we construct, for each, an unweighted factor-based score by summing the values of the Likert items for each observation.
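The outcome construction described above can be sketched as follows. This is an illustrative sketch, not the authors' code; the default value, the function names, and the example item values are hypothetical.

```python
DEFAULT = 20  # hypothetical pre-defined default contribution, in Credits

def derived_outcomes(contribution, default=DEFAULT):
    """Derive the three secondary variables from a contribution amount."""
    return {
        "any_contribution": int(contribution > 0),           # (1) binary
        "distance_to_default": abs(contribution - default),  # (2) numeric
        "at_default": int(contribution == default),          # (3) binary
    }

def factor_based_score(likert_items):
    """Unweighted factor-based score: sum of the Likert item values."""
    return sum(likert_items)

print(derived_outcomes(20))
print(factor_based_score([3, 4, 2, 5]))  # e.g. four threat-to-freedom items
```

For the two-round design, the change-score version of (a) would simply be the treated contribution minus the baseline contribution.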
Secondary Outcomes
Secondary Outcomes (end points)
Secondary Outcomes (explanation)
Experimental Design
Experimental Design
Each participant will receive an endowment and will then be able to divide this amount between private consumption and a large public good. More specifically, participants decide how much (if anything) to pay in order to buy carbon licenses from the European Union Emissions Trading Scheme (EU ETS), thereby contributing to climate protection. Payments will be donated to the NGO TheCompensators*, which buys carbon licenses off the EU ETS and retires them.
Experimental Design Details
Before the actual experiment, participants will answer a questionnaire gathering information on trait reactance, environmental preferences, and political orientation (Stage 1). There will be no obvious link between the questionnaire and the decision experiment, and the two stages are separated by a non-trivial amount of time to minimize the probability that participants presume a connection.

The actual experiment consists of ten experimental groups, including a control group (Stage 2). Treatments consist of different ways to affect contributions, introduced by differently characterized sources. The experiment has a fractional factorial 3x4 design with two factors: source type (ST), with three levels (no source (NoS), knowledgeable source (KNO), political source (POL)), and intervention type (IT), with four levels (no intervention (NoI), recommendation (REC), default (DEF), restriction (RES)). Two combinations are excluded from the design: no intervention combined with the knowledgeable source, and no intervention combined with the political source. This leaves ten combinations, i.e. experimental groups. The combination of no source and no intervention serves as the control group.

Among other things, we aim to test whether participants respond differently to treatments given their respective baseline contributions. Since these would normally be unobservable counterfactuals, we elicit baseline contributions before allocating subjects to treatments. Because eliciting baseline contributions might, in theory, compromise treatment effects, we test this prior to the actual experiment (Stage 2, Block 1): we will conduct two treatments, each in two different versions, before the others. This tests whether letting participants decide about their contribution twice (first uninfluenced, as a baseline, then in their respective treatment group) changes second-round contributions significantly compared to a single (treated) decision. In the two-round design, participants are informed that only one of their decisions will be randomly picked to be realized. We will do this both for the control group and for the combination of default and knowledgeable source (DEF-KNO). If contributions in the two-round versions differ significantly from their one-round counterparts, we will conduct all remaining treatments with only one round; otherwise, we will conduct all other treatments with two rounds.
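The fractional factorial structure described above can be made concrete by enumerating the design cells: the full 3 x 4 crossing of source type and intervention type, minus the two excluded cells.

```python
from itertools import product

# Factor levels as named in the design description.
ST = ["NoS", "KNO", "POL"]          # source type
IT = ["NoI", "REC", "DEF", "RES"]   # intervention type

# The two excluded cells: an intervention source paired with no intervention.
excluded = {("KNO", "NoI"), ("POL", "NoI")}

groups = [(st, it) for st, it in product(ST, IT) if (st, it) not in excluded]
print(len(groups))               # 10 experimental groups
print(("NoS", "NoI") in groups)  # True: the control group is one of them
```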
Randomization Method
First, the panel provider selects participants using an internal rotation system that ensures the same subjects are not invited to every survey. Selection is representative of the population.
After collecting observations in Stage 1, we shuffle all observations to ensure that subjects who were quickest to submit their survey do not have a higher chance of being allocated to the treatments of Stage 2 Block 1 rather than to the treatments of Stage 2 Block 2.
Second, we randomly allocate subjects that participated in the first stage to one of four treatments with permuted block randomization (using the R package randomizeR).
Third, we randomly allocate subjects that participated in the first stage (and did not participate in the first block of the second stage) to one of eight treatments with permuted block randomization (using the R package randomizeR).
If the sample size is not divisible by the number of groups (4 and 8 for Blocks 1 and 2 of Stage 2, respectively), the subjects in excess of the largest divisible number are allocated to treatments by complete randomization (using the R package randomizeR).
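The allocation scheme above can be sketched as follows. The registration uses the R package randomizeR; this Python version only illustrates the idea of permuted block randomization with a complete-randomization remainder, and all names are hypothetical.

```python
import random

def permuted_block_randomization(subject_ids, groups, seed=None):
    """Assign each subject to a group. Full blocks of len(groups) subjects
    each receive one random permutation of the groups, so group sizes stay
    balanced; any remainder is assigned by complete randomization."""
    rng = random.Random(seed)
    k = len(groups)
    n_full = len(subject_ids) // k * k  # subjects covered by full blocks
    assignment = {}
    for start in range(0, n_full, k):
        block = list(groups)
        rng.shuffle(block)  # one permutation of all groups per block
        for sid, group in zip(subject_ids[start:start + k], block):
            assignment[sid] = group
    for sid in subject_ids[n_full:]:  # remainder: complete randomization
        assignment[sid] = rng.choice(groups)
    return assignment

# 34 subjects into 4 groups: 8 full blocks cover 32, 2 are fully randomized.
allocation = permuted_block_randomization(list(range(34)),
                                          ["A", "B", "C", "D"], seed=1)
```

With 34 subjects and 4 groups, each group gets exactly 8 subjects from the blocks, and the 2 leftover subjects can land anywhere, so group sizes end up between 8 and 10.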
Randomization Unit
Randomization is at the individual level.
Was the treatment clustered?
No
Experiment Characteristics
Sample size: planned number of clusters
No clustering
Sample size: planned number of observations
We target 960 observations. Assuming an attrition rate of 30% between the questionnaire and treatment allocation, approximately 1,370 participants will be invited to the questionnaire.
Sample size (or number of clusters) by treatment arms
In the first stage, i.e. before treatments are randomly allocated to participants, 1,370 observations are targeted.
In the second stage, i.e. after randomized allocation to treatment groups has taken place, 960 observations are targeted. Because 160 observations will be used to decide between the one-round and two-round design, 800 observations will be used for the final analysis, amounting to 80 observations per experimental group.
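The arithmetic behind these planned sample sizes is straightforward and can be checked directly:

```python
import math

target = 960                # observations wanted after attrition
attrition = 0.30            # assumed drop-out between the two stages
invited = math.ceil(target / (1 - attrition))
print(invited)              # 1372, i.e. roughly the 1,370 invitations stated

pilot = 160                 # used to decide between one- and two-round design
final = target - pilot
print(final)                # 800 observations for the final analysis
print(final // 10)          # 80 per experimental group (10 groups)
```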
Minimum detectable effect size for main outcomes (accounting for sample design and clustering)
The main outcome of interest is the interaction between the treatment factors. We calculate the minimum detectable effect size (MDES) assuming a significance level of 0.05, a power of 0.8, and n = 720, using a two-way analysis of variance (ANOVA) that excludes the control group, i.e. a 3 x 3 design with 80 observations per group. We do this for the two main effects as well as for the interaction effect. The resulting effect size is the square root of the ratio of the variance explained by the respective factor to the error variance. Assuming an error variance of 1, the MDES is 0.12 for the main treatment effects and 0.13 for the interaction effect. Further increases in the number of observations per group do not substantially decrease the MDES; the minimum is slightly below 0.1 for more than 110 observations per group. Note that our analysis will primarily rely on regression models that better fit the distribution of the outcome variable of interest, which is unlikely to be normally distributed, e.g. a (random-effects) Tobit model to account for the left-censoring of contributions; this affects statistical power.

For pairwise comparisons with t-tests (or non-parametric Mann-Whitney U tests), assuming an error probability of 0.05, a power of 0.8, n = 80 per group, and an arbitrarily assumed mean of 20 Credits in the control group, we vary the common standard deviation between 10 and 20. The resulting minimum detectable differences correspond to treatment-group contributions between 24.46 and 28.91 Credits, i.e. differences of 4.46 to 8.91 Credits relative to the control, increasing linearly with the pooled standard deviation. Notably, detecting effects of treatments that we hypothesize to decrease contributions relative to another group may be problematic when contributions in the respective control group are low, because contributions are censored at zero.
For lower treatment contributions to be detectable with standard deviations of up to 20, contributions in the control group must not be lower than 9 Credits; even then, average treated contributions would need to be zero for the difference to be detectable with this design. Note that if the experiment is conducted as a within-subject design, power increases, ceteris paribus.
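The pairwise minimum detectable difference quoted above can be roughly reproduced with a normal approximation to the two-sample t-test; the registry's figures (4.46 to 8.91 Credits) come from exact t quantiles, so this stdlib-only sketch runs slightly lower.

```python
from statistics import NormalDist

alpha, power, n = 0.05, 0.8, 80  # per-group sample size from the design
z = NormalDist().inv_cdf

# MDD = (z_{1-alpha/2} + z_{power}) * sd * sqrt(2/n), normal approximation.
multiplier = (z(1 - alpha / 2) + z(power)) * (2 / n) ** 0.5

for sd in (10, 20):
    print(round(multiplier * sd, 2))  # prints 4.43, then 8.86
```

As stated in the text, the detectable difference grows linearly in the assumed common standard deviation.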
Supporting Documents and Materials

There are documents in this trial unavailable to the public.
IRB
INSTITUTIONAL REVIEW BOARDS (IRBs)
IRB Name
IRB Approval Date
IRB Approval Number
Post-Trial
Post Trial Information
Study Withdrawal
Intervention
Is the intervention completed?
Yes
Intervention Completion Date
January 20, 2017, 12:00 AM +00:00
Is data collection complete?
Yes
Data Collection Completion Date
January 19, 2017, 12:00 AM +00:00
Final Sample Size: Number of Clusters (Unit of Randomization)
806 individuals
Was attrition correlated with treatment status?
No
Final Sample Size: Total Number of Observations
806 individuals
Final Sample Size (or Number of Clusters) by Treatment Arms
75 individuals control, 90 individuals Rec-Nos, 83 individuals Def-Nos, 86 individuals Res-Nos, 77 individuals Rec-Exp, 73 individuals Def-Exp, 79 individuals Res-Exp, 83 individuals Rec-Pol, 81 individuals Def-Pol, 79 individuals Res-Pol
Data Publication
Data Publication
Is public data available?
No
Program Files
Program Files
Reports and Papers
Preliminary Reports
Relevant Papers