Choice Architecture and Transparency
Last registered on October 21, 2020


Trial Information
General Information
Title
Choice Architecture and Transparency
RCT ID
AEARCTR-0006308
Initial registration date
August 19, 2020
Last updated
October 21, 2020 5:20 AM EDT
Location(s)

This section is unavailable to the public.
Primary Investigator
Affiliation
European University Institute
Other Primary Investigator(s)
Additional Trial Information
Status
In development
Start date
2020-08-23
End date
2020-10-31
Secondary IDs
Abstract
Choice Architects design choice environments. In doing so, they may use nudges: changes to the environment that predictably affect behaviour while keeping payoffs constant and without restricting the options available. This project investigates how transparency, that is, awareness of the Choice Architect's use of nudges, changes both how nudges influence behaviour and how Choice Architects use them. The model distinguishes between System 1 nudges, which rely on mental shortcuts (for example, default options), and System 2 nudges, which engage reflective thinking (for example, by providing a new way to think about the choice). The model predicts that (a) System 1 nudges are less effective under transparency while System 2 nudges remain equally effective, and (b) transparency reduces the use of System 1 nudges while increasing the use of System 2 nudges.
External Link(s)
Registration Citation
Citation
Kujansuu, Essi. 2020. "Choice Architecture and Transparency." AEA RCT Registry. October 21. https://doi.org/10.1257/rct.6308-1.3.
Experimental Details
Interventions
Intervention(s)
The main intervention is an announcement that makes it salient to all participants that nudges are potentially used in the formulation of the next question. This intervention is common to Choice Architects and Decision Makers.

We are also interested in the effects of the nudges themselves. This intervention affects only Decision Makers; each nudge is thus also an intervention. We have three versions of the question. First, a simple question design that deliberately lists the low target first. Second, a System 1 nudge that preselects the high target both in the wording and in the software. Third, a System 2 nudge that encourages participants to determine which choice is better in terms of risks and benefits before answering the question itself.

Finally, to study transition paths, we give all Choice Architects in the control condition a delayed transparency treatment: we ask them again to choose a question formulation, but for another Decision Maker, who this time reads the transparency announcement before seeing the question.
Intervention Start Date
2020-08-23
Intervention End Date
2020-10-31
Primary Outcomes
Primary Outcomes (end points)
For the Choice Architects, the Primary Outcome variable is "architecture".
For the Decision Makers, the Primary Outcome is called "choice".
Primary Outcomes (explanation)
Primary Outcome "architecture" is the Choice Architect's choice of the nudge: simple, system 1 nudge or system 2 nudge.
Decision Makers' Primary Outcome "choice" records whether they choose a high or a low performance target.
Secondary Outcomes
Secondary Outcomes (end points)
For Choice Architects, we also collect s0_effectiveness, s1_effectiveness, s2_effectiveness, s0_morality, s1_morality, s2_morality.
For Decision Makers, we also collect time spent on the choice page, tasks_correct, tasks_attempted, belief_B, liking_B, satisfaction, affected
Secondary Outcomes (explanation)
Choice Architects' effectiveness variables measure their beliefs about how many people will choose the high target under each nudge. Morality variables measure how manipulative the Choice Architects consider the nudges to be. For Decision Makers, we collect how long each participant spends on the choice page, how many tasks they complete correctly in the effort task that the target applies to, and how many they attempt. In the post-experiment survey we also collect which target they believed helped the Choice Architect more, their attitude towards the Choice Architect, how satisfied they were with the choice of target, and whether they felt affected by the question formulation.

Experimental Design
Experimental Design
The experiment is a simple Choice Architecture Game with nudges.

A second experiment tests the Decision Maker hypotheses on women only, and tests the Choice Architect decisions on a more experienced group, namely PhD students and PhD holders in the Social Sciences.
Experimental Design Details
Not available
Randomization Method
Randomization is done by computer.
Randomization Unit
Individual
Was the treatment clustered?
No
Experiment Characteristics
Sample size: planned number of clusters
Only 1 cluster
Sample size: planned number of observations
190 Choice Architects; 420 Decision Makers. Second experiment: 90 Choice Architects; 150-180 Decision Makers.
Sample size (or number of clusters) by treatment arms
Aimed sample size by treatment cell:
95 Choice Architects per cell (2 cells); we have full control over this, so it is easily achieved.
70 Decision Makers on average per cell (6 cells); full balance is neither achievable nor desirable, so the target is at least 40 participants in each of the 6 cells.

Second Experiment:
90 Choice Architects, 45 in each of the two cells
150-180 Decision Makers, such that we get at least 30 in each of the 5 interesting treatment cells (the simple-transparency cell is the uninteresting one of the 6). Full balance is unlikely, hence there is some uncertainty about how many Decision Makers will be required.
Minimum detectable effect size for main outcomes (accounting for sample design and clustering)
The minimum detectable effect size for the Choice Architect outcomes is 20 percentage points. With a two-sided test, alpha = 0.05, beta = 0.2, and a baseline take-up rate of 15%, we need 73 observations per treatment cell, which is entirely achievable. The minimum detectable effect size for the Decision Maker outcomes is also 20 percentage points; starting from a rate of 25% with a one-sided test, alpha = 0.05, beta = 0.2, we need 70 observations per treatment cell.

Second experiment: the minimum detectable effect size for the Choice Architect outcomes is 23 percentage points, starting from 15% with a one-sided test, alpha = 0.05, beta = 0.2, requiring 45 observations per cell. The minimum detectable effect size for the Decision Makers is 27 percentage points; starting from a rate of 10% with a one-sided test, alpha = 0.05, beta = 0.2, we need 30 observations per cell.
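The per-cell sample sizes above are consistent with the standard two-proportion z-test power formula. A minimal sketch reproducing them (the function name `n_per_cell` is ours, not part of the registration; the registry does not state which formula or software was used):

```python
from math import ceil, sqrt
from statistics import NormalDist


def n_per_cell(p1, p2, alpha=0.05, power=0.8, two_sided=True):
    """Observations per cell to detect a shift from rate p1 to p2
    with a z-test of two independent proportions."""
    z = NormalDist()
    z_a = z.inv_cdf(1 - alpha / 2) if two_sided else z.inv_cdf(1 - alpha)
    z_b = z.inv_cdf(power)  # beta = 1 - power
    p_bar = (p1 + p2) / 2   # pooled proportion under the null
    num = (z_a * sqrt(2 * p_bar * (1 - p_bar))
           + z_b * sqrt(p1 * (1 - p1) + p2 * (1 - p2))) ** 2
    return ceil(num / (p1 - p2) ** 2)


print(n_per_cell(0.15, 0.35))                   # 73: CA, 20 pp, two-sided
print(n_per_cell(0.25, 0.45, two_sided=False))  # 70: DM, 20 pp, one-sided
print(n_per_cell(0.15, 0.38, two_sided=False))  # 45: CA second exp., 23 pp
print(n_per_cell(0.10, 0.37, two_sided=False))  # 30: DM second exp., 27 pp
```

All four registered per-cell targets follow from the stated baselines, effect sizes, alpha = 0.05, and power = 0.8.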
Supporting Documents and Materials

There are documents in this trial unavailable to the public.
IRB
INSTITUTIONAL REVIEW BOARDS (IRBs)
IRB Name
Ethics Committee of the European University Institute
IRB Approval Date
2020-07-20
IRB Approval Number
N/A