Nudge and boosts to help individuals make better choices

Last registered on June 15, 2023

Pre-Trial

Trial Information

General Information

Title
Nudge and boosts to help individuals make better choices
RCT ID
AEARCTR-0011535
Initial registration date
June 07, 2023

First published
June 15, 2023, 4:05 PM EDT


Locations

Region

Primary Investigator

Affiliation
UGA, INRAE

Other Primary Investigator(s)

PI Affiliation
UMR Amure, UMR GAEL
PI Affiliation
UGA, INRAE

Additional Trial Information

Status
In development
Start date
2023-06-09
End date
2023-12-31
Secondary IDs
Prior work
This trial does not extend or rely on any prior RCTs.
Abstract
Although economic agents are assumed to behave rationally, one classic criticism is that they generally lack the computational skills to achieve the optimal outcome. We propose an experiment in which subjects are endowed with a budget and must decide the amounts of two goods they would like to buy, taking into account the goods’ prices and the available budget (prices and the budget change from one round to another). In that setting, we also propose tools (one nudge and two boosts) to help subjects come closer to the optimal outcome. While the effects of nudges are now well recognized in the economic literature, one criticism is that they generally rely on the exploitation of agents’ cognitive biases, leading to manipulation. As a solution, boosts have been proposed as a ‘remedy’, in the sense that they rely on the learning of a new competence. However, few studies have investigated the potential of boosts as a substitute for nudges. The experiment includes a control group and three treatment groups: one nudge (a default option) and two different boosts, one based on the provision of computational tools and the other based on providing insight into the problem (see details later in this trial information). We first aim to assess the extent to which the nudge and the boosts are effective in improving subjects’ decisions. Second, we compare the durability of the effects of these instruments by observing subjects’ decisions once the tools are no longer implemented.

Our main expectations are: i) subjects’ payoffs increase when they are treated with the nudge or the boosts; and ii) if there are durable effects, those of the boosts last longer than those of the nudge, since boosts rely on the learning of a competence.
External Link(s)

Registration Citation

Citation
Muller, Laurent, Marie-Estelle Binet, and Benjamin Ouvrard. 2023. "Nudge and boosts to help individuals make better choices." AEA RCT Registry. June 15. https://doi.org/10.1257/rct.11535-1.0
Experimental Details

Interventions

Intervention(s)
We consider different versions of our consumer choice game, which differ in whether an instrument (a nudge or a boost) is implemented. Each version of the game corresponds to a treatment (i.e., an intervention).
Intervention Start Date
2023-06-09
Intervention End Date
2023-12-31

Primary Outcomes

Primary Outcomes (end points)
We will mainly analyze:
- subjects’ payoffs (given by the quantities of goods chosen, taking into account their prices and the available budget);
- the distance between the chosen quantities and the optimal values;
- the time taken to make the decision; and
- subjects’ degree of confidence (i.e., the extent to which they believe the chosen quantities are those that allow them to achieve the highest possible payoff).
Primary Outcomes (explanation)
At each repetition of the game, subjects decide the quantities of goods A and B (see below) they would like to buy, taking into account their prices and the available budget. Their choice of quantities directly determines their payoff (first outcome variable). We are able to measure the distance from the optimal values (second outcome variable). In our setting, we record the time subjects take to make their decisions (third outcome variable). Finally, once they have validated their choice, we ask subjects their level of confidence that their choice allows them to reach the highest possible payoff (fourth outcome variable).

Secondary Outcomes

Secondary Outcomes (end points)
To better interpret our results, after the main game of our experiment, we:
1) Implement the Cognitive Reflection Test (CRT) with revised questions:
a) A golden bat and a golden ball cost 5000€ in total. The golden bat costs 4000€ more than the golden ball. How much does the golden ball cost?
b) If it takes 10 machines 10 minutes to make 10 items, how long would it take 1000 machines to make 1000 items?
c) In a lake, there is a patch of water lilies. Every day, the patch doubles in size. If it takes 40 days for the patch to cover the entire lake, how many days would it take for the patch to cover a quarter of the lake?

2) A memory test consisting of memorizing 16 words in 1 minute. Then, from a list of 48 words, subjects have to select the 16 words they remember. Before selecting the words they remember, subjects perform the third task (see next point).

3) A computation test: subjects have to give the results of 20 calculations within one minute.
Secondary Outcomes (explanation)
Overall, these additional measures allow us to capture some potential heterogeneity among subjects regarding the effectiveness of our tools (nudge and boosts).
In particular, regarding the boosts, one may expect that subjects with a higher CRT score, or a high score on the memory or computation test, are more likely to perform better while the boost is implemented (as well as after we remove it).

Experimental Design

Experimental Design
1) General setting:

Subjects play a consumer choice game in which we explain to them that they have to choose the quantities of two (fictitious) goods, good A and good B, they would like to buy, taking into account:
- the available budget;
- the prices (price A and price B) of each good.
The quantities they choose earn them a payoff (expressed in Experimental Currency Units, ECU). The payoff function is based on a classic Cobb-Douglas function. Therefore, subjects’ main objective is to solve an optimization problem at each round of the game.
In our game, subjects have neither paper and pencil nor a calculator to make their decision (mobile phones will be switched off).
Subjects know that they cannot save money from their budget for other rounds. Moreover, if they spend more than the available budget, then subjects receive 0 ECU.
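To illustrate the optimization problem subjects face, the sketch below computes the closed-form optimum of a Cobb-Douglas payoff under a budget constraint. The function names and the exponent `alpha` are our illustrative assumptions; the registration does not disclose the exact payoff parameters used in the game.

```python
def optimal_quantities(budget, price_a, price_b, alpha=0.5):
    """Optimal bundle for the Cobb-Douglas payoff U = qA**alpha * qB**(1 - alpha).

    With this payoff, a fixed share of the budget (alpha) is optimally
    spent on good A and the remainder on good B, regardless of prices.
    """
    qa = alpha * budget / price_a
    qb = (1 - alpha) * budget / price_b
    return qa, qb

def payoff(qa, qb, price_a, price_b, budget, alpha=0.5):
    """Cobb-Douglas payoff in ECU; 0 if the chosen bundle overspends the budget."""
    if qa * price_a + qb * price_b > budget:
        return 0.0
    return qa ** alpha * qb ** (1 - alpha)

# Example round: budget of 100 ECU, prices 2 and 5.
qa, qb = optimal_quantities(100, 2, 5)  # → (25.0, 10.0)
```

The budget-share property is what makes the problem tractable for a boost: once subjects grasp it, the optimum reduces to two divisions per round.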

Subjects have to make their decision within a limited amount of time, which will be determined through a pilot study with around 25 subjects. The pilot study will use the same design, with two minutes in each round to make a decision. To encourage participants to provide both quick, intuitive answers and more thoughtful ones, a round and a time within the two minutes will be drawn at the end of the experiment. This round and time will determine the participant's answer and the associated gain. It is therefore in participants' interest to provide an answer as quickly as possible (as they receive no gain if they do not provide an answer before the randomly drawn time) and to keep thinking so as to eventually come up with a better answer before the two minutes are up.

The game is repeated, each round corresponding to different values of prices and budget.

Subjects select the quantities of good A and good B using cursors on sliders. To avoid any influence of the initial position of the cursors, subjects first have to click on the screen to make the cursors appear. They can then use them to select the quantities they want.

We consider three different phases of ten rounds:
- a first phase of ten rounds without any instrument being implemented;
- a second phase of ten rounds with an instrument (nudge or boost) or no instrument (control);
- a third phase of ten rounds without any instrument being implemented.

Before the start of the game, we will ask questions to measure subjects’ understanding of the game.

Finally, five rounds of the first phase are repeated in the second phase and then in the third one. This is another way to assess whether or not subjects learn over time.

2) Treatments:

We consider four different groups:
1) Control group: no instrument is implemented during the second phase of ten rounds.
2) Nudge treatment: a default option is implemented during the second phase of ten rounds. The optimal quantities of goods A and B are pre-selected for subjects (but they are not aware that the pre-selection corresponds to the optimal choice).
3) Boost treatment 1: at the beginning of the second phase (before subjects play), subjects are shown a video teaching them the intuition for finding the optimal quantities of goods A and B. Before the first round of the second phase, they can watch the video as many times as they want. However, once they indicate that they have understood, they can no longer watch it.
4) Boost treatment 2: during the second phase (at each round), subjects are shown the formula to compute the optimal quantities of goods A and B. This formula is no longer shown in the third phase.
Experimental Design Details
Randomization Method
We use the experimental platform to perform randomization. More precisely, participants register for a time slot for which they are available. We then randomize treatments at the session level. Participants can participate only once in our experiment.
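Session-level randomization of the four arms can be sketched as follows. This is a hypothetical illustration with made-up session labels; the registration does not describe the platform's actual assignment mechanism.

```python
import random

# Illustrative arm labels matching the four groups described above.
ARMS = ["control", "nudge_default", "boost_video", "boost_formula"]

def assign_sessions(session_ids, seed=42):
    """Randomly assign each experimental session to one treatment arm.

    The arm list is repeated to cover all sessions and then shuffled,
    so arms are balanced across sessions and every session (hence every
    subject in it) receives exactly one treatment.
    """
    rng = random.Random(seed)
    arms = (ARMS * (len(session_ids) // len(ARMS) + 1))[:len(session_ids)]
    rng.shuffle(arms)
    return dict(zip(session_ids, arms))

# Twelve sessions → three sessions per arm.
assignment = assign_sessions([f"S{i:02d}" for i in range(1, 13)])
```

Because treatment varies only between sessions, subjects within a session all see the same version of the game, which rules out within-session contamination.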
Randomization Unit
At the session level for the treatment.
Was the treatment clustered?
No

Experiment Characteristics

Sample size: planned number of clusters
We will recruit around 25 subjects for the pilot experiment to determine the time limit of the final experiment.
We will consider a minimum of 60 subjects (independent observations in our case) in each treatment, for a minimum total of 60 × 4 treatments = 240 subjects for the final experiment.
Sample size: planned number of observations
We will recruit a minimum total of 25 + 240 = 265 subjects.
Sample size (or number of clusters) by treatment arms
A minimum of 60 subjects per treatment.
Minimum detectable effect size for main outcomes (accounting for sample design and clustering)
IRB

Institutional Review Boards (IRBs)

IRB Name
IRB Approval Date
IRB Approval Number

Post-Trial

Post Trial Information

Study Withdrawal

Intervention

Is the intervention completed?
No
Data Collection Complete
Data Publication

Data Publication

Is public data available?
No

Program Files

Program Files
Reports, Papers & Other Materials

Relevant Paper(s)

Reports & Other Materials