
Narrow Bracketing in Effort Choices

Last registered on April 30, 2019

Pre-Trial

Trial Information

General Information

Title
Narrow Bracketing in Effort Choices
RCT ID
AEARCTR-0003412
Initial registration date
January 07, 2019


First published
January 25, 2019, 3:31 AM EST


Last updated
April 30, 2019, 3:29 PM EDT


Locations

Region

Primary Investigator

Affiliation
University of Bergamo

Other Primary Investigator(s)

PI Affiliation
Central European University

Additional Trial Information

Status
In development
Start date
2019-04-30
End date
2019-12-31
Secondary IDs
Abstract
Narrow bracketing has been established in choices over risky gambles, but not outside of them, even in natural situations such as the working environment. Many decisions people take, such as deciding whether to do an urgent but not particularly important task right now, have low immediate costs – checking emails – but may have large costs later on, such as requiring one to work late, when tired, to make up the lost time. While people may sometimes take such decisions in full awareness of these implications – either because it is the ‘right/rational’ decision, or because they are present-biased – it may also be that they do not think about these future implications at all. Narrow bracketing is a specific way of not thinking about these implications, and we test for it in a situation where preferences, properly thought through, cannot cause such mistakes, even when people are present-biased.
External Link(s)

Registration Citation

Citation
Fallucchi, Francesco and Marc Kaufmann. 2019. "Narrow Bracketing in Effort Choices." AEA RCT Registry. April 30. https://doi.org/10.1257/rct.3412-2.0
Former Citation
Fallucchi, Francesco and Marc Kaufmann. 2019. "Narrow Bracketing in Effort Choices." AEA RCT Registry. April 30. https://www.socialscienceregistry.org/trials/3412/history/45779
Experimental Details

Interventions

Intervention(s)
We test the concept of narrow bracketing in deterministic choices over work, which are relevant to the labor market.
Intervention Start Date
2019-04-30
Intervention End Date
2019-12-31

Primary Outcomes

Primary Outcomes (end points)
Elicitation of the willingness to accept a payment in order to complete a task across different treatments. Thus the question is whether the framing as doing extra work 'before' rather than 'after' - while holding the actual consequences constant - leads to a change in willingness to work, which it cannot under any broadly framed theory. We will compare this to choices where we enforce broad bracketing, by making the actual change salient.
Primary Outcomes (explanation)
We will ask subjects at what price they will be willing to complete a task. We will elicit their choices in two different ways. There are 5 treatments (WTW stands for 'Willingness to Work'):

- BEFORE ONLY: Subjects are asked for their WTW for additional tasks when there are no required tasks.
- NARROW UNSPECIFIED: Subjects are asked for their WTW for additional tasks when they know there are required tasks. They are not told whether these tasks are done before or after the main tasks.
- NARROW BEFORE: Subjects are asked for their WTW for additional tasks before doing some required tasks.
- NARROW AFTER: Subjects are asked for their WTW for additional tasks after doing some required tasks.
- BROAD: Subjects are asked for their WTW for additional tasks when it is made clear that they are in addition to the required tasks.

The primary outcomes are the willingness to work for the different treatment groups.
We have 3 main comparisons, plus 2 additional robustness checks.

Our main hypotheses are the following:
BEFORE ONLY ≤ NARROW BEFORE ≤ NARROW AFTER ≤ BROAD

The additional hypotheses are:
BEFORE ONLY ≤ NARROW UNSPECIFIED ≤ BROAD
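The ordered hypotheses above amount to one-sided comparisons of WTW across treatment groups. As an illustration only (the registration does not commit to a specific test), a one-sided permutation test of the difference in mean WTW could be sketched as follows; the function name and the data are hypothetical:

```python
import random

def perm_pvalue_one_sided(a, b, reps=10000, seed=0):
    """One-sided permutation p-value for the hypothesis mean(b) > mean(a),
    e.g. a = WTW in NARROW BEFORE, b = WTW in NARROW AFTER."""
    rng = random.Random(seed)
    observed = sum(b) / len(b) - sum(a) / len(a)
    pooled = list(a) + list(b)
    exceed = 0
    for _ in range(reps):
        rng.shuffle(pooled)
        diff = sum(pooled[len(a):]) / len(b) - sum(pooled[:len(a)]) / len(a)
        if diff >= observed:
            exceed += 1
    return (exceed + 1) / (reps + 1)  # add-one smoothing

# Hypothetical WTW data (euros) for two treatment arms:
narrow_before = [3.0, 3.5, 2.5, 4.0, 3.0, 2.0, 3.5, 2.5]
narrow_after = [4.5, 5.0, 4.0, 5.5, 4.5, 3.5, 5.0, 4.0]
print(perm_pvalue_one_sided(narrow_before, narrow_after))
```

A permutation test makes no distributional assumption on WTW, which is convenient given that elicited reservation payments are often skewed or censored at the top of the price list.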

Secondary Outcomes

Secondary Outcomes (end points)
We want to measure whether there is a correlation between subjects' level of narrow bracketing in deterministic work choices and narrow bracketing in risky choices, and whether there is more narrow bracketing when the metric for the extra work differs from the metric for the main work (that is, when it is expressed as a piece rate, $0.40 per task, rather than $4 for doing 10 tasks) than when the metric is the same. A further analysis will be done on an extra within-subjects treatment, where we use questions from two treatments. We will test whether people make the same mistake when they see both choices, controlling for an order effect.
Secondary Outcomes (explanation)
It may be that people bracket narrowly, but not if they see the broadly bracketed version first. Thus a person who is asked for their WTW for 40 tasks rather than 30, and is then asked for their willingness to do 10 tasks before doing the 30, may realize that these questions are the same, and thus broadly bracket the second question. If asked first for their WTW for 10 tasks before 30, and then for their WTW for 40 rather than 30, their answer to the "10 before 30" question may be different, because they did not realize that it is about doing 40 rather than 30 tasks. Thus we want to measure whether the same question leads to different answers depending on when people are asked it.

One concern is that people may either use heuristics to make decisions faster ("This is 10 extra tasks, so I'll give the same answer as before") or want to be consistent with their past choices once they realize they are the same ("Oh, 40 vs 30 tasks is the same as my previous answer, I should give the same answer") rather than admit they might have gotten it wrong; Augenblick and Rabin (2018) find that this consistency effect is quite strong in their experiment when subjects are reminded of their past choice. This within-subjects comparison will therefore not cleanly establish which choice people think is a mistake, but together with the between-subjects design it should shed light on it.

Ignoring these other concerns (heuristics, desire for consistency), we will use these answers to create a measure of narrow bracketing at the individual level: the degree to which the BROAD answer is different from the NARROW answer, and we'll do so accounting for order effects.

The reason for testing correlation between individual-level narrow bracketing in our context and in risky choices (based on our within-subjects treatment) is straightforward: we want to see if there are people who are more likely to narrow bracket in different types of settings.

Experimental Design

Experimental Design
In a laboratory experiment, using a real effort task, we measure whether psychological factors affect the decisions to work extra time.
Experimental Design Details
In a laboratory experiment with real effort tasks, we measure whether decisions for extra work are narrowly bracketed: whether people make decisions for extra work by thinking only about the direct disutility incurred from doing the extra work, or whether they also take into account the indirect effects of this extra work on other work they already have to complete. Specifically, subjects will be asked to complete a fixed and given amount of work and then be asked to do additional work. However, there will be 5 treatments:

- BEFORE ONLY: Subjects are asked for their WTW for additional tasks when there are no required tasks.
- NARROW UNSPECIFIED: Subjects are asked for their WTW for additional tasks when they know there are required tasks. They are not told whether these tasks are done before or after the main tasks.
- NARROW BEFORE: Subjects are asked for their WTW for additional tasks before doing some required tasks.
- NARROW AFTER: Subjects are asked for their WTW for additional tasks after doing some required tasks.
- BROAD: Subjects are asked for their WTW for additional tasks when it is made clear that they are in addition to the required tasks.

In all choices except in the BEFORE ONLY treatment, the choice offered allows subjects to choose exactly the same amount of work for exactly the same amount of money -- the choice set is fixed, including no extra requirements or benefits from working fast or slow. A person who brackets narrowly may nonetheless act differently, since they may perceive the extra tasks differently if these are framed as having to be done before or after the required work, or if there is no active reminder that there are required tasks to do. We consider the BROAD treatment as the control treatment: in it, subjects are told that they choose between the required work (say 30 tasks) or the required work plus extra work (40 tasks). This is thus the most transparent choice, and the one that economic theory would say is 'the right' framing, under standard assumptions on utility over work.

If people bracket narrowly and find the first 10 tasks easier than the last 10 tasks (increasing marginal disutility), then our hypotheses are the following:

- NARROW BEFORE: A person who brackets narrowly should choose as in (or more closely towards) BEFORE ONLY, since they are thinking only of the 10 tasks, not about how doing them makes the other 30 tasks harder.
- NARROW UNSPECIFIED: Similarly to NARROW BEFORE. Since we don't remind people of the required tasks in this treatment, the effect may be stronger in this treatment (although it may be weaker, as some people might naturally think of doing the work AFTER the required work).
- NARROW AFTER: A person who brackets narrowly and thinks of doing work after 30 tasks should choose as they would in BROAD. However, it may be that the reminder of the required tasks is ignored and not integrated with this choice, in which case answers should lie between NARROW UNSPECIFIED and BROAD.

Overview of the main experiment:


• Experiment based on a transcription task similar to the one used by Augenblick and Rabin (2015).

• Two parts: the first part will be conducted online (via Lioness Lab, Arechar et al., 2018), the second in the laboratory.

• PART 1: Subjects are invited to participate in the first part of the experiment online. Subjects read the instructions online, which tell them that the experiment consists of two parts and that earnings are accumulated in both parts and paid at the end of the experiment.

o PHASE 1: Subjects practice with the transcription task. They are rewarded a fixed amount (participation fee) for performing this task for 10 minutes.
o PHASE 2: Subjects are told that the week after they will perform this task in the lab. They will book the slot in which they can participate in the experiment and are told that in that session they will be asked to complete (say) 30 of these tasks to receive XX Euros.
o PHASE 3: Depending on treatment, they will be given the opportunity to do YY extra tasks. These tasks will be done alone (BEFORE ONLY), before the main fixed tasks (NARROW BEFORE), after the main fixed tasks (NARROW AFTER), at an unspecified time (NARROW UNSPECIFIED), or clearly in addition to the main 30 tasks (BROAD). Subjects will be asked to state the minimum amount of money for which they would be willing to do these tasks. For the elicitation we will use two different elicitation methods, randomized across the two treatments.

The two methods of eliciting the minimal payment (the WTW) are:
- A slider to select the minimum acceptable payment in order to perform a fixed amount of work for a fixed amount of money (e.g. 13 tasks for $3.00)
- A set of multiple questions eliciting the minimum acceptable piece rate payment (13 tasks at $0.20/task, $0.40/task, $0.60/task...)
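The price-list method assumes a single switch point from reject to accept, so that the minimum acceptable piece rate is the lowest rate the subject accepted. A minimal sketch of this inference (illustrative only; the function name and data are hypothetical, not the study's actual code):

```python
# Illustrative sketch: recover the minimum acceptable piece rate from a
# multiple price list, assuming monotone choices (one reject-to-accept switch).

def min_acceptable_rate(rates, accepted):
    """Return the lowest offered piece rate the subject accepted,
    or None if they rejected every offer.
    `rates` and `accepted` are parallel lists of offers and yes/no choices."""
    decisions = sorted(zip(rates, accepted))
    first_yes = None
    for rate, yes in decisions:
        if yes and first_yes is None:
            first_yes = rate
        # Once a rate is accepted, all higher rates should be accepted too.
        if not yes and first_yes is not None:
            raise ValueError("non-monotone choices: multiple switch points")
    return first_yes

# 13 tasks offered at increasing piece rates, as in the example above;
# this subject switches to accepting at $0.60/task.
rates = [0.20, 0.40, 0.60, 0.80, 1.00]
choices = [False, False, True, True, True]
print(min_acceptable_rate(rates, choices))  # 0.6
```

Non-monotone answer patterns (multiple switch points) are flagged rather than interpolated, since they have no well-defined reservation rate.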

• PART 2
o PHASE 1: One of the choices made during PHASE 3 will be selected randomly and implemented.
o PHASE 2: Subjects will work and will be rewarded according to the schedule.
o PHASE 3: At the end of the working part, subjects will be asked to answer a series of incentivized questions, replicating Rabin and Weizsäcker (2009) with low stakes.

For all treatments except BEFORE ONLY, we will follow them up (in the first session) by a within-subjects treatment where we ask them the BROAD question too for non-BROAD treatments and one of the NARROW treatments for the BROAD treatment. Thus these subjects will be asked both a NARROW and a BROAD question, allowing us to see how narrowly they themselves bracket (since answers should be the same), as well as whether there is more or less narrow bracketing depending on the order of the questions. The subjects who receive both allow us to potentially estimate narrow bracketing at the individual level; or to see if people are less likely to narrowly bracket if they receive the NARROW question after the BROAD treatment, as this may draw their attention to the identical nature of the choices. (See "Secondary Outcomes (Explanation)").

Pilot Description

Without narrow bracketing, all choices except BEFORE ONLY should be identical. However, if BEFORE ONLY and BROAD are identical, then our problem is that we have no power to identify narrow bracketing, as narrow and broad bracketing give exactly the same answer: all answers should be the same, no matter whether subjects bracket broadly or narrowly. Why might this happen? It can happen if the first 10 tasks are exactly as painful as the next 10, and as the next 10, and so on. In that case narrow bracketing doesn't lead to a mistake. Another reason this can happen is that people *think* that 10 tasks are always equally painful (even if it turns out that they are not).

For this reason we run the following pilot to identify whether subjects think that the task gets harder (as well as whether they end up believing that). Note that what truly matters is the *beliefs* people have at the time they make the choices, not whether the task actually ends up being more tedious.

In the pilot, we have three treatments – low-, medium-, and high-effort, denoted W10, W40, and W70 based on the total number of required tasks subjects have to do in each treatment. In each treatment, subjects do 10 tasks first. Then they are asked for their willingness to work after the remaining tasks, of which there are 0 for the W10 group (they already did their 10 tasks), 30 for the W40 group, and 60 for the W70 group. It is important to ask them at this point, since that is the point at which we also ask the subjects in the main study. We elicit their WTW by asking them now for their minimal payment for doing additional work after the required work. We do so using the broadly framed question – that from the BROAD treatment. We moreover ask them the same question as in the main study regarding whether they expect work to become more or less tedious (a question on a 10-point scale).

We will use the slider and price-list (randomized) and choose the more precise of these methods.

Based on the outcomes of this pilot, we will decide whether to use the sliders or the price list, and whether to use 30 or 60 tasks in the main experiment, based on which of the 4 combinations would give us the largest power in the main experiment if the effect sizes and variances in the main study were the same as in the pilot. This power computation will be done under the assumption of a fixed budget.
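The selection rule just described could be sketched as follows, using a normal approximation to two-sample power. All numbers below (budget, costs, pilot effect sizes and standard deviations) are placeholders for illustration, not actual pilot results:

```python
from statistics import NormalDist

def power_two_sample(effect, sd, n_per_arm, alpha=0.05):
    """Approximate power of a two-sided two-sample comparison of means
    (normal approximation)."""
    z = NormalDist().inv_cdf(1 - alpha / 2)
    ncp = effect / (sd * (2 / n_per_arm) ** 0.5)  # noncentrality
    return 1 - NormalDist().cdf(z - ncp)

BUDGET = 4000.0  # hypothetical total payment budget (euros)

# Placeholder pilot estimates: (effect, sd, cost per subject) for each of
# the 4 method x task-count combinations -- NOT real pilot data.
pilot = {
    ("slider", 30): (1.0, 2.5, 5.0),
    ("slider", 60): (1.5, 3.0, 8.0),
    ("price list", 30): (1.0, 3.5, 5.0),
    ("price list", 60): (1.5, 4.0, 8.0),
}

def projected_power(combo):
    effect, sd, cost = pilot[combo]
    n_per_arm = int(BUDGET / cost) // 2  # the fixed budget sets sample size
    return power_two_sample(effect, sd, n_per_arm)

best = max(pilot, key=projected_power)
print(best)
```

The fixed-budget assumption matters: a 60-task design may show a larger raw effect in the pilot, yet lose to a 30-task design once its higher per-subject cost shrinks the affordable sample.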


Arechar, A.A., Gächter, S., & Molleman, L. (2018). Conducting interactive experiments online. Experimental Economics, 21(1), 99-131.
Augenblick, N., & Rabin, M. (2015). An experiment on time preference and misprediction in unpleasant tasks. The Review of Economic Studies.
Rabin, M., & Weizsäcker, G. (2009). Narrow bracketing and dominated choices. American Economic Review, 99(4), 1508-43.
Randomization Method
Randomization done through MTurk for the online experiment and through the recruitment platform ORSEE for the laboratory part.
Randomization Unit
Individual
Was the treatment clustered?
No

Experiment Characteristics

Sample size: planned number of clusters
Initial run: 60 MTurkers, 30 per treatment (for preliminary results for conference; deadline May 1st 2019)
Pilot: 120 MTurkers for the pilot, 40 per treatment (W10, W40, W70)
Main: 700 MTurkers, 140 per main treatment (5 main treatments)

Sample size: planned number of observations
880 MTurkers
Sample size (or number of clusters) by treatment arms
60 subjects total (MTurk): 30 subjects for each of NARROW BEFORE and NARROW AFTER

- Brief explanation (April 30th, 2019, 22:00 CET): Ideally we would not run this yet, but due to a conference deadline we feel we need some preliminary results.

120 subjects total (MTurk): 40 subjects per group in W10, W40, and W70, with a low-effort (W10) medium-effort (W40) and high-effort (W70) group. Each of those groups is asked for their WTW after having done their required work, given by 10, 40, or 70 tasks.

- Explanation: This is needed to test whether tasks done early are less tedious than tasks done later, which is an identifying assumption of ours. Since in the pilot we won't ask subjects to choose future work, nor compare any narrow choice with a broad choice, we cannot use it to bias our results. We also use the pilot to find out whether the slider or the price list is the more precise elicitation: whichever has the lower variance (higher precision) across the three effort-level groups will be used. If they are too similar (that is, neither of them is clearly better) then we will go with a 60% slider, 40% price list split for the main groups.

140 subjects for each of the 5 main treatments and 100 subjects in the within-subjects treatment.
Minimum detectable effect size for main outcomes (accounting for sample design and clustering)
Based on an expected effect size of d = 0.4, we assign 140 observations to each of the two treatments for our main comparison between NARROW BEFORE and NARROW AFTER. This gives us 90% power to detect the effect at the 5% level of significance. Similar considerations apply to each of the other treatment comparisons.
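The stated numbers can be checked with a back-of-the-envelope normal approximation (a sketch only; the registration does not specify the exact test):

```python
from statistics import NormalDist

def power_two_sample(d, n_per_arm, alpha=0.05):
    """Approximate power of a two-sided two-sample test for a
    standardized effect size d, using the normal approximation."""
    z = NormalDist().inv_cdf(1 - alpha / 2)
    ncp = d * (n_per_arm / 2) ** 0.5  # noncentrality under the alternative
    return 1 - NormalDist().cdf(z - ncp)

# d = 0.4 with 140 subjects per arm at the 5% significance level:
print(round(power_two_sample(0.4, 140), 3))  # about 0.92, consistent with the stated 90%
```

The exact noncentral-t calculation would give a marginally lower figure, so the registered 90% power at n = 140 per arm is consistent with this design.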
IRB

Institutional Review Boards (IRBs)

IRB Name
CEU Ethical Research Committee
IRB Approval Date
2018-08-13
IRB Approval Number
2017-2018/11/EX

Post-Trial

Post Trial Information

Study Withdrawal

There is information in this trial unavailable to the public.

Intervention

Is the intervention completed?
No
Data Collection Complete
Data Publication

Data Publication

Is public data available?
No

Program Files

Program Files
Reports, Papers & Other Materials

Relevant Paper(s)

Reports & Other Materials