
Narrow Bracketing in Effort Choices
Last registered on April 17, 2020

Pre-Trial

Trial Information
General Information
Title
Narrow Bracketing in Effort Choices
RCT ID
AEARCTR-0003412
Initial registration date
January 07, 2019
Last updated
April 17, 2020 9:31 AM EDT
Location(s)
Region
Primary Investigator
Affiliation
Luxembourg Institute of Socio-Economic Research
Other Primary Investigator(s)
PI Affiliation
Central European University
Additional Trial Information
Status
In development
Start date
2019-10-09
End date
2020-06-30
Secondary IDs
Abstract
Narrow bracketing has been established in choices over risky gambles, but not outside that domain, even in natural situations such as the working environment. Many decisions people take, such as deciding whether to do an urgent but not particularly important task right now, have low immediate costs – checking emails – but may have large costs later on, such as requiring one to work late when tired to make up the lost time. While people may sometimes take such decisions in full awareness of these implications – either because it is the ‘right/rational’ decision, or because they are present-biased – it may also be that they do not think about these future implications. Narrow bracketing is a specific way of not thinking about these implications, and we test for it in a situation where preferences, properly thought through, cannot cause such mistakes, even when people are present-biased.
External Link(s)
Registration Citation
Citation
Fallucchi, Francesco and Marc Kaufmann. 2020. "Narrow Bracketing in Effort Choices." AEA RCT Registry. April 17. https://doi.org/10.1257/rct.3412-4.2.
Former Citation
Fallucchi, Francesco and Marc Kaufmann. 2020. "Narrow Bracketing in Effort Choices." AEA RCT Registry. April 17. http://www.socialscienceregistry.org/trials/3412/history/66389.
Experimental Details
Interventions
Intervention(s)
We test the concept of narrow and broad bracketing in deterministic choices over work, which are relevant to the labor market.
Intervention Start Date
2019-10-09
Intervention End Date
2020-05-31
Primary Outcomes
Primary Outcomes (end points)
See latest pdf document for change in design.

The following previous design description is included for completeness, but is *NOT* what we are currently planning on running.

Elicitation of the willingness to accept a payment in order to complete a task, across different treatments. There are two questions, both linked to the framing of doing extra work: the first concerns doing extra work after a (changing) amount of fixed mandatory work; the second is whether framing the extra work as done 'before' rather than 'after' the mandatory work – while holding the actual consequences constant – leads to a change in willingness to work, which it cannot under any broadly framed theory. We will compare this to choices where we enforce broad bracketing by making the actual change salient.
Primary Outcomes (explanation)
See latest pdf document for change in design.

The following previous design description is included for completeness, but is *NOT* what we are currently planning on running.

In each treatment we will ask subjects to complete a required task and then elicit their willingness to complete extra tasks. We will compare in each treatment the total number of tasks performed.
Secondary Outcomes
Secondary Outcomes (end points)
See latest pdf document for change in design.

The following previous design description is included for completeness, but is *NOT* what we are currently planning on running.

We want to measure whether there is a correlation between a subject's level of narrow bracketing in deterministic work choices and their narrow bracketing in risky choices. We will test whether people make the same mistake when they see both choices, controlling for an order effect.
Secondary Outcomes (explanation)
See latest pdf document for change in design.

The following previous design description is included for completeness, but is *NOT* what we are currently planning on running.

It may be that people bracket narrowly, but not if they see the broadly bracketed version first. Thus a person who is asked for their WTW for 20 tasks rather than 10, and then asked for their willingness to do 10 tasks before doing the required 10, may realize that these questions are the same, and thus broadly bracket the second question. If asked first for their WTW for 10 tasks before 10, and then for their WTW for 20 rather than 10, their answer to the "10 before 10" question may be different because they did not realize that it is about doing 20 rather than 10 tasks. Thus we want to measure whether the same question leads to different answers depending on when people are asked it.

One concern is that people may use heuristics to make decisions faster ("This is 10 extra tasks, so I'll give the same answer as before"), or may want to be consistent with their past choices once they realize the questions are the same ("Oh, 20 vs 10 tasks is the same as my previous answer, I should give the same answer") rather than admit they might have gotten it wrong. (Augenblick and Rabin (2018) find that this effect is quite strong in their experiment when subjects are reminded of their past choice.) This will therefore not cleanly establish which choice people think is a mistake, but together with the between-subjects design it should shed light on it.

Ignoring these other concerns (heuristics, desire for consistency), we will use these answers to create a measure of narrow bracketing at the individual level: the degree to which the BROAD answer is different from the NARROW answer, and we'll do so accounting for order effects.
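A minimal sketch of such an individual-level measure, using hypothetical WTW numbers and a simple within-order-group demeaning to account for order effects (the actual analysis may differ):

```python
# Hypothetical records: one per subject, with WTW answers under the
# NARROW and BROAD framings and whether BROAD was asked first.
subjects = [
    {"id": 1, "wtw_narrow": 0.30, "wtw_broad": 0.45, "broad_first": True},
    {"id": 2, "wtw_narrow": 0.25, "wtw_broad": 0.28, "broad_first": False},
    {"id": 3, "wtw_narrow": 0.40, "wtw_broad": 0.60, "broad_first": True},
    {"id": 4, "wtw_narrow": 0.35, "wtw_broad": 0.37, "broad_first": False},
]

# Raw bracketing measure: gap between the BROAD and NARROW answers.
for s in subjects:
    s["gap"] = s["wtw_broad"] - s["wtw_narrow"]

# Account for order effects by demeaning the gap within each order group.
for first in (True, False):
    group = [s for s in subjects if s["broad_first"] == first]
    mean_gap = sum(s["gap"] for s in group) / len(group)
    for s in group:
        s["gap_adj"] = s["gap"] - mean_gap

for s in subjects:
    print(s["id"], round(s["gap"], 3), round(s["gap_adj"], 3))
```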

The reason for testing correlation between individual-level narrow bracketing in our context and in risky choices (based on our within-subjects treatment) is straightforward: we want to see if there are people who are more likely to narrow bracket in different types of settings.
Experimental Design
Experimental Design
In an online experiment, using a real effort task, we measure whether psychological factors affect the decisions to work extra time.
Experimental Design Details
See latest pdf document for change in design. The following previous design description is included for completeness, but is *NOT* what we are currently planning on running.

In an online experiment with real effort tasks, we measure whether decisions for extra work are narrowly bracketed: whether people make decisions for extra work by thinking only about the direct disutility incurred from doing the extra work, or whether they also take into account the indirect effects of this extra work on other work they already have to complete. Specifically, subjects will be asked to complete a fixed and given amount of work and then be asked to do additional work. The design is based on 2x2 + 2 treatments (WTW stands for 'Willingness to Work'):

2 x 2 design
- NARROW (10 vs 20): Subjects are asked for their WTW for additional tasks when there are either 10 or 20 required tasks (between-subjects treatments). They are not told whether these tasks are done before or after the main tasks. Their choice is over the extra tasks only.
- BROAD (10 vs 20): Subjects are asked for their WTW for additional tasks when it is made clear that these are in addition to the (10 or 20) required tasks. Their choice is over the total tasks.

+2 treatments
- NARROW BEFORE: Subjects are asked for their WTW for additional tasks done before some required tasks.
- NARROW AFTER: Subjects are asked for their WTW for additional tasks done after some required tasks.

In all treatments the choice is over extra work at the same piece rates – the choice set is fixed, with no extra requirements or benefits from working fast or slow. A person who brackets narrowly may nonetheless act differently: they may not perceive the extra work differently regardless of whether it is done on top of different amounts of required work, and they may perceive the extra tasks differently if these are framed as having to be done before rather than after the required work.
We consider the BROAD treatment as a control, in which subjects are told that they choose between the required work (say 10 or 20 tasks) and the required work plus extra work. This is the most transparent choice, and the one that economic theory would say is 'the right' framing, under standard assumptions on utility over work.

If people bracket narrowly and find the first 10 tasks easier than the last 10 tasks (increasing marginal disutility), then our hypotheses are the following:
- NARROW BEFORE: A person who brackets narrowly should choose as if (or more closely towards) BEFORE ONLY, since they are thinking only of the 10 tasks, not about how doing them makes the other 10/20 tasks harder.
- NARROW AFTER: A person who brackets narrowly and thinks of doing work after 10/20 tasks should choose as they would in BROAD. However, it may be that the reminder of the required tasks is ignored and not integrated with this choice.

Overview of the main experiment:
• Experiment based on a transcription task similar to the one used by Augenblick and Rabin (2015).
• The experiment will be conducted online (via Lioness Lab; Arechar et al., 2018).
• PART 1: Mturkers are invited to participate in the first part of the experiment online.
o PHASE 1: Subjects practice the transcription task.
o PHASE 2: Subjects are told that they are rewarded a fixed amount (participation fee) for performing a fixed required task.
o PHASE 3: Depending on treatment, they will be given the opportunity to choose YY extra tasks for a set of given piece rates. We elicit the willingness to work (WTW) with a slider to select the number of sequences to decode for a given piece-rate payment (e.g. for $0.05/sequence, how many sequences are you willing to decode?).
• PART 2
o PHASE 1: One of the choices made during PHASE 3 will be selected randomly and implemented.
o PHASE 2: Subjects will work and will be rewarded according to the schedule.
o PHASE 3: At the end of the working part, subjects will be asked to answer a series of incentivized questions, replicating Rabin and Weizsäcker (2009) with low stakes.

Pilot Description

Without narrow bracketing, all choices except BEFORE ONLY should be identical. However, if BEFORE ONLY and BROAD are also identical, then we have no power to identify narrow bracketing, as narrow and broad bracketing give exactly the same answer: all answers are the same, no matter whether subjects bracket broadly or narrowly. Why might this happen? It can happen if the first 10 tasks are exactly as painful as the next 10, and as the next 10, and so on; in that case narrow bracketing does not lead to a mistake. It can also happen if people *think* that 10 tasks are always equally painful (even if it turns out that they are not). For this reason we run the following pilot to identify whether subjects think that the task gets harder (as well as whether they end up believing that). Note that what truly matters is the *beliefs* people have at the time they make the choices, not whether the task actually ends up being more tedious. In the pilot we ask them the same question as in the main study regarding whether they expect the work to become more or less tedious (a question on a 10-point scale).

Arechar, A.A., Gächter, S., & Molleman, L. (2018). Conducting interactive experiments online. Experimental Economics, 21(1), 99-131.
Augenblick, N., & Rabin, M. (2015). An experiment on time preference and misprediction in unpleasant tasks. The Review of Economic Studies.
Rabin, M., & Weizsäcker, G. (2009). Narrow bracketing and dominated choices. American Economic Review, 99(4), 1508-43.
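The random-incentive step in PART 2 (one PHASE 3 choice drawn at random and implemented) can be sketched as follows; the piece rates and slider answers here are hypothetical, not the study's actual parameters:

```python
import random

# Hypothetical PHASE 3 elicitation: for each posted piece rate, the
# subject used a slider to say how many extra sequences they would decode.
piece_rates = [0.01, 0.03, 0.05, 0.08, 0.12]           # $ per sequence
wtw = {0.01: 0, 0.03: 4, 0.05: 9, 0.08: 15, 0.12: 15}  # slider answers

# Random-incentive mechanism: one piece rate is drawn and the subject's
# choice at that rate is the one actually implemented and paid.
rng = random.Random(42)  # fixed seed so this sketch is reproducible
drawn_rate = rng.choice(piece_rates)
extra_tasks = wtw[drawn_rate]
payment = round(drawn_rate * extra_tasks, 2)
print(f"Drawn rate ${drawn_rate}/sequence -> {extra_tasks} extra tasks, ${payment}")
```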
Randomization Method
Randomization done through MTurk.
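As an illustration only (hypothetical arm labels, not the authors' code), individual-level assignment consistent with the planned arm sizes could be sketched as:

```python
import random
from collections import Counter

# Hypothetical arm labels; sizes follow the planned design
# (90 per main treatment, 45 per minor treatment, 450 in total).
arms = (["NARROW-10"] * 90 + ["NARROW-20"] * 90 +
        ["BROAD-10"] * 90 + ["BROAD-20"] * 90 +
        ["NARROW-BEFORE"] * 45 + ["NARROW-AFTER"] * 45)

rng = random.Random(2019)  # fixed seed so this sketch is reproducible
rng.shuffle(arms)          # the i-th arriving MTurk worker gets arms[i]

print(Counter(arms))
```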
Randomization Unit
Individual
Was the treatment clustered?
No
Experiment Characteristics
Sample size: planned number of clusters
Experiment on MTurk.
Sample size: planned number of observations
450 for the one-day design. 2-day design needs fleshing out.
Sample size (or number of clusters) by treatment arms
90 for each of the four main treatments, 45 for the minor treatments. See october-2019-design.pdf for details.
Minimum detectable effect size for main outcomes (accounting for sample design and clustering)
See latest pdf document for change in design.
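The field above defers to the PDF. As a rough back-of-the-envelope illustration only (not the authors' calculation), the normal-approximation minimum detectable effect for a two-sided comparison of two main arms with 90 subjects each, at 5% significance and 80% power, can be computed as:

```python
from math import sqrt
from statistics import NormalDist

# Normal approximation for a two-sample, two-sided test:
# d = (z_{1-alpha/2} + z_{power}) * sqrt(2/n), in standard-deviation units.
alpha, power, n = 0.05, 0.80, 90
z = NormalDist().inv_cdf
mde = (z(1 - alpha / 2) + z(power)) * sqrt(2 / n)
print(f"MDE per pairwise comparison: ~{mde:.2f} SD")  # roughly 0.42 SD
```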
Supporting Documents and Materials

There are documents in this trial that are unavailable to the public.
IRB
INSTITUTIONAL REVIEW BOARDS (IRBs)
IRB Name
CEU Ethical Research Committee
IRB Approval Date
2018-08-13
IRB Approval Number
2017-2018/11/EX
Post-Trial
Post Trial Information
Study Withdrawal
Intervention
Is the intervention completed?
No
Is data collection complete?
Data Publication
Data Publication
Is public data available?
No
Program Files
Program Files
Reports, Papers & Other Materials
Relevant Paper(s)