
Projection Bias in Effort Choices

Last registered on May 23, 2019

Pre-Trial

Trial Information

General Information

Title
Projection Bias in Effort Choices
RCT ID
AEARCTR-0004011
Initial registration date
May 01, 2019

Initial registration date is when the trial was registered.

It corresponds to when the registration was submitted to the Registry to be reviewed for publication.

First published
May 23, 2019, 7:26 AM EDT

First published corresponds to when the trial was first made public on the Registry after being reviewed.

Locations

Region

Primary Investigator

Affiliation
Central European University

Other Primary Investigator(s)

Additional Trial Information

Status
In development
Start date
2019-03-30
End date
2020-03-31
Secondary IDs
Abstract
In this study I measure the degree of projection bias in choices over effort -- that is, how much people mispredict their future willingness to work by projecting their current willingness to work into the future. To do so, I run a study with real-effort tasks on Amazon Mechanical Turk (MTurk) consisting of several sessions, in which subjects sign up for additional future work both when they are rested (before they do any work) and when they are tired (after they have completed some work).
External Link(s)

Registration Citation

Citation
Kaufmann, Marc. 2019. "Projection Bias in Effort Choices." AEA RCT Registry. May 23. https://doi.org/10.1257/rct.4011-1.0
Former Citation
Kaufmann, Marc. 2019. "Projection Bias in Effort Choices." AEA RCT Registry. May 23. https://www.socialscienceregistry.org/trials/4011/history/47026
Sponsors & Partners

There is information in this trial unavailable to the public.
Experimental Details

Interventions

Intervention(s)
The main intervention consists of randomizing subjects across sessions 1 and 2 such that they face either a light workload in session 1 and a hard workload in session 2 (treatment 1), or a hard workload in session 1 and a light workload in session 2 (treatment 2).
Intervention Start Date
2019-04-01
Intervention End Date
2020-02-29

Primary Outcomes

Primary Outcomes (end points)
The main outcome variable is the number of tasks that subjects sign up for in session 3, as well as how much of this is predicted by the WTW elicited at the end of session 2, given the treatment they are in.
Primary Outcomes (explanation)
The main part of the design is a between-subjects design. Every participant who signs up will be asked to do tasks in 3 sessions. In all sessions, subjects have to do a given number of required tasks. I randomize at the individual level whether subjects exert low effort in the first session and high effort in the second, or high effort in the first session and low effort in the second. Since the total amount of work is the same for everyone, this rules out the possibility that different choices over future work are driven by learning. However, if subjects who worked more in session 2 are more tired, then projection bias predicts that they will be less willing to sign up for extra work in session 3. Therefore, at the end of session 2 I will elicit the willingness to work (WTW) more right away (at the end of session 2), as well as the WTW more in session 3. The degree to which the session 2 WTW predicts session 3 WTW gives a *direct* measure of the population-average projection bias.

I am planning on pooling the answers from the end of session 2 in the within-subjects design with those from the between-subjects design for purposes of statistical power. I will *not*, however, use the data from the end of session 1 in the between-subjects analysis. This pooled dataset is my main data for the between-subjects design.
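As an illustrative sketch only (not the registered analysis code), the population-average projection-bias measure described above can be read off a simple regression of the session-3 WTW on the current WTW elicited at the end of session 2. All data below are simulated placeholders, and the projection parameter of 0.4 is a hypothetical value chosen for the sketch:

```python
import random

random.seed(0)
n = 200  # planned number of observations from this registration

# Simulated placeholders (the real data come from the MTurk sessions):
# treatment = 1 means the hard workload fell in session 2
treatment = [random.randint(0, 1) for _ in range(n)]
alpha = 0.4  # hypothetical projection-bias parameter, for illustration only
wtw_now_s2 = [10 - 3 * t + random.gauss(0, 2) for t in treatment]
wtw_s3 = [8 + alpha * w + random.gauss(0, 2) for w in wtw_now_s2]

# Simple-regression slope of session-3 WTW on current session-2 WTW:
# slope = cov(x, y) / var(x), the population-average projection-bias estimate
mx = sum(wtw_now_s2) / n
my = sum(wtw_s3) / n
cov = sum((x - mx) * (y - my) for x, y in zip(wtw_now_s2, wtw_s3))
var = sum((x - mx) ** 2 for x in wtw_now_s2)
slope = cov / var
print(f"estimated projection bias: {slope:.2f}")
```

With enough noise in the simulated WTW, the recovered slope sits close to the hypothetical parameter of 0.4; the actual specification will likely also control for treatment assignment.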

Secondary Outcomes

Secondary Outcomes (end points)
Secondary outcomes are the difference between WTW at the end of sessions 1 and 2 *within* subjects. At the end of each of sessions 1 and 2, I elicit the WTW right now, as well as the WTW in session 3.

Moreover, I ask questions to measure whether subjects project their own transient willingness to work onto others, and whether they project it onto their own past choices.
Secondary Outcomes (explanation)
The secondary design adds to this baseline by asking subjects also at the end of session 1 for their WTW right away, and for their WTW in session 3. Together with the same answers at the end of session 2, this can be used to estimate the degree of projection bias at the individual-level. By asking these subjects to participate in the same experiment several months later, I will measure how stable projection bias is for a given individual.
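One back-of-the-envelope way the two elicitations could identify an individual-level parameter (my construction, not the registered estimator) follows the simple projection model: the stated session-3 WTW mixes the true session-3 WTW with the current state, stated_s3 = (1 - a) * true_s3 + a * current_wtw, so differencing the end-of-session-1 and end-of-session-2 answers cancels true_s3 and identifies a for each subject. All numbers in the example are hypothetical:

```python
def individual_alpha(stated_s3_at_s1, current_at_s1, stated_s3_at_s2, current_at_s2):
    """Projection-bias parameter from two (stated session-3 WTW, current WTW)
    pairs, assuming stated_s3 = (1 - a) * true_s3 + a * current_wtw at both
    elicitation times, so that true_s3 drops out of the difference."""
    return (stated_s3_at_s1 - stated_s3_at_s2) / (current_at_s1 - current_at_s2)

# Example: rested (end of session 1) the subject states 12 tasks for session 3
# with a current WTW of 14; tired (end of session 2) they state 9 with a
# current WTW of 6
print(individual_alpha(12, 14, 9, 6))  # -> 0.375
```

A subject with no projection bias states the same session-3 WTW at both times, so the estimator returns 0.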

In addition to these primary goals focused on simple projection bias, I will ask several questions at the end of session 2 and session 3 to test whether subjects mispredict others' preferences; and whether they make mistakes in remembering their own choices based on how they currently feel about such past choices.

Experimental Design

Experimental Design
The real effort task I refer to in the description below is a task that consists in counting how often a given character appears in a matrix that contains different characters.

The baseline design consists of a sign-up session, three main sessions, and a final debriefing survey. In the sign-up session I get subjects’ consent to participate; I describe the study, the type of task they will be asked to complete, and the choices they will be asked to make; and I test their comprehension. Those subjects who pass the comprehension test can enroll in the study; the others are excluded.

Moving on to the experiment, the main part of the design is a between-subjects design. Every participant who signs up will be asked to do tasks in 3 sessions. They will do additional tasks in sessions 1 and 2, and choose how much extra work to do both in those sessions and in the final session.

In session 3, subjects complete the work they signed up for and then complete the debrief survey.
Experimental Design Details
The real effort task I refer to in the description below is a task that consists in counting how often a given character appears in a matrix that contains different characters. The tasks are chosen such that subjects in the pilot take roughly 45-70 seconds to complete one task.

The baseline design consists of a sign-up session, three main sessions, and a final debriefing survey. In the sign-up session I get subjects’ consent to participate; I describe the study, the type of task they will be asked to complete, and the choices they will be asked to make; and I test their comprehension. Those subjects who pass the comprehension test can enroll in the study; the others are excluded.

Moving on to the experiment, the main part of the design is a between-subjects design. Every participant who signs up will be asked to do tasks in 3 sessions. In all sessions, subjects have to do a given number of required tasks. I randomize at the individual level whether subjects exert low effort in the first session and high effort in the second, or high effort in the first session and low effort in the second. Since the total amount of work is the same for everyone, this rules out the possibility that different choices over future work are driven by learning. However, if subjects who worked more in session 2 are more tired, then projection bias predicts that they will be less willing to sign up for extra work in session 3. Therefore, at the end of session 2 I will elicit the willingness to work (WTW) more right away (at the end of session 2), as well as the WTW more in session 3. The degree to which the session 2 WTW predicts session 3 WTW gives a *direct* measure of the population-average projection bias.

In session 3, subjects complete the work they signed up for and then complete the debrief survey.

I will run the following:

1. Pilot:
- I elicit the WTW after subjects have done 10, 40, and 70 tasks
- I test the three methods of eliciting WTW listed below


Elicitation methods that I test in the Pilot:

1. Price list where the number of tasks are fixed and the payments varied
2. Multiple piece-rates for the task, and subjects state the number of tasks they are willing to do (Augenblick and Rabin (2018))
3. Subjects report directly the smallest payment for which they are willing to do a fixed amount of work
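Method 1 could be scored roughly as follows; this is an illustrative sketch with hypothetical payment steps, not the study's actual software. The WTW is read off as the smallest payment the subject accepts in the price list:

```python
# Hypothetical price list: a fixed block of extra tasks, payments varied
# in $0.25 steps from $0.25 to $3.00 (placeholder values, not the study's)
payments = [round(0.25 * k, 2) for k in range(1, 13)]

def willingness_to_work(accepts):
    """Return the smallest payment the subject accepted (their WTW for the
    fixed block of tasks), or None if they rejected every offer.
    `accepts` is a list of booleans aligned with `payments`."""
    for pay, ok in zip(payments, accepts):
        if ok:
            return pay
    return None

# Example: a subject who rejects every offer below $1.50
answers = [pay >= 1.50 for pay in payments]
print(willingness_to_work(answers))  # -> 1.5
```

Methods 2 and 3 invert this logic: the payment (piece rate) is fixed and the quantity chosen, or the reservation payment is reported directly.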

Based on the pilot, I will choose the most precise elicitation method and choose the effort level so as to maximize the power of the test. One of the identifying assumptions is that people's WTW fluctuates as they do more work. If it does not, then projection bias makes the same predictions as no projection bias. I randomize subjects in a 3 x 3 design, so that 15 subjects belong to each of the 9 possible subgroups.
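A minimal sketch of that 3 x 3 pilot randomization, assuming the two three-level factors are the elicitation checkpoint (after 10, 40, or 70 tasks) and the elicitation method; the cell size of 15 is from the registration, but the assignment code itself is illustrative, not the study's software:

```python
import random

random.seed(2019)

# 3 elicitation checkpoints x 3 elicitation methods = 9 pilot cells,
# each filled with exactly 15 subjects
cells = [(after_tasks, method)
         for after_tasks in (10, 40, 70)
         for method in ("price_list", "piece_rate", "direct_report")]
assignment = cells * 15          # 135 slots, 15 per cell by construction
random.shuffle(assignment)       # randomize which subject fills which slot

subjects = [f"subject_{i:03d}" for i in range(len(assignment))]
allocation = dict(zip(subjects, assignment))
print(allocation["subject_000"])
```

Building the full slot list before shuffling guarantees exactly balanced cells, unlike drawing each subject's cell independently.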
Randomization Method
Randomization by computer, at the time subjects enroll for the main study.
Randomization Unit
Individual-level randomization
Was the treatment clustered?
No

Experiment Characteristics

Sample size: planned number of clusters
One for the pilot (one batch).

The remaining sessions will be run in batches of 9, given how Amazon MTurk charges for larger batches (unless its policies change).
Sample size: planned number of observations
200 individuals recruited on MTurk
Sample size (or number of clusters) by treatment arms
Pilot: 145 subjects, randomizing 3x3, i.e. 15 in each subgroup

The following are estimates, since I will base the number of tasks on the results of the pilot, which will give an idea of the statistical power I will have and (given my fixed budget) allow me to compute the sample size.

Between subjects design: 100 in total, 50 in each subgroup (high effort first vs low effort first)
Within subjects design: 200 in total, 100 per subgroup (high effort first vs low effort first)

Repetition a few months later: Randomly chosen half from the 'within-subjects' group - that is, another 100
Minimum detectable effect size for main outcomes (accounting for sample design and clustering)
I will defer this computation until after the pilot, since there is no point in guessing either the population standard deviation for this task or the change in WTW, which is directly proportional to the treatment size (not to the effect size, which reflects the size of projection bias itself).
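Once the pilot pins down the standard deviation, the MDE for a two-arm comparison of means could be sketched with the standard normal approximation; the SD of 5 tasks below is a placeholder, not a pilot estimate:

```python
import math

def mde(n_per_arm, sd, z_alpha=1.96, z_power=0.84):
    """Minimum detectable effect for a two-arm comparison of means, normal
    approximation: (z_{1-alpha/2} + z_{power}) * sd * sqrt(2 / n), with the
    defaults corresponding to 5% two-sided size and 80% power."""
    return (z_alpha + z_power) * sd * math.sqrt(2 / n_per_arm)

# Placeholder: 50 subjects per between-subjects arm, SD of 5 tasks
print(round(mde(n_per_arm=50, sd=5.0), 2))  # -> 2.8
```

The MDE shrinks with the square root of the per-arm sample size, which is why the pilot's SD estimate matters for allocating the fixed budget.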
IRB

Institutional Review Boards (IRBs)

IRB Name
CEU Ethical Research Committee
IRB Approval Date
2018-07-09
IRB Approval Number
2017-2018/8/EX

Post-Trial

Post Trial Information

Study Withdrawal

There is information in this trial unavailable to the public.

Intervention

Is the intervention completed?
No
Data Collection Complete
Data Publication

Data Publication

Is public data available?
No

Program Files

Program Files
Reports, Papers & Other Materials

Relevant Paper(s)

Reports & Other Materials