Projection Bias in Effort Choices

Last registered on May 06, 2022

Pre-Trial

Trial Information

General Information

Title
Projection Bias in Effort Choices
RCT ID
AEARCTR-0004011
Initial registration date
May 01, 2019

First published
May 23, 2019, 7:26 AM EDT

Last updated
May 06, 2022, 4:41 AM EDT


Locations

Region

Primary Investigator

Name
Marc Kaufmann
Affiliation
Central European University

Other Primary Investigator(s)

Additional Trial Information

Status
In development
Start date
2019-03-30
End date
2022-12-31
Secondary IDs
Prior work
This trial does not extend or rely on any prior RCTs.
Abstract
In this study I measure the degree of projection bias people exhibit in choices over effort -- that is, how much people mispredict their future willingness to work by projecting their current willingness to work into the future. To this end, I run a study with real-effort tasks on Amazon Mechanical Turk (MTurk) consisting of several sessions, in which subjects sign up for additional future work both when they are rested (before they do any work) and when they are tired (after they have completed some work).
External Link(s)

Registration Citation

Citation
Kaufmann, Marc. 2022. "Projection Bias in Effort Choices." AEA RCT Registry. May 06. https://doi.org/10.1257/rct.4011-2.3
Former Citation
Kaufmann, Marc. 2022. "Projection Bias in Effort Choices." AEA RCT Registry. May 06. https://www.socialscienceregistry.org/trials/4011/history/142801
Sponsors & Partners

Experimental Details

Interventions

Intervention(s)
The main intervention consists of randomizing subjects in sessions 1 and 2 such that they have either a light workload in session 1 and a hard workload in session 2 (treatment 1), or a hard workload in session 1 and a light workload in session 2 (treatment 2).
Intervention Start Date
2019-04-01
Intervention End Date
2022-12-30

Primary Outcomes

Primary Outcomes (end points)
The main outcome variable is the number of tasks that subjects sign up for in session 3, as well as how much of this is predicted by their willingness to work (WTW) at the end of session 2, depending on the treatment they are in.
Primary Outcomes (explanation)
The main part of the design is a between-subjects outcome. Every participant who signs up will be asked to do tasks in 3 sessions. In all sessions, subjects have to complete a given number of required tasks. I randomize at the individual level whether subjects exert low effort in the first session and high effort in the second, or high effort in the first session and low effort in the second. Since the total amount of work is kept the same for all, this rules out the possibility that different choices over future work are driven by learning. However, if subjects who worked more in session 2 are more tired, then projection bias predicts that they will be less willing, at the end of session 2, to sign up for extra work in session 3. Therefore, at the end of session 2 I will elicit the willingness to work (WTW) on additional tasks right away (at the end of session 2), as well as the WTW on additional tasks in session 3. The degree to which the current WTW at the end of session 2 predicts the WTW for session 3 gives a *direct* measure of the population-average projection bias.
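
To make the identification concrete, here is a minimal sketch of how this between-subjects comparison could be estimated. The file name, the column names (wtw_s2_now, wtw_s3, and high_effort_s2 as a 0/1 indicator), and the ratio-of-effects estimator are illustrative assumptions, not part of the registration.

    # Hypothetical sketch: population-average projection bias from the
    # between-subjects treatment. File and column names are illustrative.
    import pandas as pd
    import statsmodels.formula.api as smf

    df = pd.read_csv("main_run.csv")  # one row per subject (hypothetical file)

    # The randomized session 2 workload shifts how tired subjects are at the
    # end of session 2. Under no projection bias it should not shift the WTW
    # elicited for session 3; under full projection it shifts it one for one
    # with the current WTW.
    current = smf.ols("wtw_s2_now ~ high_effort_s2", data=df).fit()
    future = smf.ols("wtw_s3 ~ high_effort_s2", data=df).fit()

    # Ratio of the two treatment effects as a simple population-average
    # measure of projection (0 = no projection, 1 = full projection).
    alpha_hat = (future.params["high_effort_s2"]
                 / current.params["high_effort_s2"])
    print(f"Estimated population-average projection bias: {alpha_hat:.2f}")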

Secondary Outcomes

Secondary Outcomes (end points)
Secondary outcomes are the differences between the WTW at the end of sessions 1 and 2 *within* subjects. At the end of each of sessions 1 and 2, I elicit the WTW right now, as well as the WTW in session 3.

Moreover, I ask questions to measure whether subjects project their own transient willingness to work onto others, and whether they project it onto their own past choices.
Secondary Outcomes (explanation)
The secondary outcomes use data from the same subjects from the end of session 1, where they are asked for their WTW right away and for their WTW in session 3. Together with the same answers at the end of session 2, this can be used to estimate the degree of projection bias at the individual level.

If the budget permits, I will ask these subjects to participate in the same experiment several months later, to measure how stable projection bias is for a given individual.

In addition to the primary goal of measuring simple projection bias, I will ask several questions at the end of sessions 2 and 3 to test whether subjects mispredict others' preferences, and whether they misremember their own past choices based on how they currently feel about those choices.
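
As a rough illustration of the individual-level estimate, the within-subject change in the current WTW between sessions 1 and 2 can be compared to the corresponding change in the stated WTW for session 3. The column names in the sketch below are hypothetical, not taken from the registration.

    # Hypothetical sketch: individual-level projection bias from within-subject
    # changes between sessions 1 and 2. File and column names are illustrative.
    import pandas as pd

    df = pd.read_csv("main_run.csv")  # one row per subject (hypothetical file)

    # How much the stated WTW for session 3 moves when the current WTW moves
    # between the two elicitations. A ratio near 0 suggests no projection of
    # the current state; a ratio near 1 suggests full projection.
    d_current = df["wtw_s2_now"] - df["wtw_s1_now"]
    d_future = df["wtw_s3_at_s2"] - df["wtw_s3_at_s1"]
    df["alpha_i"] = d_future / d_current  # undefined where d_current is 0

    print(df["alpha_i"].describe())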

Experimental Design

Experimental Design
The real-effort task I refer to in the description below consists of counting how often a given character appears in a matrix of different characters.

The design consists of a sign-up session, three main sessions, and a final debriefing survey. In the sign-up session I get subjects’ consent to participate; I describe the study, the type of task they will be asked to complete, and the choices they will be asked to make; and I test their comprehension. Subjects who pass the comprehension test can enroll in the study; the others are excluded.

Moving on to the experiment, the main outcomes of the design are between-subjects. Every participant who signs up will be asked to do tasks in 3 sessions. They will do additional tasks in sessions 1 and 2, and choose how much extra work to do both in those sessions and in the final session.

In session 3, subjects complete the work they signed up for and then complete the debrief survey.
Experimental Design Details
The real-effort task I refer to in the description below consists of counting how often a given character appears in a matrix of different characters. The tasks are chosen such that subjects in the pilot take roughly 45-70 seconds to complete one task.
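
For illustration, a minimal sketch of how such a counting task could be generated; the matrix dimensions and character set are assumptions, not taken from the registration.

    # Hypothetical sketch of the counting task: a matrix of characters in which
    # subjects count how often one target character appears. Dimensions and
    # alphabet are illustrative, not from the registration.
    import random

    rng = random.Random(0)  # seeded for a reproducible example

    def make_task(rows=10, cols=10, alphabet="abcde"):
        grid = [[rng.choice(alphabet) for _ in range(cols)] for _ in range(rows)]
        target = rng.choice(alphabet)
        answer = sum(row.count(target) for row in grid)
        return grid, target, answer

    grid, target, answer = make_task()
    print("\n".join(" ".join(row) for row in grid))
    print(f"How many times does '{target}' appear? (correct answer: {answer})")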

The design consists of a sign-up session, three main sessions, and a final debriefing survey. In the sign-up session I get subjects’ consent to participate; I describe the study, the type of task they will be asked to complete, and the choices they will be asked to make; and I test their comprehension. Subjects who pass the comprehension test can enroll in the study; the others are excluded.

Moving on to the experiment, the main outcomes of the design are between-subjects. Every participant who signs up will be asked to do tasks in 3 sessions. In all sessions, subjects have to complete a given number of required tasks. I randomize at the individual level whether subjects exert low effort in the first session and high effort in the second, or high effort in the first session and low effort in the second. Since the total amount of work is kept the same for all, this rules out the possibility that different choices over future work are driven by learning. However, if subjects who worked more in session 2 are more tired, then projection bias predicts that they will be less willing, at the end of session 2, to sign up for extra work in session 3. Therefore, at the end of session 2 I will elicit the willingness to work (WTW) on additional tasks right away (at the end of session 2), as well as the WTW on additional tasks in session 3. The degree to which the current WTW at the end of session 2 predicts the WTW for session 3 gives a *direct* measure of the population-average projection bias.

In session 3, subjects complete the work they signed up for and then complete the debrief survey.

I will run the following:

1. Pilot:
- I elicit the WTW after subjects have done 10, 40, and 70 tasks
- I test 3 methods of eliciting WTW, see below.

Elicitation methods that I test in the Pilot to see whether they give consistent/coherent answers:

1. Price list where the number of tasks are fixed and the payments varied
2. Multiple piece rates for the task, where subjects state the number of tasks they are willing to do (Augenblick and Rabin, 2018)
3. Subjects report directly the smallest payment for which they are willing to do a fixed amount of work

Based on the pilot, I will choose the effort level as well as the *primary* elicitation method in order to maximize the power of the test. One of the identifying assumptions is that people's WTW fluctuates as they do more work. If this is not the case, then projection bias predicts the same as no projection bias. The primary elicitation method is going to be either 1 or 3 -- the multiple piece-rate method does not allow estimating individual- or group-level projection bias without functional form assumptions, and thus will be included only as a control, as will the elicitation method that does not provide the most precise information. The reason for including all 3 methods (with the primary method getting half the choices, and the other two methods getting the remaining half) is in part as a consistency check, and in part to alleviate measurement error, as highlighted in Gillen, Snowberg, and Yariv (2019).
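
To fix ideas, here is a minimal sketch of elicitation method 1, where the number of tasks is fixed and the payments vary; the payment values and the helper function are hypothetical, not taken from the registration.

    # Hypothetical sketch of elicitation method 1: a price list with a fixed
    # number of tasks and varying payments. All values are illustrative.
    N_TASKS = 20                                     # fixed workload (assumed)
    PAYMENTS = [0.25, 0.50, 0.75, 1.00, 1.50, 2.00]  # dollars (assumed)

    def wtw_from_price_list(accepts):
        """Return the lowest accepted payment for N_TASKS tasks.

        accepts[i] is True if the subject accepts doing N_TASKS tasks for
        PAYMENTS[i]. With monotone answers, the first acceptance is the
        switching point, which bounds the subject's reservation payment
        from above.
        """
        for payment, accept in zip(PAYMENTS, accepts):
            if accept:
                return payment
        return None  # rejected all offers: reservation payment above max(PAYMENTS)

    # Example: a subject who starts accepting at $1.00.
    print(wtw_from_price_list([False, False, False, True, True, True]))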
Randomization Method
Randomization by computer, at the time subjects enroll for the main study (see the sketch below).
Randomization Unit
Individual-level randomization
Was the treatment clustered?
No
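
As an illustration of what this individual-level computer randomization at enrollment could look like, a hedged sketch follows; the seed, treatment labels, and function name are hypothetical.

    # Hypothetical sketch of the individual-level randomization at enrollment:
    # each enrolling subject is independently assigned one of the two workload
    # orders. Seed, labels, and function name are illustrative.
    import random

    rng = random.Random(2019)  # fixed seed for reproducibility (assumption)

    def assign_treatment(subject_id):
        """Assign an enrolling subject to one of the two workload orders."""
        return subject_id, rng.choice(["light_then_hard", "hard_then_light"])

    print(assign_treatment("mturk_worker_001"))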

Experiment Characteristics

Sample size: planned number of clusters
One for the pilot (one batch)
Sample size: planned number of observations
Given that each data point will probably cost around $20 (estimate from another experiment with multiple sessions), the current budget permits around 150 subjects.
Sample size (or number of clusters) by treatment arms
50 subjects for the pilot, to estimate the effort disutility and compare the different elicitation methods.

150 subjects for the main run, which consists of 3 sessions.

Repetition a few months later (conditional on additional funding): Randomly chosen subset of 100 subjects from those who completed the original study.
Minimum detectable effect size for main outcomes (accounting for sample design and clustering)
I will defer this computation until after having run the pilot, as there is no point in guessing the population standard deviation of the WTW for this task, nor the change in WTW induced by the treatment, which is directly proportional to the treatment size (not to the effect size, which is about the size of projection bias itself).
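
Once the pilot pins down the standard deviation of the WTW, a standard two-sample power calculation could look like the sketch below; all numbers are placeholders, not registered values.

    # Hypothetical sketch of the deferred MDE computation for the two-arm
    # comparison. All numbers are placeholders, not from the registration.
    from statsmodels.stats.power import TTestIndPower

    sd_wtw = 1.0    # placeholder: pilot estimate of the WTW standard deviation
    n_per_arm = 75  # 150 subjects split evenly across the two workload orders

    # Solve for the standardized effect size detectable at 80% power.
    mde_std = TTestIndPower().solve_power(nobs1=n_per_arm, alpha=0.05,
                                          power=0.8, ratio=1.0,
                                          alternative="two-sided")
    print(f"Minimum detectable effect: {mde_std * sd_wtw:.2f} WTW units")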
IRB

Institutional Review Boards (IRBs)

IRB Name
CEU Ethical Research Committee
IRB Approval Date
2018-07-09
IRB Approval Number
2017-2018/8/EX

Post-Trial

Post Trial Information

Study Withdrawal


Intervention

Is the intervention completed?
No
Data Collection Complete
Data Publication

Data Publication

Is public data available?
No

Program Files

Program Files
Reports, Papers & Other Materials

Relevant Paper(s)

Reports & Other Materials