The Buy-In Effect: When Increasing Initial Effort Motivates Behavioral Follow-Through - Online Study

Last registered on March 18, 2025

Pre-Trial

Trial Information

General Information

Title
The Buy-In Effect: When Increasing Initial Effort Motivates Behavioral Follow-Through - Online Study
RCT ID
AEARCTR-0015492
Initial registration date
March 11, 2025


First published
March 18, 2025, 10:14 AM EDT


Locations

Region

Primary Investigator

Affiliation

Other Primary Investigator(s)

PI Affiliation
University of Konstanz
PI Affiliation
Harvard Business School

Additional Trial Information

Status
In development
Start date
2025-03-12
End date
2025-08-01
Secondary IDs
Prior work
This trial does not extend or rely on any prior RCTs.
Abstract
Evidence from a prior field experiment shows that participants who are randomized into a more effortful sign-up process for a carpool platform subsequently use the platform more. However, while we have suggested several psychological mechanisms that could underlie these effects, we do not yet have evidence about which mechanisms are at work.

The purpose of this online experimental study is: (1) to examine whether our field experiment results conceptually replicate in a more controlled, experimental setting, and (2) to build evidence on the psychological mechanisms underpinning these effects. We invite people to an online study to complete digitization tasks. After finishing these tasks, they are offered the chance to return the following day to complete the same tasks. Participants are randomized into two main treatment groups: a less effort group, in which participants can simply indicate that they want to sign up for tomorrow's task, and a more effort group, in which participants must complete an additional survey to sign up. Our primary hypothesis is that participants randomized to the more effortful sign-up process will be more likely to return the following day. On an exploratory basis, we will also test the conditions under which these effects emerge.

Registration Citation

Citation
Dykstra, Holly, Shibeal O' Flaherty and Ashley Whillans. 2025. "The Buy-In Effect: When Increasing Initial Effort Motivates Behavioral Follow-Through - Online Study." AEA RCT Registry. March 18. https://doi.org/10.1257/rct.15492-1.0
Experimental Details

Interventions

Intervention(s)
In Part 1, study participants will complete three digitization tasks. At the end of the digitization tasks, they are informed that they can sign up for the opportunity to return tomorrow to complete the same tasks (Part 2). Participants are randomized into two main treatments:

1. Less effort - these participants simply indicate yes or no whether they want to sign up for tomorrow.
2. More effort - these participants must fill out a 15-question survey if they want to sign up for tomorrow.

In addition to the main treatments, participants are cross-randomized into a set of other treatments designed to study the mechanism behind the effect:

1. Social: Participants are informed that if they choose to return, their bonus rate in Part 2 will depend on either how many other people return (Social - Quantitative) or how well other people do (Social - Qualitative).
2. Volunteer: Participants are informed that if they choose to return, their earnings in Part 2 will be donated to a charity of their choice. They will have a list of charity options in Part 2.
3. Reminder: Participants who sign up for Part 2 will receive an additional email reminder that Part 2 is now available.
Intervention Start Date
2025-03-12
Intervention End Date
2025-03-19

Primary Outcomes

Primary Outcomes (end points)
% of people who return to Part 2
Primary Outcomes (explanation)

Secondary Outcomes

Secondary Outcomes (end points)
- Effort rate in Part 2: how many letters they attempt to fill in.
- Accuracy rate in Part 2: how accurately they fill in letters.
- Amount of bonus payment they are willing to accept to forgo the return opportunity.
Secondary Outcomes (explanation)

Experimental Design

Experimental Design
We will recruit study participants on Amazon Mechanical Turk.

PART 1:
Part 1 of the study consists of four sections, described below.

(1) Task instructions and understanding questions:
Participants will be presented with the task instructions. The task asks participants to digitize sets of 35 “fuzzy Greek letters” (as used in Augenblick, Niederle, & Sprenger, 2015; Augenblick & Rabin, 2019). Three out of a total of fifty sets of fuzzy Greek letters will be randomly presented to each participant. To correctly digitize a fuzzy Greek letter, participants must click on the corresponding Greek letter in the survey. Participants are told that they do not need to complete each set, but that to earn a bonus payment, they must get 80% of the letters, i.e., at least 28 of the 35, correct. After the instructions, participants are required to answer three understanding questions before proceeding to the main task; they can repeat these until they answer all of them correctly.

(2) Main task:
Participants are then presented with three digitization tasks (clicking next after completing task 1 to be presented with task 2, then task 3).

(3) Sign-up for return opportunity:
Participants are asked if they would like to sign up for a return opportunity tomorrow, which they are told will involve the same digitization tasks and will be available from 9am-5pm Eastern Time tomorrow. They are also told that regardless of their choice, they have already earned their guaranteed payment and bonus payment for today. The rest of the instructions differ slightly depending on which treatment groups they are in:

MAIN TREATMENTS:
- Less Effort: "If you would like to receive access to tomorrow's return opportunity, please click yes. Otherwise, click no."
- More Effort: "If you would like to receive access to tomorrow's return opportunity, please click yes. This will take you to an additional 15-question survey on the next page. Otherwise, click no." On the survey page, participants can review the questions and change their mind (i.e., decide not to complete the questions, and not sign up for the return opportunity).

SOCIAL:
- No social: "Tomorrow, you can earn the same amount of money as today. Both the guaranteed payment and the bonus payment will be calculated the same."
- Social - quantitative: "You can earn the same guaranteed payment as today. However, the bonus will be calculated differently. Tomorrow, the more people who return, the higher your bonus will be."
- Social - qualitative: "You can earn the same guaranteed payment as today. However, the bonus will be calculated differently. Tomorrow, the more accurate the other people are, the higher your bonus will be."

VOLUNTEER:
- Volunteer: "Tomorrow, you can complete the same tasks as today, but as a volunteer opportunity. The money you earn will be donated to a charity of your choice."

NO REMINDER VS. REMINDER:
- No reminder: Participants do not receive a reminder email.
- Reminder: Participants receive a reminder email at 9am Eastern Time, when Part 2 goes live.

(4) Willingness to accept: After completing sign-up, participants are asked what bonus payment they would be willing to accept to forgo tomorrow's return opportunity, using a multiple price list. Participants choose between increasing values of an additional bonus payment now and keeping the return opportunity tomorrow, with one of their choices being randomly implemented.
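The random-implementation step of a multiple price list can be sketched as follows. This is a minimal illustration, not the study's actual survey logic; the bonus values shown are hypothetical, as the registration does not specify the amounts on the list.

```python
import random

def run_multiple_price_list(bonus_values, choices):
    """Simulate the random-implementation step of a multiple price list.

    bonus_values: increasing bonus amounts offered now to forgo the
                  return opportunity (hypothetical values).
    choices:      the participant's decision at each row; True means
                  accept the bonus now (forgo returning), False means
                  keep the return opportunity.
    """
    assert len(bonus_values) == len(choices)
    # One row of the list is drawn at random, and the participant's
    # choice on that row is the one that counts.
    row = random.randrange(len(bonus_values))
    if choices[row]:
        return {"implemented_row": row,
                "bonus_paid": bonus_values[row],
                "keeps_return_opportunity": False}
    return {"implemented_row": row,
            "bonus_paid": 0.0,
            "keeps_return_opportunity": True}

# Hypothetical participant who switches to accepting the bonus at $0.75.
values = [0.25, 0.50, 0.75, 1.00, 1.25]
decisions = [v >= 0.75 for v in values]
outcome = run_multiple_price_list(values, decisions)
```

Implementing only one randomly drawn row is the standard incentive-compatibility device for price lists: because any row may count, participants have an incentive to answer each row truthfully.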

They are then brought to the end of the survey.

PART 2:
As explained to study participants, Part 2 is available on the Amazon Mechanical Turk Platform from 9am to 5pm Eastern Time. It consists of two sections:

1. Instructions: Participants will be presented with the task instructions. The task instructions are the same as in Part 1.
2. Main task: Participants will then be presented with the digitization tasks. The tasks are the same as in Part 1.
Experimental Design Details
Not available
Randomization Method
Randomization conducted automatically in Qualtrics, with equal numbers of participants assigned to each group
Randomization Unit
Individual
Was the treatment clustered?
No

Experiment Characteristics

Sample size: planned number of clusters
0
Sample size: planned number of observations
500 participants in Part 1, or until one week has passed. The study will only be active on weekdays, so Part 1 will only be conducted Monday-Thursday, and Part 2 will only be conducted Tuesday-Friday.
Sample size (or number of clusters) by treatment arms
1. No social - paid - no reminder ~ 40
2. No social - volunteer - no reminder ~ 40
3. No social - paid - reminder ~ 40
4. No social - volunteer - reminder ~ 40
5. Social quant - paid - no reminder ~ 40
6. Social quant - volunteer - no reminder ~ 40
7. Social quant - paid - reminder ~ 40
8. Social quant - volunteer - reminder ~ 40
9. Social qual - paid - no reminder ~ 40
10. Social qual - volunteer - no reminder ~ 40
11. Social qual - paid - reminder ~ 40
12. Social qual - volunteer - reminder ~ 40
Minimum detectable effect size for main outcomes (accounting for sample design and clustering)
IRB

Institutional Review Boards (IRBs)

IRB Name
Harvard University-Area Committee on the Use of Human Subjects
IRB Approval Date
2025-03-03
IRB Approval Number
IRB00000109