Understanding Simultaneous Solicitations

Last registered on July 21, 2022

Pre-Trial

Trial Information

General Information

Title
Understanding Simultaneous Solicitations
RCT ID
AEARCTR-0009750
Initial registration date
July 14, 2022

The initial registration date is when the trial was registered; it corresponds to when the registration was submitted to the Registry to be reviewed for publication.

First published
July 21, 2022, 11:27 AM EDT

First published corresponds to when the trial was first made public on the Registry after being reviewed.

Locations

Region

Primary Investigator

Affiliation
DePaul University

Other Primary Investigator(s)

Additional Trial Information

Status
In development
Start date
2022-07-18
End date
2022-08-01
Secondary IDs
Prior work
This trial does not extend or rely on any prior RCTs.
Abstract
Online, collaborative giving days, such as Giving Tuesday, are becoming an important part of the philanthropic landscape in the United States. Giving Tuesday has grown quickly since its inception in 2012, with the 2019 drive raising over $511 million from online gifts alone (Haynes and Stiffman 2019). Collaborative giving days like Giving Tuesday are typically time-limited and often involve a list of approved charities. While the decision making format of Giving Tuesday is arguably akin to other types of federated giving programs, such as workplace giving campaigns, it is notably distinct from typical year-end solicitations, which are usually considered one at a time. Research has not explored how donor decision making regarding lists of charities differs from their decision making regarding a series of single-organization solicitations. This study gathers evidence from a survey experiment on simultaneous (list) vs. sequential (one-by-one) solicitation decision making. The goal of the survey experiment is to reveal how the amount donors give and the organizations they choose differ in this charitable giving context. In addition, the project will collect evidence on the decision making theories which may explain differences between simultaneous and sequential solicitations, including choice overload, differences in mental accounting by donors, and induced comparison and diversification of giving.
External Link(s)

Registration Citation

Citation
Vance-McMullen, Danielle. 2022. "Understanding Simultaneous Solicitations." AEA RCT Registry. July 21. https://doi.org/10.1257/rct.9750-1.0
Sponsors & Partners

There is information in this trial unavailable to the public.
Experimental Details

Interventions

Intervention(s)
In this online survey experiment, subjects will be randomized to receive either simultaneous or sequential solicitations.
Intervention Start Date
2022-07-18
Intervention End Date
2022-08-01

Primary Outcomes

Primary Outcomes (end points)
Total donations and organization-level donations (see analysis plan)
Primary Outcomes (explanation)
See analysis plan

Secondary Outcomes

Secondary Outcomes (end points)
Reported giving satisfaction/difficulty, time to completion, reported budgeting, donation recall, manipulation awareness, reported organization comparison, reported diversification, and gift similarity
Secondary Outcomes (explanation)
These outcomes are constructed using survey responses. Some of these constructs are measured using multiple survey items.

Experimental Design

Experimental Design
In the experiment, subjects will be randomized to receive either simultaneous or sequential solicitations. Subjects will be told that they may be randomly chosen to receive an “additional bonus payment” of $100 (1 in 200 will win). Subjects will be encouraged to commit to donating a portion of the payment to nonprofit organizations. Then subjects will receive a total of 12 solicitations for national human service organizations in the United States, each accompanied by a short description of the organization. In the sequential version, solicitation decisions will be made one at a time, while in the simultaneous version, all options will be presented on a single screen.
Experimental Design Details
In the experiment, subjects will be randomized to receive either simultaneous or sequential solicitations. Subjects will be told that they may be randomly chosen to receive an “additional bonus payment” of $100 (1 in 200 will win). Subjects will be encouraged to commit to donating a portion of the payment to nonprofit organizations. Then subjects will receive a total of 12 solicitations for national human service organizations in the United States, each accompanied by a short description of the organization. In the sequential version, solicitation decisions will be made one at a time, while in the simultaneous version, all options will be presented on a single screen. After making the donation decisions, subjects will be asked questions about organizational familiarity, impact, and their overall impression of the 12 organizations. They will also answer questions about their giving decision making process, the extent to which they normally set giving budgets, their feelings while making a gift, and other questions designed to reveal the decision making factors that may influence differences between the two solicitation situations.

The research methodology is based on Vance-McMullen (2017), of which this study is a replication and extension. In this extension, the experiment will also include a survey arm that manipulates respondents’ use of mental accounting. Respondents in this arm will receive an explicit running budget total to prime them to pay increased attention to charitable budgeting. I will also conduct a robustness check of the hypothesis that the placement of the impact, impression, and familiarity questions does not change donation decision making. In the main sequential experiment, these questions are asked between solicitations to better approximate real-world solicitation spacing; to understand whether the placement of these questions affects giving, a treatment arm will be introduced in which these questions are asked only after all solicitations are completed.
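As an illustration only (the registration states that randomization is handled within Qualtrics at the individual level), the sketch below shows one way the planned Wave 2 allocation across the four survey arms could be reproduced for simulation or balance checks. The arm labels, the seed, and the blocked-assignment approach are assumptions for the example, not a description of the actual survey logic.

```python
import random
from collections import Counter

# Hypothetical arm labels; planned Wave 2 allocations are taken from the
# registration (400 control, 400 main treatment, 400 mental accounting prime,
# 200 question placement robustness check).
ARMS = {
    "control": 400,
    "main_treatment": 400,
    "mental_accounting_prime": 400,
    "question_placement_check": 200,
}

def assign_arms(n_subjects, arms, seed=0):
    """Assign subjects to arms in the planned proportions.

    Builds a pool with one slot per planned observation, shuffles it, and
    hands out assignments in order of arrival (individual-level
    randomization, no clustering).
    """
    pool = [arm for arm, n in arms.items() for _ in range(n)]
    if n_subjects > len(pool):
        raise ValueError("More subjects than planned slots")
    rng = random.Random(seed)
    rng.shuffle(pool)
    return pool[:n_subjects]

if __name__ == "__main__":
    assignments = assign_arms(1400, ARMS, seed=42)
    print(Counter(assignments))  # counts match the planned allocation
```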
Randomization Method
Qualtrics
Randomization Unit
Individual
Was the treatment clustered?
No

Experiment Characteristics

Sample size: planned number of clusters
The treatment is not clustered
Sample size: planned number of observations
Wave 1: 700; Wave 2: 1,400. The analysis plan describes the conditions under which observations may be excluded due to low-quality responses.
Sample size (or number of clusters) by treatment arms
Wave 1 – 350 in control, 350 in main treatment arm
Wave 2 – 400 in control, 400 in main treatment arm, 400 in mental accounting prime arm, 200 in question placement robustness check arm

The analysis plan describes when the two waves will be combined in analyses.
Minimum detectable effect size for main outcomes (accounting for sample design and clustering)
The main Wave 2 sample of 800 (400 per arm) is powered to detect a mean donation difference of approximately $6.00 at the 5% significance level with 80% power. When combined with the Wave 1 sample, the overall sample of 1,500 (750 per arm) is powered to detect a mean donation difference of approximately $4.30 at the same significance level and power. In both cases, variance is calculated on capped donation data.
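As a rough illustration of the arithmetic behind these figures, the sketch below back-solves the minimum detectable effect with statsmodels' two-sample t-test power routine. The standard deviation of roughly $30 for capped total donations is an assumption chosen only because it reproduces the registered MDEs; the actual variance estimate is described in the (non-public) analysis plan.

```python
# Minimal MDE sketch, assuming an SD of about $30 for capped donations.
from statsmodels.stats.power import TTestIndPower

ASSUMED_SD = 30.0          # dollars, capped donation data (assumption)
ALPHA, POWER = 0.05, 0.80  # two-sided test, 80% power

analysis = TTestIndPower()
for label, n_per_arm in [("Wave 2 only (800 total)", 400),
                         ("Waves 1 + 2 (1,500 total)", 750)]:
    # solve_power returns the detectable effect in Cohen's d units
    d = analysis.solve_power(nobs1=n_per_arm, ratio=1.0,
                             alpha=ALPHA, power=POWER)
    print(f"{label}: MDE approx. ${d * ASSUMED_SD:.2f}")

# Prints roughly $5.95 and $4.34, in line with the registered values of
# approximately $6.00 and $4.30.
```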
IRB

Institutional Review Boards (IRBs)

IRB Name
DePaul IRB
IRB Approval Date
2021-07-30
IRB Approval Number
DV031620MPS
Analysis Plan

There is information in this trial unavailable to the public.

Post-Trial

Post Trial Information

Study Withdrawal

There is information in this trial unavailable to the public.

Intervention

Is the intervention completed?
No
Data Collection Complete
Data Publication

Data Publication

Is public data available?
No

Program Files

Program Files
Reports, Papers & Other Materials

Relevant Paper(s)

Reports & Other Materials