Defaults and Moral Decision-Making

Last registered on January 04, 2020

Pre-Trial

Trial Information

General Information

Title
Defaults and Moral Decision-Making
RCT ID
AEARCTR-0005190
Initial registration date
December 27, 2019


First published
January 04, 2020, 11:48 PM EST


Locations

Region

Primary Investigator

Affiliation
Harvard University

Other Primary Investigator(s)

PI Affiliation
Harvard University

Additional Trial Information

Status
Completed
Start date
2019-12-27
End date
2019-12-30
Secondary IDs
Abstract
A large body of empirical research has found that non-binding default options have a substantial effect on individual decision-making. In particular, there are large default effects for “moral” or “self-other” decisions such as organ donation. Do these observed self-other default effects obtain through a psychological mechanism distinct from those of self-self default effects, rendering self-other default effects particularly large? Or do they obtain through standard default effect mechanisms, such as psychological anchoring or switching costs? Existing research has not answered this question. We will fill this gap through a lab experiment that measures self-self and self-other default effects in otherwise comparable decisions. We will directly compare the magnitudes of these effects to provide novel evidence on the role of self-other tradeoffs in decisions with default options.
External Link(s)

Registration Citation

Citation
Hickman, Peter and Brandon Tan. 2020. "Defaults and Moral Decision-Making." AEA RCT Registry. January 04. https://doi.org/10.1257/rct.5190-1.0
Sponsors & Partners

Experimental Details

Interventions

Intervention(s)
We will measure the magnitude of default effects for two types of simple decisions. The first type of decision is a self-self decision: participants will choose between receiving a gift card for themselves or money for themselves. The second type of decision is a self-other decision: participants will choose between receiving money for themselves or sending money to charity. For each type of decision, we can measure the “default effect” as the difference in the share choosing money across (randomly determined) default conditions. We will compare the size of the self-self default effect with the size of the self-other default effect. This will show us whether defaults have particular importance in self-other decisions.

To ensure that decisions are identical in all dimensions except for being self-self versus self-other, we will first have each participant answer a calibration question: for what number X are you indifferent between receiving a $X gift card for yourself and donating $50 to charity? Then we will randomize each participant into a default condition. Participants next make two binary decisions: the self-self decision is between $5 for self and a gift card for self, while the self-charity decision is between $50 for charity and $5 cash for self. A model with symmetric switching costs in the self-self and self-other decisions predicts that all participants will make the same choice in the two decisions. Therefore, if we observe different choices, we can conclude there are different psychological switching costs in self-self and self-other decisions.

Our hypothesis here is that people are selfish but dislike feeling selfish. So when participants are defaulted into the unselfish, “moral” alternative, deviating to the “immoral” alternative requires an active decision, which imposes a greater cost on their conscience. The decision to be “immoral” becomes more salient, feels more deliberate, and seems more wrong. On the other hand, when defaulted into the “immoral” choice, defaults provide “moral wiggle room” to behave self-interestedly (Dana et al. 2007). People use the fact that they were defaulted into the immoral choice as an excuse to passively stay with that choice.
Intervention Start Date
2019-12-27
Intervention End Date
2019-12-30

Primary Outcomes

Primary Outcomes (end points)
Difference-in-differences. The first difference is the default effect: the percent of individuals who choose money when money is the default minus the percent of individuals who choose money when money is not the default. The second difference is taken across the self-self (i.e., money vs. gift card) decision and the self-other (i.e., money for self vs. money for charity) decision.
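As a sketch, the primary outcome can be computed as follows. The data here are simulated for illustration only (the variable names and base rates are our assumptions, not from the registration), with an artificial default effect built into both decision types:

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical data: one row per participant-decision.
# default_money = 1 if money was the pre-selected (default) option.
n = 1300
default_money = rng.integers(0, 2, n)
decision_type = rng.choice(["self_self", "self_other"], n)
# Simulated choices with an illustrative 10pp default effect.
chose_money = (rng.random(n) < 0.5 + 0.10 * default_money).astype(int)

def default_effect(mask):
    """Share choosing money when money is the default minus when it is not."""
    d1 = chose_money[mask & (default_money == 1)].mean()
    d0 = chose_money[mask & (default_money == 0)].mean()
    return d1 - d0

# Second difference: self-other default effect minus self-self default effect.
dd = default_effect(decision_type == "self_other") - default_effect(decision_type == "self_self")
print(f"difference-in-differences: {dd:+.3f}")
```

In the simulated data the true difference-in-differences is zero, so the printed estimate reflects sampling noise only.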
Primary Outcomes (explanation)

Secondary Outcomes

Secondary Outcomes (end points)
Secondary Outcomes (explanation)

Experimental Design

Experimental Design
Step 1: Explain experiment to participants. This step consists of informing participants of the broad goals of the study, obtaining their consent, and explaining the types of decisions they will make. We will tell participants we are studying decision-making about charitable giving, though we will not explain that we are interested in the effect of defaults, because this could bias participant behavior. We want participants to respond to the presence of a default option “naturally,” that is, similarly to how they would in non-experimental contexts. If we inform participants that we are studying defaults, they may determine they should ignore the default. When explaining decisions, we will tell them that they will make a series of binary decisions between bonus rewards. We will tell them (truthfully) that there is a 5% chance one of their decisions will be implemented, and which decision is implemented will be selected at random. We will tell them (truthfully) that it is in their best interest to choose the alternative they truly prefer. We will explain the use of the gift card (a GAP gift card), and the work done by the charity (GiveDirectly). Participants will thus be fully informed about the properties of the options in decisions.

Step 2: Calibration question. It is crucial in our experiment that differences in default effects cannot be explained by differences in the relative (active choice) valuation of the two alternatives in the self-self and self-other decisions. Therefore, we will first ask a calibration question via a multiple price list to find, for each participant, the gift card value at which the participant values $50 for charity and the gift card for self equally.
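A multiple price list elicits the indifference point from the row at which the participant switches from one option to the other. The sketch below assumes illustrative row amounts and a single switch point; the actual list used in the study may differ:

```python
# Each MPL row offers a gift card of increasing value against a fixed
# $50 donation to charity. Row amounts here are illustrative assumptions.
gift_card_values = [5, 10, 15, 20, 25, 30, 35, 40, 45, 50]

def indifference_value(choices):
    """choices[i] is 'gift_card' or 'charity' for row i.
    Returns the midpoint of the interval where the participant switches
    from charity to the gift card (assuming a single switch point)."""
    for i, c in enumerate(choices):
        if c == "gift_card":
            if i == 0:
                return gift_card_values[0]   # prefers even the smallest card
            return (gift_card_values[i - 1] + gift_card_values[i]) / 2
    return gift_card_values[-1]              # never switches from charity

# Example: a participant who switches at the $30 row is treated as
# indifferent at about $27.50.
choices = ["charity"] * 5 + ["gift_card"] * 5
print(indifference_value(choices))  # 27.5
```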

Step 3: Randomize into treatment cell. We will randomize the participant into one of the treatment conditions: 1) Default Cash - Self vs Other choice, 2) Default Charity - Self vs Other choice, 3) Default Cash - Self vs Self choice, 4) Default Gift Card - Self vs Self choice. This will determine only the first binary decision they will make post-calibration.
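A minimal sketch of this step, assuming independent per-participant randomization (the cell labels and seeding scheme are ours, for illustration):

```python
import random

# The four treatment cells described in Step 3 (labels are illustrative).
CELLS = [
    "default_cash_self_other",
    "default_charity_self_other",
    "default_cash_self_self",
    "default_giftcard_self_self",
]

def assign(participant_id, seed=2019):
    """Independent randomization per participant, so arm sizes need not be
    exactly equal (as the registration notes under sample size)."""
    rng = random.Random(f"{seed}-{participant_id}")
    return rng.choice(CELLS)

counts = {c: 0 for c in CELLS}
for pid in range(1300):
    counts[assign(pid)] += 1
print(counts)  # roughly 325 per arm, with some imbalance by chance
```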

Step 4: First binary decision. The participant will be told, for example: “We offer you a bonus of $5.00 in money for yourself. Alternatively, you can instead choose as your bonus a $50.00 donation to GiveDirectly. Below you can decide if you want the default bonus or the alternative.” The participant will then see the two alternatives next to radio buttons, followed by a “continue” button. The top radio button will be automatically selected and the participant will be free to select “continue.”

Step 5: Additional binary decisions. The participant will make a series of similar decisions in random order. Each decision will vary along the following dimensions: the default condition and the tradeoff condition. We will also include some “dummy decisions”—i.e., decisions we do not plan to analyze—to keep the participant from easily recalling previous decisions. We will randomize the order of decisions.

Step 6: Demographics questions. We will conclude with some straightforward demographic questions about gender, race, ethnicity, age, and education so that i) we know how well our sample corresponds to average Americans, ii) we can see whether effects vary across demographics, and iii) we can control for demographics in the analysis to add power, to the extent that demographics predict decisions.
Experimental Design Details
Randomization Method
Randomization done in office by a computer
Randomization Unit
Individual
Was the treatment clustered?
No

Experiment Characteristics

Sample size: planned number of clusters
1300
Sample size: planned number of observations
1300
Sample size (or number of clusters) by treatment arms
Approximately 325 in each arm. However, because we will randomize participants into the four treatments independently, arm sizes may be slightly unequal, for example 322-327-330-321. We will also drop participants who fail attention checks.
Minimum detectable effect size for main outcomes (accounting for sample design and clustering)
15 percentage points is the minimum difference-in-differences that we can detect with 80% power. That is, if one default effect is 15 percentage points larger than the other default effect in the population, we will reject the null hypothesis of equal default effects 80% of the time.
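A Monte Carlo sketch of this power claim, assuming roughly 325 participants per arm, a 50% base rate of choosing money, and a normal approximation for the test (the base rate and test are our assumptions, not from the registration):

```python
import numpy as np

rng = np.random.default_rng(1)

def simulate_power(dd=0.15, n_per_arm=325, base=0.50, sims=2000):
    """Monte Carlo power for the difference-in-differences of two default
    effects, using a two-sided z-test at the 5% level."""
    rejections = 0
    # Arm-level probabilities of choosing money: the self-self default
    # effect is 0 and the self-other default effect is dd, so the true
    # difference-in-differences equals dd.
    p = {"ss_def": base, "ss_nodef": base, "so_def": base + dd, "so_nodef": base}
    for _ in range(sims):
        shares = {k: rng.binomial(n_per_arm, v) / n_per_arm for k, v in p.items()}
        est = (shares["so_def"] - shares["so_nodef"]) - (shares["ss_def"] - shares["ss_nodef"])
        se = np.sqrt(sum(s * (1 - s) / n_per_arm for s in shares.values()))
        if abs(est / se) > 1.96:
            rejections += 1
    return rejections / sims

print(f"simulated power for a 15pp DD: {simulate_power():.2f}")
```

Under these assumptions the simulated power comes out near 80%, consistent with the stated minimum detectable effect.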
IRB

Institutional Review Boards (IRBs)

IRB Name
Harvard University
IRB Approval Date
2019-08-15
IRB Approval Number
IRB19-0950

Post-Trial

Post Trial Information

Study Withdrawal


Intervention

Is the intervention completed?
No
Data Collection Complete
Data Publication

Data Publication

Is public data available?
No

Program Files

Program Files
Reports, Papers & Other Materials

Relevant Paper(s)

Reports & Other Materials