Nudging organizations: evidence from three large-scale field experiments

Last registered on August 09, 2019

Pre-Trial

Trial Information

General Information

Title
Nudging organizations: evidence from three large-scale field experiments
RCT ID
AEARCTR-0004238
Initial registration date
June 06, 2019

First published
June 21, 2019, 11:49 AM EDT

Last updated
August 09, 2019, 1:59 PM EDT

Locations

Region

Primary Investigator

Affiliation
Johns Hopkins University

Other Primary Investigator(s)

PI Affiliation
Naval Postgraduate School
PI Affiliation
PI Affiliation
University of Delaware

Additional Trial Information

Status
Completed
Start date
2016-03-31
End date
2018-10-01
Secondary IDs
Abstract
Nudges and changes to choice architecture can affect individual behaviors, especially for infrequent, unfamiliar decisions. However, whether the same interventions are equally effective in changing the behaviors of experts leading an organization is an open question. We use three randomized field experiments to test the efficacy of five nudges and changes in choice architecture that have been shown to affect individual-level decisions in public good provision and charitable giving contexts. The nudges are 1) raising the salience of the ask, 2) emphasizing the public benefits from contributions, 3) providing social comparisons, 4) publicly recognizing contributors (creating observability), and 5) setting a collective goal. In the three field experiments, local organizations make decisions about whether to contribute financially to a national coordinating organization. The local organizations are conservation districts in the United States, which are independent, nonprofit organizations managed by elected local agricultural operators. In comparison to the estimated treatment effects of similar interventions among individuals, our estimated effects on organizations are dramatically smaller and of the opposite sign. Our results, in combination with prior laboratory studies on group-individual differences, suggest that nudges and changes in choice architecture may not affect the behaviors of experts leading organizations to the same degree as they affect the behaviors of individuals making infrequent and unfamiliar decisions.
External Link(s)

Registration Citation

Citation
Fan, James et al. 2019. "Nudging organizations: evidence from three large-scale field experiments." AEA RCT Registry. August 09. https://doi.org/10.1257/rct.4238-2.0
Former Citation
Fan, James et al. 2019. "Nudging organizations: evidence from three large-scale field experiments." AEA RCT Registry. August 09. https://www.socialscienceregistry.org/trials/4238/history/51522
Experimental Details

Interventions

Intervention(s)
We use three randomized field experiments to test the efficacy of five nudges and changes in choice architecture that have been shown to affect individual-level decisions in public good provision and charitable giving contexts. The nudges are 1) raising the salience of the ask, 2) emphasizing the public benefits from contributions, 3) providing social comparisons, 4) publicly recognizing contributors (creating observability), and 5) setting a collective goal.
Intervention Start Date
2016-03-31
Intervention End Date
2018-10-01

Primary Outcomes

Primary Outcomes (end points)
We seek to estimate the average treatment effect on the monetary value of a district’s annual contribution.
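Because the analysis plan is not public, the following is only an illustrative sketch of how an average treatment effect of this kind is commonly estimated: OLS of the contribution amount on a treatment indicator with randomization-block fixed effects and robust standard errors. All column and file names are hypothetical, not taken from the study.

```python
import pandas as pd
import statsmodels.formula.api as smf

# One row per district: `contribution` is the dollar value of the annual
# contribution, `treated` the 0/1 assignment, and `block` the randomization
# block (state x previous contributions). All names are hypothetical.
df = pd.read_csv("districts_fy17.csv")  # hypothetical file

# OLS with block fixed effects; heteroskedasticity-robust (HC1) errors.
result = smf.ols("contribution ~ treated + C(block)", data=df).fit(cov_type="HC1")
print(result.params["treated"], result.bse["treated"])
```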
Primary Outcomes (explanation)

Secondary Outcomes

Secondary Outcomes (end points)
Secondary Outcomes (explanation)

Experimental Design

Experimental Design
The first field experiment took place in fiscal year 2015-16 (FY16), the second in fiscal year 2016-17 (FY17), and the third in fiscal year 2017-18 (FY18). In the three experiments, districts were randomly assigned, blocking on state and previous contributions, to one of two groups, a control (status quo) group or a treatment group. The control group received the standard mailing from the National Association of Conservation Districts (NACD).

In FY16, districts that had not yet paid their dues by the end of the second quarter were treated in the third and fourth quarters. In FY17 and FY18, districts that had not yet paid by the end of the first quarter were treated in the second, third, and fourth quarters. Treatment assignment was later in FY16 because we started collaborating with NACD in March 2016. The FY17 and FY18 experiments did not start in the first quarter because: (1) NACD and the authors agreed to analyze the previous year’s results before initiating another experiment (complete data were not available until after the first quarter began); and (2) districts that contribute in the first quarter tend to contribute every year at or above the $775 level, and thus were not part of the population that NACD targeted for increased contributions.

In FY16, we made three changes to the mailing: 1) we made the “ask” clearer and more salient, 2) we highlighted the accomplishments of NACD, and 3) we provided social information via a social (peer) comparison. We tested the combined effect of these treatments, which are described in more detail below.

In FY17, we tested a new treatment: making contributors observable to other districts. Historically, only the national office knew which districts paid their dues and how much they paid. There was no public recognition of contributors or their contribution levels. The absence of public recognition was a deliberate decision. NACD believed that publicly recognizing contributors could stigmatize, or even indirectly ostracize, the non-contributors.

In FY18, we tested another modification in the treatment group: announcing a national goal for total contributions. Each quarter, progress towards the goal was displayed as a percentage, visualized as a thermometer.
Experimental Design Details
Randomization Method
In the three experiments, districts were randomly assigned using a computer, blocking on state and previous contributions, to one of two groups, a control (status quo) group or a treatment group. The control group received the standard mailing from NACD.
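As a rough illustration, blocked random assignment of this kind can be implemented as below. This is a minimal sketch, not the study's actual code; the column names (`state`, `prev_contrib_bin`) and the 50/50 split within each block are assumptions.

```python
import numpy as np
import pandas as pd

rng = np.random.default_rng(4238)  # arbitrary seed, for reproducibility

def block_assign(districts: pd.DataFrame) -> pd.DataFrame:
    """Assign half of each block to treatment, half to control.

    `districts` has one row per district with hypothetical columns
    `state` and `prev_contrib_bin` (a binned measure of previous
    contributions); blocks are their interaction.
    """
    out = districts.copy()
    out["treated"] = 0
    for _, idx in out.groupby(["state", "prev_contrib_bin"]).groups.items():
        shuffled = rng.permutation(np.asarray(idx))
        out.loc[shuffled[: len(shuffled) // 2], "treated"] = 1
    return out
```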
Randomization Unit
Individual districts are the unit of randomization.
Was the treatment clustered?
No

Experiment Characteristics

Sample size: planned number of clusters
1,231 districts in FY16, 1,732 districts in FY17, and 1,447 districts in FY18
Sample size: planned number of observations
1,231 districts in FY16, 1,732 districts in FY17, and 1,447 districts in FY18
Sample size (or number of clusters) by treatment arms
617 districts in treatment, 614 districts in control for FY16. 862 districts in treatment, 870 districts in control for FY17. 746 districts in treatment, 701 districts in control for FY18.
Minimum detectable effect size for main outcomes (accounting for sample design and clustering)
We conducted power analysis simulations, using historical data, to explore minimum detectable treatment effects in our design under a range of assumptions. With a Type I error rate of 5% and power of 80%, we can detect an effect of 0.10 standard deviations (SD) or larger in FY16, an effect of 0.06 SD or larger in FY17, and an effect of 0.075 SD or larger in FY18.
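A power simulation of this kind can be sketched as follows. This simplified version ignores the blocking in the actual design and assumes a 50/50 split and a two-sample t-test; the input `y` (historical district contributions) is a hypothetical name.

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(0)

def simulated_power(y: np.ndarray, effect_sd: float,
                    n_sims: int = 2000, alpha: float = 0.05) -> float:
    """Share of simulations in which a two-sided t-test rejects at `alpha`.

    Each simulation randomly splits districts 50/50 and adds a treatment
    effect of `effect_sd` outcome standard deviations to the simulated
    treatment group's historical contributions.
    """
    tau = effect_sd * y.std()
    rejections = 0
    for _ in range(n_sims):
        treated = rng.random(y.size) < 0.5
        y_sim = y + tau * treated
        _, p = stats.ttest_ind(y_sim[treated], y_sim[~treated])
        rejections += p < alpha
    return rejections / n_sims

# The MDE is the smallest effect_sd for which simulated_power(y, effect_sd)
# reaches 0.80.
```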
IRB

Institutional Review Boards (IRBs)

IRB Name
IRB Approval Date
IRB Approval Number
Analysis Plan

There is information in this trial unavailable to the public.

Post-Trial

Post Trial Information

Study Withdrawal

There is information in this trial unavailable to the public.

Intervention

Is the intervention completed?
No
Data Collection Complete
Data Publication

Data Publication

Is public data available?
No

Program Files

Program Files
Reports, Papers & Other Materials

Relevant Paper(s)

Reports & Other Materials