
The Welfare Effects of Behavioral Interventions: Theory and Evidence from Energy Conservation
Last registered on June 12, 2015

Pre-Trial

Trial Information
General Information
Title
The Welfare Effects of Behavioral Interventions: Theory and Evidence from Energy Conservation
RCT ID
AEARCTR-0000713
Initial registration date
June 12, 2015
Last updated
June 12, 2015 4:44 PM EDT
Location(s)
Primary Investigator
Affiliation
New York University
Other Primary Investigator(s)
PI Affiliation
The Wharton School, University of Pennsylvania
Additional Trial Information
Status
Ongoing
Start date
2015-01-01
End date
2015-07-31
Secondary IDs
Abstract
The success of interventions aiming to encourage pro-social behavior is often measured by how the interventions affect behavior rather than how they affect welfare. We implement a natural field experiment to measure the welfare effects of one especially policy-relevant intervention, home energy conservation reports. We measure consumer surplus by sending consumers introductory reports and using an incentive-compatible multiple price list survey to elicit willingness-to-pay to continue the program. The experimental design also allows us to estimate negative willingness-to-pay and address non-response bias. The results underscore that the welfare effects of non-price "nudge" interventions can be measured, and policy makers should strongly consider the welfare effects of such interventions, not just their effects on behavior.
External Link(s)
Registration Citation
Citation
Allcott, Hunt and Judd Kessler. 2015. "The Welfare Effects of Behavioral Interventions: Theory and Evidence from Energy Conservation." AEA RCT Registry. June 12. https://doi.org/10.1257/rct.713-2.0.
Former Citation
Allcott, Hunt and Judd Kessler. 2015. "The Welfare Effects of Behavioral Interventions: Theory and Evidence from Energy Conservation." AEA RCT Registry. June 12. https://www.socialscienceregistry.org/trials/713/history/4428.
Sponsors & Partners

Experimental Details
Interventions
Intervention(s)
Recent research has shown that social pressure, peer comparisons, and other non-price interventions can increase pro-social behaviors across a variety of domains. Such “behavioral interventions” have become more common in recent years with the popularity of the book Nudge (Thaler and Sunstein 2008) and the rising prominence of “nudge units” in the UK and US governments. Typically, these interventions are evaluated primarily on how they affect a given behavioral outcome, such as volunteering, charitable donations, job choice and effort, or environmental conservation. Policymakers use these impact evaluations to compare the cost effectiveness of different interventions.

This standard approach ignores a central policy question: what are the welfare impacts of pro-social behavioral interventions? DellaVigna, List, and Malmendier (2012) ask this question in the context of charitable giving, pointing out that donation could increase utility, for example by activating warm glow, or decrease utility, for example if giving occurs due to social pressure. Their welfare calculations show that a door-to-door fundraising drive lowered overall utility, even as it raised funds for charity. This result highlights that the impact of a pro-social intervention on the intended behavior is an incomplete, and potentially misleading, measure of whether an intervention should be scaled up.

Our paper asks a similar question in a very different context: what are the welfare effects of providing energy use social comparisons to encourage energy conservation? In recent years, electric and natural gas utilities and their regulators have become increasingly interested in “behavioral energy efficiency.” Because carbon taxes and similar externality pricing mechanisms can be politically infeasible, energy conservation programs including behavioral interventions have been implemented as a substitute. Some “behavioral” energy conservation interventions are also thought to address non-externality market failures, for example by informing consumers about their energy use and ways to conserve. Perhaps the most salient example of such an intervention is the “home energy report,” as popularized by several providers, including a company called Opower. Home energy reports containing social comparisons, conservation messaging, and energy use tips are now being sent to more than six million households across the United States, and their use continues to expand domestically and abroad.

There has been significant academic interest in home energy report interventions, beginning with studies by Nolan et al. (2008) and Schultz et al. (2007), and hundreds of follow-on studies of similar interventions. These studies clearly establish that such interventions can cause consumers to reduce energy use, and many interpret this finding as evidence that policymakers should further implement the programs. However, no study has asked whether these interventions increase welfare. Do people conserve because they are better-informed and inspired, or because they feel guilty? More broadly, do the social comparisons act like a welfare-increasing psychic subsidy on energy conservation or a welfare-reducing psychic tax on energy use? The home energy reports are empirically convenient for two reasons: first, they are a private good which can be sold to consumers, and second, it is standard to deliver them to consumers repeatedly over several years. These two features mean that it is possible and relevant to measure willingness to pay for continued reports, in a sample of experienced recipients.

Intervention Start Date
2015-03-30
Intervention End Date
2015-06-01
Primary Outcomes
Primary Outcomes (end points)
The outcome of interest is willingness-to-pay (WTP) for home energy reports.
Primary Outcomes (explanation)
Secondary Outcomes
Secondary Outcomes (end points)
Secondary Outcomes (explanation)
Experimental Design
Experimental Design
We implemented our study with a medium-sized utility company (“the Utility”) in the northeastern United States and a third-party contractor (the “Implementer”) that implements standard home energy reports. During the winter of 2015, the Implementer mailed four home energy reports on an opt-out basis to 10,000 randomly-selected Utility residential natural gas consumers. The Utility plans to continue these same customers on a home energy report program in winter 2016. In the same envelope as the final report of 2015, we included an incentive-compatible multiple price list (MPL) survey that allows us to measure each household’s willingness-to-pay (WTP) for the next winter of home energy reports and construct a demand curve for continued reports.

Aside from simply measuring the demand curve, our design also involved several useful features. First, our MPL questions were structured such that they allow recipients to reveal positive or negative WTP. This is important because a small number of consumers opt out of home energy reports, implying that they dislike them enough to bear the time costs of opting out. Second, to test the channels through which the reports might affect willingness-to-pay, we randomly assigned households to three different survey versions. One cued the reports’ social comparison feature, the second cued the environmental benefits of energy conservation, and the third was a control.
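To illustrate how an MPL spanning negative and positive prices brackets each household's WTP, here is a minimal sketch. The price list and choice pattern below are hypothetical illustrations, not the instrument actually mailed to households.

```python
# Illustrative sketch: inferring a WTP interval from MPL choices.
# A negative price means the respondent is PAID that amount to keep
# receiving the reports, so choosing to cancel even when paid reveals
# negative WTP.

def wtp_interval(prices, accepts):
    """Given ascending MPL prices and whether the respondent chose to
    continue the reports at each price, return the implied WTP bracket.

    A respondent with a single switch point accepts every price below
    their WTP and rejects every price above it."""
    accepted = [p for p, a in zip(prices, accepts) if a]
    rejected = [p for p, a in zip(prices, accepts) if not a]
    lower = max(accepted) if accepted else float("-inf")
    upper = min(rejected) if rejected else float("inf")
    return lower, upper

# Hypothetical price list spanning negative and positive values:
prices = [-5, -2, 0, 2, 5, 10]

# This respondent keeps the reports when paid and at zero cost,
# but cancels once continuing would cost $2 or more:
print(wtp_interval(prices, [True, True, True, False, False, False]))

# This respondent cancels unless paid at least $5, revealing
# negative WTP:
print(wtp_interval(prices, [True, False, False, False, False, False]))
```

Collecting these brackets across all respondents, sorted by their midpoints or bounds, traces out the demand curve for continued reports.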

Third, we carefully designed the survey to be able to address non-response. This is important because the intervention is delivered on an opt-out basis, meaning that we are interested in welfare effects across the entire 10,000-household population receiving the reports, but many recipients will not respond to our MPL survey. The direction and magnitude of non-response bias is an empirical question: non-responders might be more likely to have zero or negative WTP (this is very likely the case for people who do not open the home energy reports and thus never see the survey), but some might have positive WTP but a high cost of time for survey response. To address this, we carried out “intensive follow-up” (DiNardo, McCrary, and Sanbonmatsu 2006) by re-surveying a randomly-selected 2/3 of the original survey population. Comparing average WTP in the base and intensive follow-up groups suggests the direction of non-response bias, and the ratio of the WTP difference to the change in response rates suggests the magnitude. Thus, we can evaluate welfare effects within the sample of respondents and extrapolate (under functional form assumptions) to the entire treated population.
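The intensive follow-up arithmetic can be made concrete with a small numerical sketch. All numbers here are hypothetical placeholders, not results from the trial, and the extrapolation step rests on the stated functional form assumption.

```python
# Hedged sketch of the intensive follow-up logic (in the spirit of
# DiNardo, McCrary, and Sanbonmatsu 2006). Inputs are hypothetical.

def marginal_respondent_wtp(r_base, w_base, r_intense, w_intense):
    """Mean WTP of 'marginal' respondents: people who answer only
    under intensive follow-up. Derived from the ratio of the change
    in (response rate x mean WTP) to the change in response rates."""
    return (r_intense * w_intense - r_base * w_base) / (r_intense - r_base)

def extrapolated_population_wtp(r_base, w_base, r_intense, w_intense):
    """Population mean WTP, assuming the remaining non-responders
    resemble the marginal respondents (a functional form assumption)."""
    w_marginal = marginal_respondent_wtp(r_base, w_base, r_intense, w_intense)
    return r_intense * w_intense + (1 - r_intense) * w_marginal

# Hypothetical: the base survey gets a 20% response rate with mean WTP
# of $4; intensive follow-up raises the rate to 30% but lowers mean WTP
# to $3, so marginal respondents value the reports less than early
# responders -- suggesting non-responders have low WTP.
print(marginal_respondent_wtp(0.20, 4.0, 0.30, 3.0))
print(extrapolated_population_wtp(0.20, 4.0, 0.30, 3.0))
```

In this hypothetical, the mean WTP among marginal respondents ($1) is well below that of base respondents ($4), so extrapolating to the full population pulls the estimate down substantially.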

With the basic empirical results in hand, we can carry out a full welfare evaluation of continuing the home energy report program. The welfare effects of continuing the program are the sum of effects on consumer welfare (the area under the demand curve), minus the cost of mailing the reports, plus the reduction in uninternalized energy use externalities. We compare this to the more traditional cost effectiveness metric, which compares the program implementation cost to the social cost of energy. By ignoring consumer welfare effects, the traditional metric could generate very different results.
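The welfare accounting above reduces to simple arithmetic, sketched below with hypothetical placeholder inputs (not trial estimates) to show how the two metrics can diverge.

```python
# Back-of-the-envelope sketch of the welfare accounting described
# above. All inputs are hypothetical placeholders.

def welfare_per_household(mean_wtp, mailing_cost, energy_reduction_mmbtu,
                          external_cost_per_mmbtu):
    """Per-household welfare effect of continuing the program:
    consumer surplus (mean WTP, which may be negative), minus the
    cost of producing and mailing the reports, plus the value of
    avoided uninternalized externalities."""
    externality_gain = energy_reduction_mmbtu * external_cost_per_mmbtu
    return mean_wtp - mailing_cost + externality_gain

def traditional_cost_effectiveness(mailing_cost, energy_reduction_mmbtu,
                                   external_cost_per_mmbtu):
    """The traditional metric ignores consumer surplus entirely and
    nets program cost against the social value of energy saved."""
    return energy_reduction_mmbtu * external_cost_per_mmbtu - mailing_cost

# Hypothetical numbers: mean WTP of -$1 (consumers mildly dislike the
# reports), $5 mailing cost, 2 MMBtu of gas saved at a $3/MMBtu
# uninternalized external cost.
print(welfare_per_household(-1.0, 5.0, 2.0, 3.0))     # includes consumer surplus
print(traditional_cost_effectiveness(5.0, 2.0, 3.0))  # ignores it
```

In this hypothetical, the traditional metric deems the program worthwhile ($1 net per household) while the full welfare calculation, which counts the negative consumer surplus, finds no net gain, illustrating how ignoring consumer welfare can flip the conclusion.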

One potential criticism of our approach might be that it “takes revealed preference too seriously”: if one motivation for the program is that consumers are poorly informed about energy use, why should we assume that their valuations are well-informed? Conceptually, it is important to remember that WTP is assessed after recipients are experienced with the intervention, so they should be capable of valuing it. Our empirical data also allow us to provide additional evidence. One specific concern might be that consumers know the costs they incur in conserving energy, but they do not know the benefits because it is difficult to infer financial gains of particular energy conservation actions. (“How much did we save last month because we turned the lights off more?”) To test this, we elicit beliefs over the average energy cost savings induced by the intervention and compare that to the true empirical estimates. A second and more conceptually challenging concern is that consumers might tend to be optimistically biased about their energy use compared to neighbors, and willingness to pay is low because the intervention reveals the truth. As in Brunnermeier and Parker (2005) and Oster, Shoulson, and Dorsey (2013), there is then some question about whether welfare analysis should respect or ignore a reduction in consumer surplus from eliminating optimism bias. To calibrate robustness checks in the welfare analysis, we ask report recipients whether they were initially positively or negatively surprised by the social comparison information, and we can adjust welfare estimates to account for correlation between low WTP and initial overoptimism.
Experimental Design Details
Randomization Method
The Utility provided the PIs with a list of accounts, and the randomization was performed in an office by a computer.
Randomization Unit
The randomization unit is the household
Was the treatment clustered?
No
Experiment Characteristics
Sample size: planned number of clusters
The treatment will not be clustered.
Sample size: planned number of observations
20,000 households
Sample size (or number of clusters) by treatment arms
10,000 treated households, randomized into 3 groups.
Minimum detectable effect size for main outcomes (accounting for sample design and clustering)
With a 25% response rate, we can estimate the average and distribution of willingness-to-pay very precisely
IRB
INSTITUTIONAL REVIEW BOARDS (IRBs)
IRB Name
University Committee on Activities Involving Human Subjects - New York University
IRB Approval Date
2014-11-21
IRB Approval Number
IRB# 14-10392 - Exempt status
IRB Name
INSTITUTIONAL REVIEW BOARD - University of Pennsylvania
IRB Approval Date
2015-01-05
IRB Approval Number
#821688 - Relying on NYU's IRB status
Post-Trial
Post Trial Information
Study Withdrawal
Intervention
Is the intervention completed?
No
Is data collection complete?
Data Publication
Data Publication
Is public data available?
No
Program Files
Program Files
Reports and Papers
Preliminary Reports
Relevant Papers