Government Funding and Private Donations: Crowding-in Versus Crowding-out in the Context of a Big Data Field Experiment

Last registered on April 27, 2018

Pre-Trial

Trial Information

General Information

Title
Government Funding and Private Donations: Crowding-in Versus Crowding-out in the Context of a Big Data Field Experiment
RCT ID
AEARCTR-0002345
Initial registration date
July 21, 2017
Last updated
April 27, 2018, 1:57 PM EDT

Locations

Region

Primary Investigator

Affiliation
Georgetown University

Other Primary Investigator(s)

PI Affiliation
Rutgers University
PI Affiliation
Rutgers University
PI Affiliation
Rutgers University

Additional Trial Information

Status
Completed
Start date
2017-08-21
End date
2017-12-31
Secondary IDs
Abstract
The study explores whether government funding of nonprofit organizations crowds in or crowds out private donations using a large-scale, big data field experiment. We will buy advertisements on Facebook to solicit donations from Facebook users to real food banks in NYC that are either government funded, donation funded, or presented without funder information. Clusters of Facebook users will be randomly exposed to one of the experimental conditions: 1) government-funded, 2) donation-funded, and 3) no funding information. As the outcome measure, we will monitor people's intention to donate to the food banks via their actual click-through rates.
External Link(s)

Registration Citation

Citation
Jilke, Sebastian et al. 2018. "Government Funding and Private Donations: Crowding-in Versus Crowding-out in the Context of a Big Data Field Experiment." AEA RCT Registry. April 27. https://doi.org/10.1257/rct.2345-3.0
Former Citation
Jilke, Sebastian et al. 2018. "Government Funding and Private Donations: Crowding-in Versus Crowding-out in the Context of a Big Data Field Experiment." AEA RCT Registry. April 27. https://www.socialscienceregistry.org/trials/2345/history/28906
Experimental Details

Interventions

Intervention(s)
The intervention consists of advertisements on Facebook for food banks in NYC with randomly varied funding information (i.e., government-funded, donation-funded, or no such information). The advertisements will not solicit donations to specific food banks but instead encourage users to click on the "Rutgers Observatory for [government-funded/ donation-funded/ no-info] Food Banks", which directs them to a website off Facebook-owned property.
Intervention Start Date
2017-08-21
Intervention End Date
2017-08-27

Primary Outcomes

Primary Outcomes (end points)
Our primary outcome of interest is subjects' intention to donate, which we operationalize as the unique-outbound-link-click rate (i.e., unique outbound clicks divided by reach) per cluster.
Primary Outcomes (explanation)
Reach = The number of people who saw our ads at least once.
Unique outbound clicks = The number of people who performed an outbound click.
Outbound click = The number of clicks on links that take people off Facebook-owned property.
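The per-cluster outcome defined above can be sketched as a small computation; the function and field names are illustrative, not Facebook's reporting API:

```python
def click_rate(reach: int, unique_outbound_clicks: int) -> float:
    """Unique-outbound-link-click rate for one cluster:
    unique outbound clicks divided by reach (people who saw the ad)."""
    if reach == 0:
        return 0.0  # cluster never served; avoid division by zero
    return unique_outbound_clicks / reach

# Example: a cluster in which 500 people saw the ad and 12 clicked through
rate = click_rate(reach=500, unique_outbound_clicks=12)  # 12/500 = 0.024
```

Because the rate is computed per cluster, the cluster (not the individual user) is the unit of analysis, matching the cluster-level randomization described below.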

Secondary Outcomes

Secondary Outcomes (end points)
Secondary Outcomes (explanation)

Experimental Design

Experimental Design
We test crowding-in and crowding-out claims using a large-scale, big data field experiment. We will buy advertisements on Facebook to advertise donations to real food banks in New York City that are either government funded or donation funded. Respondents from the NYC area will be targeted to keep the experiment regionally focused while ensuring experimental realism. We will use Facebook's online advertising facilities and randomly allocate groups of Facebook users into three experimental conditions, thereby manipulating the following funding information: 1) government funded, 2) donation funded, 3) no such information. Randomization will be done at the level of clusters of users, stratified by age, gender, and zip code. A total of 600 clusters will be formed. Subjects within clusters will be repeatedly exposed to donation advertisements for 24 hours.

No individual-level data will be collected. Those who click the advertisements will be directed to a webpage on the School of Public Affairs and Administration (SPAA) at Rutgers University website, called "The SPAA Food Bank Observatory." On this webpage, visitors will be informed of the purpose of the study, provided with a list of either government-funded, primarily donation-funded, or all food banks in NYC (depending on the experimental condition; i.e., three versions of this webpage exist), and encouraged to donate money to them. The food banks listed on the websites are all real food banks operating in NYC, as collected through the National Center for Charitable Statistics (NCCS).
Experimental Design Details
Randomization Method
Randomization will be done by a computer (Stata 14.0); the 600 clusters will be blocked into 200 groups of 3 clusters each. Within each group, one cluster will be randomly assigned to the government-funding condition, one to the donation-funding condition, and one to the no-information condition.
Randomization Unit
The unit of randomization is clusters of Facebook users, stratified by age, gender, and zip code. 100 NYC zip codes were chosen at random. In addition, two genders (male and female) and three age groups (18-27; 28-38; 39-60) were used for stratification: 100 zip codes * 2 genders * 3 age groups = 600 clusters.
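The cluster construction and blocked assignment can be sketched as follows. This is a minimal illustration, not the registered Stata 14.0 code: the zip codes, seed, and block ordering are placeholders (the registry blocks by predicted cluster size; here blocks are formed from a shuffled list):

```python
import itertools
import random
from collections import Counter

random.seed(2345)  # illustrative seed only

ZIPS = [f"zip{i:03d}" for i in range(100)]        # placeholders for 100 NYC zip codes
GENDERS = ["male", "female"]
AGE_GROUPS = ["18-27", "28-38", "39-60"]
ARMS = ["government-funded", "donation-funded", "no-information"]

# 100 zip codes * 2 genders * 3 age groups = 600 strata-defined clusters
clusters = list(itertools.product(ZIPS, GENDERS, AGE_GROUPS))

# Block the 600 clusters into 200 groups of 3; within each block,
# assign exactly one cluster to each of the three arms.
random.shuffle(clusters)
assignment = {}
for i in range(0, len(clusters), 3):
    block = clusters[i:i + 3]
    arms = random.sample(ARMS, k=3)  # random permutation of the three arms
    for cluster, arm in zip(block, arms):
        assignment[cluster] = arm

counts = Counter(assignment.values())  # 200 clusters per arm by construction
```

Because each block contributes one cluster to every arm, the design guarantees balanced arm sizes regardless of how the blocks are formed.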
Was the treatment clustered?
Yes

Experiment Characteristics

Sample size: planned number of clusters
600 clusters of Facebook users.
Sample size: planned number of observations
Will be determined by Facebook auctions once the ads are fielded. Potential reach per cluster ranges between 1,000 and 110,000 users; we will pay $10.00 per cluster, which will potentially result in a reach of about 300-700 per cluster (i.e., a total reach between 180,000 and 420,000). To deal with potentially unequal cluster sizes, cluster randomization will be blocked by size (3 clusters per block; 1 donation-funded, 1 government-funded, 1 control).
Sample size (or number of clusters) by treatment arms
200 clusters per treatment arm (i.e., 200 clusters control, 200 clusters donation-funded, 200 clusters government-funded)
Minimum detectable effect size for main outcomes (accounting for sample design and clustering)
IRB

Institutional Review Boards (IRBs)

IRB Name
Office of Research and Regulatory Affairs, Arts and Sciences IRB
IRB Approval Date
2017-05-19
IRB Approval Number
17-618M

Post-Trial

Post Trial Information

Study Withdrawal


Intervention

Is the intervention completed?
Yes
Intervention Completion Date
August 27, 2017, 12:00 +00:00
Data Collection Complete
Yes
Data Collection Completion Date
August 27, 2017, 12:00 +00:00
Final Sample Size: Number of Clusters (Unit of Randomization)
600 clusters.
Was attrition correlated with treatment status?
Final Sample Size: Total Number of Observations
296,121 Facebook users nested in 600 clusters
Final Sample Size (or Number of Clusters) by Treatment Arms
200 Clusters per arm
Data Publication

Data Publication

Is public data available?
No

Program Files

Program Files
Reports, Papers & Other Materials

Relevant Paper(s)

Abstract
In this article we introduce and showcase how social media can be used to implement experiments in public administration research. To do so, we pre-registered a placebo-controlled field experiment and implemented it on the social media platform Facebook. The purpose of the experiment was to examine whether government funding to nonprofit organizations has an effect on charitable donations. Theories on the interaction between government funding and charitable donations stipulate that government funding of nonprofit organizations either decreases (crowding-out) or increases (crowding-in) private donations. To test these competing theoretical predictions, we used Facebook's advertisement facilities and implemented an online field experiment among 296,121 Facebook users nested in 600 clusters. Through cluster-randomization, groups of Facebook users were randomly assigned to different nonprofit donation solicitation ads, experimentally manipulating information cues about nonprofit funding. Contrary to theoretical predictions, we find that government funding does not seem to matter; providing information about government support to nonprofit organizations neither increases nor decreases people's propensity to donate. We discuss the implications of our empirical application, as well as the merits of using social media to conduct experiments in public administration more generally. Finally, we outline a research agenda for how social media can be used to implement public administration experiments.
Citation

Reports & Other Materials