An Economic Approach to Alleviate the Crises of Confidence in Science: With An Application to the Public Goods Game

Last registered on January 27, 2018

Pre-Trial

Trial Information

General Information

Title
An Economic Approach to Alleviate the Crises of Confidence in Science: With An Application to the Public Goods Game
RCT ID
AEARCTR-0002142
Initial registration date
April 02, 2017


First published
April 03, 2017, 11:52 AM EDT


Last updated
January 27, 2018, 9:45 PM EST


Locations


Primary Investigator

Affiliation
University of Chicago

Other Primary Investigator(s)

PI Affiliation
CNRS and University of Lyon
PI Affiliation
University of Chicago
PI Affiliation
Monash University
PI Affiliation
George Mason University

Additional Trial Information

Status
Ongoing
Start date
2016-04-13
End date
2018-11-30
Secondary IDs
Abstract
Novel empirical insights by their very nature tend to be unanticipated, and in some cases at odds with the current state of knowledge on the topic. The mechanics of statistical inference suggest that such initial findings, even when robust and statistically significant within the study, should not appreciably move priors about the phenomenon under investigation. Yet, a few well-conceived independent replications dramatically improve the reliability of novel findings. Nevertheless, the incentives to replicate are seldom in place in the sciences, especially within the social sciences. We propose a simple incentive-compatible mechanism to promote replications, and use experimental economics to highlight our approach. We begin by reporting results from an experiment in which we investigate how cooperation in allocation games is affected by the presence of Knightian uncertainty (ambiguity), a pervasive and yet unexplored characteristic of most public goods. Unexpectedly, we find that adding uncertainty enhances cooperation. This surprising result serves as a test case for our mechanism: instead of sending this paper to a peer-reviewed journal, we make it available online as a working paper, but we commit never to submit it to a journal for publication. We instead offer co-authorship of a second, yet to be written, paper to other scholars willing to independently replicate our study. That second paper will reference this working paper, will include all replications, and will be submitted to a peer-reviewed journal for publication. Our mechanism allows mutually beneficial gains from trade between the original investigators and other scholars, alleviates the publication bias problem that often surrounds novel experimental results, and accelerates the advancement of economic science by leveraging the mechanics of statistical inference.
External Link(s)

Registration Citation

Citation
Butera, Luigi et al. 2018. "An Economic Approach to Alleviate the Crises of Confidence in Science: With An Application to the Public Goods Game." AEA RCT Registry. January 27. https://doi.org/10.1257/rct.2142-3.0
Former Citation
Butera, Luigi et al. 2018. "An Economic Approach to Alleviate the Crises of Confidence in Science: With An Application to the Public Goods Game." AEA RCT Registry. January 27. https://www.socialscienceregistry.org/trials/2142/history/25321
Experimental Details

Interventions

Intervention(s)
This paper proposes and puts into practice a novel and simple mechanism that allows mutually beneficial gains from trade between original investigators and other researchers. In our mechanism, the original investigators, upon completing their initial study, write a working paper version of their study. They share the working paper online, but commit never to submit it to any journal for publication. The original investigators instead offer co-authorship of a second paper to other researchers who are willing to independently replicate the experimental protocol in their own research facilities. Once the team is established, but before beginning replications, the replication protocol is pre-registered at the AEA experimental registry and referenced in the first working paper. This guarantees that all replications, both successful and failed, are properly accounted for, eliminating any concerns about publication bias. The team of researchers composed of the original investigators and the other scholars will then write and coauthor a second paper, which will reference the original unpublished working paper, and submit it to an academic journal.

Our mechanism, described here for experiments, could easily be adapted to more general empirical exercises:
Step 1: Upon completion of data collection and analysis of a new experiment, the original investigators find a significant result. The original investigators commit to writing a working paper on the original study, but commit never to send it to a refereed journal. The working paper, as explained below, should be posted online on an academic repository (e.g. SSRN, a working paper series, etc.). After calculating the minimum number of replications necessary to substantiate their results given their design, the original researchers offer co-authorship of a second paper to other scholars who are willing to independently replicate the exact experimental protocol at their own institution using their own financial resources. It is a mutual understanding that the second paper is the only paper that will be sent to refereed journals upon completion of all replications, and that it will include an analysis of the original dataset and of all replication datasets. It is also a mutual understanding that the second paper will reference the first working paper, and that the latter will be coauthored only by the original investigators. The reference to the first working paper serves a dual purpose: it enables the original investigators to credibly signal the provenance of the original research idea, and, as explained below, it provides a binding commitment device for original investigators and other scholars alike that increases the credibility of the replication strategy.

Step 2: Once the original investigators reach an agreement with scholars willing to commit to replicating the original study, the original authors pre-register the replication protocol at the AEA RCT registry. The registered protocol includes details about the experimental protocol and materials (e.g. instructions), the data analysis and findings of the original study, the names and affiliations of the scholars who will replicate the study, and a tentative timeline for replications. All parties agree that only the replications listed in the AEA pre-registration will be included in the second paper.

Step 3: Once Step 2 is completed, the original investigators include in the first working paper a section describing the replication protocol, including the list of scholars who will replicate the study and the reference number for the AEA pre-registration. The original authors then post their first working paper online.

Step 4: Replications are conducted, data is collected, and the second working paper is written and submitted to a refereed journal by the original investigators and the other participating scholars.
Intervention Start Date
2017-04-13
Intervention End Date
2018-10-25

Primary Outcomes

Primary Outcomes (end points)
1. Contributions to public goods under certainty and Knightian uncertainty.
2. Bayesian updates of priors about the main outcomes, based on independent replications.
Primary Outcomes (explanation)

Secondary Outcomes

Secondary Outcomes (end points)
Secondary Outcomes (explanation)

Experimental Design

Experimental Design
We first describe the general procedure, and then provide details about our treatments.
In each session, 16 participants play 4 repeated public goods games in groups of 4 players. Each game consists of 8 rounds. In each round, participants choose how to allocate 10 tokens between a private account and a group account. After each game, groups are reformed using a stranger matching procedure. Participants are only identified by a randomly generated ID number. It is common knowledge since the beginning that only one of the 4 games will be randomly selected for payments, and that each player will be paid the sum of earnings made in the 8 rounds that constitute that game. In all treatments, the instructions specify which are possible values that the MPCR can take. The minimum possible value of the MPCR is 0.05 and the maximum is 1.25, with increments of 0.1. There are therefore 13 possible values that the MPCR can take. In all treatments, subjects are told that in 3 out of 4 games the true MPCR is constant within each game (e.g. the MPCR does not vary between rounds); instead, in one of the 4 games the true MPCR is randomly drawn every round (with replacement) from the 13 possible values. In all treatments, the 3 games with constant MPCR have always the following (predetermined) MPCR values: 0.25, 0.55, and 0.95. We have two sessions per treatment, and we (partially) vary the order in which games are played : in one session the order of games is: 0.25, 0.55, 0.95, VARIABLE; in the other the order is 0.95, 0.55, 0.25, VARIABLE. Before the beginning of each game, participants are informed about whether the game has a constant or variable MPCR. To control for risk and ambiguity preferences, at the end of the experiment all participants play an incentivized Eckel-Grossman risk task (Eckel and Grossman 2002), and an ambiguity task. This basic structure is common to all treatments.

Our experiment has four treatments plus a baseline, for a total of 160 subjects, equally balanced across conditions. The baseline treatment, Base-VCM, is a standard public goods game without Knightian uncertainty.
We have two private signal treatments in which participants observe only their own signal. In treatment Private Thin, each participant receives a private signal known to be drawn from the interval true MPCR +/- 0.1. For instance, if a participant receives a private signal of 0.55, he knows that the true MPCR can be 0.45, 0.55, or 0.65. He also knows that if the true MPCR is, say, 0.65, another player might have received a signal of 0.55, 0.65, or 0.75. In contrast, in treatment Private Thick, participants receive a private signal known to be drawn from the interval true MPCR +/- 0.2, so a private signal of 0.55 means that the true MPCR can be 0.35, 0.45, 0.55, 0.65, or 0.75.
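A minimal sketch of the signal structure is below. The registration states only the signal interval (true MPCR +/- 0.1 for Thin, +/- 0.2 for Thick, on a 0.1 grid); the uniform draw and the behavior at the edges of the MPCR grid are assumptions of this sketch.

```python
import random

def private_signal(true_mpcr, width, rng=random):
    """Draw a private signal from {true MPCR - width, ..., true MPCR + width}
    in steps of 0.1. Uniformity over the offsets is an assumption; the
    registration only specifies the interval the signal is drawn from."""
    n_offsets = int(round(2 * width / 0.1)) + 1
    offsets = [round(-width + 0.1 * k, 2) for k in range(n_offsets)]
    return round(true_mpcr + rng.choice(offsets), 2)
```

With `width=0.1` (Private Thin) and a true MPCR of 0.55, the signal is 0.45, 0.55, or 0.65; with `width=0.2` (Private Thick) it ranges from 0.35 to 0.75, matching the examples in the text.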

We have two public signal treatments, Public Thin and Public Thick, which use the same parameters as the private conditions but differ in that participants also observe the signals of the other group members.

The original experiment was conducted at the ExCEN experimental laboratory at Georgia State University and was programmed using oTree (Chen et al. 2016). Participants received a show-up fee of $10.

Three independent teams will replicate the study. The following scholars will independently replicate the study and will coauthor the second paper, which will be sent to a refereed journal for publication:

1. Phillip Grossman - Monash University.
2. Daniel Houser - George Mason University.
3. Marie Claire Villeval - CNRS and University of Lyon.
Experimental Design Details
Randomization Method
Computer randomization
Randomization Unit
Individual randomization
Was the treatment clustered?
No

Experiment Characteristics

Sample size: planned number of clusters
160 individual observations × 3 independent replications.
Sample size: planned number of observations
480 subjects
Sample size (or number of clusters) by treatment arms
There are 5 treatment arms (1 baseline and the 4 treatments described above). For each independent replication, each arm has 2 experimental sessions with 16 participants each, thus a total of 32 observations per arm.
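The sample-size figures stated above can be restated as a simple arithmetic check (no new numbers, just the counts from the registration):

```python
# Arithmetic check of the planned sample sizes stated in the registration.
treatments = 5              # 1 baseline + 4 treatments
sessions_per_treatment = 2  # two orderings of the games
participants_per_session = 16
replications = 3            # independent replication teams

per_replication = treatments * sessions_per_treatment * participants_per_session
total_subjects = per_replication * replications
```

This gives 160 subjects per replication and 480 subjects in total, matching the planned numbers of clusters and observations above.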
Minimum detectable effect size for main outcomes (accounting for sample design and clustering)
Power of approximately 80% at alpha = 0.05.
Supporting Documents and Materials

There is information in this trial unavailable to the public.
IRB

Institutional Review Boards (IRBs)

IRB Name
Georgia State University
IRB Approval Date
2016-01-05
IRB Approval Number
N/A
IRB Name
University of Chicago
IRB Approval Date
2015-05-07
IRB Approval Number
IRB15-0472

Post-Trial

Post Trial Information

Study Withdrawal

There is information in this trial unavailable to the public.

Intervention

Is the intervention completed?
No
Data Collection Complete
Data Publication

Data Publication

Is public data available?
No

Program Files

Program Files
Reports, Papers & Other Materials

Relevant Paper(s)

Reports & Other Materials