What do Editors Maximize? A Survey on Paper Quality
Last registered on October 10, 2016

Pre-Trial

Trial Information
General Information
Title
What do Editors Maximize? A Survey on Paper Quality
RCT ID
AEARCTR-0001669
Initial registration date
October 10, 2016
Last updated
October 10, 2016 7:19 PM EDT
Location(s)
Region
Primary Investigator
Affiliation
UC Berkeley
Other Primary Investigator(s)
PI Affiliation
UC Berkeley
Additional Trial Information
Status
In development
Start date
2016-09-29
End date
2017-03-15
Secondary IDs
Abstract
Publications in top scientific journals play a critical role in the careers of scientists. Yet remarkably little is known about how editors choose which submissions to publish. We provide evidence on this decision-making process using anonymized data on all submissions over eight years to four leading economics journals: the Journal of the European Economic Association, the Quarterly Journal of Economics, the Review of Economic Studies, and the Review of Economics and Statistics. The data set contains information on the characteristics of the papers (and their authors), referee recommendations, and the editorial decision. The manuscripts are matched to Google Scholar citations. We compare the findings in the four journals to the predictions of a simple descriptive model in which editors aim to maximize the citations of published articles. To gather additional insights, we conduct a survey of faculty and PhD students in economics, asking them to compare pairs of papers in their field of expertise. This survey is the focus of this pre-registration. The pairs of papers evaluated in the survey are selected so that both papers are in the same field and both were published in a top journal in the same year between 1999 and 2012. We ask respondents to compare the papers on four features: novelty, exposition, rigor, and importance of contribution. We also provide respondents with the actual Google Scholar citations for each paper and ask what they believe the appropriate citations would be, given their judgment. This additional evaluation allows us to address alternative interpretations of the findings on editorial choices.
External Link(s)
Registration Citation
Citation
Card, David and Stefano DellaVigna. 2016. "What do Editors Maximize? A Survey on Paper Quality." AEA RCT Registry. October 10. https://doi.org/10.1257/rct.1669-1.0.
Former Citation
Card, David and Stefano DellaVigna. 2016. "What do Editors Maximize? A Survey on Paper Quality." AEA RCT Registry. October 10. http://www.socialscienceregistry.org/trials/1669/history/11151.
Experimental Details
Interventions
Intervention(s)
There is no intervention. This pre-registration is for a survey that will collect assessments of the quality of pairs of published papers.
Intervention Start Date
2016-09-29
Intervention End Date
2016-11-11
Primary Outcomes
Primary Outcomes (end points)
The outcomes are the collected evaluations of paper quality, as specified in the pre-analysis plan.
Primary Outcomes (explanation)
Secondary Outcomes
Secondary Outcomes (end points)
Secondary Outcomes (explanation)
Experimental Design
Experimental Design
This pre-registration is for a survey that will collect assessments of the quality of pairs of published papers. We will use this information to complement the analysis of editorial choices in a data set covering four high-impact economics journals, as described in detail in the pre-analysis plan.
Experimental Design Details
Randomization Method
This is a survey, in which randomization does not play a key role. Nonetheless, we randomize the order of some questions and which paper in each pair is presented first.
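A minimal sketch of this within-pair randomization, written in Python under the assumption that the survey instrument is assembled programmatically; the names (build_survey, QUESTIONS, the example paper titles) are illustrative and do not come from the registration:

    import random

    # The four comparison features named in the abstract.
    QUESTIONS = ["novelty", "exposition", "rigor", "importance of contribution"]

    def build_survey(paper_pairs, seed=None):
        """Randomize within-pair presentation order and question order."""
        rng = random.Random(seed)
        survey = []
        for paper_a, paper_b in paper_pairs:
            # Randomize which paper in the pair is shown first ...
            pair = [paper_a, paper_b]
            rng.shuffle(pair)
            # ... and the order in which the comparison questions appear.
            questions = QUESTIONS[:]
            rng.shuffle(questions)
            survey.append({"papers": pair, "questions": questions})
        return survey

    # Example with two hypothetical pairs; a fixed seed makes the draw reproducible.
    print(build_survey([("Paper A", "Paper B"), ("Paper C", "Paper D")], seed=2016))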
Randomization Unit
The paper-pair level.
Was the treatment clustered?
No
Experiment Characteristics
Sample size: planned number of clusters
There will be 60 pairs of papers.
Sample size: planned number of observations
We intend to collect evaluations from 50-100 respondents, depending on the response rate to the survey.
Sample size (or number of clusters) by treatment arms
There is only one arm.
Minimum detectable effect size for main outcomes (accounting for sample design and clustering)
IRB
Institutional Review Boards (IRBs)
IRB Name
Office for the Protection of Human Subjects, UC Berkeley
IRB Approval Date
2016-09-22
IRB Approval Number
2016-08-9029
Analysis Plan

There are documents in this trial that are unavailable to the public; access may be requested through the registry.
Post-Trial
Post Trial Information
Study Withdrawal
Intervention
Is the intervention completed?
No
Is data collection complete?
Data Publication
Is public data available?
No
Program Files
Reports, Papers & Other Materials
Relevant Paper(s)
Reports & Other Materials