Incentivizing peer reviewers at a scientific journal in medicine

Last registered on September 20, 2023

Pre-Trial

Trial Information

General Information

Title
Incentivizing peer reviewers at a scientific journal in medicine
RCT ID
AEARCTR-0012011
Initial registration date
September 13, 2023

First published
September 20, 2023, 10:26 AM EDT

Locations

Not available to the public.

Primary Investigator

Affiliation
Queen's University

Other Primary Investigator(s)

PI Affiliation
Queen's University

Additional Trial Information

Status
In development
Start date
2023-09-19
End date
2024-08-01
Secondary IDs
Prior work
This trial does not extend or rely on any prior RCTs.
Abstract
In early 2020, COVID-19 exploded around the globe, overwhelming health systems and shutting down economies. Doctors tried to treat the disease, which at the time was not well understood, while policy makers worked to contain its spread and mitigate its impact. At the same time, researchers around the world turned their attention to the virus, and science moved forward at a rapid rate, producing a quickly evolving information landscape. Hundreds of thousands of papers have been published since, with many more appearing as unpublished preprints.

While the rapid rate of research was needed to inform the pandemic response, it far outstripped the capacity of the traditional peer review process. Though necessary for quality control and dissemination, peer review is traditionally slow and meticulous, relying on ad hoc reviewers, who are often stretched thin by their own research, to carefully evaluate the work of others before the findings are made public.

As a result, some outlets reduced the thoroughness of their reviews, and many researchers started releasing their research publicly without first waiting for successful peer review. This increased concerns about the quality and reliability of some of the research findings that policymakers and the public were exposed to, potentially generating confusion, distorting policy, and decreasing some people’s trust in the scientific process.

Our research asks whether the traditional methods through which research is peer reviewed and published make sense in a time of crisis. We have partnered with a medical journal that was active in pandemic publishing, having seen a near doubling of manuscript submissions at the height of the crisis. Peer review at this journal is done on a voluntary basis, with expert reviewers providing reports without compensation at the invitation of handling editors. Using a randomized design, we will compare this control condition to one in which reviewers are invited with the promise of a monetary incentive for completing a review. We will primarily compare the rate of reviews submitted per invitation sent out, and will secondarily look at turnaround time, report quality, and additional response metrics.

Our results will provide much-needed empirical evidence to inform the ongoing debate around incentivizing peer review, and to guide policy decisions regarding the acceleration of scientific output in times of crisis.
External Link(s)

Registration Citation

Citation
Cotton, Christopher and David Maslove. 2023. "Incentivizing peer reviewers at a scientific journal in medicine." AEA RCT Registry. September 20. https://doi.org/10.1257/rct.12011-1.0
Sponsors & Partners

Not available to the public.
Experimental Details

Interventions

Intervention(s)
Editors invite reviewers to review submitted manuscripts using the journal’s Editorial Manager web-based software. When a reviewer is selected, the software automatically sends a standardized letter to the prospective reviewer indicating the title of the manuscript, a manuscript abstract, and the expectations of the review process. The intervention being tested is a “nudge” consisting of a change to the reviewer invitation letter, specifying that an honorarium of US$250 will be sent to the reviewer by cheque upon completion of their review. The control condition will be the standard reviewer invitation letter.
Intervention Start Date
2023-09-19
Intervention End Date
2024-02-26

Primary Outcomes

Primary Outcomes (end points)
The primary outcome will be the reviewer invitation rate of conversion (ROC).
Primary Outcomes (explanation)
We define the ROC as the number of reviews submitted divided by the number of reviewer invitation letters sent. This is a key performance metric for tracking reviewer efficiency, and it is captured automatically by the journal's Editorial Manager system. We consider other measures of interest as secondary outcomes. The analysis will include only invitations for initial manuscript submissions; invitations to re-review manuscripts that have been revised and resubmitted will not be included.
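
As a concrete illustration, the metric could be computed from an export of invitation records along the following lines. This is a minimal sketch in Python; the record fields shown are hypothetical stand-ins for whatever the Editorial Manager export actually provides.

```python
# Sketch of the ROC computation. The dictionary keys ("is_revision",
# "submitted") are hypothetical; Editorial Manager's actual export
# format is not specified in the registration.

def rate_of_conversion(invitations):
    """ROC = reviews submitted / reviewer invitation letters sent."""
    # Per the protocol, only invitations for initial submissions count;
    # invitations to re-review revised manuscripts are excluded.
    initial = [inv for inv in invitations if not inv["is_revision"]]
    if not initial:
        return float("nan")
    return sum(inv["submitted"] for inv in initial) / len(initial)

invitations = [
    {"is_revision": False, "submitted": True},
    {"is_revision": False, "submitted": False},
    {"is_revision": True, "submitted": True},  # excluded from the ROC
]
print(rate_of_conversion(invitations))  # 0.5
```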

Secondary Outcomes

Secondary Outcomes (end points)
- ROC for on-time reviews only
- percentage of reviewer invitations that are accepted
- time to invitation acceptance
- time to review submission
- review quality (as adjudicated by the handling editors, based on a standardized 100-point scale).
Secondary Outcomes (explanation)
All measures are available through the Editorial Manager system.

Experimental Design

Experimental Design
We will conduct a randomized experiment to test the effect of the above-described nudge strategy on the rate of conversion of reviewer invitations to submitted reviews. Each week of the study will be designated either an incentive week or a control week, and all reviewer invitations initiated during a given week will share the same treatment status.
Experimental Design Details
Not available
Randomization Method
Randomization of the schedule of incentives will be done by coin flip to determine the initial condition (incentive or control). We randomize study weeks in order to estimate the causal effect of the incentive on the reviewer response rate, as measured by the rate-of-conversion metric described above. We believe this to be a stronger study design than a before-and-after comparison, since reviewer response may vary with the time of year.
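
The registration specifies only the initial coin flip. Assuming the remaining weeks then alternate between the two conditions, which is one natural reading of the schedule described above, the assignment could be generated as in the following sketch (not the investigators' actual procedure):

```python
import random

def week_schedule(n_weeks, seed=None):
    """Assign each study week to the 'incentive' or 'control' condition.

    Assumes weeks alternate between conditions after a coin flip fixes
    the starting condition; the registration itself specifies only the
    initial coin flip.
    """
    rng = random.Random(seed)
    first = rng.choice(["incentive", "control"])  # the coin flip
    other = "control" if first == "incentive" else "incentive"
    return [first if week % 2 == 0 else other for week in range(n_weeks)]

print(week_schedule(8, seed=1))
# e.g. ['control', 'incentive', 'control', 'incentive', ...]
```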
Randomization Unit
The unit of randomization is the week in which a reviewer invitation is sent out.
Was the treatment clustered?
Yes
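
Because treatment is assigned at the week level, any invitation-level analysis should account for within-week correlation. The registration does not state the estimator; the sketch below shows one standard option, a linear probability model with standard errors clustered on the week, using statsmodels and hypothetical column names.

```python
import pandas as pd
import statsmodels.formula.api as smf

# Hypothetical invitation-level data: one row per reviewer invitation,
# recording its week, treatment status, and whether a review was
# ultimately submitted.
df = pd.DataFrame({
    "week":      [1, 1, 2, 2, 3, 3, 4, 4],
    "incentive": [1, 1, 0, 0, 1, 1, 0, 0],
    "submitted": [1, 0, 1, 0, 1, 1, 0, 1],
})

# Regress submission on treatment; cluster the standard errors on the
# randomization unit (the week) to reflect the clustered assignment.
fit = smf.ols("submitted ~ incentive", data=df).fit(
    cov_type="cluster", cov_kwds={"groups": df["week"]}
)
print(fit.params["incentive"])  # estimated change in ROC
```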

Experiment Characteristics

Sample size: planned number of clusters
The study will run for up to 18 weeks, depending on the response rate in the incentive arm. Based on recent submission trends, we anticipate that approximately 175 manuscripts will be sent out for review over an 18-week period.
Sample size: planned number of observations
We anticipate between 600 and 1080 observations (reviewer invitations sent out), depending on the incentive response rate, which will dictate the study duration.
Sample size (or number of clusters) by treatment arms
Based on recent data from the journal’s Editorial Manager system (January–June 2023), an average of 60 reviewer invitations are sent out each week. These same data show a baseline ROC of 54%. If the incentive payments result in a conversion rate of 90%, then we can afford to run five weeks of treatment with 60 invitations sent per week. The experiment would be sufficiently powered to detect an 11.2-point change in ROC, and an 11.4-point change in invite-to-“on-time report” ROC, with an alpha of 0.05 and power of 0.80. For the primary ROC outcome, the analysis is overpowered relative to what is needed to detect the assumed 36-point change (from an estimated 54% in the control group to 90% in the treatment group). Under a more conservative estimate of incentive effectiveness, if the incentive payments result in a conversion rate of 56%, then we can afford to run eight weeks of treatment with 60 invitations sent per week. The experiment would then be sufficiently powered to detect an 8.9-point change in ROC, and a 9.0-point change in invite-to-“on-time report” ROC, with an alpha of 0.05 and power of 0.80.
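
These figures can be roughly reproduced with a standard two-proportion power calculation. The sketch below uses statsmodels and Cohen's h as the effect size; it assumes equal numbers of invitations per arm and ignores clustering by week, so it will only approximate the protocol's numbers.

```python
import math

from statsmodels.stats.power import NormalIndPower
from statsmodels.stats.proportion import proportion_effectsize

def mde(p0, n_per_arm, alpha=0.05, power=0.80):
    """Smallest detectable increase over a baseline proportion p0."""
    # Standardized effect size (Cohen's h) detectable at this sample size.
    h = NormalIndPower().solve_power(
        nobs1=n_per_arm, alpha=alpha, power=power,
        ratio=1.0, alternative="two-sided",
    )
    # Invert Cohen's h to recover the implied treatment-arm proportion.
    p1 = math.sin((2 * math.asin(math.sqrt(p0)) + h) / 2) ** 2
    return p1 - p0

# 5 treatment weeks x 60 invitations/week = 300 invitations per arm.
print(f"MDE with 300 per arm: {mde(0.54, 300):.3f}")  # roughly 0.11
# 8 treatment weeks x 60 invitations/week = 480 invitations per arm.
print(f"MDE with 480 per arm: {mde(0.54, 480):.3f}")  # roughly 0.09
# Cohen's h for the assumed 54% -> 90% jump, for comparison.
print(f"h for 54% -> 90%: {proportion_effectsize(0.90, 0.54):.2f}")
```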
Minimum detectable effect size for main outcomes (accounting for sample design and clustering)
See the attached study protocol for details of the power calculation. The minimum detectable effect size ranges from an 8.9-percentage-point change in ROC to an 11.2-percentage-point change, depending on the study duration.
Supporting Documents and Materials

Not available to the public.
IRB

Institutional Review Boards (IRBs)

IRB Name
General Research Ethics Board (GREB), Queen's University
IRB Approval Date
2022-10-13
IRB Approval Number
GEC0-020-22