Incentivizing peer reviewers at a scientific journal in medicine

Last registered on September 20, 2023

Pre-Trial

Trial Information

General Information

Title
Incentivizing peer reviewers at a scientific journal in medicine
RCT ID
AEARCTR-0012011
Initial registration date
September 13, 2023

Initial registration date is when the trial was registered.

It corresponds to when the registration was submitted to the Registry to be reviewed for publication.

First published
September 20, 2023, 10:26 AM EDT

First published corresponds to when the trial was first made public on the Registry after being reviewed.

Locations

Region

Primary Investigator

Affiliation
Queen's University

Other Primary Investigator(s)

PI Affiliation
Queen's University

Additional Trial Information

Status
In development
Start date
2023-09-19
End date
2024-08-01
Secondary IDs
Prior work
This trial does not extend or rely on any prior RCTs.
Abstract
In early 2020, COVID-19 exploded around the globe, overwhelming health systems and shutting down economies. Doctors tried to treat the disease, which at the time was not well understood, while policy makers worked to contain its spread and mitigate its impact. At the same time, researchers around the world turned their attention to the virus, resulting in a rapidly evolving information landscape. Hundreds of thousands of papers have been published since, with many more appearing as unpublished preprints.

While the rapid rate of research was needed to inform the pandemic response, it far outstripped the capacity of the traditional peer review process. Though necessary for quality control and dissemination, peer review is traditionally slow and meticulous, relying on ad hoc reviewers, who are often stretched thin by their own research, to carefully evaluate the work of others before the findings are made public.

As a result, some outlets reduced the thoroughness of their reviews and many researchers started releasing their research publicly without first waiting for successful peer review. This increased concerns about the quality and reliability of some of the research findings policymakers and the public were exposed to, potentially generating confusion, distorting policy, and decreasing some people’s trust in the scientific process.

Our research asks whether the traditional methods through which research is peer reviewed and published make sense in a time of crisis. We have partnered with a medical journal that was active in pandemic publishing, having seen a near doubling of manuscript submissions at the height of the crisis. Peer review at this journal is done on a voluntary basis, with expert reviewers providing reports without compensation, at the invitation of handling editors. Using a randomized design, we will compare this control condition to one in which reviewers are invited with the promise of a monetary incentive for completing a review. We will primarily compare the rate of reviews submitted per invitation sent out, and will secondarily look at turnaround time, report quality, and additional response metrics.

Our results will provide much-needed empirical evidence to inform the ongoing debate around incentivizing peer review, and to inform policy decisions regarding the acceleration of scientific output in times of crisis.
External Link(s)

Registration Citation

Citation
Cotton, Christopher and David Maslove. 2023. "Incentivizing peer reviewers at a scientific journal in medicine." AEA RCT Registry. September 20. https://doi.org/10.1257/rct.12011-1.0
Sponsors & Partners

There is information in this trial unavailable to the public.
Experimental Details

Interventions

Intervention(s)
Editors invite reviewers to review submitted manuscripts using the journal’s Editorial Manager web-based software. When a reviewer is selected, the software automatically sends a standardized letter to the prospective reviewer indicating the title of the manuscript, a manuscript abstract, and the expectations of the review process. The intervention being tested is a “nudge” that will consist of a change in the reviewer invitation letter specifying that an honorarium of USD$250 will be sent to the reviewer by cheque for completion of their review. The control condition will be the standard reviewer invitation letter.
Intervention Start Date
2023-09-19
Intervention End Date
2024-02-26

Primary Outcomes

Primary Outcomes (end points)
The primary outcome will be the reviewer invitation rate of conversion (ROC).
Primary Outcomes (explanation)
We define the ROC as the number of reviews submitted divided by the number of reviewer invitation letters sent. This is a key performance metric used to track reviewer efficiency, and it is tracked automatically through the journal's Editorial Manager system. We consider other measures of interest as secondary outcomes. The analysis will only include invitations for initial manuscript submissions; invitations to re-review manuscripts that have been revised and resubmitted will not be included.

Secondary Outcomes

Secondary Outcomes (end points)
- ROC for on-time reviews only
- percentage of reviewer invitations that are accepted
- time to invitation acceptance
- time to review submission
- review quality (as adjudicated by the handling editors, based on a standardized 100-point scale)
Secondary Outcomes (explanation)
All measures are available through the Editorial Manager system.

Experimental Design

Experimental Design
We will conduct a randomized experiment to test the effect of the above-described nudge strategy on the rate of conversion of reviewer invitations to submitted reviews. Each week of the study will be designated either an incentive week or control week, and all reviewer invitations initiated during a given week will be in the same treatment status.
Experimental Design Details
We will conduct a randomized experiment to test the effect of the above-described incentive strategy on the ROC. The experiment will last up to 18 weeks beginning on Sept 19, 2023. Each week will be designated either an incentive week or control week, and all reviewer invitations initiated during a given week will be in the same treatment status. Randomization by weeks is necessary because the journal is unable to randomize at the individual or submission level, and a week-on/week-off approach ensures that the treatment and control weeks are uniformly spread across the experimental period. Review invitation letters will be switched on Tuesday of each week in order to avoid switches on weekends or holidays. The starting condition (incentive or control) will be chosen at random prior to study initiation.
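The week-level assignment described above can be sketched as follows (a minimal illustration, not the journal's actual software; the function name is hypothetical):

```python
import random

def week_schedule(n_weeks, seed=None):
    """Week-on/week-off assignment: a single coin flip chooses the
    starting condition, then treatment status alternates every week
    (switched on Tuesdays in the actual study to avoid weekends and
    holidays)."""
    rng = random.Random(seed)
    conditions = ["incentive", "control"]
    rng.shuffle(conditions)  # the coin flip for the starting condition
    start, other = conditions
    return [start if w % 2 == 0 else other for w in range(n_weeks)]

# Example: an 18-week schedule whose first week is chosen at random
print(week_schedule(18, seed=42))
```

Because the coin flip only decides the starting condition, treatment and control weeks are guaranteed to alternate and remain uniformly spread across the experimental period, as described above.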

During control weeks, the standard reviewer invitation letter will be sent out automatically by the Editorial Manager software anytime a handling editor invites a reviewer to review a manuscript. During intervention weeks, the incentive letter will automatically be sent. Reviewers who are invited using the incentive letter will be mailed a cheque in the amount of USD$250 by the journal once their review is submitted. Data regarding review acceptance, time to acceptance, time to review submission, and review quality will be collected automatically by the Editorial Manager software. Review quality will be adjudicated by the handling editor using a standardized 100-point scale.

For the primary analysis, all control weeks will be pooled, and all intervention weeks will be pooled, thereby creating two groups of reviewer invitations. Group assignment will be determined solely by the letter that was used to invite the reviewer, irrespective of when the review is submitted, or when other reviewers for the same manuscript are invited.

While reviewers often review multiple versions of a manuscript, including the initial submission as well as any subsequent revisions, only first reviews will be eligible for the stipend. Follow-up reviews of manuscripts that have been revised and resubmitted will not be eligible. This ensures that a reviewer is paid at most once per manuscript, and only for the first review. The control condition will be the standard reviewer invitation letter, slightly modified to inform the prospective reviewer that anonymized data from the Editorial Manager system may be used for research.

Number of rounds: The budget can accommodate payments of USD$250 for up to 270 reviewers. The number of treatment weeks that can be accommodated by the budget will depend on the number of manuscripts submitted to the journal, the corresponding number of reviewer invitations sent, and the share of invited reviewers who accept the invitation and submit a report (which is likely impacted by the incentive payment and therefore not fully predictable).

We anticipate running the experiment between 10 and 16 weeks, depending on the treatment effectiveness and submission numbers, with the possibility of increasing the duration to 18 weeks if the submission numbers are substantially lower than expected during this period. We will assess the budget after each treatment week and will not introduce additional treatment weeks once the remaining budget may be insufficient to pay the reviewer incentives for another week of treatment.

Because we anticipate a low reviewer response rate over the holidays, the period between December 12, 2023 and January 1, 2024 will be a blackout period during which the standard (control) reviewer letter will be used, but no data will be collected.

All information regarding the date that invitations are sent, the reviewer response, the time of report submission, and other key performance metrics is captured directly in Editorial Manager.
Randomization Method
Randomization of the schedule of incentives will be done by coin flip to determine the initial condition (incentive or control). We are using randomization of the study weeks in order to estimate the causal effect of the incentive on the reviewer response rate, as measured by the rate of conversion metric described above. We believe this to be a stronger design than a before-and-after comparison, since reviewer response may vary with the time of year.
Randomization Unit
Unit of randomization is the week in which a reviewer invitation is sent out.
Was the treatment clustered?
Yes

Experiment Characteristics

Sample size: planned number of clusters
The study will run for up to 18 weeks, depending on the response rate in the incentive arm. Based on recent submission trends, we anticipate that approximately 175 manuscripts will be sent out for review over an 18-week period.
Sample size: planned number of observations
We anticipate between 600 and 1080 observations (reviewer invitations sent out), depending on the incentive response rate, which will dictate the study duration.
Sample size (or number of clusters) by treatment arms
Based on recent data from the journal’s Editorial Manager system (January–June 2023), an average of 60 reviewer invitations are sent out each week. These same data show a baseline ROC of 54%. If the incentive payments result in a conversion rate of 90%, then we can afford to run five weeks of treatment with 60 invitations sent per week. The experiment would be sufficiently powered to detect an 11.2 point change in ROC, and an 11.4 point change in invite-to-“on-time report” ROC, with an alpha of 0.05 and power of 0.80. For the primary ROC outcome, the analysis is overpowered relative to what is needed to detect the anticipated 36 point change (from an estimated 54% in the control group to 90% in the treatment group). Under a more conservative estimate of incentive effectiveness, if the incentive payments result in a conversion rate of 56%, then we can afford to run eight weeks of treatment with 60 invitations sent per week. The experiment would then be sufficiently powered to detect an 8.9 point change in ROC, and a 9.0 point change in invite-to-“on-time report” ROC, with an alpha of 0.05 and power of 0.80.
Minimum detectable effect size for main outcomes (accounting for sample design and clustering)
See the attached study protocol for details of the power calculation. The minimum detectable effect size ranges from an 8.9 percentage point change in ROC to an 11.2 percentage point change, depending on the number of treatment weeks.
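The registered MDEs can be roughly reproduced with a standard two-sample proportion approximation. The sketch below assumes equal arms, evaluates the variance at the baseline rate, and does not adjust for week-level clustering (the attached protocol is authoritative); the function name is illustrative:

```python
import math
from statistics import NormalDist

def mde_two_proportion(p0, n_per_arm, alpha=0.05, power=0.80):
    """Approximate minimum detectable effect (in proportion points)
    for a two-sample test of proportions, using the normal
    approximation with variance evaluated at the baseline rate p0.
    Does not account for clustering of invitations within weeks."""
    z_alpha = NormalDist().inv_cdf(1 - alpha / 2)  # two-sided test
    z_beta = NormalDist().inv_cdf(power)
    return (z_alpha + z_beta) * math.sqrt(2 * p0 * (1 - p0) / n_per_arm)

# 5 treatment weeks x ~60 invitations/week -> ~300 invitations per arm
print(round(mde_two_proportion(0.54, 300), 3))  # ≈ 0.114
# 8 treatment weeks -> ~480 invitations per arm
print(round(mde_two_proportion(0.54, 480), 3))  # ≈ 0.090
```

With a baseline ROC of 54%, this yields MDEs of roughly 11 points at 300 invitations per arm and 9 points at 480, consistent with the figures registered above.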
Supporting Documents and Materials

There is information in this trial unavailable to the public.
IRB

Institutional Review Boards (IRBs)

IRB Name
General Research Ethics Board (GREB), Queen's University
IRB Approval Date
2022-10-13
IRB Approval Number
GEC0-020-22

Post-Trial

Post Trial Information

Study Withdrawal

There is information in this trial unavailable to the public.

Intervention

Is the intervention completed?
No
Data Collection Complete
Data Publication

Is public data available?
No

Program Files

Reports, Papers & Other Materials

Relevant Paper(s)

Reports & Other Materials