Abstract
In early 2020, COVID-19 exploded around the globe, overwhelming health systems and shutting down economies. Doctors tried to treat a disease that, at the time, was not well understood, while policymakers worked to contain its spread and mitigate its impact. At the same time, researchers around the world turned their attention to the virus, and science moved forward at a rapid rate, producing a fast-evolving information landscape. Hundreds of thousands of papers have been published since, with many more appearing as unpublished preprints.
While this rapid rate of research was needed to inform the pandemic response, it far outstripped the capacity of the traditional peer review process. Though necessary for quality control and dissemination, peer review is traditionally slow and meticulous, relying on ad hoc reviewers, often stretched thin by their own research, to carefully evaluate the work of others before findings are made public.
As a result, some outlets reduced the thoroughness of their reviews, and many researchers released their work publicly without first waiting for successful peer review. This heightened concerns about the quality and reliability of the research findings that policymakers and the public were exposed to, potentially generating confusion, distorting policy, and eroding some people's trust in the scientific process.
Our research asks whether the traditional methods through which research is peer reviewed and published make sense in a time of crisis. We have partnered with a medical journal that was active in pandemic publishing, having seen a near doubling of manuscript submissions at the height of the crisis. Peer review at this journal is done on a voluntary basis, with expert reviewers providing reports without compensation, at the invitation of handling editors. Using a randomized design, we will compare this control condition to one in which reviewers are invited with the promise of a monetary incentive for completing a review. Our primary outcome is the rate of reviews submitted per invitation sent; secondary outcomes include turnaround time, report quality, and additional response metrics.
Our results will provide much-needed empirical evidence to inform the ongoing debate around incentivizing peer review, and to guide policy decisions regarding the acceleration of scientific output in times of crisis.