Bias in Peer Review

Last registered on February 17, 2022


Trial Information

General Information

Bias in Peer Review
Initial registration date
February 15, 2022

The initial registration date is when the trial was registered: it corresponds to when the registration was submitted to the Registry to be reviewed for publication.

First published
February 17, 2022, 5:26 PM EST

First published corresponds to when the trial was first made public on the Registry after being reviewed.


There is information in this trial unavailable to the public.

Primary Investigator


Other Primary Investigator(s)

PI Affiliation
PI Affiliation

Additional Trial Information

In development
Start date
End date
Secondary IDs
Prior work
This trial does not extend or rely on any prior RCTs.
In many scientific contexts, peer review can be either single-blind or double-blind: in single-blind review, research work (e.g., manuscripts, proposals) is reviewed alongside information on the author(s), whereas in double-blind review, information on the author(s) is withheld. We will report results from a randomized experiment conducted in collaboration with a grantmaking body, in the context of the grantmaker reviewing proposals in one field of science. By comparing scores across single-blind and double-blind reviews for groups of different characteristics, we can test for potential biases in the review process.
External Link(s)

Registration Citation

Levine, S, C Stein and H Williams. 2022. "Bias in Peer Review." AEA RCT Registry. February 17.
Sponsors & Partners

There is information in this trial unavailable to the public.
Experimental Details


Research proposals received by the grantmaking body will be randomly assigned to reviewers (while avoiding conflicts of interest, as defined by the grantmaking body). Then, each assigned reviewer–application pair will be further randomly assigned to one of:

a. Single-blind review, in which the reviewer sees both the project description and the applicant's information
b. Double-blind review, in which the reviewer sees only the project description but not the applicant's information
c. Project-blind review, in which the reviewer sees only the applicant's information but not the project description
Intervention Start Date
Intervention End Date

Primary Outcomes

Primary Outcomes (end points)
Our key outcome variable of interest is the score assigned to an application by a reviewer.
Primary Outcomes (explanation)

Secondary Outcomes

Secondary Outcomes (end points)
Secondary Outcomes (explanation)

Experimental Design

Experimental Design
The experiment empirically tests for bias in the grantmaking body's review process by comparing scores across single-blind, double-blind, and project-blind reviews.
Experimental Design Details
Not available
Randomization Method
We wrote a program in the statistical language R, and a representative of the grantmaking body executed it on their computers. The program took three inputs:
1. The list of reviewers
2. The list of applications
3. Any reviewer-application pairs that represented a conflict of interest.
As described above, there are three types of reviews: single-blind, double-blind, and project-blind. Starting with single-blind reviews, the program randomly assigns applications to reviewers. If a reviewer draws an application that represents a conflict of interest, or an application they have previously been assigned, that application is returned to the pool and a new one is drawn. The same procedure is then repeated for the double-blind and project-blind reviews.
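The rejection-sampling logic described above can be sketched as follows. Note that the actual program was written in R and executed by the grantmaking body, so this Python version is only an illustration; the function name, the parameters (`reviews_per_app`, `apps_per_reviewer`), and the retry cap are all assumptions, not details from the registration.

```python
import random

def assign_reviews(reviewers, applications, conflicts,
                   reviews_per_app, apps_per_reviewer, seed=0):
    """Illustrative sketch of the assignment procedure for one review type.

    conflicts: set of (reviewer, application) pairs to avoid (hypothetical
    representation of input 3). Assumes the counts balance, i.e.
    len(reviewers) * apps_per_reviewer == len(applications) * reviews_per_app.
    """
    rng = random.Random(seed)
    # Each application enters the pool once per review it should receive.
    pool = [a for a in applications for _ in range(reviews_per_app)]
    rng.shuffle(pool)
    assignments = {r: [] for r in reviewers}
    for r in reviewers:
        tries = 0
        while len(assignments[r]) < apps_per_reviewer:
            a = pool.pop()
            if (r, a) in conflicts or a in assignments[r]:
                # Conflict or repeat: return the draw to the pool and redraw.
                pool.append(a)
                rng.shuffle(pool)
                tries += 1
                if tries > 1000:  # assumed guard against an unassignable tail
                    raise RuntimeError("no valid assignment; retry with a new seed")
            else:
                assignments[r].append(a)
    return assignments
```

Running this once per review type (single-blind, double-blind, project-blind) with the appropriate per-type counts reproduces the three-stage procedure described above.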
Randomization Unit
Individual application in a given year
Was the treatment clustered?

Experiment Characteristics

Sample size: planned number of clusters
The sample size will be the number of applications received by the grantmaking body.
Sample size: planned number of observations
The sample size will be the number of applications received by the grantmaking body.
Sample size (or number of clusters) by treatment arms
The sample size will be determined by the number of applicants to the program over the life of this experiment. For instance, in 2022, the grantmaking body received exactly 100 applications. This implies 400 single-blind reviews (4*100), 200 double-blind reviews (2*100), and 200 project-blind reviews (2*100).
Minimum detectable effect size for main outcomes (accounting for sample design and clustering)
Based on prior studies and the anticipated number of applicants to the program we study, we expected to have enough statistical precision for this analysis: specifically, a recent study (Tomkins et al., 2017) assigned 500 manuscripts to 4 reviewers each, 2 of whom knew the authors’ identity and 2 did not (n=1953 reviews). We anticipated a roughly similar number of applications in this program.

Institutional Review Boards (IRBs)

IRB Name
Stanford University
IRB Approval Date
IRB Approval Number