Affiliation bias and conference inclusion: Experimental evidence from early career researchers

Last registered on October 10, 2023

Pre-Trial

Trial Information

General Information

Title
Affiliation bias and conference inclusion: Experimental evidence from early career researchers
RCT ID
AEARCTR-0012229
Initial registration date
October 03, 2023

Initial registration date is when the trial was registered. It corresponds to when the registration was submitted to the Registry to be reviewed for publication.

First published
October 04, 2023, 5:07 PM EDT

First published corresponds to when the trial was first made public on the Registry after being reviewed.

Last updated
October 10, 2023, 9:04 AM EDT

Last updated is the most recent time when changes to the trial's registration were published.

Locations

Region

Primary Investigator

Affiliation
University of Turin and Collegio Carlo Alberto

Other Primary Investigator(s)

PI Affiliation
University of Turin, Collegio Carlo Alberto and University of Amsterdam
PI Affiliation
University of Turin and Collegio Carlo Alberto
PI Affiliation
University of Turin, Collegio Carlo Alberto and Paris 1 Panthéon-Sorbonne

Additional Trial Information

Status
Ongoing
Start date
2023-10-03
End date
2023-12-31
Secondary IDs
Prior work
This trial does not extend or rely on any prior RCTs.
Abstract
We study the extent to which author affiliation can bias reviewers grading papers submitted to international conferences. We exploit a PhD Workshop in Economics organized by local PhD candidates. Affiliation is often used as a signal to form beliefs about research quality (Blank 1991). We test whether this signal can be misinterpreted and exacerbate inequalities across researchers from differently ranked institutions. We will compare blind and non-blind assessments for each paper submitted to the conference.
External Link(s)

Registration Citation

Citation
Carreras, Enrique et al. 2023. "Affiliation bias and conference inclusion: Experimental evidence from early career researchers." AEA RCT Registry. October 10. https://doi.org/10.1257/rct.12229-1.3
Experimental Details

Interventions

Intervention(s)
We study the extent to which author affiliation can bias reviewers grading papers submitted to international conferences. Affiliation is often treated as a signal of quality in research outputs. However, limited experience with research, with the review process, or with the field can lead to overreliance on affiliation. This can exacerbate inequalities between researchers from highly ranked institutions and those affiliated with less prestigious establishments.

We randomly allocate reviewers to either blind or non-blind paper grading. Within each treatment arm, we randomly allocate papers to reviewers. We remove the title, authorship and acknowledgements from all papers. Papers assigned for blind grading also have the authors’ affiliations removed, while papers assigned for non-blind grading keep the submitter’s affiliation visible; for co-authored papers, only the submitter’s affiliation(s) are retained. Affiliation(s) are placed in a footnote on the title page. We will not disclose to reviewers at any time that an experiment is taking place. Reviewers will be instructed not to discuss the reviews or their grades with each other, to avoid coordination on grading, and they will not have information on their peers’ grading. Papers are distributed to reviewers through a dedicated shared folder that is accessible only to them. We rename blind and non-blind papers following separate naming conventions to increase coordination costs for reviewers, as sketched below. In addition, both blind and non-blind papers are made available as non-searchable PDF documents to increase the cost of searching for the missing information online.
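
As an illustration only, a minimal sketch of how two separate naming conventions could be generated; the specific file-name formats below are assumptions for exposition, not the conventions actually used.

```python
# Illustrative sketch: give blind and non-blind copies of the same paper
# file names drawn from two unrelated conventions, so the two versions
# cannot be matched by name alone. The formats are assumptions.
import secrets

def blind_filename() -> str:
    # random hexadecimal code, unrelated to submission order
    return f"B-{secrets.token_hex(4)}.pdf"

def nonblind_filename(index: int) -> str:
    # simple running number in a different format
    return f"paper_{index + 1:03d}.pdf"

submissions = ["submission_01.pdf", "submission_02.pdf"]   # hypothetical inputs
blind_copies = {s: blind_filename() for s in submissions}
nonblind_copies = {s: nonblind_filename(i) for i, s in enumerate(submissions)}
```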

All papers are evaluated by two reviewers, one assigned to blind grading and one assigned to non-blind grading. Reviewers assess each paper along three grading criteria: Grade A is based on the research question developed in the paper, Grade B refers to writing style, and Grade C assesses the research design. All grades range from 1 to 10. Reviewers are also asked for a recommendation on acceptance. For each paper, an optional open-ended box allows reviewers to share any comments they wish to provide. The same data collection tool includes questions on beliefs about the quality of the reviewer’s own work relative to the submitted papers and on willingness to meet the author of the paper.
After collecting grades, we will ask reviewers a number of questions to assess any threats to validity. We will ask reviewers about the perceived purpose of this grading scheme. Next, we will ask reviewers to rank universities into separate tiers based on their perceived quality. Finally, we will collect information on the perceived importance of title, author names, affiliation and acknowledgements in evaluating papers.

Reviewers will receive a grading package up to 3 days after the submission deadline, to allow time for paper allocation and anonymization. The grading package consists of grading instructions and either blind or non-blind paper versions, depending on the reviewer’s randomized assignment. Reviewers are instructed not to share or discuss the grading package they receive. Reviewers have up to 14 days to grade about 10 papers each. Grades are sent back to the conference organizers via a dedicated Excel tool. After obtaining grading data and perceived purpose from all reviewers, we will collect information on beliefs about relative quality, university rankings and the perceived importance of different signals through a dedicated survey.
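
A minimal sketch, under assumed column names, of how the returned Excel grading tools might be pooled into one dataset before analysis; the actual template and its layout may differ.

```python
# Sketch: pool completed Excel grading tools into one tidy table.
# The folder name and the column names (paper_id, grade_A, grade_B,
# grade_C, recommendation) are assumptions, not the actual template.
from pathlib import Path
import pandas as pd

frames = []
for path in Path("returned_grades").glob("*.xlsx"):    # one workbook per reviewer
    df = pd.read_excel(path)
    df["reviewer_id"] = path.stem                      # recover the reviewer from the file name
    frames.append(df)

grades = pd.concat(frames, ignore_index=True)

# basic range checks on the 1-10 scales before any analysis
for col in ["grade_A", "grade_B", "grade_C"]:
    assert grades[col].between(1, 10).all(), f"{col} outside the 1-10 range"
```
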
Intervention Start Date
2023-10-07
Intervention End Date
2023-10-31

Primary Outcomes

Primary Outcomes (end points)
- Grade A
- Grade B
- Grade C
- Recommendation on acceptance
Primary Outcomes (explanation)
- Grade A is assigned to a paper based on the originality of the research question and the contribution to the literature.
- Grade B is assigned to a paper based on writing clarity and readability.
- Grade C is assigned to a paper based on soundness of the research design (empirics or theory).
All grades range from 1 to 10.
- Recommendation for acceptance. Reviewers choose one of the following options: (A) Definitely accept: very good paper. (B) Probably accept: good paper. (C) Might accept: OK paper. (D) Don’t think this paper can be accepted.

Secondary Outcomes

Secondary Outcomes (end points)
- Beliefs about quality of own work compared to submitted papers
- Willingness to meet the speaker
- Ranking of institutions according to tiers
- Comments from open ended field
Secondary Outcomes (explanation)
- Beliefs about quality of own work compared to submitted papers. We want to elicit whether reviewers perceive their own work to be of the same, higher, or lower quality than the submitted paper. We believe this might be influenced by affiliation bias.
- Willingness to meet the speaker. Reviewers will be asked whether they are interested in meeting the speaker. We will use this question to explore networking as a potential mechanism.
- Ranking of institutions according to tiers. We will ask reviewers to assign a quartile to each establishment out of the list of applicants’ institutions. We will use this information as a subjective measure for affiliation quality. We suspect it might differ from more established rankings for the population of early career PhD researchers. This measure of perceived quality might also be more easily swayed by high quality applicants originating from less recognized institutions.
- Comments from open ended field. We will explore any qualitative feedback that reviewers might want to share. We will analyse any difference in the language used to refer to papers from the different universities.

Experimental Design

Experimental Design
We compare reviews of the same paper by blind and non-blind reviewers. We will use the whole body of submissions to a PhD Conference in Economics organized in Italy. Submissions are open to PhD students from any field of Economics and from all universities. Local PhD candidates will review the submissions. Reviewers are first stratified by research area (micro theory, applied micro, or macro) and then randomly allocated to either blind or non-blind grading. Papers are then randomly assigned within each group according to the paper’s research area. Every paper is read by one blind reviewer and one non-blind reviewer. Reviewers submit their grades within 14 days of receiving the grading package.

We explore grading differences between blind and non-blind assessments of the quality of the research idea, the soundness of the research design, and the writing style. Affiliation bias is defined as any difference, for each paper, between the blind and non-blind assessments (see the sketch below). We will then assess whether affiliation bias plays any role in changing beliefs about the quality of reviewers’ own work. We will explore mechanisms by estimating heterogeneous treatment effects for universities positioned at the top and bottom of the underlying quality distribution. We will also explore qualitative insights from the open-ended text shared by reviewers and examine networking as a potential mechanism.
Finally, we will investigate whether seniority in the PhD program and mismatch between reviewers’ and papers’ topics play a role in shaping affiliation bias.
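
As a rough sketch of the comparison described above, assuming a pooled grades table with columns paper_id, blind (1 for the blind arm) and grade_A, plus an assumed paper_info table carrying an affiliation-tier indicator:

```python
# Sketch: paper-level affiliation bias as the non-blind minus blind grade,
# overall and by an (assumed) top-tier affiliation indicator.
import pandas as pd
from scipy import stats

wide = (grades
        .pivot_table(index="paper_id", columns="blind", values="grade_A")
        .rename(columns={0: "nonblind", 1: "blind"}))
wide["bias"] = wide["nonblind"] - wide["blind"]     # affiliation bias, per paper

# overall paired comparison: is the mean difference zero?
t_stat, p_value = stats.ttest_1samp(wide["bias"].dropna(), 0.0)

# heterogeneity by the submitting institution's position in the ranking
wide = wide.join(paper_info.set_index("paper_id")["top_tier"])   # paper_info is assumed
print(wide.groupby("top_tier")["bias"].agg(["mean", "count"]))
```
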
Experimental Design Details
Randomization Method
We perform a two-stage randomization procedure. In the first stage, we stratify reviewers by their research area (micro theory, applied micro, or macro) and then, within each stratum, randomize reviewers to either blind or non-blind grading. In the second stage, we stratify papers according to the same research areas. Within each stratum, we randomly allocate papers to both blind and non-blind reviewers. Randomization is done in the office by a computer for both stages. The randomization takes place on 2023-10-07, upon the closing of the call for papers.
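
A minimal sketch of the two-stage procedure, assuming simple reviewers and papers tables that each carry a research_area column; this is illustrative, not the actual allocation script.

```python
# Illustrative two-stage randomization.
# Stage 1: within each research area, split reviewers into blind / non-blind arms.
# Stage 2: within each research area, give every paper one reviewer from each arm.
import numpy as np
import pandas as pd

rng = np.random.default_rng(20231007)   # illustrative seed

def assign_arms(group: pd.DataFrame) -> pd.DataFrame:
    labels = np.resize(["blind", "nonblind"], len(group))
    group = group.copy()
    group["arm"] = rng.permutation(labels)
    return group

reviewers = reviewers.groupby("research_area", group_keys=False).apply(assign_arms)

assignments = []
for area, area_papers in papers.groupby("research_area"):
    shuffled = rng.permutation(area_papers["paper_id"].to_numpy())
    for arm in ["blind", "nonblind"]:
        pool = reviewers.query("research_area == @area and arm == @arm")["reviewer_id"].tolist()
        for i, pid in enumerate(shuffled):
            # spread papers evenly over the reviewers in this arm and area
            assignments.append({"paper_id": pid, "arm": arm, "reviewer_id": pool[i % len(pool)]})

allocation = pd.DataFrame(assignments)
```
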
Randomization Unit
Reviewers
Was the treatment clustered?
Yes

Experiment Characteristics

Sample size: planned number of clusters
16 reviewers
Sample size: planned number of observations
200 grades for 100 papers (expected)
Sample size (or number of clusters) by treatment arms
Control group: 100 grades for 100 papers (expected)
Treated group: 100 grades for 100 papers (expected)
Minimum detectable effect size for main outcomes (accounting for sample design and clustering)
- Grade A: 0.318
- Grade C: 0.233
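
For illustration only, an MDE of this kind can be approximated with a standard two-sample power calculation inflated by a design effect for clustering at the reviewer level; the intra-cluster correlation below is an assumed placeholder, not a registered parameter.

```python
# Illustrative MDE approximation with a reviewer-level design effect.
from statsmodels.stats.power import TTestIndPower

n_per_arm = 100            # expected grades per arm (100 papers, one grade each)
cluster_size = 10          # roughly 10 papers graded per reviewer
icc = 0.05                 # assumed intra-cluster correlation (placeholder)
deff = 1 + (cluster_size - 1) * icc   # design effect

mde = TTestIndPower().solve_power(
    effect_size=None,
    nobs1=n_per_arm / deff,   # effective sample size per arm
    alpha=0.05,
    power=0.8,
    ratio=1.0,
)
print(f"Approximate MDE: {mde:.3f} standard deviations")
```
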
IRB

Institutional Review Boards (IRBs)

IRB Name
ETHICS COMMITTEE of Collegio Carlo Alberto
IRB Approval Date
2023-10-09
IRB Approval Number
N/A

Post-Trial

Post Trial Information

Study Withdrawal

There is information in this trial that is unavailable to the public.

Intervention

Is the intervention completed?
No
Data Collection Complete
Data Publication

Data Publication

Is public data available?
No

Program Files

Program Files
Reports, Papers & Other Materials

Relevant Paper(s)

Reports & Other Materials