
Paper quality

Last registered on May 17, 2021

Pre-Trial

Trial Information

General Information

Title
Paper quality
RCT ID
AEARCTR-0005620
Initial registration date
April 24, 2020

Initial registration date is when the trial was registered.

It corresponds to when the registration was submitted to the Registry to be reviewed for publication.

First published
April 24, 2020, 2:57 PM EDT

First published corresponds to when the trial was first made public on the Registry after being reviewed.

Last updated
May 17, 2021, 8:26 PM EDT

Last updated is the most recent time when changes to the trial's registration were published.

Locations

Region

Primary Investigator

Affiliation
Victoria University of Wellington

Other Primary Investigator(s)

Additional Trial Information

Status
In development
Start date
2020-04-28
End date
2021-12-31
Secondary IDs
Abstract
This project is about perceived quality of papers.
External Link(s)

Registration Citation

Citation
Feld, Jan. 2021. "Paper quality." AEA RCT Registry. May 17. https://doi.org/10.1257/rct.5620-3.0
Sponsors & Partners

There is information in this trial unavailable to the public.
Experimental Details

Interventions

Intervention(s)
Intervention (Hidden)
Update: This experiment consists of two parts. In part 1, we collect and edit academic papers. In part 2, we ask experts to rate the original and edited versions of these papers. This version of the trial registration contains updated information about part 2, which we document before the start of part 2.

Scientists often find it challenging to convey complex ideas in writing so that they can be easily understood. Conventional wisdom holds that overcoming this challenge will cause others, like referees and conference organisers, to evaluate one’s paper more positively.

In this project, we will test whether this wisdom is true and estimate the effect of writing quality on how papers are evaluated. The intervention will consist of offering free language editing for papers of PhD students in economics. This language editing is done by two editors who work for a plain language consultancy and follows the procedure outlined in the language editing guide shown in Appendix A.

This study follows a within-paper design. Each paper has two versions: the edited and the non-edited version. Both versions will be evaluated by academic economists and language experts. Our estimates of the treatment effects are the differences in evaluations between edited and non-edited papers (conditional on paper fixed effects).

Intervention Start Date
2020-04-28
Intervention End Date
2021-05-19

Primary Outcomes

Primary Outcomes (end points)
1. Estimated effect of language editing on writing quality.
2. Estimated effect of language editing on paper quality.
3. If we find significant effects for primary outcome 1, we will also estimate the effect of writing quality on paper quality.

Primary Outcomes (explanation)
Explanation for primary outcome 1. Academic economists will rate the writing quality of a paper on an 11-point scale ranging from 0 (very bad) to 10 (very good).

To estimate the effect of language editing on writing quality, we will regress writing quality on an indicator showing whether the paper is edited or not. To increase precision of our estimates, we will also include paper fixed effects.
Explanation for primary outcome 2. Academic economists will rate the quality of a paper on an 11-point scale ranging from 0 (very bad) to 10 (very good).

To estimate the effect of language editing on perceptions of paper quality, we will regress paper quality on an indicator showing whether the paper is edited or not. To increase precision of our estimates, we will also include paper fixed effects.
Explanation for primary outcome 3. If we find significant effects of language editing on writing quality, we will also estimate the effect of writing quality on paper quality with an instrumental variable approach. In the first stage, we predict writing quality with a dummy variable indicating whether the paper is edited or not (the first stage is identical to the analysis for primary outcome 1). In the second stage, we estimate the effect of writing quality (as predicted in the first stage) on paper quality. To increase precision of our estimates, we will include paper fixed effects in both stages.
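The fixed-effects and instrumental-variable estimators described above can be sketched as follows. This is a minimal illustration on simulated toy data: all numbers, including the 0.5 and 0.8 effect sizes, are made up for the sketch and are not results of the study, and the `fe_ols` helper is our own within-estimator, not the study's actual code.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy simulated data mirroring the planned design sizes
# (30 papers, 10 ratings each); all effect sizes are hypothetical.
n_papers, ratings_per_paper = 30, 10
paper = np.repeat(np.arange(n_papers), ratings_per_paper)
edited = rng.integers(0, 2, size=paper.size)      # 1 = edited version
alpha = rng.normal(5.0, 1.0, n_papers)[paper]     # paper fixed effects
writing = alpha + 0.5 * edited + rng.normal(0, 0.3, paper.size)
quality = 0.5 * alpha + 0.8 * writing + rng.normal(0, 0.3, paper.size)

def fe_ols(y, x, group):
    """Within (fixed-effects) estimator: demean y and x by group, then OLS."""
    counts = np.bincount(group)
    def demean(v):
        return v - (np.bincount(group, weights=v) / counts)[group]
    yd, xd = demean(y.astype(float)), demean(x.astype(float))
    return (xd @ yd) / (xd @ xd)

# Primary outcomes 1 and 2: regress the rating on the edited indicator
# with paper fixed effects.
first_stage = fe_ols(writing, edited, paper)

# Primary outcome 3: effect of writing quality on paper quality, using
# editing as an instrument. With a single binary instrument this 2SLS
# estimate equals the Wald ratio: reduced form over first stage.
reduced_form = fe_ols(quality, edited, paper)
iv_effect = reduced_form / first_stage
```

With these toy parameters, `first_stage` recovers the simulated editing effect (about 0.5) and `iv_effect` the simulated writing-quality effect (about 0.8), up to sampling noise.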


Secondary Outcomes

Secondary Outcomes (end points)
We have three secondary outcomes, which show the effect of language editing on economists’ assessment of the following.
1. The chance of getting the paper accepted at a conference.
2. Conditional on submission, the chance of the paper getting desk rejected.
3. The probability of the paper getting published in a good journal.

If we find significant effects of paper language editing on writing quality, we will also estimate the effect of writing quality on economists’ assessment of the following.
4. The probability of getting the paper accepted at a conference.
5. The probability of getting desk rejected.
6. The probability of getting published in a good journal.

We have five secondary outcomes which are based on the evaluations of language experts. These show the effect of language editing on:
7. The writing quality.
8. How easily the paper allows the reader to find the key messages.
9. To what extent the paper is free of spelling and grammar mistakes.
10. How easy to read the paper is.
11. How concise the paper is.

Secondary Outcomes (explanation)
Secondary outcomes 1-6 are based on survey responses of economists.

Secondary outcomes 7-11 are based on survey responses of language experts.

Secondary outcomes that look at the effects of language editing (1-3, 7-11) will be estimated with regressions of the outcome variable on an indicator showing whether the paper is edited and paper fixed effects.

Secondary outcomes that look at the effect of writing quality (4-6) will be estimated using instrumental variable regressions which include paper fixed effects. This is the same approach as for primary outcome 3 (see above). We will only estimate these effects if writing quality is significantly affected by language editing.

Experimental Design

Experimental Design
Nothing to report before the experiment is completed.
Experimental Design Details
The experiment consists of four steps.

Step 1: PhD students in economics from New Zealand universities submit their papers to be part of this experiment and fill in a brief survey.

Step 2: Professional language editors from a plain language consultancy will edit the writing quality of papers of PhD students in economics. The editing will follow the procedure outlined in the language editing guide shown in Appendix A.

Step 3: Academic economists will judge the quality of the papers and the writing quality of the papers and record their evaluations in an online survey. Each judge will evaluate the original versions of some papers and the edited versions of other papers.

Step 4: Language experts will rate the writing quality of the original and edited papers.

The experiment has three samples.

Paper sample. We aim to have 30 papers from PhD students in economics from universities based in New Zealand. We will contact PhD students and their supervisor by email one university at a time and ask them if they want to participate in this experiment. If they participate, they receive free language editing in exchange for allowing us to use their paper in this study and filling in a short survey.

Academic economist sample. We aim to have 30 academic economists as raters of the quality of the papers as well as the quality of the writing. We will recruit these raters via email from different universities and research institutions in Australia. Each rater will be asked to review 10 papers (5 edited and 5 non-edited) in 1-1.5 hours and fill in a short survey with questions about themselves and the papers. Raters will be compensated with a voucher worth AUD 50.

Language expert sample. We aim to have 18 language experts as raters of the writing quality of the papers. We will recruit these from the network of the language editors. Language experts are non-economists who have experience in writing and reading, for example, through their job (e.g. copy editors). Each language expert will be asked to rate 10 papers (5 edited and 5 non-edited) in 1-1.5 hours and fill in a short survey with questions about themselves and the papers. Raters will be compensated with a voucher worth NZD 50.


Randomization Method
Randomization for main analysis sample:

We split the papers by topic into a micro-economics group (20 papers) and a macro-economics group (10 papers).

We randomly assign each paper within a group a “paper number”, so that papers in the micro group are assigned numbers 1-20 and papers in the macro group are assigned numbers 21-30.

After that, we create the following six paper blocks:
1. Papers 1-10 (micro), papers 1-5 original version, papers 6-10 edited version
2. Papers 1-10 (micro), papers 1-5 edited version, papers 6-10 original version
3. Papers 11-20 (micro), papers 11-15 original version, papers 16-20 edited version
4. Papers 11-20 (micro), papers 11-15 edited version, papers 16-20 original version
5. Papers 21-30 (macro), papers 21-25 original version, papers 26-30 edited version
6. Papers 21-30 (macro), papers 21-25 edited version, papers 26-30 original version

Randomization for our language expert sample:

We randomly assign each paper a “language paper number”. This is a separate randomization; the “language paper numbers” are not the same as the “paper numbers”.

After that, we create the following six blocks:
1. Papers 1-10, papers 1-5 original version, papers 6-10 edited version
2. Papers 1-10, papers 1-5 edited version, papers 6-10 original version
3. Papers 11-20, papers 11-15 original version, papers 16-20 edited version
4. Papers 11-20, papers 11-15 edited version, papers 16-20 original version
5. Papers 21-30, papers 21-25 original version, papers 26-30 edited version
6. Papers 21-30, papers 21-25 edited version, papers 26-30 original version

This assignment of papers to paper numbers will be done with a computer.
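As a concrete sketch, the assignment of paper numbers and the six paper blocks described above could be generated as follows. The paper IDs and the seed are hypothetical; this is an illustration of the procedure, not the study's actual randomization code.

```python
import random

random.seed(42)  # hypothetical seed, for a reproducible sketch

# Hypothetical paper IDs: 20 micro and 10 macro papers.
micro = [f"micro_{i}" for i in range(20)]
macro = [f"macro_{i}" for i in range(10)]

# Randomly assign paper numbers within each topic group:
# micro papers get numbers 1-20, macro papers get numbers 21-30.
random.shuffle(micro)
random.shuffle(macro)
number_of = {p: n + 1 for n, p in enumerate(micro)}
number_of.update({p: n + 21 for n, p in enumerate(macro)})

def block(lo, originals):
    """Paper numbers lo..lo+9; numbers in `originals` get the original
    version, the rest get the edited version."""
    return {n: ("original" if n in originals else "edited")
            for n in range(lo, lo + 10)}

# The six paper blocks: each covers ten paper numbers, and the paired
# blocks flip which half is original and which half is edited.
blocks = [
    block(1,  set(range(1, 6))),    # block 1: 1-5 original, 6-10 edited
    block(1,  set(range(6, 11))),   # block 2: flipped
    block(11, set(range(11, 16))),  # block 3
    block(11, set(range(16, 21))),  # block 4
    block(21, set(range(21, 26))),  # block 5 (macro)
    block(21, set(range(26, 31))),  # block 6 (macro, flipped)
]
```

Each block contains five original and five edited versions, and paired blocks (1 and 2, 3 and 4, 5 and 6) cover the same papers in opposite versions.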

The papers will be presented in random order to each rater. This randomization will be done within the survey by Qualtrics.

Assignment of raters to paper blocks

For economists:

We create two lists of academic economists from Australian universities or research institutes: one for micro economists and one for macro economists.

We invite economists in both groups via email to evaluate 10 academic papers in their discipline.

Those who agree to evaluate ten papers are assigned to a paper block, and we will send them a survey containing links to the relevant papers. More specifically, micro-economists will be assigned to paper blocks 1-4 (which contain micro papers) and macro-economists will be assigned to paper blocks 5-6 (which contain macro papers).

The order of the paper block assignment will be determined by the order the economists agree to participate. For example, the first micro-economist will be assigned to paper block 1, the second micro-economist will be assigned to paper block 2, etc. Similarly, the first macro-economist will be assigned to paper block 5 and the second macro-economist will be assigned to paper block 6.

We will deviate from this assignment procedure to avoid economists from the same institution being asked to judge different versions of the same paper within a short time. Economists from the same institution are more likely to talk to each other about the task and might therefore realize that we have included different versions of the same paper in the experiment. This may raise suspicions, which we want to avoid. We will solve this problem by swapping paper-block assignments with economists from other institutions. For example, if two economists from the University of Melbourne would have been assigned to paper blocks 1 and 2, and one economist from the University of Sydney would have been assigned to paper block 3, we would swap the paper-block assignments of the second University of Melbourne economist and the University of Sydney economist.
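A minimal sketch of this round-robin assignment with institution swaps, assuming a simple greedy rule: if the next rater's institution already holds the paired block (the block covering the same papers in flipped versions), swap them with the next sign-up from a different institution. The names, institutions, and the `paired` helper are hypothetical illustrations, not the study's procedure verbatim.

```python
from itertools import cycle

def paired(b):
    """Blocks (1,2), (3,4), ... cover the same papers in flipped versions."""
    return b + 1 if b % 2 == 1 else b - 1

def assign_blocks(raters, n_blocks=4):
    """raters: list of (name, institution) in the order they agreed to
    participate. Returns {name: block}, cycling through the blocks and
    swapping raters so no two from the same institution hold a pair of
    blocks that contain the same papers."""
    raters = list(raters)
    assignment = {}
    inst_of = {}  # block -> institution of its most recent rater
    block_seq = cycle(range(1, n_blocks + 1))
    i = 0
    while i < len(raters):
        b = next(block_seq)
        name, inst = raters[i]
        if inst_of.get(paired(b)) == inst:
            # Conflict: swap with the next rater from another institution.
            for j in range(i + 1, len(raters)):
                if raters[j][1] != inst:
                    raters[i], raters[j] = raters[j], raters[i]
                    name, inst = raters[i]
                    break
        assignment[name] = b
        inst_of[b] = inst
        i += 1
    return assignment
```

For example, with sign-ups `[("A", "Melbourne"), ("B", "Melbourne"), ("C", "Sydney"), ("D", "ANU")]`, rater B would have received block 2 (paired with A's block 1), so B is swapped with C and ends up on block 3 instead.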

For language experts:

We will invite language experts to evaluate 10 academic papers.

If they agree, we will send them a survey containing links to the relevant papers.

The order of the paper block assignment will be determined by the order the language experts agree to participate. The expert who first agrees to participate will be assigned to paper block 1, the second expert will be assigned to paper block 2, etc.

As for the academic economists, we will deviate from this assignment procedure to avoid experts from the same institution being asked to judge different versions of the same paper.
Randomization Unit
See explanation above.

Each paper will be randomly assigned to a paper number (randomization unit = paper)

Each rater will be randomly assigned to a rater number (randomization unit = rater)
Was the treatment clustered?
Yes

Experiment Characteristics

Sample size: planned number of clusters
30 clusters (one cluster for each paper).
Sample size: planned number of observations
For our main analysis, we expect to have 300 paper-version-rater observations. For the analysis by language experts, we expect to have 180 paper-version-rater observations.
Sample size (or number of clusters) by treatment arms
Main analysis:
Treatment group: 30 edited paper versions (= 150 paper-version-rater observations)
Control group: 30 non-edited paper versions (= 150 paper-version-rater observations)

Language expert analysis:
Treatment group: 30 edited paper versions (= 90 paper-version-rater observations)
Control group: 30 non-edited paper versions (= 90 paper-version-rater observations)

Minimum detectable effect size for main outcomes (accounting for sample design and clustering)
Supporting Documents and Materials

There is information in this trial unavailable to the public.
IRB

Institutional Review Boards (IRBs)

IRB Name
Victoria University Human Ethics Committee
IRB Approval Date
2020-04-24
IRB Approval Number
0000027561

Post-Trial

Post Trial Information

Study Withdrawal

There is information in this trial unavailable to the public.

Intervention

Is the intervention completed?
No
Data Collection Complete
Yes
Data Collection Completion Date
Final Sample Size: Number of Clusters (Unit of Randomization)
30 papers
Was attrition correlated with treatment status?
No
Final Sample Size: Total Number of Observations
480 paper-rater observations
Final Sample Size (or Number of Clusters) by Treatment Arms
240 paper-rater observations for original papers, 240 paper-rater observations for edited papers
Data Publication

Data Publication

Is public data available?
Yes
Public Data URL

Program Files

Program Files
Yes
Program Files URL
Reports, Papers & Other Materials

Relevant Paper(s)

Abstract
For papers to have scientific impact, they need to impress our peers in their role as referees, journal editors, and members of conference committees. Does better writing help our papers make it past these gatekeepers? In this study, we estimate the effect of writing quality by comparing how 30 economists judge the quality of papers written by PhD students in economics. Each economist judged five papers in their original version and five different papers that had been language edited. No economist saw both versions of the same paper. Our results show that writing matters. Compared to the original versions, economists judge edited versions as higher quality; they are more likely to accept edited versions for a conference; and they believe that edited versions have a better chance of being accepted at a good journal.
Citation
Feld, J., Lines, C., & Ross, L. (2023). Writing matters.

Reports & Other Materials