
Paper quality

Last registered on December 23, 2020

Pre-Trial

Trial Information

General Information

Title
Paper quality
RCT ID
AEARCTR-0005620
Initial registration date
April 24, 2020

First published
April 24, 2020, 2:57 PM EDT

Last updated
December 23, 2020, 11:21 PM EST

Locations

Region

Primary Investigator

Affiliation
Victoria University of Wellington

Other Primary Investigator(s)

Additional Trial Information

Status
In development
Start date
2020-04-28
End date
2021-05-31
Secondary IDs
Abstract
This project is about perceived quality of papers.
External Link(s)

Registration Citation

Citation
Feld, Jan. 2020. "Paper quality." AEA RCT Registry. December 23. https://doi.org/10.1257/rct.5620-1.1
Sponsors & Partners

Some information in this trial is not available to the public.
Experimental Details

Interventions

Intervention(s)
Intervention Start Date
2020-04-28
Intervention End Date
2021-03-31

Primary Outcomes

Primary Outcomes (end points)
1. Estimated effect of language editing on writing quality.
2. Estimated effect of language editing on paper quality.
3. Estimated effect of writing quality on paper quality.
Primary Outcomes (explanation)
Explanation for primary outcome 1. Academic economists will rate the writing quality of a paper on an 11-point scale ranging from 0 (very bad) to 10 (very good).

To estimate the effect of language editing on writing quality, we will regress writing quality on an indicator showing whether the paper is edited or not. To increase precision of our estimates, we will also include the standard controls (see below).

Explanation for primary outcome 2. Academic economists will rate the quality of a paper on an 11-point scale ranging from 0 (very bad) to 10 (very good).

To estimate the effect of language editing on perceptions of paper quality, we will regress paper quality on an indicator showing whether the paper is edited or not. To increase precision of our estimates, we will also include the standard controls.

Explanation for primary outcome 3. The same academic economists will also rate the quality of the writing on an 11-point scale ranging from 0 (very bad) to 10 (very good).

We estimate the effect of writing quality on perceptions of paper quality with an instrumental variable approach. In the first stage, we predict writing quality with a dummy variable indicating whether the paper is edited or not (the first stage is identical to the analysis for primary outcome 1). In the second stage, we estimate the effect of writing quality (as predicted in the first stage) on perceived paper quality. To increase precision of our estimates, we will include the standard controls in both stages.

These are the “standard” control variables:
• Rater fixed effects (i.e. academic judge or writing expert)
• Paper fixed effects
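
The estimation strategy above (OLS for primary outcomes 1 and 2, two-stage least squares for primary outcome 3, each with paper and rater fixed effects) can be sketched on simulated data. Everything below is invented for illustration — the toy dimensions, effect sizes, and data-generating process are not the study's data or code:

```python
import numpy as np

rng = np.random.default_rng(0)
n_papers, n_raters = 30, 8  # toy dimensions, not the study's design

# Toy balanced panel: every rater rates every paper; editing randomized per cell
paper = np.repeat(np.arange(n_papers), n_raters)
rater = np.tile(np.arange(n_raters), n_papers)
edited = rng.integers(0, 2, size=paper.size).astype(float)

paper_fe = rng.normal(0, 1, n_papers)   # paper fixed effects
rater_fe = rng.normal(0, 1, n_raters)   # rater fixed effects

# Assumed data-generating process: editing raises writing quality by 1.0,
# and writing quality raises perceived paper quality by 0.5
writing = 1.0 * edited + paper_fe[paper] + rater_fe[rater] + rng.normal(0, 0.5, paper.size)
quality = 0.5 * writing + paper_fe[paper] + rater_fe[rater] + rng.normal(0, 0.5, paper.size)

def dummies(idx, n):
    """Indicator columns for categories 1..n-1 (category 0 is the base)."""
    d = np.zeros((idx.size, n - 1))
    for j in range(1, n):
        d[idx == j, j - 1] = 1.0
    return d

# Design matrix: intercept, editing dummy, paper and rater fixed effects
X = np.column_stack([np.ones(paper.size), edited,
                     dummies(paper, n_papers), dummies(rater, n_raters)])

# Primary outcomes 1 and 2: OLS of the outcome on the editing dummy + controls
b_writing = np.linalg.lstsq(X, writing, rcond=None)[0]
b_quality = np.linalg.lstsq(X, quality, rcond=None)[0]

# Primary outcome 3: 2SLS, instrumenting writing quality with the editing dummy
writing_hat = X @ b_writing  # first-stage fitted values
X2 = np.column_stack([np.ones(paper.size), writing_hat,
                      dummies(paper, n_papers), dummies(rater, n_raters)])
b_iv = np.linalg.lstsq(X2, quality, rcond=None)[0]

# Coefficients on the editing dummy / predicted writing quality;
# expect roughly 1.0, 0.5, 0.5 given the assumed DGP
print(b_writing[1], b_quality[1], b_iv[1])
```

This sketch only recovers point estimates; a real analysis would also compute standard errors that account for the clustered design (see the clustering fields below).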

Secondary Outcomes

Secondary Outcomes (end points)
We have seven secondary outcomes, which show the effect of language editing on raters’ assessment of the following.
1. The chance of getting the paper accepted at a conference.
2. Conditional on submission, the chance of the paper getting desk rejected.
3. The probability of getting published in a good journal.
4. The writing quality as judged by writing experts.
5. How easy the paper is to understand as judged by writing experts.
6. How engaging the paper is as judged by writing experts.
7. How well-structured the paper is as judged by writing experts.

We have three secondary outcomes, which show the effect of writing quality on the raters’ assessment of the following.
8. The chance of getting the paper accepted at a conference.
9. The chance of sending the paper to referees.
10. The probability of getting published in a good journal.
Secondary Outcomes (explanation)
Secondary outcomes that look at the effects of language editing (1–7) will be estimated with regressions of the outcome variable on an indicator showing whether the paper is edited and the set of standard controls.

Secondary outcomes that look at the effect of writing quality (8–10) will be estimated using instrumental variable regressions which include the set of standard controls. This is the same approach as for primary outcome 3 (see above).

Secondary outcomes 5–7 are taken from a survey of language experts. All other outcomes are taken from a survey of academic raters.

Secondary outcomes 1 and 8 will use this survey question:
How likely would you be to accept this paper at a general economics conference (such as the Australian Conference of Economists)? Answer options: 0 to 100 percent.

Secondary outcomes 2 and 9 will use this survey question:
Imagine you were an editor of a general economics journal that is rated A on the ABDC list.
How likely would you be to desk reject the paper? Answer options: 0 to 100 percent.

Secondary outcomes 3 and 10 will use this survey question:
How likely is it that this paper will get published in an A or A* journal according to the ABDC list? Answer options 0 to 100 percent.

Secondary outcome 4 will use this survey question:
Overall, the quality of the writing is … Answer options: 0 (very bad) to 10 (very good).

Secondary outcomes 5–7 will use these survey questions:
Please state how much you agree with each of the following statements. Answer options: 1 (Strongly disagree) to 5 (Strongly agree).
• The paper is easy to understand (outcome 5).
• The paper is engaging (outcome 6).
• The paper is well-structured (outcome 7).

Experimental Design

Experimental Design
Nothing to report before the experiment is completed.
Experimental Design Details
The experiment consists of four steps.

Step 1: PhD students in economics from New Zealand universities submit their papers to be part of this experiment and fill in a brief survey.

Step 2: Professional language editors from a plain language consultancy will edit the writing quality of papers of PhD students in economics. The editing will follow the procedure outlined in the language editing guide shown in Appendix A.

Step 3: Academic economists will judge the quality of the papers and the writing quality of the papers and record their evaluations in an online survey. Each judge will evaluate the original versions of some papers and the edited versions of other papers.

Step 4: Language experts will rate the writing quality of the original and edited papers.
The experiment has three samples.

Paper sample. We aim to have 30 papers from PhD students in economics at universities based in New Zealand. We will contact PhD students and their supervisors by email, one university at a time, and ask whether they want to participate in this experiment. Participants receive free language editing in exchange for allowing us to use their paper in this study and filling in a short survey.

Academic economists sample. We aim to have 24 academic economists as raters of the quality of the papers as well as the quality of the writing. We will recruit these raters via email from different universities and research institutions in Australia. Each rater will be asked to review 10 papers (5 edited and 5 non-edited) in 1.5 hours and fill in a short survey with questions about themselves and the papers. Raters will be compensated with a voucher worth AUD 50.

Language experts sample. We aim to have 18 language experts as raters of the writing quality of the papers. We will recruit these from the network of the language editors. Language experts are non-economists who have experience in writing and reading, for example, through their job (e.g. copy editors). Each language expert will be asked to rate 10 papers (5 edited and 5 non-edited) in 1.5 hours and fill in a short survey with questions about themselves and the papers. Raters will be compensated with a voucher worth NZD 50.

Randomization Method
Randomization for main analysis sample:

We have 30 papers and 24 academic raters for our main analysis.
The assignment procedure works in two steps. First, we assign “paper numbers” to “rater numbers”. Second, we randomly assign papers to paper numbers and raters to rater numbers. For example, this random assignment might make the paper “The effect of X on Y” paper number 14 and the rater “Professor X at University Y” rater number 21. This assignment will be done with a computer.

Details for the first step. We assign “paper numbers” (1–30) and “rater numbers” (1–24) as follows:
• Rater numbers 1–8 see paper numbers 1–10. Rater numbers 1–4 see the edited versions of the odd-numbered papers. Rater numbers 5–8 see the edited versions of the even-numbered papers.
• Rater numbers 9–16 see paper numbers 11–20. Rater numbers 9–12 see the edited versions of the odd-numbered papers. Rater numbers 13–16 see the edited versions of the even-numbered papers.
• Rater numbers 17–24 see paper numbers 21–30. Rater numbers 17–20 see the edited versions of the odd-numbered papers. Rater numbers 21–24 see the edited versions of the even-numbered papers.

These paper numbers are presented in random order to each rater. Appendix B shows the paper-to-rater assignment for each of the 24 raters. In this assignment, each paper gets rated by 8 raters (4 see the edited version and 4 the non-edited version), and each rater sees 10 papers (5 edited and 5 non-edited).
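
The two-step assignment described above can be sketched in a few lines of Python. The IDs and seed are hypothetical, and this is an illustration of the stated design rules, not the study's actual randomization code:

```python
import random

random.seed(1)  # hypothetical seed

papers = [f"paper_{i:02d}" for i in range(1, 31)]  # hypothetical paper IDs
raters = [f"rater_{i:02d}" for i in range(1, 25)]  # hypothetical rater IDs

# Step 2: randomly assign papers to paper numbers and raters to rater numbers
paper_by_number = dict(enumerate(random.sample(papers, 30), start=1))
rater_by_number = dict(enumerate(random.sample(raters, 24), start=1))

# Step 1: the fixed paper-number-to-rater-number design.
# Raters 1-8 see papers 1-10, raters 9-16 see 11-20, raters 17-24 see 21-30;
# within each block of 8 raters, the first 4 see edited versions of the
# odd-numbered papers and the last 4 see edited versions of the even-numbered ones.
assignment = {}  # rater number -> list of (paper number, version)
for r in range(1, 25):
    block = (r - 1) // 8
    papers_seen = list(range(block * 10 + 1, block * 10 + 11))
    edits_odd = ((r - 1) % 8) < 4
    lineup = [(p, "edited" if (p % 2 == 1) == edits_odd else "original")
              for p in papers_seen]
    random.shuffle(lineup)  # papers are presented in random order to each rater
    assignment[r] = lineup
```

Under these rules each paper is rated by 8 raters (4 edited, 4 original) and each rater sees 10 papers (5 edited, 5 original), matching the counts stated above.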

Randomization for language expert sample:

We have 30 papers and 15 language expert raters. The assignment follows a similar procedure to the assignment for our main analysis sample. Appendix B shows the assigned numbers for each of the 15 raters. In this assignment, each paper gets rated by 4 raters (2 see the edited version and 2 the non-edited version) and each rater sees 8 papers (4 edited and 4 non-edited).
Randomization Unit
See explanation above.

Each paper will be randomly assigned to a paper number (randomization unit = paper)

Each rater will be randomly assigned to a rater number (randomization unit = rater)
Was the treatment clustered?
Yes

Experiment Characteristics

Sample size: planned number of clusters
30 clusters (one cluster for each paper).
Sample size: planned number of observations
For our main analysis, we expect to have 240 paper-version-rater observations. For the analysis by language experts, we expect to have 120 paper-version-rater observations.
Sample size (or number of clusters) by treatment arms
Main analysis:
Treatment group: 30 edited paper versions (= 120 paper-version-rater observations)
Control group: 30 non-edited paper versions (= 120 paper-version-rater observations)

Language expert analysis:
Treatment group: 30 edited paper versions (= 60 paper-version-rater observations)
Control group: 30 non-edited paper versions (= 60 paper-version-rater observations)
Minimum detectable effect size for main outcomes (accounting for sample design and clustering)
IRB

Institutional Review Boards (IRBs)

IRB Name
Victoria University Human Ethics Committee
IRB Approval Date
2020-04-24
IRB Approval Number
0000027561

Post-Trial

Post Trial Information

Study Withdrawal

Some information in this trial is not available to the public.

Intervention

Is the intervention completed?
No
Data Collection Complete
Data Publication

Data Publication

Is public data available?
No

Program Files

Program Files
Reports, Papers & Other Materials

Relevant Paper(s)

Reports & Other Materials