The Impact of Generative AI on Research Productivity

Last registered on October 22, 2025

Pre-Trial

Trial Information

General Information

Title
The Impact of Generative AI on Research Productivity
RCT ID
AEARCTR-0017030
Initial registration date
October 15, 2025


First published
October 22, 2025, 1:24 PM EDT


Locations

Region

Primary Investigator

Affiliation
China Center for Agricultural Policy, Peking University

Other Primary Investigator(s)

PI Affiliation
National School of Development, Peking University
PI Affiliation
China Center for Agricultural Policy, Peking University

Additional Trial Information

Status
In development
Start date
2025-10-20
End date
2025-12-31
Secondary IDs
Prior work
This trial does not extend or rely on any prior RCTs.
Abstract
This study proposes a rigorous experimental design to quantify the extent to which generative AI improves research writing quality, efficiency, and innovativeness, and to examine heterogeneity, persistence, and gender differences in these effects. We implement a randomized controlled trial (RCT) with participants assigned to an “AI-assisted” or “business-as-usual” condition, using stratified randomization by baseline proficiency (pre-test scores on standardized writing and literature-search tasks) to identify whether higher- or lower-skilled researchers benefit more.
External Link(s)

Registration Citation

Citation
Huang, Kaixing, Xianling Long and Lulu Pi. 2025. "The Impact of Generative AI on Research Productivity." AEA RCT Registry. October 22. https://doi.org/10.1257/rct.17030-1.0
Experimental Details

Interventions

Intervention(s)
Intervention (Hidden)
Recruitment: Over 400 participants (PhD, master’s, and undergraduate students) in economics-related fields, recruited online; the final sample size will depend on registrations.

Baseline Survey: Collect whether the participant has published in academic journals, hours and purposes of AI use, and WeChat ID; record covariates / benchmarks including prior publications, home institution, cohort/year, and results from an initial round of testing.

Round 1

Task: Each participant completes an English-language research proposal (topic assigned).
Requirements:
1) Literature review/summary
2) Statement of novelty/innovation
3) Data sources and sample size; clarify whether the data are obtainable
4) The econometric (or theoretical) model to be used
5) Complete within 0.5–2 hours
Restriction: No AI use (screen activity monitored via Tencent Meeting recording).
Compensation: ¥50 per person; the top 10% by score receive an additional ¥1,000 (announced after all rounds are completed).

Round 2
Control group: Same as Round 1 (new topic).
Treatment group: Identical except AI use is allowed (new topic).
Post-task: After the proposal is completed, provide a 10-minute AI training (watch a video and answer questions).
Other procedures: As in Round 1.

Round 3
The sample is divided into four groups:
1) a control group of 100 participants without training;
2) a control group of 100 participants with AI training;
3) a treatment group of 100 participants without training;
4) a treatment group of 100 participants receiving the same AI training.

This design allows us to disentangle whether any effects arise from the training itself and whether AI, through training and usage experience, attenuates humans' intrinsic creativity. We will ensure strict uniformity of training content and avoid including task-specific solution techniques. Effectively, this yields three intervention arms (T1: AI training plus permission to use AI; T2: AI training only; T3: permission to use AI only) and one pure control group.
Intervention Start Date
2025-10-20
Intervention End Date
2025-12-31

Primary Outcomes

Primary Outcomes (end points)
1. Grammatical errors
2. Logical coherence score
3. Expert rating
4. Task completion time
5. Output per unit time (productivity)
Primary Outcomes (explanation)
1. Methodological novelty score: Not determined yet
2. Innovativeness of research contribution score: Not determined yet

Secondary Outcomes

Secondary Outcomes (end points)
Secondary Outcomes (explanation)

Experimental Design

Experimental Design
Not available.
Experimental Design Details
1. Based on the registration records and the baseline questionnaire, use stratified random sampling to assign participants to the treatment and control groups.
2. Train research assistants (RAs).
3. Randomly match treatment and control participants to RAs (avoiding pairings of classmates or participants in the same major as far as possible).
4. RAs contact participants in advance to schedule a session, informing them that a 0.5–2 hour Tencent Meeting is required and that each round offers ¥50–¥1,000 in compensation.
5. For each session, the RA contacts one randomly assigned participant, sends the Tencent Meeting link, ensures screen recording is enabled, provides the research topic and task requirements (to be completed within 0.5–2 hours), and clarifies the AI rules: no AI in Round 1 (all participants); AI optional for the Round 2 treatment group.
6. At the end of the meeting, collect the proposal and keep the recording; name both files RAName-ParticipantName-Date.
7. Conduct anonymous human grading together with AI-based scoring.
8. Compile the writing submissions into an experimental-design manuscript suitable for submission.
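The stratified assignment in step 1 can be sketched as follows. This is an illustrative implementation, not the investigators' actual code: the arm labels, the stratification key, and the use of Python's `random` module are all assumptions; the registration specifies only stratified randomization by baseline proficiency into four arms of roughly 100 each.

```python
import random
from collections import defaultdict

def assign_arms(participants, stratum_of, seed=2025):
    """Stratified randomization into four arms of equal size within
    each stratum. Arm names are illustrative placeholders."""
    arms = ["pure_control", "training_plus_ai", "training_only", "ai_only"]
    rng = random.Random(seed)  # fixed seed for a reproducible assignment
    # Group participants by their baseline-proficiency stratum.
    strata = defaultdict(list)
    for p in participants:
        strata[stratum_of(p)].append(p)
    # Shuffle within each stratum, then deal participants to arms in turn,
    # which balances arm sizes stratum by stratum.
    assignment = {}
    for members in strata.values():
        rng.shuffle(members)
        for i, p in enumerate(members):
            assignment[p] = arms[i % len(arms)]
    return assignment
```

For example, with 400 participants split into four baseline-score strata of 100, each arm receives 100 participants, 25 from each stratum.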
Randomization Method
Randomization done in office by a computer
Randomization Unit
individual
Was the treatment clustered?
No

Experiment Characteristics

Sample size: planned number of clusters
Not applicable (individual-level randomization)
Sample size: planned number of observations
400 individuals
Sample size (or number of clusters) by treatment arms
100 Pure control (no AI training, AI use banned)
100 T1: AI training, AI use allowed
100 T2: AI training, AI use banned
100 T3: no AI training, AI use allowed
Minimum detectable effect size for main outcomes (accounting for sample design and clustering)
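The registration leaves this field blank. As a hedged illustration only (not the investigators' calculation), the minimum detectable effect in outcome standard-deviation units for a two-arm comparison of 100 versus 100 participants can be approximated with the standard two-sample normal formula at 5% significance and 80% power:

```python
from statistics import NormalDist

def mde_sd_units(n1, n2, alpha=0.05, power=0.80):
    """Approximate MDE (in outcome SDs) for a two-sided, two-sample
    comparison of means, using the normal approximation:
    MDE = (z_{1-alpha/2} + z_{power}) * sqrt(1/n1 + 1/n2)."""
    z = NormalDist()
    z_alpha = z.inv_cdf(1 - alpha / 2)  # ~1.96 for alpha = 0.05
    z_power = z.inv_cdf(power)          # ~0.84 for power = 0.80
    return (z_alpha + z_power) * (1 / n1 + 1 / n2) ** 0.5

# Two arms of 100 participants each:
print(round(mde_sd_units(100, 100), 2))  # → 0.4
```

Under these assumptions, pairwise arm comparisons of 100 versus 100 can detect effects of roughly 0.4 SD; the actual MDE will depend on realized enrollment and any gains from stratification.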
IRB

Institutional Review Boards (IRBs)

IRB Name
IRB Approval Date
IRB Approval Number

Post-Trial

Post Trial Information

Study Withdrawal


Intervention

Is the intervention completed?
No
Data Collection Complete
Data Publication

Data Publication

Is public data available?
No

Program Files

Program Files
Reports, Papers & Other Materials

Relevant Paper(s)

Reports & Other Materials