Accepted with Revisions: Evaluating AI-Assisted Scientific Writing

Last registered on September 12, 2025

Pre-Trial

Trial Information

General Information

Title
Accepted with Revisions: Evaluating AI-Assisted Scientific Writing
RCT ID
AEARCTR-0016740
Initial registration date
September 08, 2025

First published
September 12, 2025, 10:10 AM EDT

Locations

There is information in this trial unavailable to the public.

Primary Investigator

Affiliation
The University of Utah

Other Primary Investigator(s)

PI Affiliation
The Ohio State University
PI Affiliation
Allen Institute for Artificial Intelligence
PI Affiliation
The Ohio State University

Additional Trial Information

Status
Ongoing
Start date
2025-01-01
End date
2026-06-30
Secondary IDs
Prior work
This trial does not extend or rely on any prior RCTs.
Abstract
Large Language Models (LLMs) have seen expanding application across domains, yet their effectiveness as assistive tools for scientific writing, an endeavor requiring precision, multimodal synthesis, and domain expertise, remains insufficiently evaluated. We examine the potential of LLMs to support domain experts in scientific writing, with a focus on abstract composition. We design an incentivized randomized trial with a hypothetical conference setup in which participants with relevant expertise are assigned to roles as authors or reviewers. Inspired by methods in behavioral science, our novel incentive structure encourages participants to produce high-quality outputs. Authors edit either original (control) or AI-generated (treatment) abstracts of published research from top-tier conferences. Reviewers evaluate whether the edited abstract does justice to the research presented in the original abstract.
External Link(s)

Registration Citation

Citation
Hazra, Sanchaita et al. 2025. "Accepted with Revisions: Evaluating AI-Assisted Scientific Writing." AEA RCT Registry. September 12. https://doi.org/10.1257/rct.16740-1.0
Sponsors & Partners

There is information in this trial unavailable to the public.
Experimental Details

Interventions

Intervention(s)
We simulate a realistic writing environment in which authors (domain experts), the intended users of an AI assistant, edit scientific abstracts to produce prose that is accepted by at least two reviewers. The author study uses a 2x2 design. The first dimension is the source of the abstract: original or AI-generated. The second dimension is whether the source of the abstract is disclosed to the author. Reviewers receive no treatment; they review the edited abstract against the original abstract and judge whether the edited abstract does justice to the research idea presented in the original.
Intervention Start Date
2025-08-25
Intervention End Date
2026-06-30

Primary Outcomes

Primary Outcomes (end points)
Authors: number of edits, type of edits
Reviewers: decisions about the edited abstract in reference to the original abstract
Primary Outcomes (explanation)

Secondary Outcomes

Secondary Outcomes (end points)
Secondary Outcomes (explanation)

Experimental Design

Experimental Design
We simulate a realistic writing environment in which authors (domain experts), the intended users of an AI assistant, edit scientific abstracts to produce prose that is accepted by at least two reviewers. The author study uses a 2x2 design. The first dimension is the source of the abstract: original or AI-generated. The second dimension is whether the source of the abstract is disclosed to the author. Reviewers receive no treatment; they review the edited abstract against the original abstract and judge whether the edited abstract does justice to the research idea presented in the original. We also collect secondary measures such as confidence and the perceived readability of the abstracts.
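
As a concrete illustration of the assignment procedure, below is a minimal sketch of balanced random assignment to the four author-study cells, assuming 75 authors per cell as registered; the seed, identifiers, and balanced-assignment approach are illustrative assumptions, not details taken from the study materials.

```python
import random

# A minimal sketch of balanced assignment to the four author-study cells,
# assuming 75 authors per cell (300 total); all names are illustrative.
SOURCES = ["original", "ai_generated"]      # dimension 1: abstract source
DISCLOSURE = ["disclosed", "undisclosed"]   # dimension 2: source disclosure

cells = [(s, d) for s in SOURCES for d in DISCLOSURE]  # 4 treatment cells
schedule = cells * 75                                  # 75 authors per cell
rng = random.Random(2025)  # fixed seed only to make the sketch reproducible
rng.shuffle(schedule)

assignments = {f"author_{i:03d}": cell for i, cell in enumerate(schedule, 1)}
```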
Experimental Design Details
Not available
Randomization Method
Authors: One of the three edited abstracts will be randomly selected for bonus payments.
Reviewers: One of the twenty comparison pairs will be randomly selected for bonus payments.
All randomizations are performed by a computer random number generator.
Randomization Unit
Authors: One of the three edited abstracts will be randomly selected for bonus payments.
Reviewers: One of the twenty comparison pairs will be randomly selected for bonus payments.
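
A minimal sketch of this bonus lottery, assuming a uniform draw with Python's standard pseudo-random number generator; the function and task identifiers are hypothetical:

```python
import random

# A minimal sketch of the bonus-payment lottery: one of an author's three
# edited abstracts, or one of a reviewer's twenty comparison pairs, is drawn
# uniformly at random. Names and the seeding scheme are illustrative.
def draw_bonus_task(task_ids: list[str], rng: random.Random) -> str:
    """Select one completed task uniformly at random for the bonus payment."""
    return rng.choice(task_ids)

rng = random.Random()  # in practice a recorded seed would make draws auditable
author_bonus = draw_bonus_task([f"abstract_{i}" for i in range(1, 4)], rng)
reviewer_bonus = draw_bonus_task([f"pair_{i}" for i in range(1, 21)], rng)
```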
Was the treatment clustered?
No

Experiment Characteristics

Sample size: planned number of clusters
Authors: 300 participants
Reviewers: 140 participants recruited so far
Sample size: planned number of observations
Authors: Each participant submits decisions for 3 abstracts.
Reviewers: Each participant submits decisions for 20 pairs of original and edited abstracts.
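
For reference, a minimal arithmetic sketch of the observation counts these figures imply (the reviewer figure reflects recruitment so far):

```python
# Observation counts implied by the registered figures (arithmetic check only).
authors, abstracts_per_author = 300, 3
reviewers, pairs_per_reviewer = 140, 20  # reviewers recruited so far

print(authors * abstracts_per_author)    # 900 author editing decisions
print(reviewers * pairs_per_reviewer)    # 2800 reviewer comparison decisions
```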
Sample size (or number of clusters) by treatment arms
Authors: Four treatments and 75 participants per treatment.
Reviewers: No treatment arms.
Minimum detectable effect size for main outcomes (accounting for sample design and clustering)
IRB

Institutional Review Boards (IRBs)

IRB Name
IRB, Ohio State University
IRB Approval Date
2024-11-11
IRB Approval Number
2024E1034