AI in the Newsroom: Experimental Evidence on the Effectiveness of AI-Generated Journalism

Last registered on September 09, 2025

Pre-Trial

Trial Information

General Information

Title
AI in the Newsroom: Experimental Evidence on the Effectiveness of AI-Generated Journalism
RCT ID
AEARCTR-0016594
Initial registration date
August 22, 2025


First published
August 22, 2025, 6:15 AM EDT


Last updated
September 09, 2025, 8:23 AM EDT


Locations

There is information in this trial unavailable to the public.

Primary Investigator

Affiliation
Johannes Kepler University Linz

Other Primary Investigator(s)

PI Affiliation
University of Hong Kong
PI Affiliation
Styria Media Group
PI Affiliation
Johannes Kepler University Linz

Additional Trial Information

Status
In development
Start date
2025-09-03
End date
2027-08-17
Secondary IDs
Prior work
This trial does not extend or rely on any prior RCTs.
Abstract
Generative AI is rapidly entering domains traditionally reserved for human expertise, including the creation of journalistic content. This development raises fundamental questions about its potential role in journalism. Our project provides field-experimental evidence from a real newsroom setting by partnering with a prominent regional newspaper. Readers are randomly assigned to news articles that are (a) written by journalists, (b) produced by large language models (LLMs), or (c) created through human-AI collaboration. All articles are based on identical scientific sources. We measure effects on article clicks, post-click engagement, and other related outcomes. In doing so, we offer causal insights into how human, AI, and hybrid authorship shape public attention to news.
External Link(s)

Registration Citation

Citation
Brottrager, Michael et al. 2025. "AI in the Newsroom: Experimental Evidence on the Effectiveness of AI-Generated Journalism." AEA RCT Registry. September 09. https://doi.org/10.1257/rct.16594-1.2
Sponsors & Partners

There is information in this trial unavailable to the public.
Experimental Details

Interventions

Intervention(s)
Readers are randomly assigned to news articles that are (a) written by journalists, (b) produced by large language models (LLMs), or (c) created through human-AI collaboration. All articles are based on identical scientific sources.
Intervention Start Date
2025-09-14
Intervention End Date
2027-08-17

Primary Outcomes

Primary Outcomes (end points)
Article clicks
Engagement time
Primary Outcomes (explanation)
See PAP.pdf for details.

Secondary Outcomes

Secondary Outcomes (end points)
See PAP.pdf for details.
Secondary Outcomes (explanation)
See PAP.pdf for details.

Experimental Design

Experimental Design
Readers are randomly assigned to news articles that are (a) written by journalists, (b) produced by large language models (LLMs), or (c) created through human-AI collaboration. All articles are based on identical scientific sources and follow a standardized editorial process. We measure effects on article clicks, post-click engagement, and other related outcomes.
Experimental Design Details
Not available
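
The registered estimating equations are specified in PAP.pdf and are not reproduced here. As a purely illustrative sketch, the snippet below shows one generic way to compare average outcomes across the three authorship conditions, with article fixed effects and standard errors clustered at the randomization unit (individual users). All column names (`clicked`, `arm`, `article_id`, `user_id`) and the specification itself are assumptions for illustration, not the registered model.

```python
# Purely illustrative sketch -- the registered estimating equations are in PAP.pdf.
# Assumed impression-level columns (hypothetical names): clicked (0/1),
# arm ("human", "ai", "hybrid"), article_id, user_id.
import pandas as pd
import statsmodels.formula.api as smf

df = pd.read_csv("impressions.csv")  # hypothetical data export

# Linear probability model of clicking, comparing authorship conditions,
# with article fixed effects and user-clustered standard errors.
model = smf.ols("clicked ~ C(arm, Treatment('human')) + C(article_id)", data=df)
result = model.fit(cov_type="cluster", cov_kwds={"groups": df["user_id"]})
print(result.summary().tables[1])
```

An analogous model with engagement time as the dependent variable, restricted to post-click observations, would follow the same pattern.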
Randomization Method
Stratified randomization by a computer. See PAP.pdf for details.
Randomization Unit
Individual users. See PAP.pdf for details.
Was the treatment clustered?
No
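
The randomization protocol itself is specified in PAP.pdf. As a minimal sketch only, the snippet below illustrates one common way to implement stratified, individual-level assignment: users are shuffled within each stratum and allocated to arms in the target proportions (here the on-site allocation shares listed under Experiment Characteristics). The stratum labels, function name, and the handling of rounding remainders are hypothetical.

```python
# Minimal sketch of stratified, individual-level randomization -- not the
# registered implementation (see PAP.pdf). Stratum labels are hypothetical.
import numpy as np
import pandas as pd

ON_SITE_SHARES = {  # on-site allocation shares from the registration
    "H-H": 0.22, "H-A": 0.17, "H-AUG": 0.17, "A-A": 0.22, "AUG-AUG": 0.22,
}

def stratified_assignment(users: pd.DataFrame, shares: dict, seed: int = 42) -> pd.Series:
    """Assign each user to an arm, enforcing the target shares within each stratum."""
    rng = np.random.default_rng(seed)
    arms = list(shares)
    probs = np.array(list(shares.values()))
    out = pd.Series(index=users.index, dtype=object)
    for _, idx in users.groupby("stratum").groups.items():
        idx = rng.permutation(np.asarray(idx))        # random order within the stratum
        counts = np.floor(probs * len(idx)).astype(int)
        counts[0] += len(idx) - counts.sum()          # rounding remainder -> first arm (simplification)
        out.loc[idx] = np.repeat(arms, counts)
    return out

# Hypothetical strata: subscribers vs. other on-site users
users = pd.DataFrame({
    "user_id": range(200),
    "stratum": ["subscriber"] * 100 + ["other_on_site"] * 100,
})
users["arm"] = stratified_assignment(users, ON_SITE_SHARES)
print(users.groupby("stratum")["arm"].value_counts())
```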

Experiment Characteristics

Sample size: planned number of clusters
Individual randomization. See PAP.pdf for details.
Sample size: planned number of observations
~232,700 user-article impressions (click analysis, at least 10 articles) and ~53,540 article views (engagement analysis, after clicks, at least 10 articles). See PAP.pdf for details.
Sample size (or number of clusters) by treatment arms
The treatment allocation shares are as follows (H = human, A = AI, AUG = human-AI collaboration; the first element of each label refers to the headline, the second to the article):

On-site traffic (subscribers + other on-site users):
H-H: 22%
H-A: 17%
H-AUG: 17%
A-A: 22%
AUG-AUG: 22%

External traffic (headline fixed to human):
H-H: 33.3%
H-A: 33.3%
H-AUG: 33.3%

The final number of observations will depend on the realized website traffic during the experiment. See PAP.pdf for details.
Minimum detectable effect size for main outcomes (accounting for sample design and clustering)
For engagement, we can detect relatively small effects: with around 10 articles, the minimum detectable effect size is roughly a 2-3% increase in average time on page relative to the control condition. For clicks, where we rely only on on-site traffic and cannot pool in external traffic, the design is less sensitive: with 10 articles, we can detect changes of about 10% relative to the control condition. See PAP.pdf for details.
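
The registered power calculations are in PAP.pdf. As a rough back-of-the-envelope sketch only, the snippet below shows how a minimum detectable effect for clicks could be approximated for a single two-arm comparison. The baseline click-through rate, the 80% power / 5% two-sided alpha convention, and the neglect of clustering and article-level stratification are all assumptions, so the output will not exactly reproduce the figures stated above.

```python
# Back-of-the-envelope MDE sketch for the click outcome -- not the registered
# power analysis (see PAP.pdf). Baseline rate and test settings are assumptions.
import numpy as np
from statsmodels.stats.power import NormalIndPower

baseline_ctr = 0.02                 # assumed control click-through rate (hypothetical)
n_control = int(232_700 * 0.22)     # impressions in the H-H (control) arm, on-site share
n_treated = int(232_700 * 0.22)     # e.g. the A-A arm

# Detectable effect size in Cohen's h units at 80% power, 5% two-sided alpha.
h = NormalIndPower().solve_power(effect_size=None, nobs1=n_control, alpha=0.05,
                                 power=0.8, ratio=n_treated / n_control,
                                 alternative="two-sided")

# Invert Cohen's h = 2*arcsin(sqrt(p2)) - 2*arcsin(sqrt(p1)) to get the treated rate.
p_treated = np.sin(np.arcsin(np.sqrt(baseline_ctr)) + h / 2) ** 2
print(f"Detectable CTR {p_treated:.4f} vs. baseline {baseline_ctr:.4f} "
      f"({(p_treated / baseline_ctr - 1) * 100:.1f}% relative change)")
```

For the engagement outcome, an analogous calculation with statsmodels' TTestIndPower would return the detectable difference in standard-deviation units, which can then be expressed relative to an assumed mean time on page.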
IRB

Institutional Review Boards (IRBs)

IRB Name
Human Research Ethics Committee at HKU (HREC)
IRB Approval Date
2025-08-18
IRB Approval Number
EA250478
Analysis Plan

There is information in this trial unavailable to the public.