Does AI Deepen the Divide? Examining the Effects of AI-Generated Content on Confirmation Bias and Opinion Polarization

Last registered on April 29, 2026

Trial Information

General Information

Title
Does AI Deepen the Divide? Examining the Effects of AI-Generated Content on Confirmation Bias and Opinion Polarization
RCT ID
AEARCTR-0018421
Initial registration date
April 24, 2026

First published
April 29, 2026, 3:38 PM EDT

Locations

Location information in this trial is not available to the public.

Primary Investigator

Affiliation
The Hong Kong University of Science and Technology (Guangzhou)

Other Primary Investigator(s)

PI Affiliation
The Hong Kong University of Science and Technology (Guangzhou)

Additional Trial Information

Status
In development
Start date
2026-04-29
End date
2026-05-07
Secondary IDs
Prior work
This trial does not extend or rely on any prior RCTs.
Abstract
This study examines whether AI-generated content affects confirmation bias in information processing. Using an online experiment, it compares responses to AI-generated and human-written content across controversial social issues. The analysis focuses on two dimensions of confirmation bias, namely information interpretation and information selection, and considers whether source-related features moderate these effects. The study aims to provide evidence on whether AI-generated content mitigates or amplifies confirmation bias and, more broadly, on its potential role in shaping opinion polarization.
External Link(s)

Registration Citation

Citation
Lyu, Aoqing, and Xu Zhang. 2026. "Does AI Deepen the Divide? Examining the Effects of AI-Generated Content on Confirmation Bias and Opinion Polarization." AEA RCT Registry. April 29. https://doi.org/10.1257/rct.18421-1.0
Experimental Details

Interventions

Intervention(s)
The intervention is an incentivized online information-exposure experiment on controversial social issues. Participants are exposed to both AI-generated and human-written content, as well as to supportive and opposing viewpoints, with the order of the six topics randomized. The study uses a Bayesian Truth Serum (BTS) incentive scheme to encourage truthful reporting.
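
For context, the BTS mechanism rewards answers that are "surprisingly common" relative to the sample's predictions, plus accurate predictions of the answer distribution. Below is a minimal Python sketch of the standard scoring rule (Prelec 2004); the weight alpha, the variable names, and the mapping from scores to payments are illustrative assumptions, as the registry does not spell out these details.

    import numpy as np

    def bts_scores(answers, predictions, alpha=1.0):
        # answers: length-n integer array; answers[i] is respondent i's chosen option (0..K-1).
        # predictions: (n, K) array; row i is respondent i's predicted distribution
        #   of answers in the sample (rows sum to 1).
        # alpha > 0 weights the prediction score (assumed value, not from the registry).
        n, K = predictions.shape
        eps = 1e-9  # guard against log(0)
        xbar = np.bincount(answers, minlength=K) / n  # empirical answer shares
        log_ybar = np.log(np.clip(predictions, eps, 1.0)).mean(axis=0)  # log geometric mean of predictions
        # Information score: log(actual share / collectively predicted share) of own answer
        info = np.log(np.maximum(xbar[answers], eps)) - log_ybar[answers]
        # Prediction score: KL-style accuracy of own prediction against empirical shares
        pred = alpha * (xbar * (np.log(np.clip(predictions, eps, 1.0))
                                - np.log(np.maximum(xbar, eps)))).sum(axis=1)
        return info + pred

Payments would then be set as an increasing function of these scores, which in Prelec's framework makes truthful answering and honest prediction an equilibrium.
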
Intervention Start Date
2026-04-29
Intervention End Date
2026-05-07

Primary Outcomes

Primary Outcomes (end points)
The primary outcome variables are: (1) title selection, measured by which of the four available titles (AI-supportive, AI-opposing, human-supportive, human-opposing) a participant chooses for each topic; and (2) post-exposure evaluations and attitudes, measured after article reading, after title selection, and after the final related-article stage.
Primary Outcomes (explanation)

Secondary Outcomes

Secondary Outcomes (end points)
Secondary Outcomes (explanation)

Experimental Design

Experimental Design
This study is an incentivized online randomized experiment examining how AI-generated versus human-written information affects confirmation bias and opinion polarization across controversial social issues. The design combines within-subject and between-subject variation: each participant is exposed to content that varies by source and stance (within-subject), while source-related presentation features are assigned between subjects and balanced across treatment conditions. Truthful reporting is incentivized using Bayesian Truth Serum (BTS).

The experiment consists of two main components. In the first, participants are exposed to multiple articles that vary in source and viewpoint, with randomized presentation order. In the second, participants make choices among titles that vary along similar dimensions and are later exposed to related content. The design additionally varies whether source-related features remain aligned across stages of exposure. Topic order is randomized, and treatment assignments are balanced across participants.
Experimental Design Details
Not available
Randomization Method
Randomization was implemented by computer. Topic order and balanced treatment assignments were pre-generated in Excel using random numbers. Participants were then automatically assigned by Credamo to one of the experimental conditions, with all subsequent randomized elements determined by the pre-generated assignment.
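
As an illustration, the pre-generated schedule could be reproduced along the following lines (a Python sketch of the Excel procedure described above; the seed, variable names, and table layout are assumptions):

    import numpy as np

    rng = np.random.default_rng(20260429)  # illustrative seed

    arms = ["labeled-consistent", "labeled-inconsistent",
            "unlabeled-consistent", "unlabeled-inconsistent"]
    n_per_arm = 128

    # Exactly balanced arm assignment: each arm appears 128 times, order shuffled
    assignment = np.repeat(arms, n_per_arm)
    rng.shuffle(assignment)

    # An independent random order of the six topics for each participant
    topic_orders = np.array([rng.permutation(6) for _ in range(len(assignment))])

Each participant arriving on Credamo would then be mapped to one row of this pre-generated table, which fixes both their treatment condition and their topic order.
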
Randomization Unit
individual
Was the treatment clustered?
No

Experiment Characteristics

Sample size: planned number of clusters
512 individual participants
Sample size: planned number of observations
512 individual participants
Sample size (or number of clusters) by treatment arms
128 participants in labeled-consistent, 128 in labeled-inconsistent, 128 in unlabeled-consistent, and 128 in unlabeled-inconsistent conditions. Within each treatment arm, participants are exposed to both AI-generated and human-written content and to both supportive and opposing stances.
Minimum detectable effect size for main outcomes (accounting for sample design and clustering)
Power calculations are based on two-sided two-sample t-tests with a significance level of 0.05 and power of 0.80. For pooled main between-subject comparisons that combine two treatment cells (e.g., labeled versus unlabeled, or consistent versus inconsistent), with 256 participants per group, the minimum detectable effect size is approximately 0.25 standard deviations (d=0.248). For comparisons between individual treatment cells, with 128 participants per group, the minimum detectable effect size is approximately 0.35 standard deviations (d=0.352). As treatment is assigned at the individual level, no clustering adjustment is required.
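
The reported minimum detectable effects can be verified with standard power software; for example, using the TTestIndPower class from statsmodels in Python (the call is standard statsmodels API; only the rounded values in the comment are approximate):

    from statsmodels.stats.power import TTestIndPower

    power = TTestIndPower()
    # Pooled between-subject comparison: 256 participants per group
    mde_pooled = power.solve_power(nobs1=256, alpha=0.05, power=0.80,
                                   ratio=1.0, alternative='two-sided')
    # Single treatment-cell comparison: 128 participants per group
    mde_cell = power.solve_power(nobs1=128, alpha=0.05, power=0.80,
                                 ratio=1.0, alternative='two-sided')
    print(round(mde_pooled, 3), round(mde_cell, 3))  # roughly 0.25 and 0.35
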
IRB

Institutional Review Boards (IRBs)

IRB Name
The Hong Kong University of Science and Technology (Guangzhou)
IRB Approval Date
2025-11-14
IRB Approval Number
HKUST(GZ)-HSP-2025-0371