AI-Generated Political Information and Democracy

Last registered on December 26, 2025

Pre-Trial

Trial Information

General Information

Title
AI-Generated Political Information and Democracy
RCT ID
AEARCTR-0017449
Initial registration date
December 10, 2025

Initial registration date is when the trial was registered. It corresponds to when the registration was submitted to the Registry to be reviewed for publication.

First published
December 26, 2025, 1:58 AM EST

First published corresponds to when the trial was first made public on the Registry after being reviewed.

Locations

Region

Primary Investigator

Affiliation
Australian National University

Other Primary Investigator(s)

PI Affiliation
Australian National University
PI Affiliation
Australian National University

Additional Trial Information

Status
Completed
Start date
2025-12-08
End date
2025-12-19
Secondary IDs
Prior work
This trial does not extend or rely on any prior RCTs.
Abstract
This pre-analysis plan outlines the research design and analytical strategy for a vignette survey experiment examining how the use of artificial intelligence (AI) affects public perceptions of election integrity, trust in investigating authorities, confidence in elections, and support for democracy. The experiment is embedded within the December 2025 ANUpoll conducted online with approximately 3,500 adult Australian respondents.
External Link(s)

Registration Citation

Citation
Biddle, Nicholas, Svitlana Chernykh, and Constanza Sanhueza Petrarca. 2025. "AI-Generated Political Information and Democracy." AEA RCT Registry. December 26. https://doi.org/10.1257/rct.17449-1.0
Experimental Details

Interventions

Intervention(s)
This study seeks to investigate the following research questions:
1. How and under which circumstances does AI-generated political information affect citizens’ perceptions of elections and democracy?
2. What can be done to mitigate the negative impact of AI on democracy?
The experimental design presented in this pre-analysis plan manipulates three key dimensions associated with AI to assess their effects on democratic perceptions: (1) the investigating authority, (2) the prevalence of AI-generated false information, and (3) voter ability to identify AI-generated political content.
The experiment employs a 3×3×3 factorial between-subjects design. Respondents will be randomly assigned to one of the 27 experimental conditions. The vignette will describe a hypothetical scenario in which the investigating authority examined the use of AI-generated content during an election campaign and reached conclusions about its prevalence and voters' ability to identify such content.
After reading the vignette, respondents are asked a series of questions measuring satisfaction with democracy, trust in the findings about AI-generated political content, trust in election outcomes, perceived election legitimacy, regulatory preferences regarding AI, and concerns about comparable situations arising in Australia.
Intervention (Hidden)
This study investigates two fundamental questions about the relationship between artificial intelligence and democratic governance. First, how and under which circumstances does AI-generated political information affect citizens' perceptions of elections and democracy? Second, what can be done to mitigate the negative impact of AI on democracy? Understanding whether particular institutional arrangements or contextual factors can buffer against democratic harm is essential for developing effective regulatory responses.

To address these questions, the experimental design manipulates three theoretically motivated dimensions to assess their independent and interactive effects on democratic perceptions. The first dimension examines the investigating authority: who conducts the investigation into AI-generated content may significantly affect public trust in the findings. We compare three types of authorities representing different institutional positions: electoral management bodies (the Electoral Commission), civil society organizations (independent fact-checking organizations), and private sector platforms (a social media platform). The second dimension manipulates the prevalence of AI-generated false information. We vary whether false information "dominated" online content (high prevalence), was found in "a moderate amount" (moderate prevalence), or was "only minimally found" (low prevalence). The third dimension examines voter identification ability: citizens' capacity to recognize AI-generated content may serve as a protective factor against democratic harm. We manipulate whether "almost no voters," "some voters," or "most voters" could identify AI-generated political content.

The experiment employs a 3×3×3 factorial between-subjects design, yielding 27 unique experimental conditions. This full factorial design allows us to estimate main effects for each dimension as well as all two-way and three-way interactions, providing insight into how these factors operate independently and in combination. Respondents are randomly assigned with equal probability to one of the 27 experimental conditions using the survey platform's randomization function; randomization occurs at the individual level after respondents complete baseline demographic and attitudinal questions.
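To make the assignment procedure concrete, the following is a minimal sketch in Python of equal-probability assignment to the 27 cells. It is an illustration only, not the survey platform's actual implementation, and the short factor labels are placeholders for the full vignette phrasings listed below.

    import random

    # The three manipulated dimensions of the 3x3x3 factorial design.
    AUTHORITIES = ["electoral_commission", "fact_checkers", "social_media_platform"]
    PREVALENCE = ["high", "moderate", "low"]
    IDENTIFICATION = ["low", "moderate", "high"]

    def assign_condition():
        # Drawing each factor independently and uniformly yields equal
        # probability (1/27) for every cell of the full factorial.
        return (random.choice(AUTHORITIES),
                random.choice(PREVALENCE),
                random.choice(IDENTIFICATION))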

The experimental stimulus consists of a hypothetical vignette describing a post-election investigation into AI-generated content. The vignette structure holds constant the basic scenario—a federal election in a hypothetical country followed by an investigation—while systematically varying the three experimental dimensions.
The vignette reads:
"We want to ask you about a hypothetical situation about the use of AI in election campaigns. Imagine that a federal election took place in a hypothetical country. After the election, [INVESTIGATING AUTHORITY] conducted an investigation into the use of AI to generate political content during the election campaign. The investigation found that AI-generated false information [PREVALENCE LEVEL]. Additionally, the investigation revealed that [VOTER IDENTIFICATION LEVEL]."
Quota allocations:
1) <Investigating authority>:
- the Electoral Commission – 33%
- independent fact-checking organizations – 33%
- a social media platform – 33%
2) <Prevalence level>:
- dominated online content related to the election – 33%
- was found in a moderate amount of online content related to the election – 33%
- was only minimally found in online content related to the election – 33%
3) <Voter identification level>:
- almost no voters could identify AI-generated political content – 33%
- some voters could identify AI-generated political content – 33%
- most voters could identify AI-generated political content – 33%
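As an illustration of how the allocated phrasings slot into the vignette template, here is a short sketch; the function name is ours, and the template is abbreviated to the varying portion of the vignette above.

    TEMPLATE = ("After the election, {authority} conducted an investigation into "
                "the use of AI to generate political content during the election "
                "campaign. The investigation found that AI-generated false "
                "information {prevalence}. Additionally, the investigation "
                "revealed that {identification}.")

    def build_vignette(authority, prevalence, identification):
        # Each argument is one of the three quota-allocated phrasings above, e.g.
        # build_vignette("the Electoral Commission",
        #                "dominated online content related to the election",
        #                "most voters could identify AI-generated political content")
        return TEMPLATE.format(authority=authority, prevalence=prevalence,
                               identification=identification)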

This hypothetical framing allows us to isolate causal effects while avoiding potential confounds from respondents' prior knowledge or partisan attachments to real elections.

The experiment is embedded within the December 2025 ANUpoll. This wave of the survey series will be conducted on the Online Research Unit's (ORU's) Australian Consumer Panel. The survey commenced with pilot data collection of around 70 respondents on 9 December 2025. Data collection is expected to be completed by mid-December 2025, with an eventual sample size of around 3,500 respondents.
The target sample comprises 1,000 respondents aged 18 to 24 years and 2,500 respondents aged 25 years and over.


Intervention Start Date
2025-12-08
Intervention End Date
2025-12-19

Primary Outcomes

Primary Outcomes (end points)
Following exposure to the experimental vignette, respondents complete six outcome measures assessing democratic attitudes and regulatory preferences, all using 4-point scales. These measures capture: satisfaction with democracy (C18), trust in the investigation findings about AI-generated content (C19), trust in election outcomes (C20), trust in election legitimacy (C21), support for AI regulation (C22), and concerns about similar situations in Australia (C23). This range of outcomes enables us to distinguish between general democratic satisfaction, specific trust dimensions, regulatory preferences, and contextual concerns, allowing us to test whether experimental effects vary across these different types of democratic attitudes.
Primary Outcomes (explanation)

Secondary Outcomes

Secondary Outcomes (end points)
Secondary Outcomes (explanation)

Experimental Design

Experimental Design
This study employs a 3×3×3 factorial between-subjects design (27 conditions, n≈3,500) embedded in the December 2025 ANUpoll. Respondents read a hypothetical vignette about a post-election investigation into AI-generated content, with three manipulated dimensions: (1) investigating authority (Electoral Commission/independent fact-checkers/social media platform), (2) prevalence of AI-generated false information (high/moderate/low), and (3) voter ability to identify AI content (low/moderate/high).
Experimental Design Details
Randomization Method
Computerized randomization using the survey platform's built-in randomization function. Respondents are randomly assigned with equal probability to one of 27 experimental conditions (3×3×3 factorial design) at the individual level.
Randomization Unit
Individual
Was the treatment clustered?
No

Experiment Characteristics

Sample size: planned number of clusters
Not applicable - individual-level randomization with no clustering.
Sample size: planned number of observations
3,500 survey participants
Sample size (or number of clusters) by treatment arms
Approximately 130 respondents per treatment arm (3,500 respondents / 27 conditions ≈ 130).
Minimum detectable effect size for main outcomes (accounting for sample design and clustering)
Minimum Detectable Effect Size: d = 0.35 (or 35% of one standard deviation). All outcome variables are measured on 4-point scales (ranging from 1 to 4).
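For reference, the stated MDE is consistent with a standard two-sample normal approximation at roughly 130 respondents per condition. The sketch below assumes a two-sided alpha of 0.05 and 80% power, conventional values that the registration does not state explicitly.

    from math import sqrt
    from scipy.stats import norm

    n_per_arm = 130            # ~3,500 respondents / 27 conditions
    alpha, power = 0.05, 0.80  # assumed conventional values

    # Two-sample minimum detectable effect in SD units (Cohen's d):
    # d = (z_{1 - alpha/2} + z_{power}) * sqrt(2 / n)
    mde = (norm.ppf(1 - alpha / 2) + norm.ppf(power)) * sqrt(2 / n_per_arm)
    print(round(mde, 2))       # 0.35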
Supporting Documents and Materials

Documents

Document Name
PAP - AI-Generated Political Information and Democracy
Document Type
other
Document Description
This pre-analysis plan outlines the research design and analytical strategy for a vignette survey experiment examining how the use of AI affects public perceptions of election integrity, trust in investigating authorities, confidence in elections and support for democracy. The experiment is embedded within the December 2025 ANUpoll conducted online with approximately 3,500 adult Australian respondents.
File
PAP - AI-Generated Political Information and Democracy

MD5: 4c516ba893cf89c6f9a74b13bcf09e8d

SHA1: ca650eac98f21edf61629d19ab92ef11af744685

Uploaded At: December 10, 2025

IRB

Institutional Review Boards (IRBs)

IRB Name
ANU Human Research Ethics Committee
IRB Approval Date
2025-09-12
IRB Approval Number
2021/430

Post-Trial

Post Trial Information

Study Withdrawal

There is information in this trial unavailable to the public.

Intervention

Is the intervention completed?
No
Data Collection Complete
Data Publication

Data Publication

Is public data available?
No

Program Files

Program Files
Reports, Papers & Other Materials

Relevant Paper(s)

Reports & Other Materials