Effects of Fact-Checker Label and National Stereotypes on Perceived News Credibility

Last registered on October 22, 2025

Pre-Trial

Trial Information

General Information

Title
Effects of Fact-Checker Label and National Stereotypes on Perceived News Credibility
RCT ID
AEARCTR-0016950
Initial registration date
October 15, 2025

First published
October 22, 2025, 1:13 PM EDT

Locations

Region

Primary Investigator

Affiliation
Nova SBE

Other Primary Investigator(s)

PI Affiliation
Nova School of Business and Economics

Additional Trial Information

Status
In development
Start date
2025-10-16
End date
2025-12-31
Secondary IDs
Prior work
This trial does not extend or rely on any prior RCTs.
Abstract
This study investigates whether people trust fact-checks differently depending on whether they are labeled as coming from AI or from humans, and how this interacts with preconceptions about corruption. In a survey experiment, participants are randomly assigned to read a real news story about corruption in either Sweden or Rwanda, with each story designed to either confirm or challenge common expectations. Participants first report how believable they find the story. They are then informed that the story was fact-checked and confirmed as true, with the fact-check attributed either to an AI system or to a team of human fact-checkers. Participants then rate their trust in the fact-check. The study follows a between-subjects design to test how source and content influence perceived credibility. Primary outcomes are belief in the story and trust in the fact-check. Secondary outcomes include familiarity with the story, perceived alignment with expectations, and familiarity with AI. This research aims to better understand how source labels and national stereotypes affect trust in fact-checking.
External Link(s)

Registration Citation

Citation
Tavares, Jose Albuquerque and Jonathan Casanova. 2025. "Effects of Fact-Checker Label and National Stereotypes on Perceived News Credibility." AEA RCT Registry. October 22. https://doi.org/10.1257/rct.16950-1.0
Experimental Details

Interventions

Intervention(s)
Participants are exposed to a real news story about corruption in either Sweden or Rwanda. The story is designed to either confirm or challenge common expectations about corruption in that country. After reading, participants are told the story was fact-checked and confirmed as true, with the source randomly attributed to either an AI system or human fact-checkers.
Intervention (Hidden)
The study follows a 2x2x2 between-subjects design. Participants are randomly assigned to one of eight conditions varying by: (1) country in the story (Sweden or Rwanda), (2) story type (confirming or disconfirming expectations), and (3) fact-checker label (AI or human). The interventions consist of exposure to different combinations of these conditions. Stories, whether aligned with prior expectations or not, are based on real, truthful, verified news reports. The primary manipulations are the story content (confirming vs. disconfirming) and the fact-checker label. Outcomes are measured using Likert scales on belief in the story and trust in the fact-checker.
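As a purely illustrative sketch (not part of the registered survey materials), the eight between-subjects cells implied by this design can be enumerated as follows; the factor labels below are placeholders chosen here for exposition.

    from itertools import product

    # Hypothetical labels for the three registered factors (2 x 2 x 2 design).
    countries = ["Sweden", "Rwanda"]
    story_types = ["confirming", "disconfirming"]
    fact_checker_labels = ["AI", "human"]

    # Enumerate the eight between-subjects conditions.
    cells = product(countries, story_types, fact_checker_labels)
    for i, (country, story, label) in enumerate(cells, start=1):
        print(f"Condition {i}: {country} / {story} story / {label} fact-check")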
Intervention Start Date
2025-10-16
Intervention End Date
2025-12-31

Primary Outcomes

Primary Outcomes (end points)
• Belief in the news story (measured on a Likert scale).
• Trust in the fact-check confirmation (measured on a Likert scale).


Primary Outcomes (explanation)
Belief in the story is measured after reading the story but before exposure to the fact-check label. Trust in the fact-check is measured after exposure to the source label (AI or human). Both are collected using Likert-scale items ranging from low to high agreement or trust.

Secondary Outcomes

Secondary Outcomes (end points)
• Familiarity with the topic under scrutiny
• Familiarity with AI systems

Secondary Outcomes (explanation)
Familiarity with the topic under scrutiny and familiarity with AI systems are measured via self-report items at the end of the survey. These secondary outcomes may be used to explore moderation or mediation effects.

Experimental Design

Experimental Design
This study uses a between-subjects survey experiment with random assignment to one of eight conditions varying in story country (Sweden or Rwanda), story type (confirming or disconfirming expectations), and fact-checker label (AI or human). Participants are recruited from a university student population and complete the survey online.
Experimental Design Details
The experimental design follows a 2 (country: Sweden/Rwanda) × 2 (stereotype alignment: confirming/disconfirming) × 2 (fact-checker label: AI/human) between-subjects factorial design. Each participant is randomly assigned to a single condition. The story content and fact-check label are randomly assigned via Qualtrics' randomizer. The survey includes demographic questions, pre- and post-intervention outcome measures, and a debriefing. The survey is hosted on Qualtrics and conducted at Nova SBE.
Randomization Method
Randomization is automated through Qualtrics' built-in randomizer feature. Assignment to experimental conditions occurs upon survey entry, ensuring equal probability across the eight conditions.
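For intuition only, the equal-probability, individual-level assignment described above can be approximated in a few lines of Python; the actual assignment is performed by Qualtrics' built-in randomizer, not by code like this.

    import random
    from collections import Counter

    random.seed(42)                  # reproducibility of this illustration only
    N = 320                          # planned number of observations
    conditions = list(range(1, 9))   # the eight experimental cells

    # Each entering participant is independently assigned with probability 1/8.
    assignments = [random.choice(conditions) for _ in range(N)]

    # With simple (non-blocked) randomization, cell sizes are only roughly equal (~40 each).
    print(Counter(assignments))

Qualtrics' randomizer can additionally be configured to present conditions evenly, which balances cell sizes more tightly than the simple randomization sketched here.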
Randomization Unit
The unit of randomization is the individual participant.
Was the treatment clustered?
No

Experiment Characteristics

Sample size: planned number of clusters
None — individual-level randomization.
Sample size: planned number of observations
Approximately 320 participants, all university students or similar respondents, recruited through classroom sessions or online channels.
Sample size (or number of clusters) by treatment arms
The study uses 8 experimental groups (2x2x2 design), with approximately 40 participants per condition.
Minimum detectable effect size for main outcomes (accounting for sample design and clustering)
-
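No minimum detectable effect size is registered. For orientation only, a back-of-the-envelope calculation under assumed conventional parameters (two-sided alpha = 0.05, 80% power, equal group sizes; these values are assumptions, not registered choices) gives approximate standardized MDEs for two comparisons implied by the planned cell sizes.

    from scipy.stats import norm

    # Assumed (not registered) test parameters.
    alpha, power = 0.05, 0.80
    z_alpha = norm.ppf(1 - alpha / 2)   # ~1.96
    z_beta = norm.ppf(power)            # ~0.84

    def mde(n_per_group):
        # Approximate MDE in standard-deviation units for a two-sided,
        # two-sample mean comparison with equal group sizes.
        return (z_alpha + z_beta) * (2 / n_per_group) ** 0.5

    print(f"Per-cell contrast (n = 40 per group):     d = {mde(40):.2f}")   # ~0.63
    print(f"Main-effect contrast (n = 160 per group): d = {mde(160):.2f}")  # ~0.31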
IRB

Institutional Review Boards (IRBs)

IRB Name
Institutional Review Board of Nova SBE
IRB Approval Date
2025-10-16
IRB Approval Number
N/A

Post-Trial

Post Trial Information

Study Withdrawal

Intervention

Is the intervention completed?
No
Data Collection Complete
Data Publication

Data Publication

Is public data available?
No

Program Files

Program Files
Reports, Papers & Other Materials

Relevant Paper(s)

Reports & Other Materials