Generative AI and Cognitive Bias: Online Experiment on Fake News

Last registered on February 12, 2025

Pre-Trial

Trial Information

General Information

Title
Generative AI and Cognitive Bias: Online Experiment on Fake News
RCT ID
AEARCTR-0015361
Initial registration date
February 09, 2025

First published
February 12, 2025, 12:14 PM EST

Locations

Region

Primary Investigator

Affiliation
Kyoto University of Advanced Science

Other Primary Investigator(s)

Additional Trial Information

Status
Completed
Start date
2025-02-07
End date
2025-02-12
Secondary IDs
Prior work
This trial does not extend or rely on any prior RCTs.
Abstract
This study examines the effect of generative AI use on cognitive biases. Specifically, it empirically investigates the extent to which judgments on fake news are influenced by the use of generative AI, such as ChatGPT.
Generative AI is a technology that uses large language models to automatically generate new content, including text, images, and audio, from existing data. ChatGPT, in particular, generates text responses to questions and continuously improves its dialogue accuracy by learning from new data. Businesses and governmental organizations are considering adopting this technology to improve work efficiency and generate new ideas, and its use is expected to grow.
However, generative AI is known to suffer from "hallucination," producing plausible but non-existent data or information. AI-generated outputs may therefore contain inaccuracies, which poses a problem for practical applications. Moreover, the use of generative AI could lead users to unintentionally create and spread misinformation on social media.
If humans could process information correctly, the impact of fake news would be minimal. However, cognitive biases make it difficult for individuals to detect misinformation accurately. Several reasons can explain this difficulty, such as confirmation bias, where individuals focus only on information consistent with their prior beliefs, or inattentional blindness, where critical information is overlooked. Traditional research has often employed nudging techniques to correct such cognitive biases (Pennycook et al., 2021). However, the effectiveness of nudges is generally low, averaging around 2% (DellaVigna and Linos, 2022).
Although generative AI has hallucination issues, it may also help reduce cognitive biases when used effectively. Previous studies indicate that AI can sometimes outperform humans in systematically error-prone tasks (Chen et al., 2023). Additionally, since generative AI enables interactive use, its personalized nature may result in higher acceptance by users, making it potentially more effective than traditional nudges.
Therefore, this study empirically examines whether using generative AI improves individuals' ability to discern fake news accurately.
External Link(s)

Registration Citation

Citation
Ishihara, Takunori. 2025. "Generative AI and Cognitive Bias: Online Experiment on Fake News." AEA RCT Registry. February 12. https://doi.org/10.1257/rct.15361-1.0
Experimental Details

Interventions

Intervention(s)
ChatGPT-generated responses to each news item
ChatGPT usage
Intervention (Hidden)
Intervention Start Date
2025-02-07
Intervention End Date
2025-02-12

Primary Outcomes

Primary Outcomes (end points)
Correct answer rate
Primary Outcomes (explanation)

Secondary Outcomes

Secondary Outcomes (end points)
Secondary Outcomes (explanation)

Experimental Design

Experimental Design
This study will conduct an online survey experiment in the winter of 2025, with an expected sample size of approximately 1,200 participants. The subjects will be 1,200 individuals registered with a web survey company, selected based on the following criteria: Japanese residents aged 20–65, with equal representation of genders and evenly distributed age groups. Individuals who have never used generative AI will be excluded from the survey.
Participants will be randomly assigned to one of three groups and will complete a questionnaire consisting of the following items:
1. Questions about generative AI usage
2. A set of ten true-or-false questions, including fake news items
3. Questions about demographic attributes
Respondents will receive a participation reward of 100 yen. Additionally, they will earn a 10-yen bonus for each correct answer in the true-or-false section, with a maximum bonus of 100 yen.
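The incentive scheme above can be sketched as a simple function; the function name and parameter defaults are illustrative, not part of the registration.

```python
def payment_yen(num_correct: int, base: int = 100,
                per_correct: int = 10, bonus_cap: int = 100) -> int:
    """Participation reward of 100 yen plus a 10-yen bonus per correct
    answer, with the bonus capped at 100 yen (i.e., 10 questions)."""
    return base + min(num_correct * per_correct, bonus_cap)
```

With ten questions the cap never binds, so total payments range from 100 yen (no correct answers) to 200 yen (all correct).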

In the true-or-false task, participants will assess whether each news item is real or fake. These items are derived from fact-check articles provided by the Japan Fact-Check Center (https://www.factcheckcenter.jp/). Participants will answer each question using a three-choice format (True / Unsure / False). The order of the questions will be randomized.
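The primary outcome is the correct answer rate over these items. A minimal scoring sketch follows; the rule that "Unsure" counts as incorrect is an assumption not stated in the registration, chosen because only correct True/False answers plausibly earn the bonus.

```python
def correct_answer_rate(responses, truths):
    """Share of items where the response matches the ground-truth label.

    responses: list of "True" / "Unsure" / "False" answers
    truths:    list of "True" / "False" ground-truth labels
    Assumption: "Unsure" never matches a truth label, so it scores as incorrect.
    """
    assert len(responses) == len(truths)
    hits = sum(r == t for r, t in zip(responses, truths))
    return hits / len(truths)
```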
Participants will be assigned to one of three groups, each subjected to a different intervention condition:
1. Control Group (C): No AI intervention. Participants will simply judge whether the news is fake or not.
2. AI Information Group (T1): Participants will be shown ChatGPT-generated responses to each news item before making their judgment. The provided information will be factually accurate.
3. AI Usage Group (T2): Participants will personally interact with ChatGPT and use its responses to determine whether the news is fake or not. They will also submit their input and the AI's response.
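The individual-level randomization into the three arms could be implemented as follows; this is a sketch, and the seed, function name, and round-robin balancing are illustrative assumptions rather than the registered procedure.

```python
import random

def assign_arms(participant_ids, arms=("C", "T1", "T2"), seed=2025):
    """Randomly assign individuals to three equally sized treatment arms.

    Shuffles the participant list with a fixed seed for reproducibility,
    then deals participants round-robin into arms so sizes are balanced.
    """
    rng = random.Random(seed)
    ids = list(participant_ids)
    rng.shuffle(ids)
    return {pid: arms[i % len(arms)] for i, pid in enumerate(ids)}
```

With 1,200 participants this yields exactly 400 individuals per arm.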
Experimental Design Details
Randomization Method
Randomization done in office by a computer
Randomization Unit
individual
Was the treatment clustered?
No

Experiment Characteristics

Sample size: planned number of clusters
1
Sample size: planned number of observations
1,200
Sample size (or number of clusters) by treatment arms
1,200 individuals in total (approximately 400 per arm)
Minimum detectable effect size for main outcomes (accounting for sample design and clustering)
Supporting Documents and Materials

There is information in this trial unavailable to the public.
IRB

Institutional Review Boards (IRBs)

IRB Name
Kyoto University of Advanced Science
IRB Approval Date
2025-01-10
IRB Approval Number
24EB02

Post-Trial

Post Trial Information

Study Withdrawal

There is information in this trial unavailable to the public.

Intervention

Is the intervention completed?
No
Data Collection Complete
Data Publication

Data Publication

Is public data available?
No

Program Files

Program Files
Reports, Papers & Other Materials

Relevant Paper(s)

Reports & Other Materials