Abstract
This study examines how the use of generative AI affects cognitive biases. Specifically, it empirically investigates the extent to which judgments about fake news are influenced by the use of generative AI tools such as ChatGPT.
Generative AI is a technology that uses large language models to automatically generate new content, such as text, images, and audio, from existing data. ChatGPT, in particular, generates text responses to user questions, and its dialogue accuracy improves as the underlying model is retrained on new data. Businesses and government organizations are considering adopting this technology to improve work efficiency and generate new ideas, and its adoption is expected to grow.
However, generative AI is known to suffer from "hallucination," in which it fabricates data or information that does not exist. As a result, AI-generated output may contain inaccuracies, which poses a potential problem for its application. Moreover, the use of generative AI could lead users to unintentionally create misinformation and spread it through social media.
If humans processed information correctly, the impact of fake news would be minimal. In practice, however, cognitive biases make it difficult for individuals to detect misinformation. This difficulty has several explanations, such as confirmation bias, in which individuals attend only to information consistent with their prior beliefs, and inattentional blindness, in which critical information is overlooked. Prior research has often employed nudging techniques to correct such cognitive biases (Pennycook et al., 2021). However, the effect of nudges is generally small, averaging around 2% (DellaVigna and Linos, 2022).
Although generative AI suffers from hallucination, it may also help reduce cognitive biases when used effectively. Previous studies indicate that AI can outperform humans in tasks where human judgment is prone to systematic error (Chen et al., 2023). In addition, because generative AI is used interactively, its personalized nature may lead to higher user acceptance, potentially making it more effective than traditional nudges.
Therefore, this study empirically examines whether the use of generative AI improves individuals' ability to accurately identify fake news.