Fields Changed

Registration

| Field | Before | After |
| --- | --- | --- |
| Trial Status | in_development | completed |
| Last Published | June 23, 2023 05:20 PM | July 23, 2024 03:57 PM |
| Study Withdrawn | | No |
| Data Collection Complete | | Yes |
| Data Collection Completion Date | | August 01, 2023 |
Papers

| Field | Before | After |
| --- | --- | --- |
| Paper Abstract | | We explore whether people trust the accuracy of statements produced by large language models (LLMs) versus those written by humans. While LLMs have showcased impressive capabilities in generating text, concerns have been raised regarding the potential for misinformation, bias, or false responses. In this experiment, participants rate the accuracy of statements under different information conditions. Participants who are not explicitly informed of authorship tend to trust statements they believe are human-written more than those attributed to ChatGPT. However, when informed about authorship, participants show equal skepticism towards both human and AI writers. Informed participants are, overall, more likely to choose costly fact-checking. These outcomes suggest that trust in AI-generated content is context-dependent. |
| Paper Citation | | Buchanan, Joy and William Hickman (2024) "Do people trust humans more than ChatGPT?" Journal of Behavioral and Experimental Economics, Volume 112, 102239. |
| Paper URL | | https://doi.org/10.1016/j.socec.2024.102239 |