Abstract
Text-based misinformation is pervasive, yet evidence is scarce on people's ability to distinguish truth from deceptive content in textual form. We conduct a laboratory experiment using data from a TV game show in which individuals with conflicting objectives hold natural conversations about an underlying objective truth, giving rise to intentional deception. First, we elicit participants' guesses about the underlying truth by exposing them to transcribed conversations from randomly selected episodes. Borrowing tools from computing, we show that certain artificial intelligence (AI) algorithms achieve truth-detection performance comparable to that of humans, even though the algorithms rely solely on language cues whereas humans have access to both language and audio-visual cues. Our model identifies accurate language cues that humans do not always detect, suggesting that collaboration between humans and algorithms could enhance truth-detection abilities. Our research takes an interdisciplinary approach and aims to determine whether human-AI teams can outperform individual humans in spotting the truth amid text-based misinformation. We then pursue several lines of inquiry: Do individuals seek the assistance of an AI tool to help them discern truth from text-based misinformation? Are individuals willing to pay for the AI's service? We also investigate factors that may drive reluctance to seek, or excessive reliance on, AI assistance, such as "AI aversion" or its absence, as well as overconfidence in one's own ability to identify the truth. Furthermore, controlling for the predictive accuracy of both the human majority and the AI tool, we examine whether individuals are more or less inclined to adopt as their own guess the one submitted by the majority of other participants for an episode than the one offered by the AI tool. Lastly, we examine potential gender differences on these questions.