Humans, Artificial Intelligence and Text-based Misinformation

Last registered on August 09, 2024

Pre-Trial

Trial Information

General Information

Title
Humans, Artificial Intelligence and Text-based Misinformation
RCT ID
AEARCTR-0012535
Initial registration date
November 18, 2023

First published
December 01, 2023, 4:52 AM EST

Last updated
August 09, 2024, 3:22 AM EDT

Locations

There is information in this trial unavailable to the public.

Primary Investigator

Affiliation
The University of Utah

Other Primary Investigator(s)

PI Affiliation
University of Utah
PI Affiliation
University of Utah
PI Affiliation
Allen Institute for Artificial Intelligence

Additional Trial Information

Status
In development
Start date
2023-11-25
End date
2024-12-31
Secondary IDs
Prior work
This trial does not extend or rely on any prior RCTs.
Abstract
Text-based misinformation is pervasive, yet evidence is scarce regarding people’s ability to differentiate truth from deceptive content in textual form. We conduct a laboratory experiment utilizing data from a TV game show in which natural conversations between individuals with conflicting objectives, centered on an underlying objective truth, lead to intentional deception. Initially, we elicit participants’ guesses about the underlying truth by exposing them to transcribed conversations from randomly chosen episodes. Borrowing tools from computing, we demonstrate that certain AI algorithms exhibit truth-detection performance comparable to that of humans, even though the algorithms rely solely on language cues while humans have access to language as well as audio-visual cues. Our model identifies accurate language cues that humans do not always detect, suggesting the potential for collaboration between humans and algorithms to enhance truth-detection abilities. Our research takes an interdisciplinary approach and aims to ascertain whether human-AI teams can outperform individual humans in spotting the truth amid misinformation appearing in textual form. Subsequently, we pursue several lines of inquiry: Do individuals seek the assistance of an artificial intelligence (AI) tool to aid their discernment of truth from text-based misinformation? Are individuals willing to pay for the service provided by the AI? We also investigate factors that may influence individuals’ reluctance to seek, or excessive dependence on, AI assistance, such as “AI aversion” or its absence, as well as overconfidence in one’s own ability to identify the truth. Furthermore, while controlling for the predictive accuracies of both the majority of humans and the AI tool, we examine whether individuals, in comparison to the AI tool, are more or less inclined to submit as their own guess the one that a majority of other individuals had submitted for that episode. Lastly, we examine potential gender differences concerning these questions.
External Link(s)

Registration Citation

Citation
Bhattacharya, Haimanti et al. 2024. "Humans, Artificial Intelligence and Text-based Misinformation." AEA RCT Registry. August 09. https://doi.org/10.1257/rct.12535-2.0
Sponsors & Partners

There is information in this trial unavailable to the public.
Experimental Details

Interventions

Intervention(s)
Despite the proliferation of text-based misinformation on social media, there is a gap in our understanding of individuals' ability to detect deception in textual form. We use a transcribed version of a novel TV game show in which the conversations mimic a social media platform: a third party seeks to ascertain the actual truth concerning a particular topic (e.g., the accuracy of economic data or historical events) by observing online discussions among individuals who have conflicting motives related to that topic, and where there is an underlying objective truth. We ask subjects to discern and guess the truth from these transcripts. Subsequently, we study subjects' willingness to switch to a guess made by an artificial intelligence (AI) system. We vary the accuracy of this AI system and study how this affects individuals' switching behavior. Furthermore, we examine whether subjects are willing to pay for this service provided by an AI.
Intervention Start Date
2023-11-25
Intervention End Date
2024-12-31

Primary Outcomes

Primary Outcomes (end points)
1. Individuals' capacity to identify truthful information from deceptive textual content in a strategic context.
2. The willingness of humans to depend on AI tools for identifying text-based misinformation in scenarios where people are either informed or uninformed about the AI's proficiency in discerning the truth.
3. Individuals' willingness to pay for the utilization of AI tools in identifying text-based misinformation.
Primary Outcomes (explanation)

Secondary Outcomes

Secondary Outcomes (end points)
Secondary Outcomes (explanation)

Experimental Design

Experimental Design
We conduct this project in two phases. The first phase focuses on building the AI system to detect false information with the help of ChatGPT. In the second phase, we apply the methods of experimental economics: we invite individual human subjects and present each of them with transcripts from one of the randomly chosen sessions of the game show. Each subject is tasked with identifying the truth and is paid a fixed monetary reward for a correct identification, which is tantamount to detecting deception or sorting out the misinformation.
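
As a purely illustrative sketch of how phase one might query ChatGPT with a transcript: the model name, prompt wording, and the guess_truth helper below are assumptions for exposition, not the registered pipeline.

# Illustrative sketch only: model name, prompt wording, and the guess_truth
# helper are assumptions, not the registered phase-one pipeline.
from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment

def guess_truth(transcript: str, candidate_answers: list[str]) -> str:
    """Ask a ChatGPT model which candidate answer the transcript supports."""
    prompt = (
        "The following game-show transcript contains speakers with conflicting "
        "incentives, and exactly one of the candidate answers is true.\n\n"
        f"Transcript:\n{transcript}\n\n"
        f"Candidate answers: {candidate_answers}\n"
        "Reply with only the answer you believe is true."
    )
    response = client.chat.completions.create(
        model="gpt-4o",  # assumed model; the registered system may differ
        messages=[{"role": "user", "content": prompt}],
        temperature=0,
    )
    return response.choices[0].message.content.strip()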
Experimental Design Details
Not available
Randomization Method
Randomization is conducted using a random number generator in Python
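A minimal sketch of what transcript-level random assignment along these lines could look like, assuming Python's standard random module; the seed, episode pool, and urn sizes are illustrative, not the study's actual script.

# Minimal sketch of transcript-level random assignment; the seed, episode
# pool, and urn sizes are illustrative assumptions, not the study's script.
import random

rng = random.Random(2023)        # fixed seed for reproducibility (assumed)
episodes = list(range(1, 51))    # hypothetical pool of transcribed episodes

urn_a = rng.sample(episodes, k=5)                      # five transcripts for urn A
remaining = [e for e in episodes if e not in urn_a]
urn_b = rng.sample(remaining, k=5)                     # five different transcripts for urn B
print(urn_a, urn_b)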
Randomization Unit
Transcript level
Was the treatment clustered?
Yes

Experiment Characteristics

Sample size: planned number of clusters

For the Baseline, we recruit 180 participants. We divide subjects into two urns (or sets of transcripts); each urn has 90 subjects. Of the 90 subjects, we study 45 male and 45 female subjects.

For the Black Box and Full Information treatments, we recruit 270 participants. We divide subjects into three urns; each urn has 90 subjects. Of the 90 subjects, we study 45 male and 45 female subjects.
Sample size: planned number of observations
900 observations in the Baseline (each subject makes five decisions, one for each of the five transcripts); 1350 observations in each of the Black Box and Full Information treatments.
Sample size (or number of clusters) by treatment arms
Baseline: We recruit 180 subjects, which yields 900 observations (each subject makes five decisions, one for each of the five transcripts). We divide subjects into two urns (or sets of transcripts); each urn has 90 subjects. Of the 90 subjects, we study 45 male subjects and 45 female subjects.

Black Box and Full Information treatments: We recruit 270 subjects per treatment, which yields 1350 observations per treatment (each subject makes five decisions, one for each of the five transcripts). For every treatment, we divide subjects into three urns (or sets of transcripts); each urn has 90 subjects. Of the 90 subjects, we study 45 male subjects and 45 female subjects.
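
For reference, a quick arithmetic check (in Python, as a sketch) of the planned observation counts implied by the figures above:

# Quick check of the planned observation counts stated above.
decisions_per_subject = 5          # one decision per transcript
baseline_subjects = 180
treatment_subjects = 270           # Black Box and Full Information, each
print(baseline_subjects * decisions_per_subject)    # 900 observations (Baseline)
print(treatment_subjects * decisions_per_subject)   # 1350 observations per treatment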
Minimum detectable effect size for main outcomes (accounting for sample design and clustering)
IRB

Institutional Review Boards (IRBs)

IRB Name
Institutional Review Board, University of Utah
IRB Approval Date
2023-08-30
IRB Approval Number
IRB_00167477