In AI We Trust? Understanding Human Trust in the Age of AI

Last registered on February 14, 2024

Trial Information

General Information

Title
In AI We Trust? Understanding Human Trust in the Age of AI
RCT ID
AEARCTR-0012977
Initial registration date
February 09, 2024

First published
February 14, 2024, 4:45 PM EST

Locations

Region

Primary Investigator
Milena Nikolova

Affiliation
University of Groningen

Other Primary Investigator(s)

Additional Trial Information

Status
In development
Start date
2024-03-05
End date
2024-05-06
Secondary IDs
Prior work
This trial does not extend or rely on any prior RCTs.
Abstract
Intelligent machines, artificial intelligence (AI), and algorithms are reshaping society in domains such as hospitality, healthcare, and space exploration. As AI progresses, understanding human trust in AI, how it differs from trust in science more generally, and the mechanisms shaping (mis)trust is crucial for preserving the quality of the social fabric in a free society.

Using a survey experiment embedded in a nationally representative panel study in the United States, this project compares how news of AI progress in linguistics, medicine, and dating, as opposed to news of scientific advancements in the same domains, affects trust across diverse societal groups. For example, an AI breakthrough in linguistics might be perceived differently than a non-AI scientific discovery in the same domain. The study investigates whether AI progress causes more fear or excitement than traditional scientific progress and how these emotions influence trust. It identifies the factors shaping (mis)trust, including the innovation's perceived potential, fear, and perceptions of societal benefits. By providing data-driven insights into the public's trust in AI, this project will inform principles of AI design that respect and enhance the freedoms of society. These insights can help shape AI systems aligned with autonomy, transparency, ethics, inclusivity, responsiveness, and balanced innovation, ideals fundamental to a society that values freedom.
External Link(s)

Registration Citation

Citation
Nikolova, Milena. 2024. "In AI We Trust? Understanding Human Trust in the Age of AI." AEA RCT Registry. February 14. https://doi.org/10.1257/rct.12977-1.0
Sponsors & Partners

Sponsor and partner information for this trial is not publicly available.
Experimental Details

Interventions

Intervention(s)
Intervention Start Date
2024-03-05
Intervention End Date
2024-05-06

Primary Outcomes

Primary Outcomes (end points)
Trust in the advancement
Primary Outcomes (explanation)

Secondary Outcomes

Secondary Outcomes (end points)
Understanding, evaluation of the advance's potential, excitement, fear


Secondary Outcomes (explanation)

Experimental Design

Experimental Design
I will employ a survey experiment with a between-subjects design to assess whether learning about AI versus non-AI (science) advances influences trust, and to analyze the sources of any differences in trust. I will also investigate whether and how outcomes vary by socio-demographic traits, offering insights into AI's differential societal impacts. The domains of linguistics, medicine, and dating are selected because they have seen similar rates of AI and scientific advances in recent years and therefore offer reasonable comparisons.

Specifically, I will collect data from 1,500 respondents via the Understanding America Study (UAS), a nationally representative probability-based panel of U.S. households with over 8,000 respondents, run by the University of Southern California (USC). UAS respondents answer questionnaires on a computer, tablet, or smartphone. The target of 1,500 respondents ensures sufficient statistical power to answer the research questions. The interviews will be conducted in both English and Spanish, with the UAS staff translating the questionnaire into Spanish. The UAS's recruitment process includes an initial monetary incentive of $5 plus an additional $15 for completing the questionnaire. To minimize attrition and non-response, the UAS sends reminders and allows respondents ample time to complete the questionnaire. The collected, anonymized data will be made publicly available to benefit the broader research community.
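
The adequacy of the 1,500-respondent target can be checked with a standard power calculation. Below is a minimal sketch in Python, assuming a two-sided two-sample t-test on a single item at alpha = 0.05 with 80% power and 750 respondents per arm; the test, alpha, and power target are illustrative assumptions rather than values stated in this registration.

from statsmodels.stats.power import TTestIndPower

# Minimum detectable effect (Cohen's d) for 750 respondents per arm,
# assuming a two-sided test at alpha = 0.05 with 80% power (assumed values).
mde = TTestIndPower().solve_power(
    nobs1=750, ratio=1.0, alpha=0.05, power=0.80, alternative="two-sided"
)
print(f"Minimum detectable effect: d = {mde:.2f}")

Under these illustrative assumptions, the design can detect differences of roughly 0.14 standard deviations between the two arms.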

Through random assignment, respondents will read either about AI advances or about science advances in linguistics, medicine, and dating. Neutral, one-paragraph texts from science websites (e.g., Science Daily) will be used. In the linguistics domain, for instance, the 750 participants in the "AI" condition will read about ChatGPT advancements, while the 750 participants in the "science" condition will learn about non-AI linguistic discoveries. Each respondent reads three excerpts, one each on language, medicine, and dating.
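
A minimal sketch of this kind of individual-level computer randomization follows; the actual assignment is performed by the UAS staff, and the seed and variable names here are purely illustrative.

import numpy as np

rng = np.random.default_rng(seed=12977)  # illustrative seed, for reproducibility
# 750 respondents per condition, shuffled into a random order.
conditions = np.repeat(["AI", "science"], 750)
rng.shuffle(conditions)
assignment = {respondent_id: cond
              for respondent_id, cond in enumerate(conditions, start=1)}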

After reading each excerpt, participants in the treatment (AI) and control (science) groups will rate the items below on a 5-point scale, where 1 = Completely Disagree and 5 = Completely Agree.
1) Reading this makes me trust this innovation/advance
2) This development has the potential to help the human race
3) I have a good understanding of this innovation/advance
4) Reading about this makes me afraid
5) Reading about this development makes me excited
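
One natural estimator for the treatment effect on each item is an OLS regression of the 5-point response on an AI-condition indicator. The sketch below uses simulated placeholder data so that it runs; the estimator, variable names, and robust standard errors are my assumptions, not a committed analysis plan from this registration.

import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

# Simulated placeholder data purely for illustration; real UAS responses
# would replace this.
rng = np.random.default_rng(0)
df = pd.DataFrame({
    "ai": np.repeat([1, 0], 750),            # 1 = AI arm, 0 = science arm
    "trust": rng.integers(1, 6, size=1500),  # trust item, coded 1-5
})

# AI-vs-science difference in mean trust, with robust standard errors.
fit = smf.ols("trust ~ ai", data=df).fit(cov_type="HC1")
print(fit.params["ai"], fit.bse["ai"])

The same specification would apply to the other four items (potential, understanding, fear, excitement).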
Experimental Design Details
Not available
Randomization Method
Randomization is done in office by computer, by the UAS staff.
Randomization Unit
individual respondent
Was the treatment clustered?
No

Experiment Characteristics

Sample size: planned number of clusters
0
Sample size: planned number of observations
1,500
Sample size (or number of clusters) by treatment arms
750 respondents per treatment arm (1,500 in total)
Minimum detectable effect size for main outcomes (accounting for sample design and clustering)
IRB

Institutional Review Boards (IRBs)

IRB Name
IRB Approval Date
IRB Approval Number