The role of AI in trust: An experimental study using ChatGPT

Last registered on July 19, 2023


Trial Information

General Information

The role of AI in trust: An experimental study using ChatGPT
Initial registration date
July 08, 2023


First published
July 19, 2023, 12:07 PM EDT




Primary Investigator

University of Oregon

Other Primary Investigator(s)

PI Affiliation
UC Irvine
PI Affiliation
University of Oregon
PI Affiliation
University of Oregon

Additional Trial Information

In development
Start date
End date
Secondary IDs
Prior work
This trial does not extend or rely on any prior RCTs.
This paper examines the role of artificial intelligence (AI) as an assistant in a two-player trust game. Before the game begins, the second player has the option to send a message to the first player. With AI assistance, the second player can refine and rephrase the message before sending it; similarly, the first player can use AI to interpret the message received from the second player. To investigate how AI affects players' communication and decision-making in this strategic setting, we conduct an experiment that varies whether the second player, the first player, both, or neither receives AI assistance, and whether players know that their counterparts receive AI assistance.
External Link(s)

Registration Citation

Bivins, Tanner et al. 2023. "The role of AI in trust: An experimental study using ChatGPT." AEA RCT Registry. July 19.
Experimental Details


Intervention Start Date
Intervention End Date

Primary Outcomes

Primary Outcomes (end points)
Second mover's original message; AI's rephrasing of the second mover's original message; AI's interpretation of the second mover's sent message; first mover's trusting behavior; second mover's trustworthiness behavior; first mover's belief about the second mover's trustworthiness; second mover's belief about the first mover's belief.
Primary Outcomes (explanation)

Secondary Outcomes

Secondary Outcomes (end points)
Secondary Outcomes (explanation)

Experimental Design

Experimental Design
In the experiment, subjects are randomly matched in pairs to play a two-player, two-stage binary trust game. Before the game starts, the second mover has the option to send a free-form message to the first mover. Depending on the treatment, neither, one, or both players in each pair have an AI assistant to help elaborate the message (for the sender) or interpret it (for the receiver).

Treatments: Benchmark with communication; AI for first mover only (public information); AI for second mover only (public information); AI for both players (public information); AI for first mover only (private information); AI for second mover only (private information); AI for both players (private information)
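The payoff structure of a two-stage binary trust game can be sketched as follows. The registration does not report the actual payoff parameters, so the numbers below are illustrative placeholders only, chosen to satisfy the standard trust-game incentives (trusting and being reciprocated beats the outside option, but betrayal pays the second mover more than reciprocation).

```python
# Hypothetical sketch of the binary trust game's payoff resolution.
# All payoff values are assumed defaults, not the experiment's parameters.

def trust_game_payoffs(first_trusts: bool, second_reciprocates: bool,
                       outside=(5, 5), trusted=(8, 8), betrayed=(2, 11)):
    """Return (first mover, second mover) payoffs.

    outside:  payoffs when the first mover does not trust (game ends)
    trusted:  payoffs when trust is placed and reciprocated
    betrayed: payoffs when trust is placed but not reciprocated
    """
    if not first_trusts:
        return outside  # second mover never gets to move
    return trusted if second_reciprocates else betrayed
```

For example, `trust_game_payoffs(True, False)` returns `(2, 11)`: the first mover is worse off than under the outside option, while the second mover gains from betrayal.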

The experiment is programmed in oTree and incorporates GPT-3.5 Turbo as the AI assistant.
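A minimal sketch of how the second mover's message might be passed to GPT-3.5 Turbo for rephrasing is shown below. The experiment's actual prompts and API settings are not public, so the system prompt, temperature, and the helper name `build_rephrase_request` are all assumptions for illustration.

```python
# Hypothetical sketch: building a Chat Completions request that asks
# GPT-3.5 Turbo to rephrase the second mover's free-form message.
# The prompt wording and settings are assumed, not the study's own.

def build_rephrase_request(message: str) -> dict:
    """Build a request body asking the model to rephrase a player's message."""
    return {
        "model": "gpt-3.5-turbo",
        "messages": [
            {"role": "system",
             "content": ("Rephrase the player's message clearly and politely "
                         "without changing its meaning.")},
            {"role": "user", "content": message},
        ],
        "temperature": 0.7,  # assumed setting
    }

# Inside an oTree page, the request could then be sent via the OpenAI
# client, e.g. client.chat.completions.create(**build_rephrase_request(msg)),
# and the returned text shown to the player before they send the message.
```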
Experimental Design Details
Not available
Randomization Method
The treatment randomization is done at the session level. All other randomizations are done by computer.
Randomization Unit
Treatment is randomized at the session level. Pair matching and the game are done at the individual level.
Was the treatment clustered?

Experiment Characteristics

Sample size: planned number of clusters
210 pairs
Sample size: planned number of observations
420 subjects
Sample size (or number of clusters) by treatment arms
Each treatment with 30 pairs of players.
Minimum detectable effect size for main outcomes (accounting for sample design and clustering)

Institutional Review Boards (IRBs)

IRB Name
University of Oregon
IRB Approval Date
IRB Approval Number