AI-Powered Promises: The Influence of ChatGPT on Trust and Trustworthiness

Last registered on June 23, 2023


Trial Information

General Information

AI-Powered Promises: The Influence of ChatGPT on Trust and Trustworthiness
Initial registration date
June 13, 2023

First published
June 23, 2023, 4:23 PM EDT




Primary Investigator

University of Amsterdam

Other Primary Investigator(s)

Additional Trial Information

In development
Start date
End date
Secondary IDs
Prior work
This trial does not extend or rely on any prior RCTs.
As Large Language Models (LLMs) continue to improve at a fast rate, it will not be long before they become an integral part of many of the digital tools we use today, including those focused on communication. In this project, we investigate how the availability of LLMs during written communication affects trust and trustworthiness between participant pairs in a trust game.
External Link(s)

Registration Citation

Greevink, Ivo. 2023. "AI-Powered Promises: The Influence of ChatGPT on Trust and Trustworthiness." AEA RCT Registry. June 23.
Sponsors & Partners

There is information in this trial unavailable to the public.
Experimental Details


Intervention Start Date
Intervention End Date

Primary Outcomes

Primary Outcomes (end points)
- "In" and "Roll" rates in each treatment arm (Communication and GPT).
- The frequency with which messages contain promises (rated independently by external coders).
Primary Outcomes (explanation)

Secondary Outcomes

Secondary Outcomes (end points)
- Joint outcome ("In", "Roll")
- First-order beliefs about trust (fraction of subjects playing “In”), trustworthiness (fraction of subjects playing “Roll”), and whether the message was generated by GPT
- Whether participants choose to rewrite the message generated by GPT
- Participants’ social value orientation
- Participants’ attitudes toward artificial intelligence
- Correlations between behaviors across the two periods
- Risk aversion
- Demographic characteristics
Secondary Outcomes (explanation)

Experimental Design

Experimental Design
We will run a trust game with hidden action.

The two players in each pair play as Trustor and Trustee.

The treatments differ in whether an artificial-intelligence-based language model can be used to communicate.
Experimental Design Details
We carry out a hidden-action trust game with two treatments. Participants will be randomly matched into pairs (player A and player B) at the start of the session. Each player will be told that the experiment consists of a comprehension quiz, two decision tasks, a survey, and finally payment feedback. They will be told that one of the two decision tasks will be randomly chosen for payment.

After the instructions and the quiz, the first decision task starts with a phase in which player B can send a written message to player A. In the “GPT” treatment, player B has access to GPT-4 when generating this message, which both players are aware of. We preload the instructions for the game into the application and tell the application that it will be asked for advice by participant B. We also instruct participants on some 'GPT basics' to account for limited experience with this novel technology.
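The preloading step described above could be sketched roughly as follows. This is a minimal illustration, not the study's actual implementation: the prompt wording, the function name, and the reference to the OpenAI chat-completions endpoint are all assumptions, since the registry does not publish the actual prompts.

```python
# Hypothetical sketch: preload the game instructions so the model can
# later be "asked for advice" by participant B. Wording is illustrative.

GAME_INSTRUCTIONS = (
    "You are assisting participant B in a trust game with hidden action. "
    "Participant B can send one written message to participant A before "
    "A chooses 'In' or 'Out'. Draft the message that B asks for."
)

def build_messages(participant_request):
    """Assemble the chat history: the preloaded game instructions as a
    system message, followed by participant B's request for advice."""
    return [
        {"role": "system", "content": GAME_INSTRUCTIONS},
        {"role": "user", "content": participant_request},
    ]

# The resulting list could then be passed to a chat-completion call,
# e.g. client.chat.completions.create(model="gpt-4", messages=...).
```

The key design point is that the system message fixes the game context once, so participant B only has to type a short request rather than re-explain the rules each time.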

Afterward, simultaneously:

- Each player A will see the message player B wrote and will indicate whether they wish to choose “In” or “Out”. If A chooses “Out”, the game ends. If A chooses “In”, player B’s choice will decide the final outcome.
- Afterwards, each player A will make an incentivized guess as to the probability that player B chooses “Roll”. [In the GPT treatment, player A will also be asked how confident they are that the message was written using GPT; this guess is not incentivized.]

- Each player B will indicate whether they wish to choose “Roll” or “Don’t Roll” (a roll of a computerized six-sided die). Note that at this point, B will not know whether A has chosen “In” or “Out”. However, since B’s decision only makes a difference when A has chosen “In”, we ask each player B (for the purpose of making this decision) to suppose that A has chosen “In”.
- Afterwards, each player B will make an incentivized guess as to the probability that player A has chosen “In”.

Decision Task 2 will be a repeat of Decision Task 1 with roles switched. Players will not know this in advance, although they will know that a second decision task is coming. Each player will be grouped with a new partner, and all players will switch roles. This re-matching will be kept within clusters of four participants (or at most six, if the number of participants in a session is not divisible by four).
Randomization Method
- Within each session, pairs are randomized by computer before Decision Task 1, via the 'group_randomly' method in oTree. This shuffles players so that each player can end up paired with any other player.

- After Decision Task 1, all players deterministically switch roles before Decision Task 2: player A becomes player B and vice versa. They are then randomly re-paired within their cluster (clusters themselves are formed randomly at the start, as a consequence of the random allocation into groups).
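The pairing and role-switching logic above can be sketched in plain Python. This is not the study's oTree code: the function names are hypothetical, and the use of a rotation to re-pair within the cluster is one assumed way to guarantee that nobody faces their round-1 partner again (the registration only says re-pairing is random within the cluster).

```python
import random

def pair_randomly(players):
    """Shuffle players and pair them off, mimicking the effect of
    oTree's group_randomly: any player can be matched with any other.
    In each resulting tuple, the first entry is player A, the second
    is player B."""
    shuffled = players[:]
    random.shuffle(shuffled)
    return [tuple(shuffled[i:i + 2]) for i in range(0, len(shuffled), 2)]

def switch_and_repair(cluster_pairs):
    """Within one cluster of pairs, deterministically swap roles
    (round-1 player B becomes player A and vice versa), then re-pair.
    Rotating the new player Bs by one position ensures each new player
    A faces a different partner than in round 1 (an assumption about
    how the random re-pairing is constrained)."""
    new_as = [b for (_, b) in cluster_pairs]   # old Bs take role A
    new_bs = [a for (a, _) in cluster_pairs]   # old As take role B
    rotated_bs = new_bs[1:] + new_bs[:1]
    return list(zip(new_as, rotated_bs))
```

For a cluster of four (two pairs), the rotation leaves exactly one valid re-pairing; for a cluster of six, the rotation is one of several valid matchings that avoid repeat partners.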
Randomization Unit
Individual-level randomization into groups of two and clusters of four (or at most six).

Was the treatment clustered?

Experiment Characteristics

Sample size: planned number of clusters
For the initial round, 160 groups of two participants.
For the analysis of the second round, ideally 70-80 clusters of four (two groups each); this depends on how many clusters of six we will have.
Sample size: planned number of observations
320 student participants
Sample size (or number of clusters) by treatment arms
160 participants in the GPT communication treatment,
160 participants in the Communication treatment
Minimum detectable effect size for main outcomes (accounting for sample design and clustering)

Institutional Review Boards (IRBs)

IRB Name
Economics and Business Ethics Committee
IRB Approval Date
IRB Approval Number
Analysis Plan

There is information in this trial unavailable to the public.


Post Trial Information

Study Withdrawal

There is information in this trial unavailable to the public.


Is the intervention completed?
Data Collection Complete
Data Publication

Data Publication

Is public data available?

Program Files

Program Files
Reports, Papers & Other Materials

Relevant Paper(s)

Reports & Other Materials