The Economic Impact of Generative AI through Communication Ability

Last registered on April 16, 2024

Pre-Trial

Trial Information

General Information

Title
The Economic Impact of Generative AI through Communication Ability
RCT ID
AEARCTR-0013309
Initial registration date
April 05, 2024

First published
April 16, 2024, 12:51 PM EDT

Locations

There is information in this trial unavailable to the public.

Primary Investigator

Affiliation
University of California, Santa Cruz

Other Primary Investigator(s)

PI Affiliation
University of California, Santa Cruz

Additional Trial Information

Status
In development
Start date
2024-01-15
End date
2024-12-15
Secondary IDs
Prior work
This trial does not extend or rely on any prior RCTs.
Abstract
We study how AI can influence economic interactions by enhancing humans' communication abilities. In particular, we use four classic experimental games, the Dictator Game, the Ultimatum Game, the Trust Game, and the Public Goods Game, to investigate the effectiveness of AI-assisted communication in these economic settings, aiming to discern the impact of suggestions generated by a state-of-the-art language model, GPT-4, on participants' communication abilities. In our design, we allow players to send AI-aided unilateral pre-play messages in order to obtain better economic outcomes. Specifically, we assess whether AI assistance can enable players to persuade dictators to make larger transfers in the Dictator Game, achieve lower rejection rates or successfully propose lower offers in the Ultimatum Game, secure higher investments or return transfers in the Trust Game, and foster greater cooperation in the Public Goods Game.


External Link(s)

Registration Citation

Citation
López Vargas, Kristian, Julian Martinez and Gonzalo Martín Respighi Grasso. 2024. "The Economic Impact of Generative AI through Communication Ability." AEA RCT Registry. April 16. https://doi.org/10.1257/rct.13309-1.0
Sponsors & Partners

There is information in this trial unavailable to the public.
Experimental Details

Interventions

Intervention(s)
AI-aided pre-play communication in four experimental games
Intervention Start Date
2024-05-15
Intervention End Date
2024-12-15

Primary Outcomes

Primary Outcomes (end points)
Different outcomes will be analyzed for different environments. In the Trust Game (TG): transfer, return; in the Dictator Game (DG): transfer; in the Ultimatum Game (UG): offer, acceptance/rejection decision; in the Public Goods Game (PGG): contributions.
Primary Outcomes (explanation)

Secondary Outcomes

Secondary Outcomes (end points)
Secondary Outcomes (explanation)

Experimental Design

Experimental Design
Experimental Design (Public):

In this experiment, we explore the influence of generative AI on economic decision-making in four classic experimental economics games: the Ultimatum Game, the Dictator Game, the Trust Game, and the Public Goods Game. Our approach assesses the impact of AI-generated suggestions, produced by a state-of-the-art LLM, GPT-4, on players' communication abilities and effectiveness, potentially leading to improved economic outcomes. These AI-aided messages serve as unilateral pre-play communications. In the Dictator Game, we investigate whether the receiver can leverage AI to persuade the sender to make larger transfers; in the Ultimatum Game, we examine whether the proposer can reduce rejection rates or successfully submit lower offers. Similarly, in the Trust Game, we assess whether AI-assisted messages can induce higher investments or higher return transfers from trustees. In the Public Goods Game, we test the potential for increased cooperation through AI-aided pre-play messages.

Our design is informed by the understanding that non-binding communication, or "cheap talk," can significantly affect outcomes in these games. Our treatments therefore give players the opportunity to use AI assistance in crafting messages, aiming to quantify its impact within these economic interactions.



# Environment 1: The Dictator Game (DG)

Two participants from the set of human subjects in the experimental session are matched randomly and anonymously. Roles are then assigned at random: one participant in each pair receives the role of the sender (S, or player 1) and the other that of the receiver (R, or player 2). The sender receives an endowment and decides how much of it, if anything, to transfer to the receiver, who makes no decision of their own. This setup exhibits motivational tensions between equity and altruism on the one hand and self-interest on the other.
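For illustration, a minimal sketch of the payoffs implied by this standard dictator-game setup; the endowment m, the transfer variable, and the function name are illustrative and not part of the registered design:

```python
def dg_payoffs(m: float, transfer: float) -> tuple[float, float]:
    """Standard dictator game: the sender keeps m - transfer, the receiver gets transfer."""
    assert 0 <= transfer <= m
    return m - transfer, transfer  # (sender payoff, receiver payoff)
```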

# Control and Treatment Conditions

Our study features control conditions without AI assistance and treatment conditions with varying degrees of AI assistance. For the DG, we will have the following two control and two treatment conditions.

a) Control Condition DG-C-NM (No Message): The standard dictator game without the option for the receiver to communicate with the sender.

b) Control Condition DG-C-HM (Human Message): The receiver can send a pre-play message to the sender without AI assistance.

c) Treatment DG-AI-v (voluntary use of AI): The receiver can use an AI tool to craft a message, but it is up to them to decide on the prompt, whether to modify the AI's output, and whether to use it at all.

d) Treatment DG-AI-f (mandatory use of AI and fixed prompt): The receiver uses a fixed prompt provided by the experimenters and sends the unaltered AI-generated message to the sender.



# Environment 2: The Ultimatum Game

The ultimatum game (UG) is also a widely used setting in economic and behavioral research to study social preferences. The difference from the DG (see above) is that the second player now has a non-trivial role. Upon being matched randomly and anonymously, Player 1, also referred to as the proposer (P) in this game, is given a sum of money, denoted by m, and must decide on an offer, o, representing the part of m they wish to allocate to Player 2, the responder (R). The proposer's offer must satisfy 0 ≤ o ≤ m, reflecting all possible splits of m from nothing to the entire amount. The responder then makes a binary choice, represented as a, to accept or reject the proposer's offer. Acceptance (a = 1) means the money is split according to the proposer's offer; rejection (a = 0) results in neither player receiving any payout. That is, in that round the proposer and responder obtain payoffs equal to payoff_1 = a × (m − o) and payoff_2 = a × o, respectively.
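As a compact restatement of these payoffs, a minimal sketch (function and variable names are illustrative):

```python
def ug_payoffs(m: float, offer: float, accept: bool) -> tuple[float, float]:
    """Ultimatum game payoffs: both players receive zero if the offer is rejected."""
    assert 0 <= offer <= m
    a = 1 if accept else 0
    return a * (m - offer), a * offer  # (proposer payoff, responder payoff)
```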


# Control and Treatment Conditions

For the Ultimatum Game, we use three control conditions and four AI-assisted treatment conditions to examine the impact of generative AI on negotiation outcomes:

a) Control Condition UG-C-NM (No Message): The standard ultimatum game format without the opportunity for pre-play communication.

b) Control Condition UG-C-HM-P (Human Message, Proposer): The proposer can include a message with their offer, composed without AI assistance.

c) Control Condition UG-C-HM-R (Human Message, Responder): The responder can send a message, composed without AI assistance, before the proposer decides on their offer, presumably aiming to influence that offer.

## Proposer treatments

d) Treatment UG-AI-v-P (voluntary use of AI): The proposer voluntarily uses AI to craft a pre-play (persuasive) message to accompany their offer. The choice of prompt and whether to modify the AI's output is at the proposer's discretion.

e) Treatment UG-AI-f-P (mandatory use of AI and fixed prompt): The proposer must use a fixed AI-generated message provided by the experimenters to accompany their offer. This message must be sent without alteration.

## Responder treatments

f) Treatment UG-AI-v-R (voluntary use of AI): The responder voluntarily uses AI to craft a persuasive message before the proposer decides on their offer, presumably aiming to influence the proposer's initial offer. The choice of prompt and whether to modify the AI's output are at the responder's discretion.

g) Treatment UG-AI-f-R (mandatory use of AI and fixed prompt): The responder must use a fixed AI-generated message, provided by the experimenters, before the proposer decides on their offer. This message must be sent without alteration.


# Environment 3: The Trust Game

The trust game explores the dynamics of trust and reciprocity in economic interactions. Participants are randomly matched as trustors (T) and trustees (R). The trustor is given a sum of money, m, and decides how much to transfer to the trustee. This amount is tripled upon receipt by the trustee, who then decides how much to return to the trustor. The game measures trust via the trustor's initial transfer and trustworthiness via the trustee's return amount.
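A minimal sketch of the payoffs implied by this description, assuming the standard parameterization in which the trustee has no separate endowment (names and the return bound are illustrative):

```python
def tg_payoffs(m: float, transfer: float, returned: float) -> tuple[float, float]:
    """Trust game: the transfer is tripled on receipt; the trustee returns part of it."""
    assert 0 <= transfer <= m
    received = 3 * transfer
    assert 0 <= returned <= received
    return m - transfer + returned, received - returned  # (trustor payoff, trustee payoff)
```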

# Control and Treatment Conditions

For the Trust Game, we use three control conditions and four AI-assisted treatment conditions to examine the impact of generative AI on trust and trustworthiness outcomes:

a) Control Condition TG-C-NM (No Message): The standard trust game format without the opportunity for pre-play communication.

b) Control Condition TG-C-HM-P (Human Message, Trustor): The trustor can compose a message without AI assistance, which is sent to the trustee along with their transfer.

c) Control Condition TG-C-HM-R (Human Message, Trustee): The trustee can send a message, composed without AI assistance, before the trustor decides how much to send, presumably aiming to influence the trustor's initial transfer.

## Trustor treatments

d) Treatment TG-AI-v-P (voluntary use of AI): The trustor can use AI to craft their message to be sent with the transfer, with autonomy over the prompt and potential edits to the AI’s response.

e) Treatment TG-AI-f-P (mandatory use of AI and fixed prompt): The trustor must use a fixed AI-generated message provided by the experimenters to accompany their transfer. This message must be sent without alteration.

## Trustee treatments

f) Treatment TG-AI-v-R (voluntary use of AI): The trustee voluntarily uses AI to craft a persuasive message before the trustor decides how much to send, presumably aiming to influence the amount of the transfer. The choice of prompt and whether to modify the AI's suggestion is at the trustee's discretion.

g) Treatment TG-AI-f-R (mandatory use of AI and fixed prompt): The trustee must use a specific AI-generated message without any alterations, aiming to influence the trustor’s decision.


# Environment 4: The Public Goods Game (PGG)

This game assesses cooperative behavior within a group. Each participant is endowed with an amount and decides how much to contribute to a public pot. Contributions are added together, multiplied (typically by 2), and then evenly redistributed among all players, regardless of individual contributions. The game explores the conflict between individual incentives and the cooperative collective benefit.
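A minimal sketch of the payoff rule this describes; the endowment, group size, and multiplier of 2 are placeholders rather than the registered parameters:

```python
def pgg_payoffs(endowment: float, contributions: list[float], multiplier: float = 2.0) -> list[float]:
    """Public goods game: contributions are pooled, multiplied, and shared equally."""
    assert all(0 <= c <= endowment for c in contributions)
    share = multiplier * sum(contributions) / len(contributions)
    return [endowment - c + share for c in contributions]  # one payoff per group member
```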

# Control and Treatment Conditions

a) Control Condition PGG-C-NM (No Message): The standard public goods game without pre-play communication.

b) Control Condition PGG-C-HM (Human Message): A randomly chosen (or possibly voluntary) member of the group can send a message to their group without AI assistance, encouraging contributions.

c) Treatment PGG-AI-v (voluntary use of AI): A randomly chosen (or possibly voluntary) member of the group uses AI to craft a message encouraging contributions. The participant decides on the prompt and can edit the AI's output.

d) Treatment PGG-AI-f (mandatory use of AI and fixed prompt): A randomly chosen (or possibly voluntary) member of the group is required to use a specific AI-generated message to encourage contributions, with the message sent as is.
Experimental Design Details
Not available
Randomization Method
Computer-based. For recruitment, ORSEE randomizes invitations; for treatment assignment, we will program the oTree interface to select a treatment at random in every session.
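As an illustration of what this could look like in an oTree app, a hedged sketch (the condition labels and hook body are hypothetical, not the registered implementation):

```python
import random

TREATMENTS = ['C-NM', 'C-HM', 'AI-v', 'AI-f']  # illustrative condition labels

def creating_session(subsession):
    # oTree calls this hook when a session is created; drawing the treatment here
    # and storing it in session.vars applies a single condition to the whole session.
    if 'treatment' not in subsession.session.vars:
        subsession.session.vars['treatment'] = random.choice(TREATMENTS)
```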
Randomization Unit
Human participant
Was the treatment clustered?
No

Experiment Characteristics

Sample size: planned number of clusters
NA
Sample size: planned number of observations
Planned Number of Observations: For each of the two-player games (DG, UG, and TG): 60 pairs (120 observation units) per game, for a subtotal of 180 pairs (360 observation units). For the PGG: 60 groups of 4 (240 observation units). This yields a total of 600 observation units.
Sample size (or number of clusters) by treatment arms
Planned Number of Observations: For each of the two-player games (DG, UG, and TG): 60 pairs (120 observation units) per game, for a subtotal of 180 pairs (360 observation units). For the PGG: 60 groups of 4 (240 observation units). This yields a total of 600 observation units.
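The arithmetic behind these totals, as a quick check:

```python
pairs_per_game, two_player_games = 60, 3   # DG, UG, TG
pgg_groups, group_size = 60, 4

two_player_obs = two_player_games * pairs_per_game * 2   # 360 observation units
pgg_obs = pgg_groups * group_size                        # 240 observation units
total_obs = two_player_obs + pgg_obs                     # 600 observation units
```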
Minimum detectable effect size for main outcomes (accounting for sample design and clustering)
IRB

Institutional Review Boards (IRBs)

IRB Name
IRB Approval Date
IRB Approval Number