The Rationality of GenAI-Human Trade-Off

Last registered on June 27, 2025

Pre-Trial

Trial Information

General Information

Title
The Rationality of GenAI-Human Trade-Off
RCT ID
AEARCTR-0016234
Initial registration date
June 18, 2025


First published
June 27, 2025, 2:16 PM EDT


Locations

Region

Primary Investigator

Affiliation
Jinan University

Other Primary Investigator(s)

PI Affiliation
Lingnan University
PI Affiliation
University of Science and Technology of China

Additional Trial Information

Status
In development
Start date
2025-06-21
End date
2025-12-31
Secondary IDs
Prior work
This trial does not extend or rely on any prior RCTs.
Abstract
This study examines individuals' economic preferences and rationality in task allocation between human labor and Generative Artificial Intelligence (GenAI) with varying returns. In the experimental design, participants assume the role of employers and make allocation decisions across three conditions: (1) between two human workers, (2) between one GenAI and one human worker, and (3) between another GenAI and one human worker. Each condition comprises 22 decision rounds with different returns. The research aims to investigate individuals' preferences regarding different worker types and to analyze how considerations of equity and efficiency influence their decision-making processes.
External Link(s)

Registration Citation

Citation
Chen, Yiting, You Shan and Shuangyu Yang. 2025. "The Rationality of GenAI-Human Trade-Off." AEA RCT Registry. June 27. https://doi.org/10.1257/rct.16234-1.0
Experimental Details

Interventions

Intervention(s)
Intervention (Hidden)
Intervention Start Date
2025-06-21
Intervention End Date
2025-12-31

Primary Outcomes

Primary Outcomes (end points)
Labor-Type Allocation Bias: the difference between the allocation to a given labor type (human/GenAI) and the profit-maximizing benchmark.
Different levels of rationality across the three conditions.
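The allocation-bias endpoint can be illustrated with a short sketch. The function name, inputs, and the tie-breaking rule (an even split when returns are equal) are our own hypothetical illustration, not the registered analysis code:

```python
import numpy as np

def labor_type_allocation_bias(shares_to_human, r_human, r_genai):
    """Hypothetical allocation-bias measure.

    shares_to_human: observed fraction of the budget allocated to the
        human worker in each decision round.
    r_human, r_genai: per-unit returns of each worker type in each round.
    The profit-maximizing benchmark allocates the whole budget to the
    higher-return worker (0.5 assumed when returns tie).
    """
    s = np.asarray(shares_to_human, dtype=float)
    rh = np.asarray(r_human, dtype=float)
    rg = np.asarray(r_genai, dtype=float)
    benchmark = np.where(rh > rg, 1.0, np.where(rh < rg, 0.0, 0.5))
    # Positive bias = over-allocation to the human worker relative to
    # the profit-maximizing benchmark; negative = under-allocation.
    return float(np.mean(s - benchmark))
```

A zero bias means the participant tracked the profit-maximizing benchmark on average; the sign indicates which labor type is favored.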
Primary Outcomes (explanation)
Rationality measured using the revealed-preference toolkit.
Preference estimation.
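The revealed-preference toolkit referenced above typically begins with a GARP consistency check over the observed price/allocation pairs. A minimal sketch under that assumption (function name and tolerance are ours; a full analysis would usually also report Afriat's CCEI):

```python
import numpy as np

def garp_violations(prices, bundles):
    """Count GARP violations in a set of observed choices.

    prices, bundles: (n_obs, n_goods) arrays. Here each "good" is a
    labor type (human or GenAI) and each row is one decision round.
    """
    p = np.asarray(prices, dtype=float)
    x = np.asarray(bundles, dtype=float)
    n = len(p)
    # cost[i, j] = expenditure at prices i of buying bundle j
    cost = p @ x.T
    # Direct revealed preference: x_i R0 x_j iff p_i.x_i >= p_i.x_j
    direct = cost.diagonal()[:, None] >= cost - 1e-9
    # Warshall-style transitive closure gives the full relation R
    R = direct.copy()
    for k in range(n):
        R |= R[:, [k]] & R[[k], :]
    # GARP violation: x_i R x_j while p_j.x_j > p_j.x_i (strictly)
    strict = cost.diagonal()[:, None] > cost + 1e-9
    violations = R & strict.T
    np.fill_diagonal(violations, False)
    return int(violations.sum())
```

A count of zero means the choices are consistent with maximizing some well-behaved utility function; positive counts index departures from rationality.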

Secondary Outcomes

Secondary Outcomes (end points)
Secondary Outcomes (explanation)

Experimental Design

Experimental Design
This experiment consists of three randomized decision sessions/conditions to measure employer preferences, followed by survey questions.
All participants complete the three decision-making sessions and the survey. The sessions, and the tasks within each session, are presented in random order.
Participants (acting as employers) allocate tasks between different types of labor under varying return conditions. We employ a budget-based design to operationalize these varying returns, adopting the original parameters from Halevy et al. (2018).

The three sessions are presented to subjects in random order:
Session/condition 1: Employers allocate tasks between two human workers under varying returns.
Session/condition 2: Employers allocate tasks between a human worker and a GenAI worker under varying returns.
Session/condition 3: Employers allocate tasks between a human worker and a different GenAI worker under varying returns.

Implementation Details:
Human workers: Tasks assigned to human workers are completed by real Prolific participants, whom we compensate at our own expense.
GenAI workers: Tasks assigned to GenAI workers are executed by the corresponding generative AI models; we pay the service fees to the providing company at our own expense.
Compensation: All participants receive a base payment as a participation fee. Additionally, bonuses are determined by one randomly selected decision round, chosen by the computer.

Other procedure:
Understanding checks: Before the decision rounds, participants complete comprehension tests.
Post-experiment survey: After the three decision sessions, participants answer survey questions assessing: perceived closeness to humans vs. GenAI, basic demographic information, and attitudinal measures to analyze heterogeneity in decision-making.
Experimental Design Details
Randomization Method
Randomization is performed by a built-in computer algorithm during the experiment process.
Randomization Unit
Experimental sessions;
Experimental tasks within each session.
Was the treatment clustered?
No

Experiment Characteristics

Sample size: planned number of clusters
300 individuals
Sample size: planned number of observations
300 individuals
Sample size (or number of clusters) by treatment arms
300 individuals
Minimum detectable effect size for main outcomes (accounting for sample design and clustering)
IRB

Institutional Review Boards (IRBs)

IRB Name
Office of Research and Knowledge Transfer, Research Ethics and Safety, Lingnan University
IRB Approval Date
2025-04-03
IRB Approval Number
EC170-2425

Post-Trial

Post Trial Information

Study Withdrawal

There is information in this trial unavailable to the public.

Intervention

Is the intervention completed?
No
Data Collection Complete
Data Publication

Data Publication

Is public data available?
No

Program Files

Program Files
Reports, Papers & Other Materials

Relevant Paper(s)

Reports & Other Materials