Human-AI interaction in the workplace: Understanding workers' Internal Motivation

Last registered on December 05, 2025

Pre-Trial

Trial Information

General Information

Title
Human-AI interaction in the workplace: Understanding workers' Internal Motivation
RCT ID
AEARCTR-0016059
Initial registration date
December 02, 2025

First published
December 05, 2025, 9:40 AM EST

Locations

There is information in this trial unavailable to the public.

Primary Investigator

Affiliation
Utrecht University

Other Primary Investigator(s)

PI Affiliation
Emlyon Business School
PI Affiliation
Utrecht University

Additional Trial Information

Status
In development
Start date
2025-12-03
End date
2029-01-01
Secondary IDs
Prior work
This trial does not extend or rely on any prior RCTs.
Abstract
This project explores how working with generative AI, compared with a human co-worker, affects intrinsic motivation in the workplace. In an online experiment, participants are randomly matched with either a human or an AI co-worker and then complete a task together. We measure motivation through self-reported scales (perceived competence, perceived autonomy, perceived relatedness, and perceived enjoyment/interest, the primary affective component of intrinsic motivation) and through participants' willingness to repeat the task without pay. The study contributes to understanding how AI reshapes social incentives and motivation at work, with implications for organizational design and human-AI collaboration.

External Link(s)

Registration Citation

Citation
Corgnet, Brice, Elena Fumagalli, and Sarah Rezaei. 2025. "Human-AI interaction in the workplace: Understanding workers' Internal Motivation." AEA RCT Registry. December 05. https://doi.org/10.1257/rct.16059-1.0
Sponsors & Partners

There is information in this trial unavailable to the public.

Experimental Details

Interventions

Intervention(s)
Participants are randomly assigned to complete a creative task (designing a book cover based on a book summary) in collaboration with either a human co-worker (another participant) or a generative AI co-worker (ChatGPT). The task is conducted in a collaborative setting, allowing us to examine how the type of co-worker (human vs. AI) influences participants' intrinsic motivation.
Intervention Start Date
2025-12-03
Intervention End Date
2029-01-01

Primary Outcomes

Primary Outcomes (end points)
Primary Outcome 1: Self-reported intrinsic motivation
Intrinsic motivation will be measured using validated survey scales derived from Self-Determination Theory (Deci & Ryan): perceived competence, perceived autonomy, perceived relatedness, and perceived enjoyment/interest (the primary affective component of intrinsic motivation).

Primary Outcome 2: Behavioral motivation proxy
We will measure behavioral motivation by asking participants whether they want to repeat the same task or complete an alternative task. By varying the length of the alternative task, we can gauge the strength of their internal motivation. This behavioral measure complements self-reported intrinsic motivation.

Specifically, we test the following hypotheses:
Hypothesis 1a. Algo–human teams will decrease perceived relatedness among team members.
Hypothesis 1b. Algo–human teams will increase perceived team functioning, as measured by task orientation.
Hypothesis 2a. Higher perceived relatedness among team members will increase perceived enjoyment of the team task.
Hypothesis 2b. Higher perceived team functioning will increase perceived enjoyment of the team task.
Hypothesis 3. Higher enjoyment of the team task will increase task interest and the willingness to complete the task again.

Primary Outcomes (explanation)

Secondary Outcomes

Secondary Outcomes (end points)

Secondary outcomes correspond to the following hypotheses (AH1–AH4). These outcomes focus on performance, distributional patterns, and moderators:
AH1. Motivational outcomes, measured through task interest and willingness to perform the task, will predict performance on the task. Each individual's performance will be assessed by several external raters recruited from the same population.
AH2. Algo–human teams in the top decile of motivational outcomes or performance will be underrepresented relative to fully human teams.
AH3. Differences in motivational outcomes and performance between treatments will be moderated by team composition, including personality traits, familiarity with large language models, and teamwork experience.
AH4. Task performance will be measured as the difference in participants' performance across the two treatments, as evaluated by multiple external raters recruited from the same population.
Secondary Outcomes (explanation)

Experimental Design

Experimental Design
We use a between-subjects online experiment with two conditions. Participants are randomly assigned to complete a creative task, designing a book cover based on a book summary, in collaboration with either a human co-worker (another participant) or a generative AI co-worker (ChatGPT).

The task is framed as a collaborative activity; all participants receive identical instructions and materials. Random assignment is handled automatically by the experimental platform.

After completing the task, participants respond to a survey measuring intrinsic motivation, using validated scales based on self-determination theory (perceived competence, autonomy, and relatedness). In addition, participants answer three open-ended questions about their engagement during the task, their interaction with their partner (AI or human), and the steps they followed to coordinate the task.

As a behavioral proxy of motivation, participants are also given the option to continue working on either:

1) a similar creative task, or

2) a different repetitive task requiring little creativity.

This choice allows us to assess participants' willingness to continue with the creative task and, by varying the length of the alternative task, the strength of their motivation.
Experimental Design Details
Not available
Randomization Method
A computer automatically randomizes participants through the experimental platform. At the time they begin the task, participants are randomly assigned to one of the two treatment conditions (human co-worker vs. AI co-worker) using a built-in randomization function.
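For concreteness, a minimal sketch of how such individual-level assignment could be implemented is shown below (in Python). It is illustrative only, not the platform's built-in function, and the 2:1 allocation weights are an assumption based on the planned arm sizes of 600 human-co-worker and 300 AI-co-worker participants.

import random

def assign_condition(rng):
    # Illustrative individual-level random assignment with a 2:1 allocation
    # (human co-worker : AI co-worker), assumed from the planned arm sizes.
    return rng.choices(["human_coworker", "ai_coworker"], weights=[2, 1], k=1)[0]

rng = random.Random(2025)  # fixed seed so the sketch is reproducible
assignments = [assign_condition(rng) for _ in range(900)]
print(assignments.count("human_coworker"), assignments.count("ai_coworker"))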
Randomization Unit
The unit of randomization is the individual participant. Each participant is independently assigned to one of the treatment conditions (human co-worker vs. AI co-worker).
Was the treatment clustered?
No

Experiment Characteristics

Sample size: planned number of clusters
Human–AI condition: Treatment is clustered at the individual level (1 participant per cluster). Planned number of clusters = N individuals.

Human–Human condition: Treatment is clustered at the dyad level (2 participants per cluster). Planned number of clusters = N/2 dyads (each dyad = 1 cluster).
Sample size: planned number of observations
We collect data from Prolific and stop data collection once 900 participants have completed the experiment.
Sample size (or number of clusters) by treatment arms
600 participants – Human co-worker treatment

300 participants – Generative AI (ChatGPT) co-worker treatment

Randomization is done at the individual level. Upon starting the experiment, each participant is randomly assigned to one of the two arms.
Minimum detectable effect size for main outcomes (accounting for sample design and clustering)
Our main outcome (willingness to repeat the task) is measured on a 0–10 scale. Based on pilot data, the standard deviation of this outcome is 3.68. Using a significance level of α = 0.05 and accounting for the clustered sample design in the Human–Human condition, the minimum detectable effect size (MDE) is 0.97 when observations within a cluster are assumed to be fully dependent, 0.86 when they are assumed to be fully independent, and 0.87 when we assume an intra-cluster correlation of 0.08, as measured in our pilot.
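As a rough illustration, the sketch below (in Python) applies a standard two-sample normal-approximation formula, inflating the Human–Human arm by a design effect of 1 + (m - 1) x ICC for dyads of size m = 2. The 0.90 power target is our assumption rather than a value stated in this registration, so the resulting figures only approximate the MDEs reported above.

from math import sqrt
from scipy.stats import norm

def mde_two_arm(sd, n_hh, n_ai, icc, cluster_size=2, alpha=0.05, power=0.90):
    # Minimum detectable difference in means between the two arms; the
    # clustered Human-Human arm's variance is inflated by its design effect.
    z = norm.ppf(1 - alpha / 2) + norm.ppf(power)
    deff = 1 + (cluster_size - 1) * icc
    return z * sd * sqrt(deff / n_hh + 1 / n_ai)

sd = 3.68              # pilot standard deviation of willingness to repeat the task
n_hh, n_ai = 600, 300  # planned participants in the human and AI co-worker arms

for label, icc in [("fully independent", 0.0), ("pilot ICC = 0.08", 0.08), ("fully dependent", 1.0)]:
    print(label, round(mde_two_arm(sd, n_hh, n_ai, icc), 2))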
IRB

Institutional Review Boards (IRBs)

IRB Name
Utrecht University, Faculty of Law, Economics and Governance
IRB Approval Date
2025-06-18
IRB Approval Number
25-010