Spillover of Empathy: Does Receiving AI Empathy Make People More Prosocial toward Others?

Last registered on May 11, 2026

Trial Information

General Information

Title
Spillover of Empathy: Does Receiving AI Empathy Make People More Prosocial toward Others?
RCT ID
AEARCTR-0017550
Initial registration date
May 08, 2026

First published
May 11, 2026, 9:20 AM EDT

Locations

There is information in this trial unavailable to the public.

Primary Investigator

Affiliation
UCLA Anderson

Other Primary Investigator(s)

PI Affiliation
Guanghua School of Management, Peking University
PI Affiliation
Guanghua School of Management, Peking University

Additional Trial Information

Status
In development
Start date
2026-05-08
End date
2026-09-02
Secondary IDs
Prior work
This trial does not extend or rely on any prior RCTs.
Abstract
This randomized controlled trial studies whether receiving empathy from an AI chatbot spills over into greater prosocial behavior toward other people. Participants will be randomly assigned to one of three 10-minute AI conversation conditions: a neutral-topic AI conversation, a high-empathy AI conversation about a recent personal experience, or a low-empathy AI conversation about the same type of personal experience. After the conversation, we measure prosocial behavior toward another participant through helping willingness and subsequent helping quality. The study asks whether AI empathy merely affects users’ own experience or can also change how they subsequently treat real strangers.
External Link(s)

Registration Citation

Citation
Che, Chelsea Tianyi, Tingzhu Fan and Juanjuan Meng. 2026. "Spillover of Empathy: Does Receiving AI Empathy Make People More Prosocial toward Others?" AEA RCT Registry. May 11. https://doi.org/10.1257/rct.17550-1.0
Experimental Details

Interventions

Intervention(s)
Participants will be randomly assigned to one of three experimental conditions. In all conditions, participants will engage in a 10-minute text-based conversation with an AI chatbot. The conditions vary in conversation topic and chatbot empathy. After the interaction, participants complete behavioral tasks designed to assess subsequent helping behavior toward other people.
Intervention Start Date
2026-05-09
Intervention End Date
2026-08-31

Primary Outcomes

Primary Outcomes (end points)
• Extensive margin of helping: the maximum number of minutes, from 0 to 15, that the participant is willing to spend helping another participant, elicited using a BDM-style time mechanism.
• Intensive margin of helping: the quality of helping behavior among participants who complete a follow-up helping task.
Primary Outcomes (explanation)
The extensive margin is measured for all participants after the AI conversation and before the follow-up task is assigned. Each participant reports the maximum time X, between 0 and 15 minutes, that they are willing to spend helping. If the writing task is selected, a random time P between 0 and 15 minutes is drawn. If X is greater than or equal to P, the participant completes the writing task for P minutes; if X is less than P, the participant does not complete the task. The reported X is analyzed as the extensive-margin helping outcome.
The intensive margin is measured among participants who complete a relevant follow-up helping task. It captures the quality of helping behavior using ratings and/or coding.
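The BDM-style elicitation described above can be sketched as a short decision rule. This is a minimal illustration, not the authors' implementation; in particular, the distribution of the random time P is not specified in the registration, so the uniform draw below is an assumption.

```python
import random


def bdm_helping_outcome(x_reported: float, p_drawn: float) -> float:
    """BDM-style rule: the participant helps for p_drawn minutes
    if and only if their reported maximum willingness x_reported >= p_drawn."""
    if not 0 <= x_reported <= 15:
        raise ValueError("reported willingness must be in [0, 15] minutes")
    return p_drawn if x_reported >= p_drawn else 0.0


def run_elicitation(x_reported: float, rng: random.Random) -> float:
    """Draw a random time P (here: uniform on [0, 15], an assumption)
    and apply the BDM rule to determine minutes of helping."""
    p = rng.uniform(0, 15)
    return bdm_helping_outcome(x_reported, p)
```

A useful property of this rule is that truthful reporting is a dominant strategy: the reported X determines only whether the task happens, never how long it lasts, so participants cannot gain by misreporting their true maximum.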

Secondary Outcomes

Secondary Outcomes (end points)
• Donation: the number of bonus tokens, from 0 to 150, that the participant chooses to donate to a nonprofit organization.
• Follow-up interaction ratings: participant ratings of the follow-up helping interaction, when applicable.
Secondary Outcomes (explanation)
Donation is measured for all participants as a secondary behavioral outcome. Follow-up interaction ratings are measured among participants who complete a relevant follow-up helping interaction.

Experimental Design

Experimental Design
This study uses a between-subjects randomized controlled trial design. Eligible participants are recruited through online experimental platforms and randomly assigned to one of three AI conversation conditions. Each participant completes a 10-minute text-based interaction, followed by behavioral tasks measuring helping willingness and subsequent helping quality. Additional measures are collected after the primary helping outcomes.
Experimental Design Details
Not available
Randomization Method
Participants will be randomized into one of the three treatment arms in a 1:1:1 ratio using a computerized random number generator embedded in the experimental platform. Randomization occurs after eligibility screening and before the 10-minute AI conversation. Follow-up task assignment, where applicable, will also be randomized at the individual level.
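The registration specifies only a 1:1:1 ratio via a computerized random number generator. One common way to guarantee an exact 1:1:1 ratio is permuted-block randomization with blocks of size three, sketched below; this is an illustrative assumption, not necessarily the platform's actual mechanism.

```python
import random

ARMS = ["control", "high_empathy", "low_empathy"]


def assign_arms(n_participants: int, seed: int = 0) -> list:
    """Assign participants to the three arms in a 1:1:1 ratio using
    permuted blocks of size 3: every consecutive block of three
    assignments contains each arm exactly once, in random order."""
    rng = random.Random(seed)
    assignments = []
    while len(assignments) < n_participants:
        block = ARMS[:]
        rng.shuffle(block)
        assignments.extend(block)
    return assignments[:n_participants]
```

Blocking keeps arm sizes balanced throughout recruitment, which matters when enrollment may stop early or vary across recruitment platforms.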
Randomization Unit
Individual participant
Was the treatment clustered?
No

Experiment Characteristics

Sample size: planned number of clusters
No clusters
Sample size: planned number of observations
The target is 450–600 completed eligible participants in total, corresponding to approximately 150–200 completed participants per treatment arm. This target refers to the full combined study sample and does not pre-specify allocation by recruitment platform.
Sample size (or number of clusters) by treatment arms
• Control AI conversation: 150–200 completed participants
• High-empathy AI conversation: 150–200 completed participants
• Low-empathy AI conversation: 150–200 completed participants
The number initially recruited may exceed the number of completed eligible observations due to screening, exclusion, attrition, or technical issues.
Minimum detectable effect size for main outcomes (accounting for sample design and clustering)
With 150–200 completed participants per arm and two-sided alpha = 0.05, a pairwise comparison between two arms has approximately 80% power to detect a standardized mean difference of roughly 0.28–0.32 standard deviations for full-sample outcomes. Detectable effects for outcomes measured in follow-up task subsamples may be larger.
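The stated detectable effect sizes can be reproduced with the standard normal approximation for a two-sided, two-sample comparison with equal arm sizes, MDE = (z_{1-α/2} + z_{power}) · sqrt(2/n):

```python
from math import sqrt
from statistics import NormalDist


def mde_two_sample(n_per_arm: int, alpha: float = 0.05, power: float = 0.80) -> float:
    """Minimum detectable standardized mean difference (Cohen's d) for a
    two-sided, two-sample comparison with n_per_arm per arm, using the
    normal approximation MDE = (z_{1-alpha/2} + z_{power}) * sqrt(2/n)."""
    z = NormalDist().inv_cdf
    return (z(1 - alpha / 2) + z(power)) * sqrt(2 / n_per_arm)


print(round(mde_two_sample(200), 3))  # ≈ 0.280
print(round(mde_two_sample(150), 3))  # ≈ 0.323
```

These values match the registered range of roughly 0.28–0.32 standard deviations for 150–200 completed participants per arm.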
IRB

Institutional Review Boards (IRBs)

IRB Name
Institutional Review Board, Guanghua School of Management, Peking University
IRB Approval Date
2025-12-04
IRB Approval Number
#2025-38