AI Neutrality, Social Image Concerns, and Uptake of Mental Health Support: A Randomized Experiment with Incentive-Compatible Design

Last registered on April 24, 2026

Pre-Trial

Trial Information

General Information

Title
AI Neutrality, Social Image Concerns, and Uptake of Mental Health Support: A Randomized Experiment with Incentive-Compatible Design
RCT ID
AEARCTR-0018271
Initial registration date
April 17, 2026


First published
April 24, 2026, 8:40 AM EDT


Locations


Primary Investigator

Affiliation
Peking University

Other Primary Investigator(s)

PI Affiliation
Peking University

Additional Trial Information

Status
Ongoing
Start date
2026-03-22
End date
2026-07-17
Secondary IDs
Prior work
This trial does not extend or rely on any prior RCTs.
Abstract
This study examines individuals’ willingness to pay for emotional support provided by either a human listener or an AI system. Participants take part in an incentivized online experiment in which they receive a fixed monetary endowment and report their maximum willingness to pay for a 10-minute anonymous text-based conversation about a recent personal concern. The listener is either another randomly matched participant or a trained AI chatbot. Actual purchases are determined using a Becker–DeGroot–Marschak (BDM) mechanism with real monetary consequences.

The experiment uses a three-arm between-subjects design. In the control condition, participants receive no additional information prior to the conversation. In Treatment 1, participants are provided with neutral information explaining that the AI system generates responses based on statistical language patterns and does not form subjective judgments. In Treatment 2, participants receive the same AI neutrality information and are additionally told that, after the conversation, the listener (human or AI) will generate a brief evaluative summary of the participant’s personal characteristics based on the interaction.

We collect detailed measures of social image concern, perceived judgment, empathy, privacy concern, prior experience with AI, prior use of mental health services, and demographic characteristics. Following the main experiment, participants are given free access to the same AI emotional support platform for two weeks, after which we conduct a follow-up survey to measure subsequent usage and updated beliefs.

The study aims to provide causal evidence on how concerns about social evaluation and perceived AI neutrality shape preferences for AI versus human-provided emotional support services.
External Link(s)

Registration Citation

Citation
Du, Jiachuan and Juanjuan Meng. 2026. "AI Neutrality, Social Image Concerns, and Uptake of Mental Health Support: A Randomized Experiment with Incentive-Compatible Design." AEA RCT Registry. April 24. https://doi.org/10.1257/rct.18271-1.0
Experimental Details

Interventions

Intervention(s)
Intervention Start Date
2026-04-17
Intervention End Date
2026-07-17

Primary Outcomes

Primary Outcomes (end points)
Willingness to Pay (WTP) for Emotional Support.
Maximum stated willingness to pay for a 10-minute emotional support chat with (a) a human listener, and (b) an AI listener. These are incentivized using a Becker–DeGroot–Marschak (BDM) mechanism.
Primary Outcomes (explanation)

Secondary Outcomes

Secondary Outcomes (end points)
Secondary Outcomes (explanation)

Experimental Design

Experimental Design
This study is an incentivized online experiment examining demand for emotional support provided by either a human listener or an AI system.

Participants are recruited to complete an online survey and receive a fixed endowment. They report their maximum willingness to pay (WTP) for a 10-minute anonymous, text-based emotional support conversation under two possible providers: (1) a randomly matched human participant, or (2) an AI chatbot designed to provide emotional companionship. WTP is elicited using an incentive-compatible Becker–DeGroot–Marschak (BDM) mechanism. After all responses are collected, one provider type is randomly assigned (with equal probability in most cases), and a random price is drawn. Participants purchase the assigned service if and only if their stated WTP is weakly greater than the drawn price.
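The BDM purchase rule described above can be sketched as follows. This is an illustrative simulation, not the platform's actual code; the price range and currency units are hypothetical assumptions, as the registration does not specify the support of the random price draw.

```python
import random

def run_bdm(stated_wtp, price_min=0.0, price_max=20.0, seed=None):
    """Simulate one BDM draw for a participant.

    The participant purchases the service if and only if their stated
    WTP is weakly greater than the randomly drawn price, and in that
    case pays the drawn price rather than the stated WTP. This is what
    makes truthful reporting of WTP the optimal strategy.

    The price bounds here are illustrative placeholders.
    """
    rng = random.Random(seed)
    price = rng.uniform(price_min, price_max)
    purchased = stated_wtp >= price
    payment = price if purchased else 0.0
    return purchased, price, payment
```

Because the payment equals the drawn price and not the stated WTP, over- or under-stating WTP can only cause the participant to buy at a price above their true value or miss a purchase below it, which is the standard incentive-compatibility argument for the BDM mechanism.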

The experiment uses a three-arm between-subjects design.

Control: No additional information.

T1: Participants are informed that the AI generates responses based on statistical patterns and does not form subjective judgments.

T2: Same as T1, with the addition that after the conversation, the listener (human or AI) will generate a brief evaluative summary of the participant.

Participants are randomly assigned to one of the three treatment arms.

If participants do not purchase the service, they complete a compensated reading comprehension task instead. In a subset of cases, participants may also be randomly assigned to serve as a human listener for another participant.

The study further collects survey measures of potential channel (mechanism) variables and of variables used for heterogeneity analysis.
Experimental Design Details
Not available
Randomization Method
Randomization is conducted automatically by a computer program embedded in the online experimental platform. Treatment arm assignment, role assignment (speaker vs. listener, when applicable), BDM arm assignment (human vs. AI), and BDM random price draws are all generated by computer-based random number generators without researcher discretion.
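A minimal sketch of the treatment-arm assignment step, assuming balanced assignment to the three equal-sized arms described in the sample-size plan; the actual platform code is not part of the registration, and the function and seed here are illustrative.

```python
import random

def assign_arms(participant_ids, arms=("Control", "T1", "T2"), seed=42):
    """Balanced random assignment of individuals to treatment arms.

    Shuffles the participant list with a seeded RNG, then deals
    participants round-robin into the arms, so group sizes differ by
    at most one. With 600 participants and three arms this yields the
    planned 200 per arm.
    """
    rng = random.Random(seed)
    ids = list(participant_ids)
    rng.shuffle(ids)
    return {pid: arms[i % len(arms)] for i, pid in enumerate(ids)}
```

Role assignment (speaker vs. listener), the BDM provider draw (human vs. AI), and the BDM price draw would each use analogous seeded random draws, all without researcher discretion.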
Randomization Unit
The primary unit of randomization is the individual participant.
Was the treatment clustered?
No

Experiment Characteristics

Sample size: planned number of clusters
600 individuals
Sample size: planned number of observations
600 individuals
Sample size (or number of clusters) by treatment arms
The study includes 200 individuals in the control group, 200 individuals assigned to Treatment 1 (AI Neutrality Information only), and 200 individuals assigned to Treatment 2 (AI Neutrality Information + Post-Conversation Evaluation).
Minimum detectable effect size for main outcomes (accounting for sample design and clustering)
IRB

Institutional Review Boards (IRBs)

IRB Name
AI Neutrality, Social Image Concerns, and Uptake of Mental Health Support: A Randomized Experiment with Incentive-Compatible Design
IRB Approval Date
2026-02-12
IRB Approval Number
2026-07