Abstract
This study examines individuals’ willingness to pay for emotional support provided by either a human listener or an AI system. Participants take part in an incentivized online experiment in which they receive a fixed monetary endowment and report their maximum willingness to pay for a 10-minute anonymous text-based conversation about a recent personal concern. The listener is either another randomly matched participant or a trained AI chatbot. Actual purchases are determined using a Becker–DeGroot–Marschak (BDM) mechanism, so stated valuations carry real monetary consequences and truthful reporting is incentive-compatible.
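The BDM purchase rule can be expressed in a few lines. The sketch below is illustrative only: the price range (0 to 5 units of the endowment) and function names are assumptions, not details taken from the study.

```python
import random

def draw_price(low=0.0, high=5.0):
    """Draw a random price uniformly from [low, high].
    The bounds here are hypothetical, not the study's actual range."""
    return random.uniform(low, high)

def bdm_outcome(stated_wtp, drawn_price):
    """BDM rule: the participant buys the conversation if and only if
    their stated willingness to pay is at least the drawn price,
    and then pays the drawn price (not their stated WTP).
    Returns (buys, cost)."""
    buys = stated_wtp >= drawn_price
    cost = drawn_price if buys else 0.0
    return buys, cost

# Example: a participant states a WTP of 3.0.
buys, cost = bdm_outcome(3.0, draw_price())
```

Because the price paid is the randomly drawn price rather than the stated value, over- or under-reporting one's WTP can only hurt: this is what makes truthful reporting a dominant strategy under BDM.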
The experiment uses a three-arm between-subjects design. In the control condition, participants receive no additional information prior to the conversation. In Treatment 1, participants are provided with neutral information explaining that the AI system generates responses based on statistical language patterns and does not form subjective judgments. In Treatment 2, participants receive the same AI neutrality information and are additionally told that, after the conversation, the listener (human or AI) will generate a brief evaluative summary of the participant’s personal characteristics based on the interaction.
We collect detailed measures of social image concerns, perceived judgment, empathy, privacy concerns, prior experience with AI, prior use of mental health services, and demographic characteristics. Following the main experiment, participants receive free access to the same AI emotional support platform for two weeks, after which we conduct a follow-up survey measuring subsequent usage and updated beliefs.
The study aims to provide causal evidence on how concerns about social evaluation and perceived AI neutrality shape preferences for AI versus human-provided emotional support services.