Experimental Design
Our experiment tests two hypotheses. First, we expect that people will donate more when they report their donation amount to either a chatbot or a human agent than when they report it via a non-interactive form. This prediction rests on the idea that both the human-like features of a chatbot and the presence of a real human can activate social image concerns. Second, we expect donations to be higher when participants interact with a human rather than a chatbot. The presence of a real person may heighten social image concerns, social and emotional closeness, and the desire for social approval, all of which can lead to more generous behavior.
To test these hypotheses, we conduct a between-subjects online experiment. In the first part, participants complete a real-effort task (counting zeros) to earn money. In the second part, they play a dictator game in which they decide how much of their earnings to donate to charity, choosing among several well-known charities. Participants are randomly assigned to one of three treatments. In the FORM treatment, participants read FAQs, select their preferred charity, and indicate their donation amount via a simple, non-interactive input field; to maintain interactivity and comparability with the other treatments, information is revealed step by step, mimicking a conversational flow. In the CHATBOT treatment, participants enter a chat with a chatbot. In the third treatment, CHAT HUMAN, they enter a live chat with a real person. In all treatments, the procedure follows the same structure: a brief greeting, a limited-time opportunity to ask questions (in the FORM treatment, access to a FAQ page), the selection of a charity, the entry of a donation amount, and a closing thank-you message including the opportunity to receive a confirmation of the donation. In the CHATBOT condition, the chatbot is explicitly prompted to provide these elements, while in the CHAT HUMAN condition the human agent receives the same instructions as the chatbot prompts, ensuring comparability across treatments.
Participants are informed in advance about their reporting channel (form, chatbot, human). Those in the CHATBOT treatment know they are chatting with a chatbot, while those in the CHAT HUMAN condition know they are chatting with a real person. To further enhance the credibility of these framings, we inform participants in advance that a voluntary verification step will follow the experiment. In the CHAT HUMAN condition, participants are given the option to initiate a one-way live call, accompanied by a short chat, with the assistant they have been chatting with. In the CHATBOT condition, we provide subtle verification cues, such as asking the bot to generate a specific poem.
In addition, we will elicit decision-makers' beliefs about injunctive and descriptive norms. Belief elicitation is incentivized and takes place after participants have made their donation decision.
Participants will also complete a short survey comprising self-report scales (social image concerns, the Inclusion of the Other in the Self (IOS) scale, emotions, technical affinity, and perception of the donation process) and socio-demographic questions (e.g., age, gender, income, education).