Donations in the digital age: Effects of human-machine interaction on donation behavior

Last registered on November 25, 2025

Pre-Trial

Trial Information

General Information

Title
Donations in the digital age: Effects of human-machine interaction on donation behavior
RCT ID
AEARCTR-0017292
Initial registration date
November 20, 2025


First published
November 25, 2025, 7:48 AM EST


Locations

There is information in this trial unavailable to the public.

Primary Investigator

Affiliation
University of Bern

Other Primary Investigator(s)

PI Affiliation
University of Bern
PI Affiliation
University of Bern
PI Affiliation
University of Bern

Additional Trial Information

Status
In development
Start date
2025-11-24
End date
2027-02-28
Secondary IDs
Prior work
This trial does not extend or rely on any prior RCTs.
Abstract
The rise of digital donations and the potential of artificial intelligence in fundraising are reshaping charitable giving. While fundraising has traditionally relied on human interaction or non-interactive channels, digitalization and rising costs are prompting charities to explore automated alternatives such as chatbots. Research suggests that people respond differently to machines than to humans, often showing weaker emotional and social reactions, yet little is known about how such differences affect actual donation behavior. We address this gap with a controlled online experiment using a real-effort task followed by a dictator game in which participants decide how much of their earnings to donate to a charity. Participants are randomly assigned to one of three conditions, differing in the channel through which they report their donation amount: (i) FORM, where they report donations via a non-interactive input form, (ii) CHATBOT, where they interact with a chatbot (based on OpenAI technology), or (iii) CHAT HUMAN, where they interact with a real person via live chat. All conditions follow the same structure for comparability, with participants fully informed about whether they are interacting with a chatbot or a human agent. We test whether interactive chats increase donations compared to forms, and whether human chats elicit higher giving than chatbot chats. We also examine mechanisms such as social image concerns, perceived closeness, and emotional responses. The findings will provide novel insights for behavioral research and practical guidance for nonprofit fundraising in the digital age.
External Link(s)

Registration Citation

Citation
Dadic, Hana et al. 2025. "Donations in the digital age: Effects of human-machine interaction on donation behavior." AEA RCT Registry. November 25. https://doi.org/10.1257/rct.17292-1.0
Experimental Details

Interventions

Intervention(s)
The online experiment uses a between-subject design with three conditions:

1. FORM: Participants provide their donation choice via a non-interactive online form. They select a charity from a list and enter the amount they wish to donate in a simple input field. Instead of a chat, they can access an FAQ page to address questions about the charities.
2. CHATBOT: Participants provide their donation choice in a chat with a chatbot. Within the chat, they can ask questions about the charities (FAQ), select their preferred charity, and indicate their donation amount. Participants know they are communicating with a machine.
3. CHAT HUMAN: Participants provide their donation choice in a live chat with a real person. As in the chatbot condition, the chat includes the FAQ function, the selection of a charity, and the input of the donation amount. Participants know they are communicating with a real person.
Intervention Start Date
2025-11-24
Intervention End Date
2026-01-31

Primary Outcomes

Primary Outcomes (end points)
Amount of money donated
Primary Outcomes (explanation)

Secondary Outcomes

Secondary Outcomes (end points)
Beliefs about social norms, social image concerns, emotions, Inclusion of the Other in the Self (IOS) Scale, use of the skip button, time to skip (right-censored if no skip)
Secondary Outcomes (explanation)

Experimental Design

Experimental Design
With our experiment, we test two hypotheses. First, we expect that people will donate more when they report their donation amount to either a chatbot or a human agent, compared to a non-interactive form. This prediction is based on the idea that both the human-like features of a chatbot and a real human presence can activate social image concerns. Second, we expect that donations will be higher when participants interact with a human rather than a chatbot. Here, the presence of a real person may heighten social image concerns, social and emotional closeness, and the desire for social approval, all of which can lead to more generous behavior.

To test these hypotheses, we conduct a between-subject online experiment. In the first part, participants complete a real-effort task (counting zeros) to earn money; in the second part, they play a dictator game in which they decide how much of their earnings to donate to charity. We offer well-known charities. Participants are randomly assigned to one of three treatments. In the FORM treatment, participants read FAQs, select their preferred charity, and indicate their donation amount via a simple, non-interactive input field. To maintain comparability with the interactive treatments, information is revealed step by step, mimicking a conversational flow. In the CHATBOT treatment, participants enter a chat with a chatbot. In the third treatment, CHAT HUMAN, they enter a live chat with a real person. In all treatments, the procedure follows the same structure: a brief greeting, a limited-time opportunity to ask questions (in the FORM treatment, access to the FAQ page), the selection of a charity, the input of a donation amount, and a closing thank-you message including the opportunity to receive confirmation of the donation. In the CHATBOT condition, the chatbot is explicitly prompted to provide these elements, while in the CHAT HUMAN condition the human agent receives the same instructions as the chatbot prompts, ensuring comparability across treatments.

Participants are informed in advance about their reporting channel (form, chatbot, human). Those in the CHATBOT treatment know they are chatting with a chatbot, while those in the CHAT HUMAN condition know they are chatting with a real person. To further enhance the credibility of these framings, we inform participants in advance that a voluntary verification step will follow the experiment. In the CHAT HUMAN condition, participants are given the option to initiate a one-way live call with the assistant they have been chatting with, accompanied by a short chat option. In the CHATBOT condition, we provide subtle cues for verification, such as requests for the bot to generate a specific poem.

In addition, we will elicit decision makers’ beliefs about injunctive and descriptive norms. Belief elicitation is incentivized and takes place after participants have made their donation decision.

Participants will also complete a short survey comprising self-report scales (social image concerns, Inclusion of the Other in the Self (IOS) Scale, emotions, technical affinity, perception of the donation process) and socio-demographic variables (e.g., age, gender, income, education).
Experimental Design Details
Not available
Randomization Method
Computer (online experiment)
Randomization Unit
Individual
Was the treatment clustered?
No
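The individual-level computer randomization described above could be implemented in many ways; the sketch below is one illustrative (not the authors') approach that shuffles complete blocks of the three conditions so the arms stay balanced. Names such as `assign_conditions` are assumptions for this example.

```python
import random

# The three treatment arms from the registration
CONDITIONS = ["FORM", "CHATBOT", "CHAT HUMAN"]


def assign_conditions(participant_ids, seed=None):
    """Balanced individual-level randomization: shuffle complete blocks
    of the three conditions, so arm sizes never differ by more than one."""
    rng = random.Random(seed)
    assignments = {}
    block = []
    for pid in participant_ids:
        if not block:
            block = CONDITIONS[:]   # start a fresh block of all three arms
            rng.shuffle(block)
        assignments[pid] = block.pop()
    return assignments


# Nine participants -> exactly three per condition
demo = assign_conditions(range(9), seed=42)
```

Block randomization (rather than independent coin flips per participant) keeps the realized arm sizes close to the planned ~184 per treatment.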

Experiment Characteristics

Sample size: planned number of clusters
The experiment will be run on Prolific. The number of participants planned for the experiment is 519 people completing the study. We advertise for 580 participants to take into account individuals who might not be matched with a human agent and/or who must be excluded due to the restrictions below.

We will exclude subjects who:
- complete the survey more than two standard deviations faster than the average completion time;
- do not complete the study within 45 minutes of starting;
- do not complete time-limited parts within the given time;
- do not complete the study for other reasons (e.g., no match within the defined time);
- exit and then re-enter the task as a new subject (as these individuals might see multiple treatments);
- do not answer one of the control questions correctly on the third attempt;
- fail the attention check question;
- incorrectly answer the control question about the reporting channel;
- engage in behavior that suggests deliberate disruption of the task (e.g., offensive content).

Based on previous research, we expect to exclude about 5 percent of participants because they do not meet the restrictions above.
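The speed-based exclusion rule (completion more than two standard deviations faster than the mean) can be sketched as a simple filter; `flag_speeders` is a hypothetical helper for illustration, not the study's actual pipeline.

```python
import statistics


def flag_speeders(durations_sec, k=2.0):
    """Flag sessions completed more than k standard deviations faster
    than the mean completion time (illustrative threshold rule only)."""
    mean = statistics.mean(durations_sec)
    sd = statistics.stdev(durations_sec)
    cutoff = mean - k * sd
    return [d < cutoff for d in durations_sec]


# Twenty typical sessions (~600 s) and one implausibly fast one (100 s)
flags = flag_speeders([600] * 20 + [100])
```

Note that the threshold depends on the realized sample, so in practice it would be computed once over all completed sessions before applying the other exclusion criteria.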
Sample size: planned number of observations
519 participants completing the study (recruitment and exclusion criteria as described above).
Sample size (or number of clusters) by treatment arms
For each of the experimental treatments: About 184 individuals
Minimum detectable effect size for main outcomes (accounting for sample design and clustering)
Based on a two-sided Wilcoxon-Mann-Whitney test, an error probability of 0.05, and a power of 0.80, we require about 184 individuals per treatment to detect an effect of Cohen’s d of 0.30.
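The stated per-group n can be approximated from first principles: compute the normal-approximation two-sample t-test sample size and inflate it by the Wilcoxon-Mann-Whitney test's asymptotic relative efficiency of 3/π under normal-shift alternatives. This sketch (using only the standard library; the exact figure from power software is 184, the normal approximation below lands within one or two of that) is illustrative, not the authors' calculation.

```python
import math
from statistics import NormalDist


def mw_sample_size(d, alpha=0.05, power=0.80):
    """Approximate per-group n for a two-sided Wilcoxon-Mann-Whitney test
    at Cohen's d: normal-approximation t-test formula divided by the
    asymptotic relative efficiency 3/pi (normal-shift alternative)."""
    z_a = NormalDist().inv_cdf(1 - alpha / 2)   # critical value, two-sided
    z_b = NormalDist().inv_cdf(power)           # power quantile
    n_t = 2 * ((z_a + z_b) / d) ** 2            # per-group n for the t-test
    return math.ceil(n_t / (3 / math.pi))       # inflate for the rank test


n_per_group = mw_sample_size(0.30)
```

With d = 0.30, α = 0.05, and power 0.80, this reproduces the registered target of roughly 184 individuals per treatment arm.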
IRB

Institutional Review Boards (IRBs)

IRB Name
Ethics Committee of the Faculty of Business, Economics and Social Sciences of the University of Bern
IRB Approval Date
2025-11-11
IRB Approval Number
2025-10
Analysis Plan

There is information in this trial unavailable to the public.