Emotion Expression with Human vs. Robot

Last registered on October 31, 2022


Trial Information

General Information

Emotion Expression with Human vs. Robot
Initial registration date
October 29, 2022


First published
October 31, 2022, 4:43 PM EDT




Primary Investigator

Monash University

Other Primary Investigator(s)

PI Affiliation
Shanghai Jiao Tong University
PI Affiliation
Wichita State University
PI Affiliation
Shanghai Jiao Tong University

Additional Trial Information

Ongoing
Start date
End date
Secondary IDs
Prior work
This trial does not extend or rely on any prior RCTs.
People may experience negative emotions when treated unfairly. In this project, we investigate whether different communication protocols can moderate such negative emotions, in comparison to settings where no communication opportunity is available. We test several scenarios in which these negative emotions can be expressed through human-human or human-robot interactions. The evidence can shed light on the effectiveness of increasingly widespread robot customer service.
External Link(s)

Registration Citation

Feng, Xuechun et al. 2022. "Emotion Expression with Human vs. Robot." AEA RCT Registry. October 31. https://doi.org/10.1257/rct.10216-1.0
Sponsors & Partners


Experimental Details


The communication channel for expressing emotion is varied by treatment.
Intervention Start Date
Intervention End Date

Primary Outcomes

Primary Outcomes (end points)
Amount required by the "proposer"
Decision made by the "responder"
Message sent by the "responder" in all treatments with communication
Primary Outcomes (explanation)
Xiao and Houser (2005) found that when the responder can express her negative emotions about an unfair offer to the proposer, rejections are less frequent in an ultimatum game. If the moderating effect of communication extends to our game setting, our main hypothesis will be:

Hypothesis 1: Rejection Rate in CWP ≤ Rejection Rate in CWTP < Rejection Rate in CWB ≤ Rejection Rate in Baseline

Here we predict that treatment CWB may not be very effective, since bots lack the ability to empathize. CWTP may not be as effective as CWP, since in CWP a soft punishment can be delivered directly to the take authority.

For the take authority, if they reason strategically and anticipate the moderating effect of the different treatments, they are likely to propose a greedier split of the endowment:

Hypothesis 2: Average amount required in Baseline ≤ Average amount required in CWB < Average amount required in CWTP ≤ Average amount required in CWP
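Each pairwise comparison in Hypothesis 1 can be tested by comparing rejection rates between two treatment arms. The sketch below shows a standard two-proportion z-test in plain Python; the function name and the example counts are illustrative only and are not data from this trial.

```python
from math import sqrt

def two_prop_z(r1, n1, r2, n2):
    """Two-proportion z-statistic for H0: p1 == p2.

    r1, r2: rejection counts in each treatment arm.
    n1, n2: number of responders in each treatment arm.
    """
    p1, p2 = r1 / n1, r2 / n2
    pooled = (r1 + r2) / (n1 + n2)  # pooled proportion under H0
    se = sqrt(pooled * (1 - pooled) * (1 / n1 + 1 / n2))
    return (p1 - p2) / se

# Illustrative counts only, e.g. CWP vs. Baseline with 50 responders each:
z = two_prop_z(10, 50, 22, 50)
```

A negative z-statistic here would be consistent with the hypothesized ordering (fewer rejections in CWP than in Baseline); in practice the planned comparisons would likely also adjust for multiple testing.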

Secondary Outcomes

Secondary Outcomes (end points)
Secondary Outcomes (explanation)

Experimental Design

Experimental Design
We test several environments where negative emotion can be expressed directly towards the source of the unfairness, an uninvolved third party, or a bot.
Experimental Design Details
There are a baseline and three treatments.

Baseline (No Communication): Subjects start with "counting zero" tasks to earn an initial endowment. They are then randomly paired to play a simplified power-to-take game (see Bosman and Winden 2002). The take authority (randomly assigned) first proposes X (0 ≤ X ≤ 50) points, the amount he wants to take from the responder. The responder (randomly assigned) then chooses whether to accept or reject. The game is played only once.

CWP (Communication with the Proposer): The responder can choose to write a free-form message (sending an empty message is allowed) to the take authority when she makes her decision to accept or reject. The message will be confirmed by the take authority, and the responder will be informed that her message has been received.

CWB (Communication with the Bot): Similar to the CWP treatment. The difference is that the message is sent to a bot instead. The responder will receive an automatic reply of "Message received".

CWTP (Communication with the Third Party): In this treatment, subjects are divided into groups of three rather than two, and a third party is introduced. The message and the decision made by the take authority will be confirmed by the third party. The responder will be informed that her message has been received.

Risk preferences and some demographic information are collected from each subject in a post-experiment questionnaire.
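The Baseline protocol above can be summarized as a simple payoff rule. The sketch below is a minimal illustration, not the experimental software: the registration does not spell out the payoffs after rejection, so the sketch assumes an ultimatum-style rule in which both players earn zero from the contested points, and it treats 50 points as the responder's stake implied by the bound on X.

```python
ENDOWMENT = 50  # responder's stake implied by the bound 0 <= X <= 50 (assumed)

def payoffs(x, accept):
    """Return (take authority, responder) payoffs from the contested points.

    x: the amount the take authority proposes to take (0 <= x <= 50).
    accept: the responder's decision.
    Rejection payoffs of (0, 0) are an assumption, not stated in the
    registration.
    """
    if not 0 <= x <= ENDOWMENT:
        raise ValueError("x must be between 0 and 50")
    if accept:
        return x, ENDOWMENT - x
    return 0, 0
```

For example, an accepted proposal of 30 points would leave the take authority with 30 and the responder with 20 under this assumed rule.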
Randomization Method
The randomization is computer-generated. Students are randomly assigned to the different treatments.
Randomization Unit
Was the treatment clustered?

Experiment Characteristics

Sample size: planned number of clusters
Around 240 groups of either two or three subjects, depending on the treatment. In total, we expect around 540 subjects.
Sample size: planned number of observations
See "Planned Number of Clusters"
Sample size (or number of clusters) by treatment arms
Around 45-60 groups for each treatment
Minimum detectable effect size for main outcomes (accounting for sample design and clustering)

Institutional Review Boards (IRBs)

IRB Name
IRB Approval Date
IRB Approval Number


Post Trial Information

Study Withdrawal



Is the intervention completed?
Data Collection Complete
Data Publication

Data Publication

Is public data available?

Program Files

Program Files
Reports, Papers & Other Materials

Relevant Paper(s)

Reports & Other Materials