Hard vs Soft Ultimatums

Last registered on May 05, 2022


Trial Information

General Information

Hard vs Soft Ultimatums
Initial registration date
May 04, 2022

Initial registration date is when the trial was registered.

It corresponds to when the registration was submitted to the Registry to be reviewed for publication.

First published
May 05, 2022, 8:47 AM EDT

First published corresponds to when the trial was first made public on the Registry after being reviewed.



Primary Investigator

Naveen Jindal School of Management, The University of Texas at Dallas

Other Primary Investigator(s)

PI Affiliation
Bilkent University

Additional Trial Information

In development
Start date
End date
Secondary IDs
Prior work
This trial does not extend or rely on any prior RCTs.
Negotiation scholars agree on the following recommendation: avoid (hard) ultimatums and do not leave your opponent without any options. Using online experiments on Prolific, we will investigate the effect of the availability of soft ultimatums (ultimatum offers that specify a range of amounts, rather than a single number, that the proposer is willing to give to the responder) on the offers made, the likelihood of acceptance, and responders' satisfaction and fairness perceptions of the negotiated outcome. There are two agents: a proposer and a responder. In the control treatment, they play the standard ultimatum game, in which the proposer offers an amount of a given pie and the responder accepts or rejects it. In the experimental treatment, the proposer can, at his/her discretion, specify a minimum and a maximum amount s/he would be willing to give to the responder. We are interested in the following questions: (i) how frequently is this option used; (ii) does the availability of this option influence the offers made by proposers (e.g., how do the maximum offers in the experimental treatment compare with the offers in the control treatment); (iii) how does the rejection rate for maximum offers in the experimental treatment compare to the rejection rate of equivalent offers in the control treatment; and (iv) how do the satisfaction and fairness perceptions of responders in the experimental treatment compare with those in the control treatment (for comparable offers)?
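The payoff logic of the two treatments can be sketched as follows. This is a minimal illustration, not the study's software: the function and variable names are ours, and the exact interface by which a responder accepts within a soft-ultimatum range is an assumption (here, an accepting responder selects an amount inside the proposed range, with a hard ultimatum corresponding to an empty range where the minimum equals the maximum).

```python
def ultimatum_payoffs(pie, offer, accepted):
    """Standard ultimatum game (UG, control): the proposer offers `offer`
    out of `pie`; if the responder accepts, the split is implemented,
    otherwise both earn zero.  Returns (proposer payoff, responder payoff)."""
    if accepted:
        return pie - offer, offer
    return 0, 0


def soft_ultimatum_payoffs(pie, min_offer, max_offer, accepted_amount):
    """Modified ultimatum game (MUG, experimental): the proposer names a
    range [min_offer, max_offer] s/he is willing to give.  In this sketch,
    an accepting responder picks an amount inside the range (None means
    rejection); min_offer == max_offer reduces to a hard ultimatum."""
    if accepted_amount is None:
        return 0, 0
    assert min_offer <= accepted_amount <= max_offer, "amount outside range"
    return pie - accepted_amount, accepted_amount
```

Note that setting `min_offer == max_offer` makes the MUG payoff function coincide with the standard game, which is what allows the rejection rates for "equivalent offers" to be compared across treatments.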
External Link(s)

Registration Citation

Hyndman, Kyle and Emin Karagözoğlu. 2022. "Hard vs Soft Ultimatums." AEA RCT Registry. May 05. https://doi.org/10.1257/rct.9319-1.0
Experimental Details


The purpose of our intervention is to understand the influence of soft ultimatums on (i) the offers made by proposers; (ii) the acceptance/rejection behavior of responders; (iii) responders' satisfaction with the outcome of bargaining; and (iv) how fair they perceive the offer/outcome to be. To study this, we compare agents' behavior (and answers to some post-experimental questions) across two treatments that vary the availability of soft ultimatums.
Intervention Start Date
Intervention End Date

Primary Outcomes

Primary Outcomes (end points)
• Offers by proposers in the control treatment
• Minimum and maximum offers by proposers in the experimental treatment
• Accept/reject decisions by responders in both treatments
• Answers to outcome satisfaction and fairness perception questions by responders in both treatments
Primary Outcomes (explanation)
These are the outcomes that allow us to test our primary hypotheses.

Secondary Outcomes

Secondary Outcomes (end points)
• Answers to open-ended questions by proposers in the experimental treatment that aim to reveal their mindset (e.g., how they perceive the soft-ultimatum option, why they did or did not use it, and their intentions).
• Decision times (i.e., how long it takes subjects to make their decisions)
Secondary Outcomes (explanation)
These outcomes will allow us to gain further insight into subjects' behavior in the experiment, especially if their behavior does not match our predictions.

Experimental Design

Experimental Design
The experiment has two treatments: the standard ultimatum game (UG, the control treatment) and a modified ultimatum game (MUG, the experimental treatment). Both games are implemented using the direct-response method. We deliberately avoid the strategy method because we believe the treatment variation can have a behavioral impact precisely when responders are in a hot emotional state. The experiment has a between-subjects design: each subject participates in only one of the treatments. We decided against a within-subject design because the treatment variation is simple and transparent, which could easily reveal the research question and induce an experimenter demand effect. The game is one-shot. Each subject plays only one role, and roles are randomly assigned with equal probabilities. Comprehension questions precede the game, and only participants who answer them correctly can proceed with the experiment. Instructions are common knowledge. The experiment will be conducted on Prolific.
Experimental Design Details
Not available
Randomization Method
Randomization done in office by a computer.
Randomization Unit
Groups of participants (sessions) are randomized to experimental treatments.
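Session-level randomization of this kind can be sketched as below. This is an illustration only, assuming a balanced design in which an equal number of sessions is assigned to each arm; the arm labels, function name, and seeding interface are ours, not the study's actual randomization code.

```python
import random


def assign_sessions(n_sessions_per_arm, seed=None):
    """Assign each data-collection wave ("session") as a whole to the
    control (UG) or experimental (MUG) arm, in a balanced random order.
    Passing a seed makes the assignment reproducible."""
    rng = random.Random(seed)
    arms = ["UG"] * n_sessions_per_arm + ["MUG"] * n_sessions_per_arm
    rng.shuffle(arms)
    return arms
```

With 4 or 5 sessions per treatment (as planned below), `assign_sessions(5)` would return a random ordering of ten session-to-arm assignments, five per arm.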
Was the treatment clustered?

Experiment Characteristics

Sample size: planned number of clusters
We plan to conduct data collection in waves of approximately 20-24 pairs per session, which means that we will need to conduct 4 or 5 “sessions” per treatment.
Sample size: planned number of observations
85 complete pairs per treatment, with 2 treatments for a total of 340 subjects.
Sample size (or number of clusters) by treatment arms
85 complete pairs – Control Treatment (UG)
85 complete pairs – Experimental Treatment (MUG)
Minimum detectable effect size for main outcomes (accounting for sample design and clustering)
Assuming that proposers offer approximately 43% of the pie, with a standard deviation of 14 percentage points, we have 80% power to detect an effect size of approximately 6 percentage points (two-sided test). Since our hypothesis on proposers' offers between treatments is one-sided, we should be able to detect smaller differences at the same power, or the same difference with higher power. For the test that the average minimum offer differs from the average maximum offer, using the same starting point of 43% of the pie and a standard deviation of 14, we have 80% power to detect an effect size of approximately 4.3.
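These minimum detectable effect sizes can be reproduced with the standard normal-approximation power formulas, using the stated inputs (SD = 14, n = 85 per group, alpha = 0.05 two-sided, power = 0.80). For the within-proposer minimum-vs-maximum comparison, the sketch assumes, as the registration does, that the SD of 14 applies to the paired differences.

```python
from statistics import NormalDist

z = NormalDist().inv_cdf  # standard-normal quantile function
alpha, power = 0.05, 0.80
sd, n = 14, 85
mult = z(1 - alpha / 2) + z(power)  # ~ 1.96 + 0.84

# Two-sample comparison of offers across treatments (independent groups):
mde_between = mult * sd * (2 / n) ** 0.5

# Within-proposer comparison of minimum vs maximum offers
# (one sample of n paired differences with SD assumed equal to 14):
mde_within = mult * sd / n ** 0.5

print(round(mde_between, 1), round(mde_within, 1))  # prints "6.0 4.3"
```

Both numbers match the figures reported above, which confirms the calculation uses the conventional two-sided 5% test at 80% power.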
Supporting Documents and Materials


Institutional Review Boards (IRBs)

IRB Name
UTD Office of Research Integrity and Outreach
IRB Approval Date
IRB Approval Number
IRB-22-516: Negotiation Study