Hard vs Soft Ultimatums

Last registered on May 05, 2022

Pre-Trial

Trial Information

General Information

Title
Hard vs Soft Ultimatums
RCT ID
AEARCTR-0009319
Initial registration date
May 04, 2022

Initial registration date is when the trial was registered.

It corresponds to when the registration was submitted to the Registry to be reviewed for publication.

First published
May 05, 2022, 8:47 AM EDT

First published corresponds to when the trial was first made public on the Registry after being reviewed.

Locations

Region

Primary Investigator

Affiliation
Naveen Jindal School of Management, The University of Texas at Dallas

Other Primary Investigator(s)

PI Affiliation
Bilkent University

Additional Trial Information

Status
In development
Start date
2022-05-15
End date
2023-12-31
Secondary IDs
Prior work
This trial does not extend or rely on any prior RCTs.
Abstract
Negotiation scholars agree on the following recommendation: avoid (hard) ultimatums and do not leave your opponent without any options. Using online experiments on Prolific, we will investigate the effect of the availability of soft ultimatums (ultimatum offers that specify a range of amounts, instead of a specific number, that the proposer is willing to give to the responder) on offers made, the likelihood of acceptance, and responders' satisfaction and fairness perceptions of the negotiated outcome. There are two agents: a proposer and a responder. In the control treatment, they play the standard ultimatum game, where the proposer offers an amount of the given pie and the responder accepts or rejects. In the experimental treatment, the proposer can, at his/her discretion, specify a minimum amount and a maximum amount s/he would be willing to give to the responder. We are interested in the following questions: (i) how frequently is this option used; (ii) does the availability of this option influence the offers made by proposers (e.g., how do the maximum offers in the experimental treatment compare with the offers in the control treatment); (iii) how does the rejection rate for maximum offers in the experimental treatment compare to the rejection rate of equivalent offers in the control treatment; and (iv) how do the satisfaction and fairness perceptions of responders in the experimental treatment compare with those in the control treatment (for comparable offers)?
External Link(s)

Registration Citation

Citation
Hyndman, Kyle and Emin Karagözoğlu. 2022. "Hard vs Soft Ultimatums." AEA RCT Registry. May 05. https://doi.org/10.1257/rct.9319-1.0
Experimental Details

Interventions

Intervention(s)
The purpose of our intervention is to understand the influence of soft ultimatums on (i) offers made by proposers; (ii) the acceptance/rejection behavior of responders; (iii) responders' satisfaction with the outcome of bargaining; and (iv) how fair they perceive the offer/outcome to be. To study this, we compare agents' behavior (and answers to some post-experimental questions) across two treatments that vary the availability of soft ultimatums.
Intervention (Hidden)
Our intervention is designed to investigate the following hypotheses regarding the effectiveness of soft ultimatums.
1. Proposers understand the value of making a soft ultimatum. Evidence to support this will be two-fold: (i) the average minimum offer and the average maximum offer in the experimental treatment will be significantly different from each other and (ii) the fraction of subjects utilizing soft ultimatums will be significantly and substantially higher than 0.
2. The average maximum offer in the experimental treatment will be lower than the average offer in the control treatment.
3. The average acceptance rate of maximum offers in the experimental treatment will be higher than the average acceptance rate of comparable offers in the control treatment.
4. The average fairness rating, as well as average satisfaction rating of an accepted offer in the experimental treatment, will be higher than those of a comparable accepted offer in the control treatment.
Intervention Start Date
2022-05-15
Intervention End Date
2023-12-31

Primary Outcomes

Primary Outcomes (end points)
• Offers by proposers in the control treatment
• Minimum and maximum offers by proposers in the experimental treatment
• Accept/reject decisions by responders in both treatments
• Answers to outcome satisfaction and fairness perception questions by responders in both treatments
Primary Outcomes (explanation)
These are the outcomes that allow us to test our primary hypotheses.

Secondary Outcomes

Secondary Outcomes (end points)
• Answers to open-ended questions by proposers in the experimental treatment that aim to understand proposers' mindset (e.g., how they perceive the soft ultimatum option, why they did or did not use it, their intentions).
• Decision times (i.e., how long it takes subjects to make their decisions)
Secondary Outcomes (explanation)
These outcomes would allow us to gain further insights into subjects' behavior in the experiment, especially in case their behavior does not meet our predictions.

Experimental Design

Experimental Design
The experiment has two treatments: standard ultimatum game (UG - control treatment) and modified ultimatum game (MUG - experimental treatment). These games are implemented using the direct-response method. We explicitly avoid implementing the strategy method since we believe that the treatment variation we implement can make a behavioral impact when the responders are in a hot emotional state. The experiment has a between-subjects design, i.e., each subject will participate only in one of the treatments. We decided against a within-subject design since the treatment variation is simple and transparent, which can easily reveal the research question and lead to an experimenter demand effect. The game is one-shot. Each subject only plays one role, and roles are randomly assigned with equal probabilities. There will be comprehension questions before the game. Only the participants who answer these questions correctly can proceed with the experiment. Instructions are common knowledge. The experiment will be conducted on Prolific.
Experimental Design Details
Control Treatment (UG). Each subject is randomly and anonymously paired with another participant. One of the subjects in the pair is assigned to the role of a Proposer and the other to the role of a Responder. Each pair has $6 to split between the Proposer and the Responder. The Proposer must decide how much of the $6 to offer to his/her matched Responder. Proposals can be in increments of $0.1. The Responder will observe the Proposer's offer and either accept or reject this proposal. If the Responder accepts, then if the Proposer offers $X to the Responder, the Proposer will earn $(6 - X) and the Responder will earn $X. If the Responder rejects, then both the Proposer and Responder will earn $0.

Experimental Treatment (MUG). Each subject is randomly and anonymously paired with another participant. One of the subjects in the pair is assigned to the role of a Proposer and the other to the role of a Responder. Each pair has $6 to split between the Proposer and the Responder. The Proposer must decide how much of the $6 to offer to his/her matched Responder. Proposals can be in increments of $0.1. In doing this, the Proposer is asked to specify two numbers: a minimum offer and a maximum offer, with the natural requirement that the former cannot be greater than the latter. Entering the same number for the minimum offer and the maximum offer is allowed, and it means the Proposer does not use the option to make an interval offer. The Responder will observe the Proposer's offer and first either accept or reject this proposal. If s/he accepts, then s/he also chooses the amount that s/he wants to receive, which must be between the minimum offer and the maximum offer. If the Proposer offers $X as the minimum and $Y as the maximum offer to the Responder, and upon accepting the offer the Responder chooses to receive $Z (X≤Z≤Y), then the Proposer will earn $(6 - Z) and the Responder will earn $Z. If the Responder rejects, then both the Proposer and Responder will earn $0.
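The payoff rules above can be summarized in a short sketch. This is purely illustrative (not the authors' experimental software); the function names and the use of floats for dollar amounts are my own choices, and the $6 pie and the interval constraint follow the description above.

```python
# Illustrative payoff rules for the two treatments, assuming a $6 pie
# as described in the design. Payoffs are returned as (proposer, responder).

PIE = 6.0

def ug_payoffs(offer_x, accepted):
    """Control (UG): proposer offers $X; responder accepts or rejects."""
    if not accepted:
        return 0.0, 0.0
    return PIE - offer_x, offer_x

def mug_payoffs(min_offer, max_offer, accepted, chosen_z=None):
    """Experimental (MUG): proposer posts an interval [min, max];
    an accepting responder picks an amount Z with min <= Z <= max.
    min == max means the soft-ultimatum option is not used."""
    assert min_offer <= max_offer, "minimum offer cannot exceed maximum offer"
    if not accepted:
        return 0.0, 0.0
    assert min_offer <= chosen_z <= max_offer, "Z must lie in the offered range"
    return PIE - chosen_z, chosen_z
```

For example, a proposer offering the interval [$2.00, $3.00] whose responder accepts and chooses $2.50 yields payoffs of $3.50 and $2.50, respectively.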

Once the game (in each treatment) is played and the subjects are informed about the outcome, the Responder is asked two questions: (i) how would you rate the fairness of the outcome in the game you played? and (ii) how would you rate your satisfaction with the outcome of the game you played? Answers will be given on an 11-point Likert scale. Similarly, the Proposer is asked a few questions about his/her decision in the experiment: (i) Did you make minimum and maximum offers that are different from each other? (ii) Can you explain the rationale behind your offer decision?

Subjects are paid based on the outcome of the game they played. There is also a show-up fee of 50 cents. In the post-experimental questionnaire, data on the following will be collected: age, gender, education, competitiveness (self-assessed), and risk attitude (self-assessed). The experiment is expected to take around 20 minutes.
Randomization Method
Randomization done in office by a computer.
Randomization Unit
Groups of participants (sessions) are randomized to experimental treatments.
Was the treatment clustered?
No

Experiment Characteristics

Sample size: planned number of clusters
We plan to conduct data collection in waves of approximately 20-24 pairs per session, which means that we will need to conduct 4 or 5 “sessions” per treatment.
Sample size: planned number of observations
85 complete pairs per treatment, with 2 treatments for a total of 340 subjects.
Sample size (or number of clusters) by treatment arms
85 complete pairs – Control Treatment (UG)
85 complete pairs – Experimental Treatment (MUG)
Minimum detectable effect size for main outcomes (accounting for sample design and clustering)
Assuming that Proposers offer approximately 43% of the pie, with a standard deviation of 14 percentage points, we have 80% power to detect an effect size of approximately 6 percentage points (two-sided test). Since our hypothesis on proposers' offers between treatments is one-sided, we should be able to detect smaller differences at the same power, or the same difference with higher power. For the test that the average minimum offer is different from the average maximum offer, using a similar starting point of 43% of the pie and a standard deviation of 14 percentage points, we have 80% power to detect an effect size of approximately 4.3 percentage points.
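The stated minimum detectable effects can be reproduced with a standard normal-approximation power formula. This is my own back-of-the-envelope check, not the authors' calculation: it assumes alpha = 0.05 (two-sided), 80% power, sd = 14 (in percent of the pie), n = 85 per treatment, and treats the minimum-vs-maximum comparison as a paired (within-proposer) test.

```python
# Rough check of the registry's MDEs via the normal approximation:
# MDE = (z_{1-alpha/2} + z_{power}) * sd * sqrt(variance factor).
from math import sqrt
from statistics import NormalDist

alpha, power, sd, n = 0.05, 0.80, 14.0, 85
z = NormalDist()
z_alpha = z.inv_cdf(1 - alpha / 2)  # ~1.96
z_beta = z.inv_cdf(power)           # ~0.84

# Two-sample comparison of offers across treatments (85 proposers per arm):
mde_between = (z_alpha + z_beta) * sd * sqrt(2 / n)  # ~6.0 percentage points

# Paired comparison of each proposer's minimum vs maximum offer:
mde_within = (z_alpha + z_beta) * sd / sqrt(n)       # ~4.3 percentage points
```

Both numbers line up with the values reported above (approximately 6 and 4.3).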
Supporting Documents and Materials

There is information in this trial unavailable to the public.
IRB

Institutional Review Boards (IRBs)

IRB Name
UTD Office of Research Integrity and Outreach
IRB Approval Date
2022-05-18
IRB Approval Number
IRB-22-516: Negotiation Study

Post-Trial

Post Trial Information

Study Withdrawal

There is information in this trial unavailable to the public.

Intervention

Is the intervention completed?
No
Data Collection Complete
Data Publication

Data Publication

Is public data available?
No

Program Files

Program Files
Reports, Papers & Other Materials

Relevant Paper(s)

Reports & Other Materials