Direct Tests of Cumulative Prospect Theory II

Last registered on January 21, 2020


Trial Information

General Information

Direct Tests of Cumulative Prospect Theory II
Initial registration date
January 11, 2020


First published
January 13, 2020, 10:21 AM EST


Last updated
January 21, 2020, 12:30 AM EST



Primary Investigator

UC San Diego

Other Primary Investigator(s)

PI Affiliation
UC San Diego

Additional Trial Information

In development
Start date
End date
Secondary IDs
Tversky and Kahneman's (1992) Cumulative Prospect Theory (CPT) rests on the assumption that agents exhibit rank dependence. In an experiment with 153 subjects, Bernheim and Sprenger (2019) find evidence that agents do not exhibit rank-dependent probability weighting. These findings motivate the proposed research, which investigates the implications of CPT in greater depth. Specifically, this project aims to: 1) reproduce and extend the analysis of Bernheim and Sprenger (2019) for rank dependence; 2) investigate sign dependence in probability weighting; 3) investigate sign dependence in utility. The project makes use of the equalizing reduction technique, which elicits a marginal rate of substitution between two outcomes Y and Z for a given ranking and signing of outcomes. This allows us to make non-parametric statements about each of our three objects of study simply by altering outcome ranks or the location of the reference point (assumed in this study to be the monetary amount provided in an envelope at the beginning of the session).
External Link(s)

Registration Citation

Royer, Rebecca and Charles Sprenger. 2020. "Direct Tests of Cumulative Prospect Theory II." AEA RCT Registry. January 21.
Experimental Details


We randomly assign subjects to three different reference points (r = $1, $20, $40), where we define the reference point to be the amount of money subjects are given in an envelope at the start of the experiment. We then ask them to complete 34 tasks in which they make a series of choices between lotteries. We vary probability vectors and outcomes for each of these tasks. For more details, see the Analysis Plan for study screenshots.
Intervention Start Date
Intervention End Date

Primary Outcomes

Primary Outcomes (end points)
Rank dependence, sign dependence in utility, sign dependence in probability weighting
Primary Outcomes (explanation)

Secondary Outcomes

Secondary Outcomes (end points)
Secondary Outcomes (explanation)

Experimental Design

Experimental Design
The study consists of 34 tasks in total: 27 equalizing-reduction tasks, each containing 33 choices, that allow us to evaluate participants' equalizing reductions, and seven additional certainty-equivalent tasks, each consisting of 26 choices, to evaluate the parameters of their utility functions and probability weighting functions.

The first 27 tasks consist of choices between two lotteries, referred to as Option A and Option B, each assigning positive probability to three outcomes. Option A remains the same throughout all of the choices in a task, while the third outcome of Option B decreases in increments of 50 cents, allowing us to pin down the participant's equalizing reduction, k. Nine values of X ($2, $3, $4, $21, $22, $23, $30, $31, $32), each crossed with three probability vectors {p, q, 1-p-q} = {20%, 60%, 20%}, {40%, 20%, 40%}, and {40%, 30%, 30%}, give us the 27 total tasks.
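As a concrete illustration, the price-list structure of one equalizing-reduction task can be sketched as follows. Only the 33-choice length and the 50-cent decrement come from the design above; the starting value and function names are our own illustrative placeholders, not the study's materials.

```python
# Hypothetical sketch of one 33-row equalizing-reduction price list.
# z_start is an illustrative placeholder, not a value from the study.

def option_b_third_outcomes(z_start, n_choices=33, step=0.50):
    """Option B's third outcome in each row: starts at z_start and
    falls by `step` with every subsequent choice."""
    return [round(z_start - i * step, 2) for i in range(n_choices)]

def equalizing_reduction(switch_row, step=0.50):
    """If a subject first switches to Option A at row `switch_row`
    (0-indexed), the implied equalizing reduction k is the total
    amount removed from Option B's third outcome by that row."""
    return round(switch_row * step, 2)

rows = option_b_third_outcomes(z_start=25.00)
print(rows[:4])                  # [25.0, 24.5, 24.0, 23.5]
print(equalizing_reduction(10))  # 5.0
```

The switch row in the price list brackets k within a 50-cent interval, which is what makes the non-parametric comparisons across rankings possible.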

In the next seven tasks, participants complete a similar set of choices, where instead Option A is a lottery over two outcomes: $24 and -$1 in the gain condition, $5 and -$20 in the gains-and-losses condition, and -$15 and -$40 in the loss condition. We discuss these three conditions in the following section. Option B is a certain amount of money, starting at the greatest outcome in Option A and dropping by 50 cents with each subsequent choice within the task. We ask participants to complete this task for seven different probability vectors over the two outcomes in Option A: {p, 1-p} = {95%, 5%}, {90%, 10%}, {75%, 25%}, {50%, 50%}, {25%, 75%}, {10%, 90%}, and {5%, 95%}. Through this set of tasks, we are able to identify the parameters of participants' utility functions and probability weighting functions, using Tversky and Kahneman's (1992) functional forms. Using the imputed parameters, we can predict how much a change in the rank of X should affect certainty equivalents according to CPT.
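The Tversky-Kahneman (1992) functional forms referenced above can be written out as a short sketch. The default parameter values below are the median estimates reported in that paper, not estimates from this study, and the pure-gain example lottery is illustrative only.

```python
# Sketch of the Tversky-Kahneman (1992) CPT functional forms used to
# impute parameters from the certainty-equivalent tasks. Defaults are
# the TK-1992 median estimates, shown purely for illustration.

def w(p, gamma=0.61):
    """Inverse-S probability weighting function."""
    return p**gamma / (p**gamma + (1 - p)**gamma) ** (1 / gamma)

def v(x, alpha=0.88, lam=2.25):
    """Power value function with loss-aversion coefficient lam."""
    return x**alpha if x >= 0 else -lam * (-x) ** alpha

def certainty_equivalent(x_hi, x_lo, p, alpha=0.88, gamma=0.61):
    """CE of a two-outcome lottery (x_hi w.p. p, x_lo w.p. 1-p),
    assuming x_hi > x_lo >= 0 so both outcomes lie in the gain domain."""
    cpt_value = w(p, gamma) * v(x_hi, alpha) + (1 - w(p, gamma)) * v(x_lo, alpha)
    return cpt_value ** (1 / alpha)

# A 50-50 lottery over $24 and $0 has a CE well below its $12 expected
# value, because w(0.5) < 0.5 and utility is concave over gains.
print(round(certainty_equivalent(24, 0, 0.5), 2))
```

Fitting alpha and gamma (and lam, when losses are involved) to the seven observed certainty equivalents is what allows the CPT-predicted effect of re-ranking X to be computed for each subject.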

The experiment will be run using oTree, an online platform for behavioral experiments. It integrates a point-and-click interface, oTree Studio, with Django, HTML, and Python. See the appendix of the Analysis Plan for screenshots of the study as seen by participants in the gain condition. After completing the equalizing-reduction and certainty-equivalent tasks, participants finish the study by answering a short post-study survey containing demographic questions, logic questions, and questions on their perceptions of hypothetical lotteries.

We will recruit 150 undergraduate students at UC San Diego to the economics lab. Participants will be randomized into one of three groups. Each group will receive a different amount of money in an envelope upon entry to the experimental lab: $1, $20, or $40. They are told that the money inside the envelope is their own money. We treat the money in the envelope as the subject's reference point. Once the participant opens the envelope, we have no degrees of freedom on this parameter.
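The computer randomization into the three envelope amounts can be sketched as a balanced assignment; the seed, function name, and use of Python's standard library are our own illustrative choices, not the study's actual procedure.

```python
# Minimal sketch of balanced random assignment of 150 subjects to the
# three envelope amounts ($1, $20, $40), 50 per arm, as described above.
import random

def assign_envelopes(n=150, amounts=(1, 20, 40), seed=0):
    per_arm = n // len(amounts)
    # Build a balanced list of envelope amounts, then shuffle in place.
    assignment = [a for a in amounts for _ in range(per_arm)]
    random.Random(seed).shuffle(assignment)
    return assignment

arms = assign_envelopes()
print({a: arms.count(a) for a in (1, 20, 40)})  # 50 subjects per arm
```

A fixed seed makes the assignment reproducible; in practice the lab software draws the assignment when each session is configured.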
Experimental Design Details
Randomization Method
Randomization done by a computer
Randomization Unit
Was the treatment clustered?

Experiment Characteristics

Sample size: planned number of clusters
150 individuals
Sample size: planned number of observations
150 individuals
Sample size (or number of clusters) by treatment arms
50 individuals in gains condition (reference point is $1), 50 individuals in losses condition (reference point is $40), 50 individuals in some-gains-some-losses condition (reference point is $20)
Minimum detectable effect size for main outcomes (accounting for sample design and clustering)
See Table 2 in Analysis Plan for sample sizes for each primary outcome of interest.

Institutional Review Boards (IRBs)

IRB Name
UC San Diego Human Research Protections Program
IRB Approval Date
IRB Approval Number
Analysis Plan

There is information in this trial unavailable to the public.


Post Trial Information

Study Withdrawal

There is information in this trial unavailable to the public.


Is the intervention completed?
Data Collection Complete
Data Publication

Data Publication

Is public data available?

Program Files

Program Files
Reports, Papers & Other Materials

Relevant Paper(s)

Reports & Other Materials