Mixture Preference and Stochastic Choice

Last registered on July 03, 2025

Pre-Trial

Trial Information

General Information

Title
Mixture Preference and Stochastic Choice
RCT ID
AEARCTR-0016288
Initial registration date
June 30, 2025

Initial registration date is when the trial was registered.

It corresponds to when the registration was submitted to the Registry to be reviewed for publication.

First published
July 03, 2025, 2:55 PM EDT

First published corresponds to when the trial was first made public on the Registry after being reviewed.

Locations

Primary Investigator

Affiliation
California Institute of Technology

Other Primary Investigator(s)

Additional Trial Information

Status
In development
Start date
2025-07-15
End date
2025-08-15
Secondary IDs
Prior work
This trial does not extend or rely on any prior RCTs.
Abstract
Randomizing over simple menus is pervasive in both experimental and empirical settings. However, there is still very little understanding of why decision makers make choices that are inconsistent with a linear ordering over alternatives. Leading decision-theoretic models used to rationalize such behavior over binary menus can broadly be split into three categories: 1) decision makers have a strict preference for mixtures, 2) randomizing is the realization of strict preferences with noise, and 3) randomization is a representation of uncertainty over preferences. Which of these classes of models most accurately represents mixing behavior over simple choice tasks is still an open question. We design an experiment in which participants can construct convex combinations of simple lotteries, and then mix again over the reduced-form lottery represented by this convex combination and one of the original lotteries.
External Link(s)

Registration Citation

Citation
Adeney, Jack. 2025. "Mixture Preference and Stochastic Choice." AEA RCT Registry. July 03. https://doi.org/10.1257/rct.16288-1.0
Experimental Details

Interventions

Intervention(s)
There are three treatments in this study. All of the treatments allow participants to make decisions over binary menus containing simple lotteries. The difference between the treatments is the method through which participants provide their answers.

Treatment 1 allows participants to construct a mixture of the two lotteries using a slider. The mixture is shown on the screen and dynamically updates as they move the slider. The mixture that they generate using the slider is then shown as an alternative within the menus in Part 2 questions.

Treatment 2 allows participants to select either Lottery A or Lottery B 10 times. The mixture shown in Part 2 is then the mixture constructed using a weight over the two lotteries corresponding to the proportion of times they chose Lottery A over Lottery B.

Treatment 3 is the same as Treatment 1 except that participants are informed before starting Part 1 that one of the lotteries in each of the later questions will be a mixture that they construct in Part 1.

For all treatments, the mixture elicitation mechanism is the same for both Part 1 and Part 2 questions.

The experiment takes a between-subject design, meaning that each participant participates in only one treatment.
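For concreteness, the reduced-form lottery induced by a slider setting (or, in Treatment 2, by the proportion of Lottery A choices) can be sketched as below. This is a hypothetical illustration, not the study's actual code; the function name and the example lottery values are ours.

```python
from fractions import Fraction

def mix_lotteries(lottery_a, lottery_b, slider):
    """Reduced-form mixture of two simple lotteries.

    lottery_a, lottery_b: dicts mapping dollar outcomes to probabilities.
    slider: integer 0-10; the weight placed on lottery A is slider/10.
    """
    weight = Fraction(slider, 10)
    outcomes = set(lottery_a) | set(lottery_b)
    # Convex combination of the two probability distributions
    return {x: weight * lottery_a.get(x, 0) + (1 - weight) * lottery_b.get(x, 0)
            for x in outcomes}

# Example: a 70/30 mix of two hypothetical two-outcome lotteries
a = {10: Fraction(1, 2), 20: Fraction(1, 2)}
b = {0: Fraction(1, 4), 16: Fraction(3, 4)}
m = mix_lotteries(a, b, 7)
```

In Treatment 2, the same function would apply with `slider` equal to the number of times Lottery A was chosen out of the ten answers.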
Intervention (Hidden)
Intervention Start Date
2025-07-15
Intervention End Date
2025-08-15

Primary Outcomes

Primary Outcomes (end points)
The primary outcome is to test which of three popular models most accurately predicts randomization and mixing behavior. The first model broadly states that individuals mix because they consider the convex set of all available options, and choose a non-degenerate mixture because they strictly prefer this mixture to all other convex combinations. This theory implies that if individuals choose to specify a mixture in Part 1, they should place full weight on that mixture when faced with a menu containing that mixture and one of the original lotteries from Part 1.

The second theory suggests that mixing or randomizing occurs because individuals have strict preferences that are revealed after an interaction with noise. Under standard assumptions on the noise, such as i.i.d. realizations, more weight should be placed on the most preferred lottery in Part 1. Assuming quasi-concavity and quasi-convexity of preferences, the utility of the mixture must sit between the utility of the most preferred lottery and that of the least preferred lottery. As a result, Part 2 answers must again place more weight on the most preferred lottery over the mixture, and more weight on the mixture over the least preferred lottery. Given that the utility difference between the two original lotteries is larger than the utility difference between either original lottery and the mixture, the absolute difference between the weights and 0.5 must be smaller in Part 2 answers than in Part 1 answers.

Finally, mixing might be a result of uncertainty over preferences. In other words, an individual might randomize over choices if they are uncertain about which alternative is best. Considering the Multi Expected Utility Model as a benchmark for this explanation, incomparability between Lottery A and Lottery B in Part 1 implies incomparability between Lottery A or Lottery B and a mixture of the two.

To identify which of these models most accurately predicts mixing behavior, we look at two main outcomes of interest. The first is the weights placed on lotteries in Part 1 and the weights placed on the mixtures in Part 2. The second is the rate at which individuals choose not to specify their preferred lotteries, instead accepting a pre-determined yet undisclosed lottery. We view this as a proxy for preference uncertainty.
Primary Outcomes (explanation)

Secondary Outcomes

Secondary Outcomes (end points)
Secondary outcomes include the characteristics of the mixtures across the different treatments, and the relationship between mixture characteristics and the characteristics of the original lotteries within the menu. We construct the lotteries so that they have differing support sizes, differing expected values, and differing mean-preserving spreads.
Secondary Outcomes (explanation)

Experimental Design

Experimental Design

There are three treatments in this study. All treatments are split into two parts, where each part features 12 binary comparisons between simple lotteries. Part 1 contains binary comparisons between lotteries taken from an initial set, whereas Part 2 comparisons include lotteries from the initial set and lotteries that are equivalent to mixtures provided in Part 1.

For each comparison, decision makers have the opportunity to specify their preferred lottery, or to move on to the next question without specifying. If a choice is not provided, then a pre-specified yet undisclosed lottery is used as the default. We provide further details on how this default is determined in the sections below.

Part 1

There are a total of 24 initial lotteries from which the binary comparisons in Part 1 are constructed. These lotteries are split into two groups, where each group contains lotteries that are approximate mean-preserving spreads of each other. The groups have expected values of approximately $12 and $14, respectively, while the support sizes of the lotteries vary from 2 to 4.

Binary comparisons are then constructed both within group and between group. Three comparisons contain lotteries only from group 1, and three contain lotteries only from group 2. A further six comparisons contain one lottery from each group, making a total of 12 binary comparisons in Part 1. All binary comparisons are shown in random order to each participant, and no lottery appears in more than one comparison.

Part 2

Every comparison in Part 2 contains one mixture from Part 1, and one of the initial lotteries from which the mixture was generated. This means that for each of the 12 mixtures provided in Part 1, there are two possible binary comparisons to choose from in Part 2. The way in which the mixtures are constructed by the participant depends on the treatment. Details are provided below.

The binary comparisons for Part 2 are then chosen as follows. We randomly select up to four mixtures that were specified by the decision maker, and ask both binary comparisons for each mixture, making a total of up to eight questions. The remaining four questions are taken from two randomly chosen mixtures that were not specified by the decision maker. If there were fewer than four specified mixtures, or fewer than two unspecified mixtures, we randomly select questions so as to get as close to these proportions as possible. The proportions are chosen so that we have sufficient data to compare Part 1 and Part 2 answers both for questions where mixtures were set and for questions where they were not.
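One way the selection rule above could be implemented is sketched below. This is a hypothetical reconstruction: the order in which the shortfall is topped up from the other pool is our assumption, guided only by the "as close as possible" rule stated above.

```python
import random

def select_part2_questions(specified, unspecified, rng=random):
    """Pick the 12 Part 2 comparisons from Part 1 mixture ids.

    specified:   ids of mixtures the decision maker set themselves.
    unspecified: ids of mixtures left to the pre-specified default.
    Targets 4 specified + 2 unspecified mixtures, each asked against
    both of its original lotteries (6 mixtures x 2 = 12 questions).
    """
    n_spec = min(4, len(specified))
    n_unspec = min(2, len(unspecified))
    # Top up from whichever pool still has mixtures (assumed order)
    while n_spec + n_unspec < 6:
        if n_spec < len(specified):
            n_spec += 1
        elif n_unspec < len(unspecified):
            n_unspec += 1
        else:
            break
    chosen = rng.sample(specified, n_spec) + rng.sample(unspecified, n_unspec)
    # Each mixture yields both comparisons: mixture vs. each original lottery
    return [(m, side) for m in chosen for side in ("A", "B")]
```

For example, a participant who specified all 12 mixtures would see 6 specified mixtures, each asked twice; one who specified only 2 would see those 2 plus 4 unspecified mixtures.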

Treatments

We previously mentioned that there are three main treatments. Each treatment is designed to capture a different setting in which we might consider mixing to be prevalent. The experiment takes a between-subject design, meaning that each participant only participates in a single treatment.

Treatment 1 provides an illustration of the two simple lotteries in the menu at the top of the screen, and a third box in the middle titled 'Your Preferred Lottery'. Participants specify their preferred lottery using a slider that ranges from 0 to 10. As they move the slider, the mixture constructed according to the slider value is presented in the preferred-lottery box, and this image updates dynamically. The slider is used for both Part 1 and Part 2 questions in Treatment 1.

Treatment 2 speaks more directly to the repeated-choice representation of mixing. Instead of a slider, participants are shown the two original lotteries and are asked to provide 10 answers, each a forced choice between 'Lottery A' and 'Lottery B'. Participants are informed that, if they are eligible for the bonus payment, one of their ten answers to one randomly selected question will be drawn and the chosen lottery will be simulated. The mixtures in Part 2 are constructed according to the proportion of 'Lottery A' versus 'Lottery B' answers among the ten answers in Part 1.

Finally, Treatment 3 is identical to Treatment 1, except that participants are informed at the beginning of Part 2 that the specified or non-specified preferred lotteries will be shown again in the Part 2 questions. The exact wording states: "...in every question, one of the lotteries (either Lottery A or Lottery B) will be a preferred lottery that either you specified in Part 1 or was chosen for you."

Incentives and Payments

Participants will be paid a participation fee of $6 (this may change if the pilot results imply a longer completion time). One in five participants will also be selected for a bonus payment. If a participant is selected, a random question will be chosen as the bonus question.

For this question, the participant may or may not have chosen to specify their preferred lottery. If they did, then in Treatments 1 and 3 the reduced lottery associated with the specified mixture will be simulated by the computer and a payoff will be provided according to the outcome. In Treatment 2, one of the ten answers will be drawn at random and the preferred lottery for that answer will be simulated. The bonus payment is then the simulated outcome of that lottery.

If the participant did not specify their preferred lottery, then in Treatments 1 and 3 the computer resorts to a pre-specified mixture over the two lotteries within the menu. This mixture is generated uniformly at random over the convex combinations of the two lotteries, and the bonus payment is equal to the simulated outcome of that lottery. In Treatment 2, a number between 0 and 10 is drawn at random to denote the number of Lottery A choices (10 minus this number is the number of Lottery B choices). These answers are then shuffled, and the lottery corresponding to the previously designated bonus answer is simulated. This ensures that the payment mechanism when lotteries are not chosen is equivalent across treatments.
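The Treatment 2 default draw described above can be sketched as follows. This is a hypothetical illustration of the stated procedure; the function and variable names are ours.

```python
import random

def default_treatment2_answer(bonus_index, rng=random):
    """Default for Treatment 2 when no answers were provided:
    draw the number of Lottery A choices uniformly from 0-10,
    shuffle the implied ten answers, and return the answer sitting
    at the pre-designated bonus position (0-9)."""
    n_a = rng.randint(0, 10)            # number of Lottery A choices
    answers = ["A"] * n_a + ["B"] * (10 - n_a)
    rng.shuffle(answers)                # random ordering of the ten answers
    return answers[bonus_index]         # lottery to simulate for the bonus
```

Drawing the count uniformly and then shuffling mirrors the uniform draw over convex combinations in Treatments 1 and 3, which is what makes the default payment mechanism comparable across treatments.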

Participants will also answer two comprehension questions at the end of the treatment. These are designed to test the participants' understanding of the study. Either of these questions could also be selected as the bonus question; if so, the participant receives a fixed bonus of $5 if the question is answered correctly, and $0 otherwise.
Experimental Design Details
Randomization Method
Randomization is done by computer code using oTree software. We randomize the lotteries within the menus and the order of the menus. We also randomize which participants are eligible for the bonus payment, which question is chosen for the bonus payment, and, in Treatment 2, which answer is chosen for the bonus payment. All of these variables are pre-determined at the beginning of the experimental session. Pre-specified lotteries are also randomly generated as described in the experimental design section.
Randomization Unit
Individual
Was the treatment clustered?
No

Experiment Characteristics

Sample size: planned number of clusters
Approximately 300 participants per treatment; 900 participants in total.
Sample size: planned number of observations
Approximately 300 participants times 24 answers, making a total of 7,200 observations.
Sample size (or number of clusters) by treatment arms
Approximately 300 participants for each treatment.
Minimum detectable effect size for main outcomes (accounting for sample design and clustering)
An expected sample size of 300 participants per treatment will be sufficient to capture most of the effects designed to test the three hypotheses. This sample size allows for a minimum detectable effect (at a 5-percent Type I error rate and a 20-percent Type II error rate) of 0.426 for Hypothesis 1. For Hypothesis 2, the MDEs are 0.415 and 0.487 for tests 1 and 2, respectively. The weaker test for Hypothesis 3 has an MDE of 0.0438. The values for Hypotheses 1 and 2 are differences in slider values; dividing by 10 gives the average change in weights within the interval 0 to 1. Hypothesis 3 refers to changes in proportions, and therefore already sits within the 0 to 1 interval. Means and standard deviations are provided in the supplementary pre-analysis plan, which is appended to the pre-registration.
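The MDE figures above depend on means and standard deviations from the non-public pre-analysis plan, so they cannot be reproduced here. The generic two-sided formula such calculations typically rest on, MDE = (z_{1-alpha/2} + z_{power}) * sd / sqrt(n), can be sketched as below; the standard deviation in the example is a placeholder, not a value from the plan.

```python
from statistics import NormalDist

def mde(sd, n, alpha=0.05, power=0.80):
    """Minimum detectable effect for a two-sided test of a mean:
    (z_{1-alpha/2} + z_{power}) * sd / sqrt(n)."""
    z = NormalDist().inv_cdf
    return (z(1 - alpha / 2) + z(power)) * sd / n ** 0.5

# With n = 300 and a placeholder standard deviation of 1 slider unit,
# the detectable difference is roughly 0.16 slider units.
effect = mde(1.0, 300)
```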
IRB

Institutional Review Boards (IRBs)

IRB Name
Caltech Institutional Review Board
IRB Approval Date
2025-05-01
IRB Approval Number
IR25-1544
Analysis Plan

There is information in this trial unavailable to the public.

Post-Trial

Post Trial Information

Study Withdrawal

There is information in this trial unavailable to the public.

Intervention

Is the intervention completed?
No
Data Collection Complete
Data Publication

Data Publication

Is public data available?
No

Program Files

Program Files
Reports, Papers & Other Materials

Relevant Paper(s)

Reports & Other Materials