Testing Multiple Multi-Attribute Choice Models

Last registered on April 12, 2024

Pre-Trial

Trial Information

General Information

Title
Testing Multiple Multi-Attribute Choice Models
RCT ID
AEARCTR-0013151
Initial registration date
March 07, 2024

Initial registration date is when the trial was registered.

It corresponds to when the registration was submitted to the Registry to be reviewed for publication.

First published
March 15, 2024, 4:16 PM EDT

First published corresponds to when the trial was first made public on the Registry after being reviewed.

Last updated
April 12, 2024, 6:38 AM EDT

Last updated is the most recent time when changes to the trial's registration were published.

Locations

Primary Investigator

Affiliation
The Ohio State University

Other Primary Investigator(s)

Additional Trial Information

Status
Ongoing
Start date
2024-04-12
End date
2025-03-07
Secondary IDs
Prior work
This trial does not extend or rely on any prior RCTs.
Abstract
This paper experimentally tests multiple multi-attribute choice models, such as a range normalization model and a pairwise normalization model, in a consumer context. The experimental design generates choice sets that can distinguish the models, as each model predicts a distinct set of demand types. Specifically, we perform comparative statics by investigating how the choice probabilities of alternatives vary across choice sets. Moreover, we perform a cross-validation exercise by constructing a maximum likelihood estimator in the spirit of Harless and Camerer (1994).
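
For reference, a maximum likelihood construction in the spirit of Harless and Camerer (1994) treats each model-consistent demand type as a latent choice pattern from which subjects deviate with a constant error rate. One standard formulation for three-alternative menus (illustrative notation, not necessarily the specification in the analysis plan) is

\[
\mathcal{L}(\pi, \varepsilon) \;=\; \prod_{i=1}^{N} \sum_{k=1}^{K} \pi_k \, (1-\varepsilon)^{m_{ik}} \left(\frac{\varepsilon}{2}\right)^{T - m_{ik}},
\]

where \(\pi_k\) is the population share of demand type \(k\), \(m_{ik}\) is the number of subject \(i\)'s choices that match type \(k\)'s predictions, \(T\) is the number of choice sets, and \(\varepsilon\) is the error rate.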
External Link(s)

Registration Citation

Citation
Im, Changkuk. 2024. "Testing Multiple Multi-Attribute Choice Models." AEA RCT Registry. April 12. https://doi.org/10.1257/rct.13151-2.0
Sponsors & Partners

There is information in this trial unavailable to the public.

Experimental Details

Interventions

Intervention(s)
Intervention Start Date
2024-04-12
Intervention End Date
2024-05-03

Primary Outcomes

Primary Outcomes (end points)
Choices from the main choice task.
Primary Outcomes (explanation)

Secondary Outcomes

Secondary Outcomes (end points)
Secondary Outcomes (explanation)

Experimental Design

Experimental Design
The experiment consists of two parts: (i) a quality elicitation task (Task 1) and (ii) a main choice task (Task 2).

In the quality elicitation task, we elicit subjective quality values of 24 consumer products using a multiple price list method. The consumer products used in the experiment are snacks that are familiar to student subjects. In each round, a picture, the name, and the weight of a product appear on the screen. The subject is then asked whether they prefer the product (Option A) to an amount of money (Option B). This binary question is asked multiple times, with the amount of money in Option B varying from $0.01 to $8 in increments of $0.01. Instead of asking a subject to make 800 choices in each round, we ask them at which dollar value they would like to switch from Option A to Option B. Once they report the switch point, Option A is recorded for every question before the switch point, and Option B is recorded for the switch-point question and every question after it. The dollar value at the switch point is taken as the subjective quality value of the product. The order of rounds is randomized at the subject level.
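
As an illustration of how a reported switch point expands into the implied 800 binary choices and a quality value, here is a minimal sketch; the function name and data layout are ours and are not part of the experimental software.

```python
def mpl_choices_from_switch_point(switch_cents, lo_cents=1, hi_cents=800):
    """Expand a reported switch point into the implied price-list choices.

    switch_cents: the money amount (in cents) at which the subject switches
    from the product (Option A) to money (Option B). Questions before the
    switch point are answered with Option A; the switch-point question and
    all later questions are answered with Option B.
    """
    choices = {}
    for amount in range(lo_cents, hi_cents + 1):
        choices[amount] = "A" if amount < switch_cents else "B"
    quality_value = switch_cents / 100.0  # subjective quality value in dollars
    return choices, quality_value

# Example: a subject switches at $3.25 for a given snack.
choices, quality = mpl_choices_from_switch_point(325)
print(quality)       # 3.25
print(choices[300])  # 'A' (prefers the product to $3.00)
print(choices[350])  # 'B' (prefers $3.50 to the product)
```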

Before the main choice task, we select four product pairs in which (i) both elicited quality values are greater than or equal to $0.10, (ii) both elicited quality values are less than or equal to $6, and (iii) the difference between the two elicited quality values in each pair is greater than or equal to $0.50.
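
A minimal sketch of this screening rule, with the thresholds as stated above (the function name is ours; how the four pairs are then chosen among all eligible pairs is not specified here):

```python
def eligible_pair(q1, q2, min_q=0.10, max_q=6.00, min_gap=0.50):
    """Check the stated selection criteria for a pair of elicited quality
    values: both lie in [$0.10, $6.00] and they differ by at least $0.50."""
    in_range = all(min_q <= q <= max_q for q in (q1, q2))
    return in_range and abs(q1 - q2) >= min_gap

print(eligible_pair(2.40, 3.10))  # True
print(eligible_pair(0.05, 1.00))  # False: one value is below $0.10
print(eligible_pair(2.40, 2.60))  # False: the gap is below $0.50
```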

In the main choice task, a subject chooses the most preferred alternative from a choice set consisting of three alternatives. The first and second alternatives are bundles of a product and money, while the third alternative is simply a money option, i.e., (Product 1, Money 1), (Product 2, Money 2), (Money 3). Product 1 and Product 2 come from one of the product pairs selected before the main choice task, as described above. Money 3 is an endowment, which in this experiment is either $8 or $10. The third alternative can therefore be interpreted as an outside option: simply keeping the endowment. Money 1 and Money 2 are calculated at the subject level based on the elicited quality values and the endowment so that we can distinguish and test the pairwise normalization model and the range normalization model. Specifically, once a product pair and an endowment are fixed, we generate six choice sets in which the pairwise normalization model and the range normalization model provide different predictions. Recall that there are four product pairs and two endowments; hence each subject makes 4 × 2 × 6 = 48 choices.
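
For intuition, textbook versions of the two value rules can be sketched as follows. These are illustrative functional forms only; the precise specifications, and how Money 1 and Money 2 are derived from them, are part of the design details that are not public.

```python
def range_normalized_value(option, menu):
    """Illustrative range normalization: each attribute is divided by the
    range of that attribute across the menu, then summed over attributes."""
    value = 0.0
    for a in range(len(option)):
        levels = [alt[a] for alt in menu]
        rng = max(levels) - min(levels)
        value += option[a] / rng if rng > 0 else 0.0
    return value


def pairwise_normalized_value(option, menu):
    """Illustrative pairwise normalization: each attribute is divided by the
    sum of that attribute's values in every pairwise comparison, then summed."""
    value = 0.0
    for other in menu:
        if other is option:
            continue
        for a in range(len(option)):
            denom = option[a] + other[a]
            value += option[a] / denom if denom > 0 else 0.0
    return value


# Alternatives as (quality, money) bundles; the outside option has quality 0.
menu = [(3.2, 4.0), (1.8, 6.5), (0.0, 8.0)]
for alt in menu:
    print(alt, range_normalized_value(alt, menu), pairwise_normalized_value(alt, menu))
```
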
Experimental Design Details
Not available
Randomization Method
Randomization is done by the computer server that hosts the experiment.
Randomization Unit
Each subject faces 24 rounds in Task 1 and 48 rounds in Task 2. Recall that a subject makes a switch-point decision in each round of Task 1 and chooses the most preferred alternative in each round of Task 2. Each subject always completes Task 1 first and then Task 2. The order of rounds in Task 1 is randomized. In Task 2, the order of rounds is randomized, and the order of the alternatives displayed on the screen in each round is randomized as well. In the payment stage, one round from either Task 1 or Task 2 is randomly chosen. If a Task 1 round is selected, one binary question within that round is randomly selected to determine the payment.
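
A minimal sketch of the payment draw under these rules; the function and the assumption that all 72 rounds are equally likely are ours, since the registration does not spell out the draw probabilities.

```python
import random

def draw_payment_round(n_task1_rounds=24, n_task2_rounds=48, n_mpl_questions=800):
    """Randomly select the round (and, for a Task 1 round, the binary question)
    that determines payment, treating all rounds across both tasks as equally likely."""
    k = random.randrange(n_task1_rounds + n_task2_rounds)
    if k < n_task1_rounds:
        question = random.randrange(1, n_mpl_questions + 1)
        return ("Task 1", k + 1, question)  # pay the selected binary question
    return ("Task 2", k - n_task1_rounds + 1, None)  # pay the chosen alternative

print(draw_payment_round())
```
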
Was the treatment clustered?
No

Experiment Characteristics

Sample size: planned number of clusters
Based on a pre-analysis that includes a simulation-based power analysis, the target number of subjects is 60 students.
Sample size: planned number of observations
The main variable of interest is the 48 choices each subject makes in the main choice task. With 60 subjects, the total number of choices is 60 × 48 = 2,880.
Sample size (or number of clusters) by treatment arms
Based on a pre-analysis that includes a simulation-based power analysis, the target number of subjects is 60 students.
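
For context, a simulation-based power calculation of the kind referenced above can be sketched as follows. The choice probabilities, test, and effect size are placeholder assumptions rather than the values used in the actual pre-analysis, and the sketch treats choices as independent, ignoring within-subject correlation.

```python
import numpy as np
from scipy.stats import norm

def simulated_power(n_subjects=60, n_choices_per_subject=6,
                    p_baseline=0.40, p_alternative=0.55,
                    alpha=0.05, n_sims=2000, seed=0):
    """Estimate power for detecting a shift in the probability of choosing a
    target alternative between two kinds of choice sets, using a two-sample
    test of proportions on pooled choices. All parameter values are placeholders."""
    rng = np.random.default_rng(seed)
    n = n_subjects * n_choices_per_subject
    rejections = 0
    for _ in range(n_sims):
        x1 = rng.binomial(n, p_baseline)     # target-alternative choices, choice-set type 1
        x2 = rng.binomial(n, p_alternative)  # target-alternative choices, choice-set type 2
        p1, p2 = x1 / n, x2 / n
        p_pool = (x1 + x2) / (2 * n)
        se = np.sqrt(p_pool * (1 - p_pool) * 2 / n)
        z = (p2 - p1) / se
        if abs(z) > norm.ppf(1 - alpha / 2):
            rejections += 1
    return rejections / n_sims

print(simulated_power())
```
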
Minimum detectable effect size for main outcomes (accounting for sample design and clustering)
IRB

Institutional Review Boards (IRBs)

IRB Name
The Office of Responsible Research Practices at The Ohio State University
IRB Approval Date
2024-02-14
IRB Approval Number
2024E0171
Analysis Plan

There is information in this trial unavailable to the public.