
Testing Multiple Multi-Attribute Choice Models

Last registered on March 15, 2024

Pre-Trial

Trial Information

General Information

Title
Testing Multiple Multi-Attribute Choice Models
RCT ID
AEARCTR-0013151
Initial registration date
March 07, 2024

Initial registration date is when the trial was registered.

It corresponds to when the registration was submitted to the Registry to be reviewed for publication.

First published
March 15, 2024, 4:16 PM EDT

First published corresponds to when the trial was first made public on the Registry after being reviewed.

Locations

Primary Investigator

Affiliation
The Ohio State University

Other Primary Investigator(s)

Additional Trial Information

Status
Ongoing
Start date
2024-03-07
End date
2025-03-07
Secondary IDs
Prior work
This trial does not extend or rely on any prior RCTs.
Abstract
This paper experimentally tests multiple multi-attribute choice models, such as a range normalization model and a pairwise normalization model, in a consumer context. The experimental design generates choice sets that can distinguish the models, as each model predicts a distinct set of demand types. Specifically, we perform comparative statics by investigating how the choice probabilities of alternatives vary across choice sets. Moreover, we conduct a cross-validation exercise by constructing a maximum likelihood criterion in the spirit of Harless and Camerer (1994).
External Link(s)

Registration Citation

Citation
Im, Changkuk. 2024. "Testing Multiple Multi-Attribute Choice Models." AEA RCT Registry. March 15. https://doi.org/10.1257/rct.13151-1.0
Sponsors & Partners

There is information in this trial unavailable to the public.
Experimental Details

Interventions

Intervention(s)
Intervention Start Date
2024-03-07
Intervention End Date
2024-03-29

Primary Outcomes

Primary Outcomes (end points)
Choices from the main choice task.
Primary Outcomes (explanation)

Secondary Outcomes

Secondary Outcomes (end points)
Secondary Outcomes (explanation)

Experimental Design

Experimental Design
The experiment consists of two parts: (i) a quality elicitation step (Task 1) and (ii) a choice task (Task 2). In the quality elicitation step, we elicit subjective quality values of 24 consumer products using a multiple price list (MPL) method. Specifically, the MPL measures the quality value of a product by asking a subject whether they prefer the product (Option A) to an amount of money (Option B). We ask this binary question multiple times, varying the amount of money in Option B from $0.01 to $8 in increments of 1 cent. We expect a subject to choose Option A in the initial questions, switch to Option B at a certain question, and choose Option B for the remaining questions. The dollar value at the switch point is taken as the subjective quality of the product.
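The switch-point rule above can be sketched as follows. This is an illustrative reconstruction, not the study's code: the function names, the monotonicity assumption (a single A-to-B switch), and the handling of a subject who never switches are all assumptions made here for concreteness.

```python
# Hypothetical sketch of the MPL elicitation described above: Option B money
# amounts run from $0.01 to $8.00 in 1-cent steps, and the subjective quality
# of a product is the money amount at which the subject first switches from
# the product (Option A) to money (Option B).

def mpl_prices(lo_cents: int = 1, hi_cents: int = 800) -> list[int]:
    """Money amounts for Option B, in cents ($0.01 .. $8.00)."""
    return list(range(lo_cents, hi_cents + 1))

def switch_point_quality(choices: list[str]) -> float:
    """Given one 'A'/'B' choice per MPL row (assumed monotone: A's then B's),
    return the subjective quality in dollars at the switch point."""
    prices = mpl_prices()
    assert len(choices) == len(prices)
    for price, choice in zip(prices, choices):
        if choice == "B":
            return price / 100.0  # first money amount preferred to the product
    return prices[-1] / 100.0  # never switched: quality recorded at the $8 cap (assumption)

# Example: a subject prefers the product up to $3.49 and money from $3.50 on.
responses = ["A"] * 349 + ["B"] * 451
print(switch_point_quality(responses))  # 3.5
```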

Before the main choice task, we select two product sets, each consisting of a pair of products, such that (i) the elicited quality values are greater than or equal to $0.05, (ii) the elicited quality values are smaller than or equal to $6, and (iii) the difference in elicited quality values within each product set is greater than or equal to $0.50.
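The screening criteria above amount to a simple filter over product pairs. The sketch below is illustrative (product names and the tie-breaking among multiple eligible pairs are not specified in the registration and are assumptions here):

```python
# Hedged sketch of the product-pair screen: keep pairs whose elicited
# qualities both lie in [$0.05, $6] and differ by at least $0.50.
from itertools import combinations

def eligible_pairs(quality: dict[str, float]) -> list[tuple[str, str]]:
    """Return all product pairs satisfying criteria (i)-(iii)."""
    pairs = []
    for a, b in combinations(quality, 2):
        qa, qb = quality[a], quality[b]
        if 0.05 <= qa <= 6 and 0.05 <= qb <= 6 and abs(qa - qb) >= 0.5:
            pairs.append((a, b))
    return pairs

# Hypothetical elicited qualities for four products:
qualities = {"mug": 2.50, "pen": 0.03, "notebook": 4.10, "snack": 2.20}
print(eligible_pairs(qualities))
# "pen" fails the $0.05 floor; (mug, snack) differ by only $0.30.
```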

In the main choice task, a subject chooses the most preferred alternative from a choice set consisting of three alternatives. Note that the first and second alternatives are bundles of a product and money, while the third alternative is simply a money option, i.e., (Product 1, Money 1), (Product 2, Money 2), (Money 3). Product 1 and Product 2 come from one of the product sets selected before the main choice task, as described above. Money 3 is an endowment, which in this experiment is either $8 or $10. The third alternative can therefore be interpreted as an outside option: simply taking the endowment. Money 1 and Money 2 are calculated at the subject level based on the elicited quality values and the endowment so that we can distinguish and test the pairwise normalization model and the range normalization model. Specifically, once a product set and an endowment are fixed, we generate four choice sets in which the pairwise normalization model and the range normalization model provide different predictions. Recall that we have two product sets (i.e., product set 1 and product set 2) and two endowments (i.e., $8 and $10). Hence, each subject makes decisions for 16 choice sets.
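The 2 product sets × 2 endowments × 4 configurations structure yielding 16 choice sets can be enumerated as below. How Money 1 and Money 2 are derived from the elicited qualities is model-specific and not given in the registration, so it is left as an explicit placeholder; all names here are illustrative.

```python
# Illustrative enumeration of the 16 choice sets per subject:
# 2 product sets x 2 endowments ($8, $10) x 4 configurations that separate
# the pairwise normalization and range normalization models.
from itertools import product as cartesian

def build_choice_sets(product_sets, endowments=(8, 10), n_configs=4):
    """Enumerate all choice sets for one subject."""
    choice_sets = []
    for (p1, p2), endowment, config in cartesian(product_sets, endowments, range(n_configs)):
        # Money 1 and Money 2 would be computed from the subject's elicited
        # qualities and the endowment so that the two models make distinct
        # predictions; that mapping is unspecified here (placeholder).
        money1 = money2 = None
        choice_sets.append({
            "alternatives": [(p1, money1), (p2, money2), (None, endowment)],
            "endowment": endowment,
            "config": config,
        })
    return choice_sets

sets_per_subject = build_choice_sets([("mug", "notebook"), ("pen", "snack")])
print(len(sets_per_subject))  # 16
```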
Experimental Design Details
Not available
Randomization Method
Randomization is done by the computer server that hosts the experiment.
Randomization Unit
Each subject faces 24 rounds in Task 1 and 16 rounds in Task 2. Recall that a subject makes a switch-point decision in each round of Task 1 and chooses the most preferred alternative in each round of Task 2. Each subject always completes Task 1 first and then Task 2. The order of rounds in Task 1 is randomized. In Task 2, the order of rounds is randomized, and the order of alternatives within each round is randomized as well. In the payment stage, one round from Task 1 and Task 2 is randomly chosen. If a Task 1 round is selected, then one binary question within that round is randomly selected to determine the payment.
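A minimal sketch of this randomization scheme, assuming a standard random-incentive mechanism in which one round is drawn uniformly across both tasks; that uniform draw, and all function names, are assumptions for illustration only.

```python
# Illustrative server-side randomization: shuffle Task 1 and Task 2 round
# orders, shuffle the three alternatives within each Task 2 round, and draw
# the payment-relevant round (plus one MPL question if a Task 1 round wins).
import random

def randomize_session(rng: random.Random, n_task1: int = 24, n_task2: int = 16):
    task1_order = rng.sample(range(n_task1), n_task1)  # shuffled Task 1 rounds
    task2_order = rng.sample(range(n_task2), n_task2)  # shuffled Task 2 rounds
    # Within each Task 2 round, shuffle the order of the three alternatives.
    alt_orders = [rng.sample(range(3), 3) for _ in range(n_task2)]
    return task1_order, task2_order, alt_orders

def pick_payment(rng: random.Random, n_task1: int = 24, n_task2: int = 16,
                 n_mpl_rows: int = 800):
    """Draw one round across both tasks (uniformity is an assumption); if it
    falls in Task 1, also draw one binary MPL question within that round."""
    round_idx = rng.randrange(n_task1 + n_task2)
    if round_idx < n_task1:
        return ("Task 1", round_idx, rng.randrange(n_mpl_rows))
    return ("Task 2", round_idx - n_task1, None)
```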
Was the treatment clustered?
No

Experiment Characteristics

Sample size: planned number of clusters
Based on a power analysis by simulation conducted in the pre-analysis, the target sample size is 80 students. However, we may have to exclude some observations if the elicited quality values do not satisfy the product-set selection criteria described above. To obtain 80 subjects who pass the criteria, the actual number of subjects collected may exceed 80.
Sample size: planned number of observations
The main variable of interest is choices in the main choice task. If we have 80 subjects, then we have 1,280 choices, since each subject makes 16 choices in the main choice task (80 × 16 = 1,280).
Sample size (or number of clusters) by treatment arms
Based on a power analysis by simulation conducted in the pre-analysis, the target sample size is 80 students. However, we may have to exclude some observations if the elicited quality values do not satisfy the product-set selection criteria described above. To obtain 80 subjects who pass the criteria, the actual number of subjects collected may exceed 80.
Minimum detectable effect size for main outcomes (accounting for sample design and clustering)
IRB

Institutional Review Boards (IRBs)

IRB Name
The Office of Responsible Research Practices at The Ohio State University
IRB Approval Date
2024-02-14
IRB Approval Number
2024E0171