Communicating multiattribute preferences: An experiment

Last registered on March 19, 2024

Pre-Trial

Trial Information

General Information

Title
Communicating multiattribute preferences: An experiment
RCT ID
AEARCTR-0013135
Initial registration date
March 04, 2024


First published
March 19, 2024, 4:49 PM EDT


Locations

Region

Primary Investigator

Affiliation
University of Lausanne

Other Primary Investigator(s)

PI Affiliation

Additional Trial Information

Status
In development
Start date
2024-03-05
End date
2024-08-01
Secondary IDs
Prior work
This trial does not extend or rely on any prior RCTs.
Abstract
The field of matching has recently made significant strides in the development of allocation mechanisms, particularly ones in which participants may express preferences over distinct attributes. In some applications, preferences are multidimensional and the attributes often must be reported separately. For instance, in matching cadets to military branches in the US, participants report preferences not only over the branches but also over the length of the contract. In Chinese college admissions, applicants report the university, the field of study, and the tuition waiver as separate attributes. This raises a practical issue: the potential complexity of preference reporting for participants. While theory ignores this complexity and assumes that agents can rank all possible combinations of attributes, practitioners challenge this view and often opt for separate rankings of attributes, which typically leads to welfare loss, as the prominent mechanisms need to operate on a single ranking. The loss can be especially high if the attributes exhibit complementarities and the payoff is not monotonic in quantity.

Our study aims to develop and test preference reporting methods that effectively balance theoretical richness with practical simplicity for participants. To this end, we examine six preference reporting methods under three levels of preference complexity. The richest method asks participants to rank the finest-grained items, which we call "bundles," i.e., all combinations of attributes. It reaches full efficiency in theory under any complexity of preferences but is behaviorally demanding. Two other methods are simpler and currently used in practice, but they are limited in expressing complex preferences and therefore lead to efficiency loss in theory. The last three methods are our contributions, which we hypothesize will improve efficiency by simplifying the reporting task.
External Link(s)

Registration Citation

Citation
Hakimov, Rustamdjan and Manshu Khanna. 2024. "Communicating multiattribute preferences: An experiment." AEA RCT Registry. March 19. https://doi.org/10.1257/rct.13135-1.0
Experimental Details

Interventions

Intervention(s)

The study is a laboratory experiment.
In our experiment, participants will take part in a college admissions market and will be assigned to programs. Each program is characterized by university prestige, field-of-study fit, and tuition discount. "Bundles" refers to ranking each individual program, which may be demanding as there are 24 programs in total. As an innovation, participants are assigned a utility function that determines the score of each program, which they should use to infer an ordinal ranking over programs. Participants' payoffs will depend on the rank of the program to which they are matched. We vary the complexity of preferences within-subjects across the following three treatments (an illustrative sketch of such scoring rules follows the list):
1. Lexicographic preferences (LEX): Participants will have additive preferences over the three attributes (university, field of study, and tuition discount), with the university being the most important, then the field, then the tuition discount.
2. Separable preferences (SEP): Participants will have additive preferences over the three attributes (university, field of study, and tuition discount), but the weights of the components may differ, thus violating the lexicographic structure.
3. Non-separable preferences (COMP): Participants' utility functions will include a complementarity: the importance of the tuition discount depends on the field of study.
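To fix ideas, the following is a minimal sketch, not the registered implementation, of how the three preference structures could translate attribute levels into program scores. The attribute levels, weights, and interaction term are hypothetical; only the total of 24 programs and the LEX/SEP/COMP structure come from the design above.

```python
from itertools import product

# Hypothetical attribute levels for illustration; the actual experiment uses
# 24 programs defined by university prestige, field-of-study fit, and tuition discount.
UNIVERSITIES = [3, 2, 1, 0]   # prestige levels (higher is better)
FIELDS = [2, 1, 0]            # field-of-study fit
DISCOUNTS = [1, 0]            # tuition discount (1 = waiver, 0 = none)

def score_lex(u, f, d):
    # LEX: additive, with weights steep enough that university always dominates
    # field, and field always dominates discount (lexicographic structure).
    return 100 * u + 10 * f + 1 * d

def score_sep(u, f, d, w=(5, 4, 3)):
    # SEP: additive with arbitrary (non-lexicographic) weights per attribute.
    return w[0] * u + w[1] * f + w[2] * d

def score_comp(u, f, d):
    # COMP: non-separable; the value of the tuition discount depends on the
    # field of study (illustrative interaction term).
    return 5 * u + 4 * f + 3 * d * f

def rank_programs(score):
    """Return the 24 programs ordered from best to worst under `score`."""
    programs = list(product(UNIVERSITIES, FIELDS, DISCOUNTS))
    return sorted(programs, key=lambda p: score(*p), reverse=True)

if __name__ == "__main__":
    for name, fn in [("LEX", score_lex), ("SEP", score_sep), ("COMP", score_comp)]:
        print(name, rank_programs(fn)[:3])  # top-3 programs in each treatment
```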
Intervention (Hidden)
Intervention Start Date
2024-03-05
Intervention End Date
2024-08-01

Primary Outcomes

Primary Outcomes (end points)
Efficiency, truthfulness
Primary Outcomes (explanation)

Secondary Outcomes

Secondary Outcomes (end points)
Secondary Outcomes (explanation)

Experimental Design

Experimental Design

Our main interest is in different preference reporting languages. We run three preference reporting treatments between-subjects (an illustrative sketch of the weighted aggregation step follows the list below).
• Separate ranking of attributes with exogenous weights: Participants will rank each attribute separately, and the designer will aggregate these rankings and assign weights externally, imposing the lexicographic structure. Prediction: Participants can precisely communicate their preferences only in LEX.
• Separate ranking of attributes with reported weights: Participants will rank each attribute separately and report the weights assigned to each attribute. Prediction: Participants can precisely communicate their preferences only in LEX and SEP.
• Bundles: Participants report a rank-ordered list (ROL) of programs. Prediction: Participants can precisely communicate their preferences in all three preference treatments: LEX, SEP, and COMP.
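As a concrete illustration, here is a minimal sketch, under an assumed aggregation rule that is not part of the registration, of how per-attribute rank-ordered lists and weights could be combined into a single ranking of programs in the two "separate ranking" languages. The attribute names, levels, and weights below are hypothetical.

```python
from itertools import product

def aggregate_separate_rankings(attr_ranks, weights):
    """Combine per-attribute rank-ordered lists into one ranking of bundles.

    attr_ranks: dict mapping attribute name -> list of levels, best first.
    weights:    dict mapping attribute name -> numeric weight used by the designer
                (exogenous in the first language, participant-reported in the second).
    """
    names = list(attr_ranks)
    # Convert each attribute's ordinal position into a score (best = highest).
    level_score = {
        a: {lvl: len(attr_ranks[a]) - i for i, lvl in enumerate(attr_ranks[a])}
        for a in names
    }
    bundles = product(*(attr_ranks[a] for a in names))
    def bundle_score(b):
        return sum(weights[a] * level_score[a][lvl] for a, lvl in zip(names, b))
    return sorted(bundles, key=bundle_score, reverse=True)

# Usage: exogenous weights steep enough to impose a lexicographic order
# (university >> field >> discount); all names and levels are hypothetical.
reported = {
    "university": ["A", "B", "C", "D"],
    "field": ["econ", "cs", "law"],
    "discount": ["full", "none"],
}
exogenous_weights = {"university": 100, "field": 10, "discount": 1}
print(aggregate_separate_rankings(reported, exogenous_weights)[:3])
```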
Experimental Design Details
Additionally, after the three main treatments, we plan three behavioral treatments.

We hypothesize, however, that while theoretically superior, the Bundles reporting language might be complex for participants, potentially leading to behavioral inefficiencies. This is why we propose two treatments with algorithmic assistance for reporting bundles and one with the possibility of correcting mistakes.
• Behavioral 1: Bundles with reduced dimensionality: The algorithm first asks participants to select a dimension in which the ranking of bundles should be consistent with a ranking of that dimension alone. Participants report a rank-ordered list for the selected dimension. Next, participants report the ranking of the remaining two-dimensional bundles. The mechanism combines the ranking of two-dimensional bundles with the reported rank-ordered list for the third dimension.
• Behavioral 2: Bundles + algorithm audit: Participants submit rank-ordered lists of programs. An algorithm identifies bundles whose rankings are reversed in one dimension while the other two dimensions are held fixed. The algorithm asks the participant to double-check the ranking of the flagged programs and applies the change if the participant confirms the reversal (a sketch of one possible audit appears after this list).
• Behavioral 3: Linear weights + correction possibility: Participants will separately rank each attribute and report the weights assigned to each attribute. After observing the resulting ranking of bundles, participants will have a two-minute window to modify the ranking of any bundles.
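The following is one possible operationalization, purely illustrative and not the registered algorithm, of the consistency audit in Behavioral 2: it flags bundles that differ in a single dimension but whose implied ordering of that dimension's levels is reversed elsewhere in the submitted list.

```python
from collections import defaultdict
from itertools import combinations

def audit_rol(rol):
    """Flag potentially inconsistent entries in a rank-ordered list of bundles.

    rol: list of bundles (tuples of attribute levels), best first. For each pair
    of bundles that differ in exactly one dimension, we record the implied
    ordering of the two levels in that dimension; if the same pair of levels is
    ordered both ways across the list, the involved bundles are flagged for the
    participant to double-check.
    """
    position = {b: i for i, b in enumerate(rol)}   # lower index = better
    implied = defaultdict(set)                     # (dim, better, worse) -> bundles
    flagged = set()
    for a, b in combinations(rol, 2):
        diff = [k for k in range(len(a)) if a[k] != b[k]]
        if len(diff) != 1:
            continue
        k = diff[0]
        better, worse = (a, b) if position[a] < position[b] else (b, a)
        implied[(k, better[k], worse[k])].update({a, b})
    for (k, hi, lo), bundles in implied.items():
        if (k, lo, hi) in implied:                 # same two levels ordered both ways
            flagged |= bundles | implied[(k, lo, hi)]
    return flagged

# Example with hypothetical bundles: the discount dimension (index 2) is ranked
# inconsistently across the two universities, so all four programs are flagged.
rol = [("A", "econ", 1), ("A", "econ", 0), ("B", "econ", 0), ("B", "econ", 1)]
print(audit_rol(rol))
```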

Randomization Method
Computerized randomization into treatments.
Randomization Unit
Sessions of 27 participants
Was the treatment clustered?
No

Experiment Characteristics

Sample size: planned number of clusters
6 sessions per treatment
Sample size: planned number of observations
972
Sample size (or number of clusters) by treatment arms
972
Minimum detectable effect size for main outcomes (accounting for sample design and clustering)
IRB

Institutional Review Boards (IRBs)

IRB Name
IRB Approval Date
IRB Approval Number

Post-Trial

Post Trial Information

Study Withdrawal


Intervention

Is the intervention completed?
No
Data Collection Complete
Data Publication

Data Publication

Is public data available?
No

Program Files

Program Files
Reports, Papers & Other Materials

Relevant Paper(s)

Reports & Other Materials