Experimental Investigation of “Raffle” mechanism

Last registered on July 21, 2022


Trial Information

General Information

Experimental Investigation of “Raffle” mechanism
Initial registration date
July 20, 2022

Initial registration date is when the trial was registered.

It corresponds to when the registration was submitted to the Registry to be reviewed for publication.

First published
July 21, 2022, 11:22 AM EDT

First published corresponds to when the trial was first made public on the Registry after being reviewed.



Primary Investigator

University of San Francisco

Other Primary Investigator(s)

PI Affiliation
University of Pittsburgh
PI Affiliation
Microsoft Research
PI Affiliation
Microsoft Research
PI Affiliation
Northwestern University
PI Affiliation
Microsoft Research

Additional Trial Information

In development
Start date
End date
Secondary IDs
Prior work
This trial does not extend or rely on any prior RCTs.
In this project, we study the relative performance of the "raffle" mechanism compared to standard resource allocation mechanisms used today. Immorlica, Lucier, Mollner, and Weyl (2017) propose a practical and straightforward "raffle" mechanism for allocating a limited supply of heterogeneous goods among consumers. They show theoretically that the raffle mechanism can outperform ordinal algorithms in terms of efficiency, because consumers can signal not only their ordinal preferences but their cardinal utilities as well. We designed an experiment to test this hypothesis with human subjects.
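A small simulation can illustrate the idea. The sketch below is a deliberately simplified ticket-based raffle, not the exact mechanism of Immorlica et al.: it assumes agents split a fixed ticket budget across goods in proportion to their cardinal values, goods are awarded one at a time by a ticket-weighted lottery, and each winner exits with one good. The `efficiency` helper computes the sum-of-cardinal-utilities outcome used here to compare mechanisms.

```python
import random

def raffle_allocate(values, budget=100, rng=random):
    """Simplified raffle: each agent splits a ticket budget across goods
    in proportion to her cardinal values (one possible strategy, an
    assumption of this sketch); goods are then awarded one at a time by
    a lottery weighted by the tickets placed, and each winner exits."""
    n_agents = len(values)
    # Proportional-to-value ticket placement.
    tickets = [[budget * v / sum(row) for v in row] for row in values]
    remaining_agents = set(range(n_agents))
    goods = list(range(len(values[0])))
    rng.shuffle(goods)  # goods drawn in random order (another assumption)
    assignment = {}
    for g in goods:
        pool = [a for a in remaining_agents]
        weights = [tickets[a][g] for a in pool]
        if not pool or sum(weights) == 0:
            continue
        winner = rng.choices(pool, weights=weights)[0]
        assignment[winner] = g
        remaining_agents.remove(winner)
    return assignment

def efficiency(values, assignment):
    """Realized efficiency: the sum of cardinal utilities received."""
    return sum(values[a][g] for a, g in assignment.items())
```

For example, with three agents whose values over three goods are `[[60, 30, 10], [10, 60, 30], [30, 10, 60]]`, `raffle_allocate` returns a perfect matching of agents to goods, and `efficiency` scores it between 30 (everyone gets their worst good) and 180 (everyone gets their best).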
External Link(s)

Registration Citation

Immorlica, Nicole et al. 2022. "Experimental Investigation of “Raffle” mechanism." AEA RCT Registry. July 21. https://doi.org/10.1257/rct.9742-1.0
Sponsors & Partners

There is information in this trial unavailable to the public.
Experimental Details


Please find the attached document for the details in the Analysis Plan section.
Intervention Start Date
Intervention End Date

Primary Outcomes

Primary Outcomes (end points)
The main variable that we use to compare the performance of a mechanism is efficiency.
Primary Outcomes (explanation)
The sum of cardinal utilities received by students

Secondary Outcomes

Secondary Outcomes (end points)
Secondary Outcomes (explanation)

Experimental Design

Experimental Design
The experiment is in three parts. In both treatments, the first part is the main game. The second and third parts are respectively risk elicitation (Holt and Laury (2002)) and the k-level reasoning elicitation game (Arad and Rubinstein (2012)). Below we describe the main game for each treatment.

In both treatments, subjects first read textual instructions explaining the respective allocation algorithm. To improve their understanding, we designed two videos in which the algorithm is explained with animations. After reading the instructions and watching the video, subjects must complete 4 quiz questions and may advance to the school choice game only if they answer all 4 correctly. They have two tries, after which they are disqualified from the study and barred from participating in it again.

Subjects who pass the comprehension quiz play 6 rounds of matching games. In each round, subjects are randomly assigned to groups of three. Each subject receives information about their own preferences and those of the two other group members. Only the preference structure changes from round to round. Subjects receive no feedback after each round; instead, we ask them to explain their reasoning.
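The round-by-round regrouping can be sketched as a simple random partition. The registry does not specify the exact matching protocol, so `form_groups` below is only an illustrative assumption (independent re-shuffling each round):

```python
import random

def form_groups(subject_ids, group_size=3, rng=random):
    """Randomly partition subjects into groups of a fixed size for one
    round (a sketch; the actual matching protocol is not specified)."""
    ids = list(subject_ids)
    assert len(ids) % group_size == 0, "session size must be divisible by group size"
    rng.shuffle(ids)
    return [ids[i:i + group_size] for i in range(0, len(ids), group_size)]
```

Calling `form_groups(range(12))` once per round would yield four fresh groups of three each time.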

Please see the attached document for more details.
Experimental Design Details
Randomization Method
Randomization done on an online platform (Prolific).
Randomization Unit
Was the treatment clustered?

Experiment Characteristics

Sample size: planned number of clusters
2 treatments
Sample size: planned number of observations
204 subjects
Sample size (or number of clusters) by treatment arms
Treatment 1: 34 groups, treatment 2: 34 groups. Each group has 3 subjects.
Minimum detectable effect size for main outcomes (accounting for sample design and clustering)
effect size=0.5, standard deviation=0.82, 10% increase.
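As a rough cross-check of these numbers, a standard normal-approximation formula for a two-sample comparison of means gives the required sample size per arm. Note that alpha = 0.05 and power = 0.80 are assumptions not stated in the registry, and the back-of-the-envelope calculation ignores within-group clustering, which would inflate the requirement:

```python
from math import ceil
from statistics import NormalDist

def n_per_arm(effect, sd, alpha=0.05, power=0.80):
    """Approximate sample size per arm for a two-sample comparison of
    means (normal approximation, equal variances).  alpha and power are
    assumed values; the registry states only effect size and sd.
    Ignores within-group clustering."""
    z_a = NormalDist().inv_cdf(1 - alpha / 2)
    z_b = NormalDist().inv_cdf(power)
    return ceil(2 * (z_a + z_b) ** 2 * (sd / effect) ** 2)
```

With the registered effect size of 0.5 and standard deviation of 0.82, this gives 43 subjects per arm under the assumed alpha and power, well below the planned 102 per arm (34 groups of 3), though clustering would raise the true requirement.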

Institutional Review Boards (IRBs)

IRB Name
University of Pittsburgh Human Research Protection Office
IRB Approval Date
IRB Approval Number
Analysis Plan

There is information in this trial unavailable to the public.


Post Trial Information

Study Withdrawal

There is information in this trial unavailable to the public.


Is the intervention completed?
Data Collection Complete
Data Publication

Data Publication

Is public data available?

Program Files

Program Files
Reports, Papers & Other Materials

Relevant Paper(s)

Reports & Other Materials