Eliciting time preferences using high stakes lottery tickets

Last registered on January 31, 2021


Trial Information

General Information

Eliciting time preferences using high stakes lottery tickets
Initial registration date
November 25, 2019


First published
November 26, 2019, 10:35 AM EST


Last updated
January 31, 2021, 12:18 AM EST




Primary Investigator


Other Primary Investigator(s)

PI Affiliation
PI Affiliation

Additional Trial Information

Start date
End date
Secondary IDs
We investigate the performance of a new approach to measuring time preferences. The new approach entails asking individuals to make choices between receiving high stakes lottery tickets at different points in time, in a similar way to conventional time-preference elicitation methods. The study will compare measurements using conventional methods involving low stakes (convex time budget sets, Andreoni et al., 2012) with measurements using the new approach, across (i) a sample of students participating in November 2019, (ii) a sample of students participating in February/March 2020 and (iii) a sample of unemployed job seekers participating in 2020 (exact time to be determined). The first sample may face large expenditure shocks around St Nicholas' Day and Christmas, the second sample is unlikely to face any expenditure/income shocks, and the third sample is expected to face large income shocks related to finding employment. These features are key to testing the performance of the new time-preference elicitation method.
External Link(s)

Registration Citation

Belot, Michele, Philipp Kircher and Paul Muller. 2021. "Eliciting time preferences using high stakes lottery tickets." AEA RCT Registry. January 31. https://doi.org/10.1257/rct.5115-1.1
Experimental Details


Intervention Start Date
Intervention End Date

Primary Outcomes

Primary Outcomes (end points)
The main outcomes of interest are measures of present bias ("beta") and of long-run and short-run discount factors ("delta" and beta*delta) obtained from Convex Time Budget Sets (CTB) responses and from the Lottery Ticket Questions (LTQ) responses.
Primary Outcomes (explanation)
We will construct these measures in the following way:
- For estimating discount factors from CTB responses, we will follow the methodology of Andreoni et al. (2012), who outline the estimation of the per-period discount factor delta as well as of the present-bias parameter beta for each individual. For each individual we will use the point estimates obtained from this methodology, normalized such that delta is the "long-run discount factor" that measures the discounting between options in the future that are five weeks apart, while the "short-run discount factor" beta times delta measures the discounting between the present and five weeks later. We normalize to five weeks as this is the time horizon we use for the LTQ discount measurement.
- For assigning a discount factor under the LTQ method, we follow the multiple price list method as reviewed, e.g., in Anderson et al. (2006), except that the monetary rewards in that paper are replaced by the number of lottery tickets that individuals receive in our setup. In this methodology, for each time horizon, individuals answer several questions on whether to take rewards earlier or later, where the amount of the later reward increases across questions. For individuals who switch only once from choosing earlier rewards to later rewards, the methodology establishes, at the individual level, a range for their short-run discount factor that rationalizes their switching point. This is backed out from questions involving today and five weeks later. The method also gives a range for their long-run discount factor that rationalizes their switching point for questions involving rewards 8 weeks from today and 13 weeks from today (i.e., also five weeks apart but in the future). We choose the midpoint of each interval to assign a short-run and a long-run discount factor. In cases where an explicit value for present bias ("beta") is required, we take the ratio of this short-run discount factor over this long-run discount factor. For individuals with inconsistent answers (people switching "back and forth" between early and late choices involving the same time frame but increasing later rewards), we will consider the lower and upper bounds of the intervals that rationalize these switching points and choose the midpoint of that larger interval as the measure of the discount factor.
- When we compare the CTB and LTQ methods, it is important to take into account that LTQ assigns discount factors only at a few coarse values. It is of less interest for our study whether these intervals were chosen optimally, as this is easy to vary in later work, but rather whether the new LTQ method picks up the relevant discount factors correctly given these intervals. Therefore, for some of the comparisons it is useful to map the continuous short-run and long-run discount-factor measures from the CTB method to the discrete points of the LTQ method. To do this, recall that LTQ assigns intervals of discount factors that would rationalize a single switching point in the answers to the LTQ elicitation. To convert a CTB point estimate, first determine into which LTQ interval the point estimate falls, and then assign the midpoint of that interval as the "coarse CTB discount factor". This can be done both for the short-run and the long-run discount factor. To obtain the "coarse CTB present bias", divide the coarse CTB short-run discount factor by the coarse CTB long-run discount factor for the individual under consideration. We will only use these coarse values in the analysis where explicitly stated.
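The interval-midpoint assignment and the coarse CTB mapping described above can be sketched as follows. This is a minimal illustration, not the registered estimation code: the cutoff values and outer bounds are hypothetical placeholders (the actual interval bounds follow from the lottery-ticket amounts used in the questionnaire), and all function names are ours.

```python
import bisect

# Hypothetical 5-week discount-factor cutoffs implied by consecutive
# switching points in a multiple price list (ascending). Placeholder values.
CUTOFFS = [0.70, 0.80, 0.90, 0.95]
# Interval bounds, with assumed (placeholder) outer bounds.
BOUNDS = [0.60] + CUTOFFS + [1.00]

def midpoint_from_switch(switch_row):
    """Discount-factor midpoint for a respondent who switches from the
    earlier to the later option at question `switch_row` (0-based)."""
    lo, hi = BOUNDS[switch_row], BOUNDS[switch_row + 1]
    return (lo + hi) / 2

def midpoint_inconsistent(first_switch, last_switch):
    """For back-and-forth switchers: midpoint of the widest interval
    consistent with their first and last switching points."""
    lo, hi = BOUNDS[first_switch], BOUNDS[last_switch + 1]
    return (lo + hi) / 2

def coarse_ctb(point_estimate):
    """Map a continuous CTB discount-factor estimate onto the discrete
    LTQ grid: locate its interval, then take that interval's midpoint."""
    idx = bisect.bisect_right(BOUNDS[1:-1], point_estimate)
    idx = min(idx, len(BOUNDS) - 2)  # clamp estimates above the top bound
    return midpoint_from_switch(idx)

def coarse_present_bias(short_run, long_run):
    """Coarse present bias: ratio of the coarse short-run discount factor
    to the coarse long-run discount factor."""
    return coarse_ctb(short_run) / coarse_ctb(long_run)
```

For example, a CTB short-run point estimate of 0.72 falls into the hypothetical interval [0.70, 0.80] and is assigned the coarse value 0.75.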

Secondary Outcomes

Secondary Outcomes (end points)
Secondary Outcomes (explanation)

Experimental Design

Experimental Design
We explore a new method to elicit time preferences, benchmarked against the standard Convex Time Budget Sets method. We expect the two methods to perform similarly in settings without income or expenditure shocks. We expect the new method to also give similar results in the presence of income and expenditure shocks for an otherwise similar population, whereas we do not expect this for Convex Time Budget Sets.
Experimental Design Details
Randomization Method
We compare discount-factor measures collected from the same individuals using the two elicitation methods. A computerized coin flip assigns half of the sample to answer the questions from one method first; the other half answers the questions from the other method first.
Randomization Unit
Was the treatment clustered?

Experiment Characteristics

Sample size: planned number of clusters
Not applicable.
Sample size: planned number of observations
200 students and 100 job seekers
Sample size (or number of clusters) by treatment arms
100 students and 50 job seekers receive the questions from one elicitation method first; the remaining participants receive the questions from the other elicitation method first.
Minimum detectable effect size for main outcomes (accounting for sample design and clustering)

Institutional Review Boards (IRBs)

IRB Name
IRB Approval Date
IRB Approval Number
Analysis Plan

Analysis Plan Documents

Preanalysis plan for Eliciting Time Preferences Using High Stakes Lottery Tickets

MD5: 13584bf970e20ea0b5e0a960472f4d7c

SHA1: 96ad04e64cc88aa49c02f92bf6a5d699f16d57c9

Uploaded At: November 25, 2019


Post Trial Information

Study Withdrawal

Some information in this trial is unavailable to the public.


Is the intervention completed?
Intervention Completion Date
February 29, 2020, 12:00 +00:00
Data Collection Complete
Data Collection Completion Date
July 31, 2020, 12:00 +00:00
Final Sample Size: Number of Clusters (Unit of Randomization)
213 participants
Was attrition correlated with treatment status?
Final Sample Size: Total Number of Observations
Final Sample Size (or Number of Clusters) by Treatment Arms
108 in wave 1 and 105 in wave 2
Data Publication

Data Publication

Is public data available?

Program Files

Program Files
Reports, Papers & Other Materials

Relevant Paper(s)

Reports & Other Materials