TIME PREFERENCES USING A MULTIPLE LOTTERY LIST: Validation

Last registered on July 10, 2023

Pre-Trial

Trial Information

General Information

Title
TIME PREFERENCES USING A MULTIPLE LOTTERY LIST: Validation
RCT ID
AEARCTR-0011729
Initial registration date
July 05, 2023

Initial registration date is when the trial was registered.

It corresponds to when the registration was submitted to the Registry to be reviewed for publication.

First published
July 10, 2023, 5:01 PM EDT

First published corresponds to when the trial was first made public on the Registry after being reviewed.

Last updated
July 10, 2023, 6:52 PM EDT

Last updated is the most recent time when changes to the trial's registration were published.

Locations

Primary Investigator

Affiliation

Other Primary Investigator(s)

PI Affiliation
Cornell University
PI Affiliation
VU University Amsterdam

Additional Trial Information

Status
In development
Start date
2023-07-10
End date
2023-12-30
Secondary IDs
Prior work
This trial does not extend or rely on any prior RCTs.
Abstract
Belot et al (2021) lay out a theory of discount factor measurement aimed at dealing with changes in income and the associated changes in background consumption over time. In many existing methods, experimental payments are assumed to be added to a “background consumption” to generate a stream of consumption, and constant background consumption is required for discount factor elicitation. Consider two individuals who have identical discount factors and identical late-period background consumption, but where the first individual also has that same background consumption in the early period while the second individual has less background consumption in the early period. If confronted with choices about receiving money early or late, the second individual has a higher marginal valuation of early money and is therefore more likely to choose the earlier payment for that reason.

Belot et al (2021) lay out a method based on choices about high-stakes lottery tickets. Applying the economic model that underlies their method to the scenario outlined in the previous paragraph shows that both individuals should be equally eager to choose the early lottery. Choices should directly reflect time preferences, rather than changes in background consumption.

This validation experiment aims to test this prediction by triggering changes in perceived background consumption through detailed questions about near-term expenditures for some of the participants. These questions are intended to make participants realize that their near-term expenditures are higher than expected, so that they feel they have to reduce near-term background consumption. If they feel poorer in the near term than similar individuals who have not been treated, this should affect measured discount factors under the convex budget set method. Under the multiple lottery list method of Belot et al (2021), this should not happen. We intend to test this.
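To fix ideas, here is a stylized sketch of the argument (our own illustration, not the exact model of Belot et al (2021)). Let u be a concave utility function, δ the discount factor, c_1 and c_2 early and late background consumption, and x and y small early and late experimental payments. The early payment is chosen whenever

$$ u(c_1 + x) - u(c_1) \;\ge\; \delta\,\big[u(c_2 + y) - u(c_2)\big], \qquad \text{approximately } x\,u'(c_1) \ge \delta\, y\, u'(c_2) \text{ for small payments.} $$

Holding δ and c_2 fixed, a lower c_1 raises u'(c_1) and makes the early payment more attractive even though time preferences are unchanged. With a high-stakes lottery prize P that is large relative to background consumption, u(c_t + P) - u(c_t) is approximately insensitive to c_t, so the choice reflects δ more directly.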
External Link(s)

Registration Citation

Citation
Belot, Michele, Philipp Kircher and Paul Muller. 2023. "TIME PREFERENCES USING A MULTIPLE LOTTERY LIST: Validation." AEA RCT Registry. July 10. https://doi.org/10.1257/rct.11729-1.1
Experimental Details

Interventions

Intervention(s)
We aim to experimentally induce a perceived variation in background consumption that generates changes in measured present-bias under the convex budget set (CBS) elicitation method of Andreoni and Sprenger (2012), and to test that this variation does not induce changes in measured discount factors elicited via the multiple lottery lists (MLL) proposed by Belot et al (2021).
Intervention Start Date
2023-07-10
Intervention End Date
2023-08-26

Primary Outcomes

Primary Outcomes (end points)
Individual discount factors elicited with the method of Convex Budget Sets (CBS) proposed by Andreoni and Sprenger (2012): especially the individual "beta" for each participant and treatment status

Individual discount factors elicited with the method of Multiple Lottery Lists (MLL) proposed by Belot et al (2021): especially the individual "beta" for each participant and treatment status
Primary Outcomes (explanation)
The variable for CBS will be computed for each individual and treatment status through the regression proposed in Andreoni and Sprenger (2012), applied to the choices of participants across a number of CBS questions that ask the individual to allocate tokens to early and late outcomes.
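As a stylized sketch of that regression (our own rendering under CRRA utility with Stone-Geary background parameters ω_1 and ω_2; the registered estimation follows Andreoni and Sprenger (2012) exactly, including the treatment of corner solutions, and may differ in details): with gross interest rate (1+r) on the later payment, delay k, and an indicator 1{t=0} for an immediate early payment, the intertemporal tangency condition gives

$$ \ln\!\left(\frac{c_t - \omega_1}{c_{t+k} - \omega_2}\right) \;=\; \frac{\ln(1+r)}{\alpha - 1} \;+\; \frac{\mathbf{1}\{t=0\}\,\ln\beta}{\alpha - 1} \;+\; \frac{k\,\ln\delta}{\alpha - 1}, $$

so that β (and δ) can be recovered from the estimated coefficients.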

The variable for MLL is constructed from the answers to questions about receiving a lottery ticket earlier or later. The switching point from earlier to later identifies the individual's discount factor over a particular time horizon. The present-bias parameter will be elicited by dividing the discount factor between payment within three days and payment in 2 months by the discount factor between payment in 2 months and payment in 4 months. Both comparisons involve similar waiting times, but only the first includes an immediate reward. We will apply the same methodology to delays of 4 months and average. Please see the full uploaded pre-analysis plan for details.

Please see the full pre-analysis plan for the exact construction of the variables.
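A minimal illustrative sketch of this construction (not the registered estimation code; the 4-month-delay horizons and all names below are our own assumptions):

```python
# Illustrative sketch only: present-bias "beta" under MLL from switching-point-implied
# discount factors. d[(t0, t1)] is the discount factor implied by the switching point
# for choices between a lottery ticket at t0 and at t1 (in months; 0 = within three days).

def mll_beta(d):
    beta_2m = d[(0, 2)] / d[(2, 4)]  # immediate vs. delayed reward, 2-month waiting time
    beta_4m = d[(0, 4)] / d[(4, 8)]  # same logic for a 4-month delay (assumed horizons)
    return (beta_2m + beta_4m) / 2   # average across the two delays

# Example with hypothetical discount factors:
print(mll_beta({(0, 2): 0.90, (2, 4): 0.95, (0, 4): 0.85, (4, 8): 0.92}))
```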

Secondary Outcomes

Secondary Outcomes (end points)
Secondary Outcomes (explanation)

Experimental Design

Experimental Design
Individuals will be randomly allocated to one of four treatment arms with 100 participants each:
(1) CBS no prompting / MLL no prompting / CBS prompting
(2) MLL no prompting / CBS no prompting / MLL prompting
(3) CBS prompting / MLL prompting
(4) MLL prompting / CBS prompting

We will test for differences between groups 1+2 and 3+4 for the first two elicitation blocks. We will also test for differences between the first and third block for groups 1 and 2. We expect differences when these are measured with CBS, but not when they are measured with MLL.

The experimental design relies on our intervention to successfully alter expenditure expectations. See the full pre-analysis plan for details.
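A minimal sketch of the planned between- and within-individual comparisons (illustrative only; the data below are simulated placeholders and the registered analysis follows the pre-analysis plan):

```python
# Illustrative sketch of the planned comparisons (not the registered analysis code).
import numpy as np
from scipy import stats

rng = np.random.default_rng(0)

# Between-individual: CBS beta without prompting (groups 1+2) vs. with prompting (groups 3+4).
# One-sided test of whether prompting lowers measured beta. Values are simulated placeholders.
beta_cbs_unprompted = rng.normal(0.99, 0.24, 200)
beta_cbs_prompted = rng.normal(0.93, 0.24, 200)
between = stats.ttest_ind(beta_cbs_prompted, beta_cbs_unprompted, alternative="less")

# Within-individual (group 1): CBS beta before vs. after prompting for the same participants.
beta_before = rng.normal(0.99, 0.20, 100)
beta_after = beta_before + rng.normal(-0.04, 0.15, 100)
within = stats.ttest_rel(beta_after, beta_before, alternative="less")

print(between.pvalue, within.pvalue)
```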
Experimental Design Details
Individuals in treatments 3 and 4 will first answer detailed questions about their near-term expenditures across a number of categories. They will then be asked to answer a series of elicitation questions for discount factors according to MLL and CBS, where the order depends on treatment.

Individuals in treatments 1 and 2 are first asked elicitation questions for the CBS and MLL methods without being prompted.

The difference between answers of those in treatment 1 and 2 vs those in treatments 3 and 4 is used for between-individual comparison.

After answering the CBS and MLL questions, individuals in treatments 1 and 2 are also asked detailed "prompting" questions about their near-term expenditures. They are then asked again a set of questions about time preferences, in treatment 1 via CBS and in treatment 2 via MLL.

In a within-individual design, the difference between present-bias elicited via early CBS answers before prompting and via late CBS answers after prompting for individuals in treatment 1 is used to study the effects of prompting on measured present-bias under CBS.

In a within-individual design, the difference between present-bias elicited via early MLL answers before prompting and via late MLL answers after prompting for individuals in treatment 2 is used to study the effects of prompting on measured present-bias under MLL.

Individuals are informed that one of the questions they answered will be randomly selected for payment; the set of payment-relevant questions also includes one question about risk aversion.

We expect treatments to be balanced. We also expect individuals to report higher expected expenditures after prompting than before. In particular, for the between-individual variation we expect individuals in treatments 3 and 4 to report on average higher expenditures after the prompting than individuals in treatments 1 and 2 before prompting. For the within-individual variation we expect individuals in treatments 1 and 2 to report higher expected expenditures after prompting than before. These conditions are necessary for the subsequent hypotheses.

Our core hypothesis is a joint hypothesis test:

The average beta measured under MLL for individuals in treatments 3 and 4 (after prompting) is not significantly different from that of individuals in treatments 1 and 2 (before prompting).
AND
The difference between the beta elicited with MLL after prompting and the beta measured before prompting for individuals in treatment 2 is not significantly different from zero.
AND
at least one of the following holds:
[The average beta measured under CBS for individuals in treatments 3 and 4 (after prompting) is significantly lower than that of individuals in treatments 1 and 2 (before prompting). OR The difference between the beta elicited with CBS after prompting and the beta measured before prompting for individuals in treatment 1 is significantly negative. OR BOTH]

The first two tests about MLL are at the heart of our investigation. The last part about CBS is there only to illustrate that we are operating in a setting that would be challenging for other monetary methods. If the part about CBS fails, the experimental variation in background consumption would not be sufficient to significantly alter standard discount factor measurement, and therefore we would not have managed to create an environment in which a new method is warranted. If at least one of the measures for CBS reveals significant differences, this indicates that standard methods might have an issue, and the tests for MLL become more meaningful.
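For illustration only, the decision rule combining the three parts can be written as follows (the booleans are placeholders for the outcomes of the pre-specified tests, not registered code):

```python
# Illustrative combination of the joint hypothesis (placeholder booleans).
mll_between_insignificant = True   # MLL: treatments 3+4 vs. 1+2 not significantly different
mll_within_insignificant = True    # MLL: treatment 2 before vs. after prompting not significant
cbs_between_lower = True           # CBS: treatments 3+4 significantly lower than 1+2
cbs_within_negative = False        # CBS: treatment 1 after-minus-before significantly negative

core_hypothesis_supported = (
    mll_between_insignificant
    and mll_within_insignificant
    and (cbs_between_lower or cbs_within_negative)
)
print(core_hypothesis_supported)
```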

Please see the detailed pre-analysis plan for secondary hypotheses and more details.
Randomization Method
Randomization to one of the treatment arms is done via Prolific.
Randomization Unit
Sample size: 400

We will recruit participants using the online research platform Prolific.

Eligibility criteria:
● currently residing in New York State
● with a household income below 99,999 US dollars
Was the treatment clustered?
No

Experiment Characteristics

Sample size: planned number of clusters
There are no clusters. Each unit is one independent observation.
Sample size: planned number of observations
400 individuals
Sample size (or number of clusters) by treatment arms
100 per treatment arm, across 4 arms
Minimum detectable effect size for main outcomes (accounting for sample design and clustering)
Power calculation for detecting differences in the cross-section comparison (one-sided t-test)

From Belot et al (2021), we use the following parameter values:
● Mean estimate for β without prompting: μ_(β-CBS-NP) = 0.992
● Standard deviation estimates for β: σ_(β-CBS-NP) = σ_(β-CBS-P) = 0.236
● Significance level (type-1 error): α = 0.05
● Sample size: n_(CBS-NP) = n_(CBS-P) = 200
● Power (1-Type II error): 80%

Given these parameters the minimum detectable effect size is -0.059, meaning that β_(i,CBS-P) should be at most 0.933. This is for CBS measurement. For MLL we do not expect to detect differences. If differences were of the size of CBS, this also indicates our ability to detect such (unexpected) differences.

Power calculation for detecting differences in the within-individual design (one-sided paired t-test)
● Standard deviation for the difference (β_(i,CBS-P) - β_(i,CBS-NP)): 0.15
● Significance level (type-1 error): 0.05
● Sample size: 100
● Power (1-Type II error): 80%

Given these parameters the minimum detectable effect size is -0.038. This is for CBS measurement. For MLL we do not expect to detect differences. If differences were of the size of CBS, this also indicates our ability to detect such (unexpected) differences.
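A normal-approximation sketch (our own check, not the registered calculation) that reproduces these minimum detectable effect sizes from the stated parameters:

```python
# Minimum detectable effect sizes via the normal approximation (one-sided tests).
from scipy.stats import norm

alpha, power = 0.05, 0.80
z = norm.ppf(1 - alpha) + norm.ppf(power)

# Cross-section comparison: two independent groups of n = 200, sd = 0.236.
mde_between = -z * 0.236 * (2 / 200) ** 0.5   # about -0.059

# Within-individual comparison: paired design, n = 100, sd of the difference = 0.15.
mde_within = -z * 0.15 / 100 ** 0.5           # about -0.037 (the registered -0.038 is
                                              # consistent with using the t distribution)

print(round(mde_between, 3), round(mde_within, 3))
```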
Supporting Documents and Materials

There is information in this trial unavailable to the public.
IRB

Institutional Review Boards (IRBs)

IRB Name
Cornell University Institutional Review Board for Human Participants
IRB Approval Date
2023-07-10
IRB Approval Number
IRB0147750
IRB Name
Cornell University Institutional Review Board for Human Participants
IRB Approval Date
2023-07-07
IRB Approval Number
IRB0147750
Analysis Plan

There is information in this trial unavailable to the public.

Post-Trial

Post Trial Information

Study Withdrawal

There is information in this trial unavailable to the public.

Intervention

Is the intervention completed?
No
Data Collection Complete
Data Publication

Data Publication

Is public data available?
No

Program Files

Program Files
Reports, Papers & Other Materials

Relevant Paper(s)

Reports & Other Materials