Using Equivalent Offsets to Test Reference Dependence: Evidence from Three Experimental Paradigms

Last registered on June 04, 2025

Pre-Trial

Trial Information

General Information

Title
Using Equivalent Offsets to Test Reference Dependence: Evidence from Three Experimental Paradigms
RCT ID
AEARCTR-0015748
Initial registration date
May 25, 2025

First published
June 04, 2025, 9:49 AM EDT

Locations

Region

Primary Investigator

Affiliation
National University of Singapore

Other Primary Investigator(s)

Additional Trial Information

Status
In development
Start date
2025-06-02
End date
2025-07-31
Secondary IDs
Prior work
This trial does not extend or rely on any prior RCTs.
Abstract
This paper tests reference dependence by examining whether the effect of a payoff increase on choice behavior is offset by an equivalent increase in the reference point, following the framework of Rees-Jones and Wang (2022). We apply this approach across three experimental paradigms: an effort task, an investment game, and a binary lottery choice experiment.
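
To fix ideas, here is a minimal sketch of the equivalent-offset logic, under the simplifying assumption that utility is purely reference-dependent, that is, it depends on the payoff x only through the gain or loss relative to the reference point r (the notation is illustrative and not taken from the analysis plan):

    V(x; r) = v(x - r)

so that for any common shift Δ,

    V(x + Δ; r + Δ) = v((x + Δ) - (r + Δ)) = v(x - r) = V(x; r).

Under this assumption, a payoff increase of Δ paired with an equal increase in the reference point leaves choices unchanged, whereas under reference-independent utility, V(x; r) = u(x), the offset has no bite and the payoff increase retains its full effect. The degree of offset observed in behavior therefore indexes the strength of reference dependence.
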
External Link(s)

Registration Citation

Citation
Wang, Ao. 2025. "Using Equivalent Offsets to Test Reference Dependence: Evidence from Three Experimental Paradigms." AEA RCT Registry. June 04. https://doi.org/10.1257/rct.15748-1.0
Experimental Details

Interventions

Intervention(s)
Intervention (Hidden)
Intervention Start Date
2025-06-02
Intervention End Date
2025-07-31

Primary Outcomes

Primary Outcomes (end points)
How choices (such as selecting a risky investment, supplying more labor, or opting for a three-outcome lottery) respond to experimental variation in payoff levels and hypothesized reference points.
Primary Outcomes (explanation)

Secondary Outcomes

Secondary Outcomes (end points)
Secondary Outcomes (explanation)

Experimental Design

Experimental Design
Below we summarize the main features of the planned experiments. For detailed experimental design, screenshots, data-generating processes, theoretical analysis, and proposed statistical methods, please refer to the analysis plan.

There are three experimental settings: an investment game, an effort task, and a binary lottery choice experiment.

In the Investment Game, subjects make repeated choices between safe and risky options to accumulate earnings. Across-subject variation comes from random assignment to either [Baseline], where earnings depend solely on the subject’s choices, or [AutoInvest], where earnings also reflect automatic exposure to an index fund. Within-subject variation arises from random variation in payoff magnitudes across rounds.
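
To illustrate the two layers of randomization just described, the following sketch (hypothetical names and parameter values, not the actual oTree implementation) draws one treatment arm per subject and one payoff magnitude per round:

    import random

    TREATMENTS = ["Baseline", "AutoInvest"]
    N_ROUNDS = 20                      # illustrative; the analysis plan fixes the actual count
    PAYOFF_LEVELS = [1.0, 2.0, 4.0]    # illustrative payoff magnitudes

    def assign_subject(subject_id, rng):
        """Across-subject arm draw plus a within-subject payoff schedule."""
        arm = rng.choice(TREATMENTS)  # individual-level randomization
        payoffs = [rng.choice(PAYOFF_LEVELS) for _ in range(N_ROUNDS)]  # round-level randomization
        return {"id": subject_id, "arm": arm, "round_payoffs": payoffs}

    rng = random.Random(15748)  # seeded only so the sketch is reproducible
    subjects = [assign_subject(i, rng) for i in range(500)]
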

In the Effort Task, subjects first complete transcription tasks and then answer 50 questions involving a choice between a low-paying lottery requiring no further effort and a high-paying lottery requiring additional effort. Variation is entirely within-subject, as both payments and effort requirements vary randomly across questions.
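
The within-subject variation in the Effort Task can be sketched in the same style; every parameter value below is a placeholder rather than a registered value:

    import random

    N_QUESTIONS = 50
    LOW_PAY  = [0.50, 1.00]    # low-paying lottery, no further effort required
    HIGH_PAY = [2.00, 3.00]    # high-paying lottery, further effort required
    EFFORT   = [5, 10, 15]     # e.g., additional transcription tasks

    def draw_questions(rng):
        """Draw 50 (low payment, high payment, effort requirement) triples for one subject."""
        return [
            {"low_pay": rng.choice(LOW_PAY),
             "high_pay": rng.choice(HIGH_PAY),
             "effort": rng.choice(EFFORT)}
            for _ in range(N_QUESTIONS)
        ]

    questions = draw_questions(random.Random(1))
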

In the Binary Lottery Choice experiment, subjects make 50 choices between two lotteries. Across-subject variation comes from assignment to either [Plain], where lotteries are presented in a standard format, or [Contingent], where the same lotteries are structured to isolate the contingency of the common consequence. Within-subject variation is introduced through random variation in lottery payoffs.
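
One way to read the [Plain] versus [Contingent] contrast is as a common-consequence design: each pair of lotteries shares an outcome z that occurs with probability q, and the two lotteries differ only on the complementary event. [Plain] would present the reduced lotteries outright, while [Contingent] would elicit the choice conditional on the common consequence not occurring. The sketch below encodes that reading with placeholder payoffs and probabilities; the registered lotteries are specified in the analysis plan:

    import random

    def make_pair(rng):
        """One lottery pair sharing a common consequence z with probability q."""
        q = rng.choice([0.25, 0.50])   # probability of the common consequence
        z = rng.choice([0.0, 5.0])     # the common consequence itself
        safe  = {"q": q, "z": z, "branch": [(1.0, 4.0)]}              # sure 4.0 on the complementary event
        risky = {"q": q, "z": z, "branch": [(0.8, 6.0), (0.2, 0.0)]}  # gamble on the complementary event
        return safe, risky

    rng = random.Random(2)
    pairs = [make_pair(rng) for _ in range(50)]
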
Experimental Design Details
Randomization Method
Computerized randomization implemented in oTree; subjects recruited through Prolific.
Randomization Unit
Individual level (across-subject treatment arms) and question/round level (within-subject payoff variation).
Was the treatment clustered?
No
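
The exact arm sizes reported below (250/250 and 200/200) suggest balanced, unclustered assignment at the individual level. A minimal sketch of how such quotas could be generated (illustrative only; the actual assignment is handled within oTree):

    import random

    def balanced_assignment(n_per_arm, arms, seed):
        """Shuffle a fixed-quota list so each arm receives exactly n_per_arm subjects."""
        slots = [arm for arm in arms for _ in range(n_per_arm)]
        random.Random(seed).shuffle(slots)
        return slots

    investment_arms = balanced_assignment(250, ["Baseline", "AutoInvest"], seed=1)
    lottery_arms    = balanced_assignment(200, ["Plain", "Contingent"], seed=2)
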

Experiment Characteristics

Sample size: planned number of clusters
NA
Sample size: planned number of observations
1,400 subjects
Sample size (or number of clusters) by treatment arms
Investment Game: 500 subjects
- 250 for "Baseline"
- 250 for "AutoInvest"

Effort Task: 500 subjects

Binary Lottery Choice: 400 subjects
- 200 for "Plain"
- 200 for "Contingent"
Minimum detectable effect size for main outcomes (accounting for sample design and clustering)
IRB

Institutional Review Boards (IRBs)

IRB Name
NUS Departmental Ethics Review Committees
IRB Approval Date
2023-12-07
IRB Approval Number
NA
Analysis Plan

Analysis Plan Documents

Analysis_Plan.pdf

MD5: d11d676f62956bc9b4d6fd73f5628cf5

SHA1: e66c719b8be1dfa65980ed9323b795d3545d01c5

Uploaded At: June 01, 2025

Post-Trial

Post Trial Information

Study Withdrawal

There is information in this trial unavailable to the public.

Intervention

Is the intervention completed?
No
Data Collection Complete
Data Publication

Data Publication

Is public data available?
No

Program Files

Program Files
Reports, Papers & Other Materials

Relevant Paper(s)

Reports & Other Materials