Exogenous and Endogenous Social Reference Points

Last registered on February 03, 2023


Trial Information

General Information

Exogenous and Endogenous Social Reference Points
Initial registration date
August 06, 2018

Initial registration date is when the trial was registered.

It corresponds to when the registration was submitted to the Registry to be reviewed for publication.

First published
August 09, 2018, 1:43 AM EDT

First published corresponds to when the trial was first made public on the Registry after being reviewed.

Last updated
February 03, 2023, 11:24 AM EST

Last updated is the most recent time when changes to the trial's registration were published.



Primary Investigator


Other Primary Investigator(s)

PI Affiliation
PI Affiliation

Additional Trial Information

Start date
End date
Secondary IDs
Prior work
This trial does not extend or rely on any prior RCTs.
This document describes the design and the analysis plan for an experiment aimed at evaluating the effects of social reference points on effort provision. In the context of our study, social reference points are understood as peers with whom individuals compare themselves.
Broadly speaking, our experiment is aimed at better understanding a) whether exogenously assigned social reference points affect performance, b) whether endogenously chosen social reference points affect performance differently than exogenously assigned social reference points, c) how people choose their social reference points, and d) how social reference points compare to and interact with incentives. We also plan to study heterogeneity in treatment effects along several dimensions.
External Link(s)

Registration Citation

Schmitz, Jan, Julien Senn and Christian Zehnder. 2023. "Exogenous and Endogenous Social Reference Points." AEA RCT Registry. February 03. https://doi.org/10.1257/rct.3217-1.1
Former Citation
Schmitz, Jan, Julien Senn and Christian Zehnder. 2023. "Exogenous and Endogenous Social Reference Points." AEA RCT Registry. February 03. https://www.socialscienceregistry.org/trials/3217/history/170647
Experimental Details


After Amazon Mechanical Turk users select our task, they are randomized into different social reference point treatments.
Intervention Start Date
Intervention End Date

Primary Outcomes

Primary Outcomes (end points)
The key outcome variable is the number of points scored by the subject in round 2 of the experiment (effort2), where each point is scored by pressing 'a' then 'b'. Additionally, for each subject, we store data on their button presses over time. We will also use transformations of the key outcome variable: we will z-score it, i.e., z(effort2), and we will look at the relative increase in performance between round 1 and round 2, i.e., (effort2-effort1)/effort1.
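The registered transformations are simple to compute. A minimal sketch, using hypothetical data and helper names not taken from the trial's own code:

```python
import statistics

def transform_outcomes(effort1, effort2):
    """Compute the registered outcome transformations for a sample
    of subjects: z(effort2) and (effort2-effort1)/effort1.
    Illustrative sketch only; variable names are assumptions."""
    mean2 = statistics.mean(effort2)
    sd2 = statistics.stdev(effort2)
    # z-score of round-2 performance, standardized within the sample
    z_effort2 = [(e - mean2) / sd2 for e in effort2]
    # relative improvement between round 1 and round 2
    rel_increase = [(e2 - e1) / e1 for e1, e2 in zip(effort1, effort2)]
    return z_effort2, rel_increase

# Hypothetical round scores for three subjects
z, rel = transform_outcomes([800, 900, 1000], [900, 900, 950])
```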
Primary Outcomes (explanation)

Secondary Outcomes

Secondary Outcomes (end points)
We will also investigate whether the different treatments affect the probability of improving performance. To that end, we will replace the dependent variable in the analysis described above by a dummy variable taking value 1 if performance improves between period 1 and period 2, and 0 otherwise.
Secondary Outcomes (explanation)

Experimental Design

Experimental Design
Subjects will choose to participate in this study by selecting it on Amazon's Mechanical Turk service. Before choosing to participate, they will be shown a brief description of the study, which also states a guaranteed flat payment for a successful submission and a time estimate for completion. Once in the survey, participants will fill out a brief set of questions providing demographic information such as age, sex, and education. After this, participants will complete a button-pressing task for 5 minutes (round 1). The task consists of pressing 'a' and 'b' as often as possible within a given time period (5 minutes). After that, they are randomized into one of the treatments. In the different treatments, we manipulate social reference points and/or monetary incentives. Participants then complete a second 5-minute round of the a-b pressing task. There is no deception at any point in this task.
Experimental Design Details
More details in the PDF attached.
Randomization Method
Computer assigns individuals to treatments.
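The registration does not specify the assignment algorithm beyond "computer assigns." One common approach, shown here purely as a hypothetical sketch (arm names abbreviated from the sample-size section; the actual trial software may differ), is balanced random assignment:

```python
import random

def assign_treatments(n_subjects, treatments, seed=42):
    """Balanced random assignment at the individual level:
    repeat the treatment list to cover all subjects, then shuffle.
    Illustrative sketch only, not the trial's actual procedure."""
    rng = random.Random(seed)
    pool = (treatments * (n_subjects // len(treatments) + 1))[:n_subjects]
    rng.shuffle(pool)
    return pool

# Hypothetical: 10 subjects across five of the exogenous arms
arms = ["EXO-NO", "EXO-LO", "EXO-MID", "EXO-HI", "EXO-BEST"]
assignment = assign_treatments(10, arms)
```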

Randomization Unit
individual subject
Was the treatment clustered?

Experiment Characteristics

Sample size: planned number of clusters
Please see below. The number of clusters is the same as the number of observations.
Sample size: planned number of observations
We plan to recruit 6,500 participants in total (ideal). We want to reach a sample of:
• 500 subjects per treatment for the treatments EXO-NO, EXO-LO, EXO-MID, EXO-HI, EXO-BEST, EXO-BEST&PIECERATE, and PIECERATE
• 1,000 subjects per treatment for the treatments RANKONLY, ENDO, and ENDO&PIECERATE
so as to attain sufficiently precise estimates of productivity per treatment.
Sample size (or number of clusters) by treatment arms
It was determined that we wanted to reach a sample of at least 500 subjects per treatment so as to attain sufficiently precise estimates of productivity per treatment (in some cases, we have 1,000 participants in one treatment; see above and the PDF attached for details).
Minimum detectable effect size for main outcomes (accounting for sample design and clustering)
In DellaVigna and Pope (What motivates effort? REStud 2017), participants perform the same task for 10 consecutive minutes. They report an average number of clicks of 1936 and a standard deviation of 668 (for the treatment with no piece rate, the average is 1550 and the standard deviation is 720). In our experiment, task duration is set to 5 minutes. We therefore expect an average performance of about 900 and a standard deviation of approximately 350. Assuming that this is approximately the standard deviation of each treatment in the experiment and assuming a sample of 500 per treatment, we have a power of 80% to reject the null hypothesis of zero difference in average points between two treatments when the actual difference between the two treatments is 62 points (an effect size much smaller than what is documented in DellaVigna and Pope).
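The stated minimum detectable effect can be reproduced with the standard two-sample power formula, MDE = (z_{1-α/2} + z_{1-β}) · sd · √(2/n). A minimal sketch under the registration's assumptions (sd ≈ 350, 500 subjects per treatment, two-sided α = 0.05, 80% power):

```python
import math
from statistics import NormalDist

def mde_two_sample(sd, n_per_group, alpha=0.05, power=0.80):
    """Minimum detectable difference in means for a two-sided,
    two-sample comparison with equal group sizes and common sd."""
    z_alpha = NormalDist().inv_cdf(1 - alpha / 2)  # 1.96 for alpha = 0.05
    z_power = NormalDist().inv_cdf(power)          # 0.84 for 80% power
    return (z_alpha + z_power) * sd * math.sqrt(2 / n_per_group)

# Registration's assumptions: sd of 350 points, 500 subjects per arm
mde = mde_two_sample(sd=350, n_per_group=500)  # ≈ 62 points
```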

Institutional Review Boards (IRBs)

IRB Name
Ethics committee (HEC Lausanne)
IRB Approval Date
IRB Approval Number
Analysis Plan

Analysis Plan Documents


MD5: 22def8289399a88af5741c1c4baf8c31

SHA1: 66b6e89061dbcd3eedca703644546231035d8395

Uploaded At: August 06, 2018


Post Trial Information

Study Withdrawal

There is information in this trial unavailable to the public.


Is the intervention completed?
Data Collection Complete
Data Publication

Data Publication

Is public data available?

Program Files

Program Files
Reports, Papers & Other Materials

Relevant Paper(s)

Reports & Other Materials