Exogenous and Endogenous Social Reference Points
Last registered on August 09, 2018


Trial Information
General Information
Exogenous and Endogenous Social Reference Points
Initial registration date
August 06, 2018
Last updated
August 09, 2018 1:43 AM EDT
Primary Investigator
Other Primary Investigator(s)
PI Affiliation
PI Affiliation
Additional Trial Information
In development
Start date
End date
Secondary IDs
This document describes the design and the analysis plan for an experiment aimed at evaluating the effects of social reference points on effort provision. In the context of our study, social reference points are understood as peers with whom individuals compare themselves.
Broadly speaking, our experiment is aimed at better understanding a) whether exogenously assigned social reference points affect performance, b) whether endogenously chosen social reference points affect performance differently than exogenously assigned social reference points, c) how people choose their social reference points, and d) how social reference points compare to and interact with incentives. We also plan to study heterogeneity in treatment effects along several dimensions.
External Link(s)
Registration Citation
Schmitz, Jan, Julien Senn and Christian Zehnder. 2018. "Exogenous and Endogenous Social Reference Points." AEA RCT Registry. August 09. https://doi.org/10.1257/rct.3217-1.0
Former Citation
Schmitz, Jan, Julien Senn and Christian Zehnder. 2018. "Exogenous and Endogenous Social Reference Points." AEA RCT Registry. August 09. https://www.socialscienceregistry.org/trials/3217/history/32841
Experimental Details
After Amazon Mechanical Turk users select our task, they are randomized into different social reference point treatments.
Intervention Start Date
Intervention End Date
Primary Outcomes
Primary Outcomes (end points)
The key outcome variable is the number of points scored by the subject in round 2 of the experiment (effort2), where each point is scored by pressing 'a' and then 'b'. Additionally, for each subject, we store data on button presses over time. We will also use transformations of the key outcome variable: we will z-score it, i.e. z(effort2), and we will look at the relative increase in performance between round 1 and round 2, i.e. (effort2 - effort1)/effort1.
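The two transformations above can be sketched as follows; this is an illustrative sketch assuming effort scores are held as plain Python lists (the variable names effort1 and effort2 mirror the registration, everything else is hypothetical):

```python
from statistics import mean, pstdev

def z_score(scores):
    """Standardize a list of round-2 scores: z(effort2)."""
    m, s = mean(scores), pstdev(scores)
    return [(x - m) / s for x in scores]

def relative_increase(effort1, effort2):
    """(effort2 - effort1) / effort1, computed subject by subject."""
    return [(e2 - e1) / e1 for e1, e2 in zip(effort1, effort2)]
```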
Primary Outcomes (explanation)
Secondary Outcomes
Secondary Outcomes (end points)
We will also investigate whether the different treatments affect the probability of improving performance. To that end, we will replace the dependent variable in the analysis described above by a dummy variable taking value 1 if performance improves between period 1 and period 2, and 0 otherwise.
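As a minimal sketch, the improvement dummy described above could be constructed as follows (the function name is hypothetical; effort1 and effort2 are the per-subject round scores):

```python
def improved(effort1, effort2):
    """Dummy variable: 1 if a subject's performance improves between
    period 1 and period 2, and 0 otherwise."""
    return [1 if e2 > e1 else 0 for e1, e2 in zip(effort1, effort2)]
```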
Secondary Outcomes (explanation)
Experimental Design
Experimental Design
Subjects choose to participate in this study by selecting it on Amazon's Mechanical Turk service. Before choosing to participate, they are shown a brief description of the study, which also states a guaranteed flat payment for a successful submission and an estimated completion time. Once in the survey, participants fill out a brief set of demographic questions (e.g., age, sex, and education). Participants are then directed to complete a button-pressing task for 5 minutes (round 1). The task consists of pressing 'a' and then 'b' as often as possible within the given time period (5 minutes). After that, they are randomized into one of the treatments, in which we manipulate social reference points and/or monetary incentives. Participants then complete a second 5-minute round of the a-b pressing task. There is no deception at any point in this study.
Experimental Design Details
More details in the PDF attached.
Randomization Method
Computer assigns individuals to treatments.

Randomization Unit
individual subject
Was the treatment clustered?
Experiment Characteristics
Sample size: planned number of clusters
Please see below. The number of clusters is the same as the number of observations.
Sample size: planned number of observations
We plan to recruit 6,500 participants in total (ideal). We want to reach a sample of:
• 500 subjects per treatment for the treatments EXO-NO, EXO-LO, EXO-MID, EXO-HI, EXO-BEST, EXO-BEST&PIECERATE, and PIECERATE
• 1,000 subjects per treatment for the treatments RANKONLY, ENDO, and ENDO&PIECERATE
so as to attain sufficiently precise estimates of productivity per treatment.
Sample size (or number of clusters) by treatment arms
We wanted to reach a sample of at least 500 subjects per treatment so as to attain sufficiently precise estimates of productivity per treatment (in some cases, we have 1,000 participants per treatment; see above and the attached PDF for details).
Minimum detectable effect size for main outcomes (accounting for sample design and clustering)
In DellaVigna and Pope (What motivates effort? REStud 2017), participants perform the same task for 10 consecutive minutes. They report an average number of clicks of 1936 and a standard deviation of 668 (for the treatment with no piece rate, the average is 1550 and the standard deviation is 720). In our experiment, task duration is set to 5 minutes. We therefore expect an average performance of about 900 and a standard deviation of approximately 350. Assuming that this is approximately the standard deviation of each treatment in the experiment and assuming a sample of 500 per treatment, we have a power of 80% to reject the null hypothesis of zero difference in average points between two treatments when the actual difference between the two treatments is 62 points (an effect size much smaller than what is documented in DellaVigna and Pope).
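The 62-point figure follows from the standard minimum-detectable-effect formula for a two-sided, two-sample comparison of means with equal arm sizes. A minimal sketch, assuming a common standard deviation of 350 and 500 subjects per treatment as stated above (function name and defaults are illustrative):

```python
from statistics import NormalDist

def mde_two_sample(sd, n_per_arm, alpha=0.05, power=0.80):
    """Minimum detectable difference in means for a two-sided two-sample
    z-test with equal arm sizes and a common standard deviation sd."""
    z = NormalDist()
    z_alpha = z.inv_cdf(1 - alpha / 2)  # ~1.96 for alpha = 0.05
    z_power = z.inv_cdf(power)          # ~0.84 for 80% power
    return (z_alpha + z_power) * sd * (2 / n_per_arm) ** 0.5

# Registered assumptions: sd ~ 350, 500 subjects per treatment.
print(round(mde_two_sample(350, 500)))  # -> 62
```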
IRB Name
Ethics committee (HEC Lausanne)
IRB Approval Date
IRB Approval Number
Analysis Plan

There are documents in this trial that are not available to the public; access to this information can be requested through the registry.
Post Trial Information
Study Withdrawal
Is the intervention completed?
Is data collection complete?
Data Publication
Data Publication
Is public data available?
Program Files
Program Files
Reports and Papers
Preliminary Reports
Relevant Papers