The American Economic Association's registry for randomized controlled trials
Understanding Peer Pressure in Education
Last registered on December 13, 2015
Initial registration date
December 13, 2015
December 13, 2015 5:10 PM EST
United States of America
University of Chicago
Contact Primary Investigator
Other Primary Investigator(s)
University of Pennsylvania
Additional Trial Information
In this project, we combine theory with field experiments to identify the relative importance of different mechanisms underlying peer pressure in education in different school settings.
Bursztyn, Leonardo, Georgy Egorov and Robert Jensen. 2015. "Understanding Peer Pressure in Education." AEA RCT Registry. December 13.
Sponsors & Partners
Intervention Start Date
Intervention End Date
Primary Outcomes (end points)
Our key outcome variable of interest is the student's decision to sign up for the SAT prep package lottery. It is a dummy variable that is equal to one if the student signs up and zero otherwise.
Primary Outcomes (explanation)
Secondary Outcomes (end points)
Secondary Outcomes (explanation)
Based on a simple model, we hypothesize that when both schooling effort and output are potentially observable to peers, students may lower their educational investment to avoid social penalties, according to two different mechanisms. We show theoretically that if students are concerned about appearing "smart" in front of their peers, or if students care about appearing "cool"/social, the result might be a lower level of schooling effort when compared to a situation where these concerns do not exist.
We designed a simple field experiment, in the spirit of the one used by Bursztyn and Jensen (2015), to identify the importance of these two mechanisms. We plan to visit high schools in the LA area, targeting 11th grade classrooms. Students will be offered the chance to sign up for a lottery to access a free online SAT prep package. We will randomly vary at the student level:
1) whether students believe their decision to sign up and their score on an individual diagnostic test will be observed by their classmates;
2) the probability (25% vs 75%) that students gain access to the package conditional on signing up for the lottery.
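The student-level 2x2 randomization described above (visibility of the decision crossed with the lottery win probability) can be sketched as follows. This is a hypothetical illustration, not the study's actual assignment code; the function name and the form-stack mechanics are assumptions.

```python
import random

# The four treatment arms: (visibility) x (win probability).
ARMS = [(visibility, prob)
        for visibility in ("public", "private")
        for prob in (0.25, 0.75)]

def assign_forms(n_students, seed=0):
    """Build an approximately balanced stack of sign-up forms and shuffle it,
    mimicking distribution to students in the order they sit in the room."""
    rng = random.Random(seed)
    # One full set of arms per group of four students...
    stack = ARMS * (n_students // len(ARMS))
    # ...plus randomly drawn arms if the class size is not divisible by four.
    stack += rng.sample(ARMS, n_students % len(ARMS))
    rng.shuffle(stack)
    return stack

forms = assign_forms(28)
print(forms[:3])
```

Shuffling a balanced stack (rather than drawing each form independently) keeps arm sizes equal within a classroom, which matches the form-distribution procedure described in the design details below.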
Our hypotheses are:
1) "public" sign up rates should be lower than "private" sign up rates;
2) changes in the probability of winning the lottery should have a small or null effect on the "private" sign up rates;
3) changes in the probability of winning the lottery are expected to have an effect on the "public" sign up rates:
a) in settings where trying to appear cool/social is the predominant mechanism, a higher probability should lead to a higher sign up rate;
b) in settings where trying to appear smart is the predominant mechanism, a higher probability should lead to a lower increase in the sign up rate and potentially even a negative effect on the sign up rate.
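The hypotheses above reduce to comparisons of sign-up rates across cells of the 2x2 design. A minimal sketch of those comparisons, using made-up illustrative decisions (the cell data and numbers are assumptions, not study results):

```python
# Each entry is a dummy: 1 if the student signed up, 0 otherwise (illustrative).
cells = {
    ("private", 0.25): [1, 1, 0, 1, 1, 0, 1, 1],
    ("private", 0.75): [1, 1, 1, 0, 1, 1, 0, 1],
    ("public",  0.25): [0, 1, 0, 0, 1, 0, 0, 1],
    ("public",  0.75): [1, 1, 0, 1, 0, 1, 1, 0],
}

def signup_rate(decisions):
    return sum(decisions) / len(decisions)

# Hypothesis 1: "public" sign-up rates below "private" (pooling win probabilities).
private_rate = signup_rate(cells[("private", 0.25)] + cells[("private", 0.75)])
public_rate = signup_rate(cells[("public", 0.25)] + cells[("public", 0.75)])

# Hypothesis 2: little effect of the win probability in the "private" condition.
private_prob_effect = signup_rate(cells[("private", 0.75)]) - signup_rate(cells[("private", 0.25)])

# Hypothesis 3: the sign of the effect in the "public" condition distinguishes
# the "cool" mechanism (positive) from the "smart" mechanism (small or negative).
public_prob_effect = signup_rate(cells[("public", 0.75)]) - signup_rate(cells[("public", 0.25)])

print(private_rate, public_rate, private_prob_effect, public_prob_effect)
```

In practice these contrasts would be estimated with standard errors (e.g., a regression of the sign-up dummy on treatment indicators and their interaction); the sketch only shows which cell differences map to which hypothesis.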
We will visit schools where we expect different mechanisms to be dominant:
1) schools in low-income areas: we expect "signaling that one is cool" to be the dominant mechanism;
2) schools in middle or higher income areas: we expect "signaling that one is smart" to be the dominant mechanism.
Experimental Design Details
The differing sign up forms will be shuffled and distributed to students in the order they sit in the room.
Students, within classrooms
Was the treatment clustered?
Sample size: planned number of clusters
Approximately 16-18 classrooms (the exact number will depend on agreements with each school).
Sample size: planned number of observations
500 (approximate, since classroom sizes will vary)
Sample size (or number of clusters) by treatment arms
About 60-65 observations for each of the four treatment arms in "signaling that one is cool" schools, and about 60-65 observations per treatment arm in "signaling that one is smart" schools (sample sizes are approximate, since classroom sizes will vary).
Minimum detectable effect size for main outcomes (accounting for sample design and clustering)
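The registry leaves this field blank, but the planned cell sizes above allow a back-of-the-envelope sketch. The following is a rough illustration under simplifying assumptions (two-sided 5% test, 80% power, base sign-up rate of 0.5, and no adjustment for classroom-level clustering), not the study's own power calculation:

```python
import math

def mde_two_proportions(n_per_arm, p_base=0.5, alpha_z=1.96, power_z=0.84):
    """Minimum detectable difference in sign-up rates between two arms,
    using the normal approximation for a two-proportion comparison."""
    se = math.sqrt(2 * p_base * (1 - p_base) / n_per_arm)
    return (alpha_z + power_z) * se

# With roughly 62 students per arm (the approximate cell sizes above):
print(round(mde_two_proportions(62), 3))
```

With ~62 students per arm this gives a minimum detectable difference of roughly 25 percentage points for a single pairwise comparison; accounting for clustering within classrooms would make the detectable effect larger.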
Supporting Documents and Materials
INSTITUTIONAL REVIEW BOARDS (IRBs)
IRB Approval Date
IRB Approval Number
Post Trial Information
Is the intervention completed?
Is data collection complete?
Is public data available?
Reports, Papers & Other Materials
REPORTS & OTHER MATERIALS