How does relative performance feedback affect beliefs and academic decisions?

Pre-Trial

Trial Information
General Information
Title
How does relative performance feedback affect beliefs and academic decisions?
RCT ID
AEARCTR-0006970
Initial registration date
December 22, 2020
Last updated
December 23, 2020 6:43 AM EST
Location(s)
Region
Primary Investigator
Affiliation
Other Primary Investigator(s)
Additional Trial Information
Status
Completed
Start date
2018-01-15
End date
2018-10-31
Secondary IDs
Abstract
I design field and lab-in-the-field experiments at a test preparation institute in Colombia to understand how students incorporate relative performance feedback into their beliefs and decisions. I focus on a high-stakes context where relative performance beliefs are particularly consequential (college entrance exams) and aim to answer the following question: how do beliefs about academic ability and academic choices change when students of different ability levels learn about their performance relative to their peers? I conduct an RCT in which I randomly assign students taking the course either to receive or not to receive feedback about their quartile in the score distribution of weekly practice tests. I elicit beliefs about their perceived quartiles after each practice test and provide the treatment group with an above-/below-median signal to study belief updating. Combining elicited beliefs with administrative data from the course and university admissions, I study (i) belief updating, (ii) academic investments, (iii) performance, and (iv) academic decisions.
External Link(s)
Registration Citation
Citation
Franco, Catalina. 2020. "How does relative performance feedback affect beliefs and academic decisions?" AEA RCT Registry. December 23. https://doi.org/10.1257/rct.6970-1.0.
Sponsors & Partners

Experimental Details
Interventions
Intervention(s)
The intervention consists of two parts: (i) I collect beliefs about relative performance in practice tests from all participants in a weekly lab-in-the-field experiment, and (ii) I provide relative performance feedback to a randomly selected sample of students preparing to take a college entrance exam at a test preparation institute in Medellin, Colombia. After each practice test that students take as part of the preparation course, I elicit the probabilities they assign to falling in each of the four quartiles of the math and reading practice-test score distributions. I modify the institute's results report to provide relative performance feedback to treated students.
Intervention Start Date
2018-02-05
Intervention End Date
2018-04-30
Primary Outcomes
Primary Outcomes (end points)
1. Indicator for whether students take the entrance exam
2. Indicator for whether students take the weekly practice tests
3. Performance in the college entrance exam and practice tests
4. Indicator for whether the applicant was admitted to college
5. Indicator for whether the applicant registers for two consecutive admission cycles
6. Self-reported study time
7. Accuracy of relative performance beliefs
Primary Outcomes (explanation)
Performance in the test is measured by the university using an algorithm that converts the number of correct answers into a standardized score. Accuracy of relative performance beliefs is measured from the probabilities that students assign to the four quartiles of performance in math and reading. If a student assigns the highest probability to the quartile they are actually in, they are classified as having correct beliefs. If the sum of probabilities assigned to quartiles worse than the one they are in is the largest, they are classified as underestimating; if the sum assigned to better quartiles is the largest, they are classified as overestimating.
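
To make the classification rule concrete, here is a minimal Python sketch of one way to implement it. The function name, the quartile ordering (1 = top of the distribution), and the tie-breaking are illustrative assumptions, not the study's actual code.

```python
def classify_beliefs(probs, actual_quartile):
    """Classify relative-performance beliefs as correct, over-, or underestimating.

    probs: probabilities assigned to quartiles 1-4 (1 = top, an assumption).
    actual_quartile: the quartile (1-4) the student actually falls in.
    """
    assert len(probs) == 4 and abs(sum(probs) - 1.0) < 1e-6
    i = actual_quartile - 1
    if probs[i] == max(probs):
        return "correct"                     # highest probability on own quartile
    p_better = sum(probs[:i])                # mass on quartiles above their own
    p_worse = sum(probs[i + 1:])             # mass on quartiles below their own
    return "overestimating" if p_better > p_worse else "underestimating"

# Example: a student actually in quartile 3 who puts most mass on quartiles 1-2
print(classify_beliefs([0.4, 0.3, 0.2, 0.1], actual_quartile=3))  # overestimating
```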
Secondary Outcomes
Secondary Outcomes (end points)
Secondary Outcomes (explanation)
Experimental Design
I randomly assign half of the students who consented to participate to a treatment group that received weekly relative performance feedback in the two subjects covered by the exam over 8 weeks. To reduce sampling variability and to allow heterogeneity analysis, the randomization was stratified on gender, whether the student had taken the exam before, quartile in the initial practice test, and type of course enrolled in (morning, afternoon/evening, weekends, pre-medicine, or joint preparation for two entrance exams at different universities).
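
A minimal Python sketch of this kind of individual-level stratified randomization follows; the column names and seed are hypothetical placeholders rather than the study's actual code.

```python
import numpy as np
import pandas as pd

rng = np.random.default_rng(2018)  # fixed seed so the assignment is reproducible

def stratified_assignment(df, strata=("gender", "took_exam_before",
                                      "initial_quartile", "course_type")):
    """Assign half of each stratum to treatment (treated = 1).

    The stratum column names are illustrative; odd-sized strata get the
    extra student in the control group.
    """
    df = df.copy()
    df["treated"] = 0
    for _, idx in df.groupby(list(strata)).groups.items():
        shuffled = rng.permutation(idx)          # random order within the stratum
        df.loc[shuffled[: len(shuffled) // 2], "treated"] = 1
    return df

# Example with a toy roster: one student per gender stratum is treated
roster = pd.DataFrame({
    "gender": ["F", "F", "M", "M"],
    "took_exam_before": [0, 0, 0, 0],
    "initial_quartile": [1, 1, 1, 1],
    "course_type": ["morning"] * 4,
})
print(stratified_assignment(roster)["treated"].tolist())  # e.g. [1, 0, 0, 1]
```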

To deliver the relative performance feedback, I separately compute quartiles of the math and reading practice-test score distributions. The quartiles are calculated from the scores of all students taking the same weekly practice test. To circumvent the problem that ties in practice-test scores may produce quartiles of unequal sizes, students at the boundary between two quartiles are randomly assigned to one or the other. The decision to provide feedback in terms of quartiles rather than a finer partition is driven by the belief elicitation task, which would be much more time-consuming and error-prone if students had to assign probabilities to deciles or ventiles of the score distribution.
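
The random tie-breaking can be implemented by adding a random jitter before ranking; below is a short sketch under that assumption (with quartile 1 as the bottom group here), not the study's actual code.

```python
import numpy as np

rng = np.random.default_rng(0)  # illustrative seed

def equal_size_quartiles(scores):
    """Label scores with quartiles 1-4 (1 = bottom, 4 = top, an assumption),
    breaking ties at random so the four groups are as equal-sized as possible."""
    scores = np.asarray(scores, dtype=float)
    jitter = rng.random(len(scores))         # random tie-breaker for equal scores
    order = np.lexsort((jitter, scores))     # sort by score, then by jitter
    ranks = np.empty(len(scores), dtype=int)
    ranks[order] = np.arange(len(scores))    # 0 = lowest score after tie-breaking
    return 1 + (ranks * 4) // len(scores)    # group sizes differ by at most one

# Five identical scores get split across quartiles at random
print(equal_size_quartiles([70, 70, 70, 70, 70]))  # e.g. [2 1 4 1 3]
```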

After every practice test except the initial one, I administered a survey eliciting students' expected absolute and relative performance, hours of study in the previous week, and the perceived difficulty of the test; this yields 10 rounds of prior belief elicitation. Surveys were on paper or online depending on whether the practice test was in person or online. Posterior beliefs were collected online only, through the experimental performance report, over 8 rounds.
Experimental Design Details
Randomization Method
The randomization was performed at the individual level using a computer and based on individual characteristics from administrative records at the institute.
Randomization Unit
The unit of randomization is the individual, both because the performance reports are customized for every student and to increase statistical power. Spillover concerns may arise because students are organized into classrooms at the beginning of the course. However, several features of this setting limit the scope for spillovers. First, the performance report was designed to be delivered online, and students were instructed to check it on a computer rather than on a phone so they could complete the belief elicitation task and play the game revealing how much they could earn if selected in the weekly raffle. Because the institute does not have a computer lab that students can use, students most likely checked the report at home. Second, students are assigned to classrooms randomly, so it is unlikely that they knew each other before the preparation course; they may make friends over time, but the course is short (3 months), so they probably do not spend much time together after class. Third, inferring one's own quartile from another student's report is difficult because the two students would need essentially the same score.
Was the treatment clustered?
No
Experiment Characteristics
Sample size: planned number of clusters
1,000 students.
Sample size: planned number of observations
1,000 students.
Sample size (or number of clusters) by treatment arms
500 students in the treatment group (receiving relative performance feedback) and 500 in the control group (receiving absolute scores only)
Minimum detectable effect size for main outcomes (accounting for sample design and clustering)
IRB
Institutional Review Boards (IRBs)
IRB Name
University of Michigan Health Sciences and Behavioral Sciences Institutional Review Board (IRB-HSBS)
IRB Approval Date
2017-02-06
IRB Approval Number
HUM00124049
Post-Trial
Post Trial Information
Study Withdrawal
Intervention
Is the intervention completed?
No
Is data collection complete?
Data Publication
Is public data available?
No
Program Files
Reports, Papers & Other Materials
Relevant Paper(s)
Reports & Other Materials