Incentives: Evidence from Uganda

Last registered on July 03, 2019

Pre-Trial

Trial Information

General Information

Title
Incentives: Evidence from Uganda
RCT ID
AEARCTR-0002781
Initial registration date
July 03, 2019

First published
July 03, 2019, 3:24 PM EDT

Locations

Primary Investigator

Affiliation
City, University of London

Other Primary Investigator(s)

Additional Trial Information

Status
Completed
Start date
2011-09-01
End date
2013-08-15
Secondary IDs
Abstract
Throughout our lives, we are routinely offered different incentives as a way to motivate us. Many researchers have studied the effects of incentives on people’s performance, but incentives can also have important psychological consequences in terms of stress and happiness. The current project contributes to the literature by explicitly accounting for this performance-versus-well-being trade-off introduced by incentives. In total, four different incentives and their interactions were provided to students. Such a cross-cutting design allows me to study the complementarities of the incentives. An additional aim of the project is to study the role of feedback in shaping subjects' expectations. Finally, having the friendship network structure of each student within his or her class allows me to study how various competitive environments influence group formation in classes at different school levels.

External Link(s)

Registration Citation

Citation
Celik Katreniak, Dagmara. 2019. "Incentives: Evidence from Uganda." AEA RCT Registry. July 03. https://doi.org/10.1257/rct.2781-1.0
Former Citation
Celik Katreniak, Dagmara. 2019. "Incentives: Evidence from Uganda." AEA RCT Registry. July 03. https://www.socialscienceregistry.org/trials/2781/history/49308
Sponsors & Partners

There is information in this trial unavailable to the public.
Experimental Details

Interventions

Intervention(s)
I implement two types of social comparative feedback regimes, within-class and across-class group comparisons, and two types of incentive regimes, financial and reputational rewards, and I allow for their interaction. In total, eight different treatment groups are formed. The sample consists of more than 5,000 primary and secondary school students in Uganda, who were repeatedly tested and interviewed over one academic year. In total, five testing rounds were administered between January and December 2012.
Intervention Start Date
2012-01-01
Intervention End Date
2012-12-31

Primary Outcomes

Primary Outcomes (end points)
Performance in Mathematics and English; students' expectations about their performance in Mathematics and English, collected before and after every testing round (five rounds in total); students' subjective levels of effort exerted, measured before and after every testing round; students' subjective happiness before and after every testing round; and students' overall happiness and stress, measured by standard psychological questionnaires. Finally, I have data on the friends of each student within his or her class before and after the experiment.
Primary Outcomes (explanation)

Secondary Outcomes

Secondary Outcomes (end points)
Secondary Outcomes (explanation)

Experimental Design

Experimental Design
Students were randomly assigned to two feedback treatment groups and a control group. Students in the within-class feedback group (within) were randomly divided into units of three to four classmates and evaluated as units (using unit averages) within their respective classes. Students in the within group received feedback about their own performance, the performance of their unit members, and the relative position of their unit within their class. Moreover, they received detailed information about whether they and their unit's position had improved or worsened from one testing round to the next.
Students in the across-class feedback group (across) were evaluated as a class (using the class average) and were compared to other classes of the same grade in the district. Students in the across group received feedback about their own performance, the average performance of their class, the relative position of their class with respect to other classes, and whether they and/or their class had improved or worsened between two subsequent testing rounds.
Students in the control group received no feedback and only took the exams. In order to motivate all students to participate in the experiment, we began each visit by reminding students that the exams serve as additional practice for the national leaving examinations (which are compulsory for seventh graders in primary schools and for fourth and sixth graders in secondary schools).
In order to study the effects of monetary and non-monetary rewards, students were orthogonally re-randomized at the school level into a tournament for financial or reputational rewards. The re-randomization took place after round 4, and students had no information about the tournaments beforehand. The qualification criteria differed depending on the initial randomization into treatments, but the general rule was to reward the 15% best-performing students/units/classes and the 15% most-improved students/units/classes. In order to avoid confusion, students were given exact information about the number of winning students/units/classes. Therefore, all students, regardless of class size or treatment allocation, had the same probability of winning. Students in the monetary reward treatment groups could win 2,000 UGX (about 0.80 USD). Students in the reputational reward group could receive a certificate, and their names were announced in the most popular local newspaper in the region, Bukedde.
Overall, the orthogonal randomization divided the sample into 9 groups: one control group, four sole treatment groups (i.e., one type of treatment only), and four combined treatment groups (two types of feedback interacted with two types of rewards). Such a cross-cutting design allows me to compare the impact of feedback and reward incentives on students' performance, expectations, and well-being, and to study the complementarities of feedback and rewards, as enumerated in the sketch below.
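For concreteness, the nine cells implied by this cross-cutting design correspond to feedback (none, within-class, across-class) crossed with rewards (none, financial, reputational); the short Python sketch below simply enumerates them, and the labels are illustrative rather than the study's internal coding.

```python
from itertools import product

# Illustrative labels only -- not the study's internal coding.
feedback_arms = ["none", "within-class", "across-class"]
reward_arms = ["none", "financial", "reputational"]

cells = list(product(feedback_arms, reward_arms))
assert len(cells) == 9  # 1 control + 4 sole + 4 combined treatment cells

for fb, rw in cells:
    kind = ("control" if fb == rw == "none"
            else "sole" if "none" in (fb, rw)
            else "combined")
    print(f"feedback={fb:<13} reward={rw:<13} -> {kind}")
```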
Experimental Design Details
Randomization Method
In order to increase balance between the control and treatment groups, the sample was stratified along three dimensions: school area (the sample was divided into four areas differing in their level of remoteness), average school performance in national testing (above or below average), and student level (grades 6 and 7 of primary education, and grades 1 to 4 of secondary education). The randomization was done in two stages. First, after stratifying the sample by school performance and area, I randomized the whole sample of 53 schools into treatment and control groups in a 2:1 ratio, which resulted in 36 (17) schools being assigned to the treatment (control) group. In the second stage, I randomly divided the classes of the treatment schools into within-class feedback and across-class feedback groups in a 1:1 ratio (class-level randomization). In this scenario, no student in a control-group school received any treatment, and students in the treatment-group schools received either within-class or across-class feedback, depending on the type of intervention their class was randomized into. Exposure to the treatment is thus the only systematic difference between the control and treatment groups.
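As an illustration of this two-stage stratified assignment, a minimal sketch is given below; the school frame, the number of classes per school, and the random seed are hypothetical placeholders rather than the study's actual data or code.

```python
import numpy as np
import pandas as pd

rng = np.random.default_rng(2012)  # seed chosen for illustration only

# Hypothetical school frame: 53 schools with the two school-level stratifiers
# described above (area of varying remoteness; above/below-average performance
# in national testing). In the study these came from administrative records.
schools = pd.DataFrame({
    "school_id": np.arange(53),
    "area": rng.integers(0, 4, size=53),
    "above_avg": rng.integers(0, 2, size=53),
})

# Stage 1: within each area x performance stratum, allocate schools to the
# treatment and control arms in a 2:1 ratio.
def allocate_2_to_1(stratum):
    n_treat = round(len(stratum) * 2 / 3)
    arms = ["treatment"] * n_treat + ["control"] * (len(stratum) - n_treat)
    return pd.Series(rng.permutation(arms), index=stratum.index)

schools["school_arm"] = (
    schools.groupby(["area", "above_avg"], group_keys=False).apply(allocate_2_to_1)
)

# Stage 2: among treatment schools only, randomize classes 1:1 into the
# within-class vs. across-class feedback groups (three classes per school
# and pooling classes across schools are simplifications for this sketch).
treat_ids = schools.loc[schools["school_arm"] == "treatment", "school_id"].to_numpy()
classes = pd.DataFrame({"school_id": np.repeat(treat_ids, 3)})
n_cls = len(classes)
feedback = ["within"] * (n_cls // 2) + ["across"] * (n_cls - n_cls // 2)
classes["feedback_arm"] = rng.permutation(feedback)

print(schools["school_arm"].value_counts())
print(classes["feedback_arm"].value_counts())
```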
Randomization Unit
The randomization was done in two stages: first at the school level, to randomize schools into feedback-treatment and control groups, and then at the class level, to randomize classes within treatment schools into within-class versus across-class feedback groups.
Was the treatment clustered?
No

Experiment Characteristics

Sample size: planned number of clusters
In total, I had 53 schools in my sample: 31 primary and 22 secondary schools. In primary schools I included students in the 6th and 7th grades, while in secondary schools I included O-level students (grades 1 to 4). In total, 150 classes participated.
Sample size: planned number of observations
I had 7,209 students who participated in the first testing round. The number changes from round to round due to new admissions and high absence and drop-out rates.
Sample size (or number of clusters) by treatment arms
There are 17 schools in the control group and 36 in the treatment group (combining the within-class and across-class feedback treatment groups). In terms of classes, there are 48 classes in the control group, 50 in the within-class feedback group, and 52 in the across-class feedback group. In terms of students present at the baseline testing, I had 2,343 students in the control group, 2,395 in the within-class feedback group, and 2,412 in the across-class feedback group.
Minimum detectable effect size for main outcomes (accounting for sample design and clustering)
The minimum detectable effect size equals 0.15.
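The registration reports this as a single number without the underlying power parameters. As a rough illustration of how an MDE that accounts for clustering can be computed, the sketch below uses the standard design-effect formula with conventional significance and power levels; the intra-cluster correlation and cluster configuration are assumptions, not figures taken from this registration.

```python
from scipy.stats import norm

def mde_clustered(n_clusters, cluster_size, icc, p_treat=0.5, alpha=0.05, power=0.80):
    """Minimum detectable effect (in standard-deviation units) for a
    cluster-randomized design, using the normal approximation and the
    design effect 1 + (m - 1) * icc for equal cluster sizes."""
    z = norm.ppf(1 - alpha / 2) + norm.ppf(power)
    deff = 1 + (cluster_size - 1) * icc
    n_total = n_clusters * cluster_size
    return z * (deff / (p_treat * (1 - p_treat) * n_total)) ** 0.5

# Illustrative inputs only: roughly 150 classes of ~48 students and an assumed ICC.
print(round(mde_clustered(n_clusters=150, cluster_size=48, icc=0.10), 3))
```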
IRB

Institutional Review Boards (IRBs)

IRB Name
IRB Approval Date
IRB Approval Number

Post-Trial

Post Trial Information

Study Withdrawal

There is information in this trial unavailable to the public.

Intervention

Is the intervention completed?
Yes
Intervention Completion Date
August 15, 2013, 12:00 +00:00
Data Collection Complete
Yes
Data Collection Completion Date
August 15, 2013, 12:00 +00:00
Final Sample Size: Number of Clusters (Unit of Randomization)
52 schools, 146 classes
Was attrition correlated with treatment status?
Yes
Final Sample Size: Total Number of Observations
5,108 students present in the first and fifth (last) testing round.
Final Sample Size (or Number of Clusters) by Treatment Arms
There are 16 schools in the control group and 35 schools in the treatment group (combining the within-class and across-class feedback treatments; some schools have only the within-class treatment, some only the across-class treatment, and some both). In terms of classes, there are 44 classes in the control group, 50 in the within-class feedback group, and 49 in the across-class feedback group. In terms of students present in both the first and last testing rounds, there were 1,462 in the control group, 1,809 in the within-class feedback group, and 1,837 in the across-class feedback group.
Data Publication

Data Publication

Is public data available?
No

There is information in this trial unavailable to the public.

Program Files

Program Files
No
Reports, Papers & Other Materials

Relevant Paper(s)

Abstract
In this paper, I study the persistence of overconfidence. I provide evidence from a large-scale field experiment (N=5,102) on whether providing detailed, repeated feedback to students evaluated in groups in different types of tournaments helps them predict their performance more accurately. Students in my experiment are strongly overconfident, and group feedback only helps them offset the inflated beliefs created by task repetition. Feedback exhibits diminishing returns to information. Improving self-assessment is associated with higher performance but lower happiness. Finally, the inability to accurately assess one's own performance may serve as an additional channel explaining gender differences in Math performance.
Citation
Katreniak Celik, Dagmara. 2019. “Persistent overconfidence: evidence from a field experiment in Uganda.” Working paper.
Abstract
Throughout our lives, we are routinely offered different incentives as a way to motivate us, such as absolute and relative performance feedback, and symbolic, reputation or financial rewards. Many researchers have studied the effects of one or more of these incentives on how people change their performance. However, there can also be important psychological outcomes in terms of stress and happiness. The current paper contributes to the literature by explicitly accounting for this performance-versus-well-being trade-off introduced by incentives. I implement two types of social comparative feedback regimes, within and across-class group comparisons, and two types of incentive regimes, financial and reputation rewards. The results show that rewards can lead to an increase in students’ performance up to 0.28 standard deviations (depending on whether students received feedback and what type), but at a cost of higher stress and lower happiness, whereas comparative feedback alone (without rewards) increases performance only mildly, by 0.09 to 0.13 standard deviations, but without hurting students’ well-being. More stressed students exert less effort, perform worse and attrite by 29 percent more compared to those who are stressed minimally. Furthermore, the results help to identify gender-specific responses to different incentive schemes. Boys strongly react to rewards with or without feedback. In contrast, girls react to rewards only if they are also provided with feedback. Finally, the paper contributes to the expanding literature on incentivized interventions in developing countries by using a rich dataset of more than 5000 primary and secondary school students in Uganda, who were repeatedly tested and interviewed over a full academic year.
Citation
Katreniak Celik, Dagmara. 2019. “Dark side of incentives: evidence from a field experiment in Uganda.” CERGE-EI

Reports & Other Materials