Experimental Design
The evaluation used an experimental phase-in design. Participants were randomly assigned to one of two cohorts: a treatment group that began the program immediately, and a control group that began it roughly four months later on average, at approximately the time of the follow-up survey. We are therefore only able to report short-run effects of training. Two-thirds of the 1,900 eligible youth were assigned to the treatment group and the remaining third to the control group.
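As a purely illustrative sketch of this design, the assignment can be represented as a simple individual-level randomization at a roughly 2:1 ratio. The function below is hypothetical; the text does not describe the actual randomization procedure (for example, whether it was stratified by district), so any stratification is omitted here.

```python
import random

def assign_phase_in(ids, treat_share=2/3, seed=42):
    """Illustrative 2:1 randomization into an immediate-training (treatment)
    cohort and a delayed-training (control) cohort. Hypothetical sketch only."""
    rng = random.Random(seed)
    shuffled = list(ids)
    rng.shuffle(shuffled)
    cutoff = round(len(shuffled) * treat_share)
    treated = set(shuffled[:cutoff])
    return {i: ("treatment" if i in treated else "control") for i in ids}

# 1,900 eligible youth, roughly two-thirds assigned to immediate training
assignment = assign_phase_in(range(1900))
print(sum(v == "treatment" for v in assignment.values()))  # 1267 of 1900
```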
The baseline survey was collected in March-April 2010 on a random subset of the selected youth. We surveyed 1,122 of the original 1,900 individuals, of whom 363 were in the control group and 759 were in the treatment group. Summary statistics from the baseline survey indicate that the randomization was successful in achieving balance across the treatment and control groups. Trainees reported to training between August 2010 and May 2011; the specific start date varied by district and by MC. Training lasted three months on average, but its length varied with the type of skill being taught.
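The balance claim is typically supported by comparing baseline means across arms, covariate by covariate. The following is a minimal sketch of such a check, using hypothetical covariate names and Welch two-sample t-tests; it is not the paper's actual balance test specification.

```python
import pandas as pd
from scipy import stats

def balance_table(df, covariates, arm_col="treatment"):
    """Compare baseline means of treatment vs. control and report
    Welch t-test p-values. Covariate names here are hypothetical."""
    rows = []
    for cov in covariates:
        treat = df.loc[df[arm_col] == 1, cov].dropna()
        ctrl = df.loc[df[arm_col] == 0, cov].dropna()
        t_stat, p_val = stats.ttest_ind(treat, ctrl, equal_var=False)
        rows.append({"covariate": cov,
                     "treatment_mean": treat.mean(),
                     "control_mean": ctrl.mean(),
                     "p_value": p_val})
    return pd.DataFrame(rows)

# Example usage with a hypothetical baseline data frame:
# print(balance_table(baseline, ["age", "years_schooling", "employed"]))
```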
The follow-up survey was conducted in June-August 2011. It included questions on time use, employment, psychological well-being, risky sexual behavior, and trainee assessments of training quality. To increase the sample size, we returned to the original pool of 1,900 youth who had been selected to participate in the study. The sample at follow-up consists of the 755 baseline respondents whom we were able to locate at follow-up, plus 274 new participants (181 treatment, 93 control), for a total of 1,029 respondents.
In addition, we surveyed all MCs regarding their experience as trainers and their perceptions of each trainee's skills, diligence, effort, attendance, and so on. Finally, we conducted a brief qualitative survey of the implementing agency's desk officers about their experience with the intervention, to inform future program design.