Minimum detectable effect size for main outcomes (accounting for sample design and clustering)
Our power analysis indicates at least 80% power for the main study outcome, teaching quality, at a significance level of p < 0.05. We used Optimal Design software to determine the minimum detectable effect sizes (Raudenbush et al., 2011). For teacher-level teaching practices during newly qualified teachers' first year of permanent posting, we assume an R-squared of 0.20; this is conservative given the extensive baseline data that will be collected on teachers. With 137 teachers across the two treatment conditions, the minimum detectable effect size (MDES) is 0.43 standard deviations. Notably, school-based intervention research finds effect sizes of less intensive teacher training programmes on similar classroom outcomes ranging from 0.50 to 0.89 standard deviations (Durlak et al., 2011; Brown et al., 2010; Raver et al., 2009; Rivers et al., 2013).
The statistical power analysis for the child-level outcomes, which we hope to add to this study, also provides at least 80% power at a significance level of p < 0.05. We assume an R-squared of 0.20, reflecting the covariates that will be measured, and an intra-class correlation of children within schools of ρ = 0.15 (based on estimates from another study of ours in the Greater Accra Region of Ghana). Under these assumptions, with 15 children per classroom in each of the 137 schools, the MDES is 0.22 standard deviations. This is a reasonable estimate given the intensity of the intervention. Recent studies from the U.S. context show similarly sized impacts of comparable preschool interventions (e.g., Morris et al., 2014).
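The child-level calculation can likewise be sketched with the standard formula for a two-level cluster-randomised design (children nested in schools, treatment assigned at the school level). As in the sketch above, the two-tailed test, the balanced allocation of schools, and the application of the R-squared at the school level are our assumptions; Optimal Design's exact handling of covariates and degrees of freedom may differ, which is why this approximation lands near, rather than exactly at, the reported 0.22.

```python
# Sketch of the child-level MDES for a two-level cluster design.
# Assumed (not stated in the text): two-tailed test, balanced split of
# schools (P = 0.5), R-squared applied at the school level.
from scipy import stats

J = 137        # schools
n = 15         # children per classroom/school
P = 0.5        # assumed proportion of schools assigned to treatment
rho = 0.15     # intra-class correlation of children within schools
R2 = 0.20      # assumed covariate R-squared (school level)
alpha, power = 0.05, 0.80

df = J - 2
M = stats.t.ppf(1 - alpha / 2, df) + stats.t.ppf(power, df)
# Design variance: between-school term plus within-school term.
variance = (rho * (1 - R2) + (1 - rho) / n) / (P * (1 - P) * J)
mdes = M * variance ** 0.5
print(f"Child-level MDES ~ {mdes:.2f} sd")   # ~0.20 (reported: 0.22)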