
Fields Changed

Registration

Trial Status
Before: in_development
After: on_going

Last Published
Before: September 23, 2024 03:28 PM
After: March 18, 2025 06:01 PM

Primary Outcomes (End Points)

Before:
Short-run outcomes (collected at midline and endline surveys):
- Interest in mentorship / further exposure to researcher-assigned universities*
- Stated interest in applying to researcher-assigned universities*
Long-run outcomes (collected by schools after university applications are submitted):
- Applications / attendance to researcher-assigned universities*
- Applications / attendance to mentor's university
- Applications / attendance to university with subsidized visit
*Note: researcher-assigned universities are sets of universities that are suitable for the student based on their grade profiles (universities that students with similar grades have attended, based on national data). Researcher-assigned universities will exclude universities already familiar to the students, such as the universities their parents and/or siblings have attended as well as universities that are commonly attended by students at their school.

After:
Short-run outcomes (collected at midline and endline surveys):
- Beliefs about outcomes at universities where students are assigned videos, mentors, and visits: “How likely are you to make friends and fit in at [university]?”, “How likely are you to receive an offer from [university]?”, “How likely are you to graduate successfully from [university]?”
- Stated preferences over universities and application intent
Long-run outcomes (collected by schools after university applications are submitted):
- Applications / attendance to universities where students are assigned videos*
- Applications / attendance to mentor's university
- Applications / attendance to university with subsidized visit
Note that we notionally assign each student in the control group to one of the treatment arms and apply the same algorithm to assign them universities for videos, mentors, and visits, but do not actually offer them these treatments. Assigning universities to the control group allows the university-specific outcomes described above to be defined consistently in the control group, meaning that we can construct placebo versions of these outcomes in the control group for use in analysis.
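
To make the placebo construction concrete, here is a minimal sketch of how a university-specific outcome could be defined for both treated and control students. The function and variable names (university_outcome, assign_universities, the arm labels, and the dictionary layout) are illustrative assumptions, not the study's actual implementation.

```python
import random

# Illustrative labels for the four treatment arms (assumption).
TREATMENT_ARMS = ["T1a", "T1b", "T2a", "T2b"]

def university_outcome(student, applications, assign_universities, rng=random):
    """Binary outcome: did the student apply to any of their assigned universities?

    `assign_universities(student, arm)` stands in for the study's assignment
    algorithm; `applications` is the set of universities the student applied to.
    Control students get a notional arm and the same university assignment,
    but receive no videos, mentors, or visits.
    """
    arm = student["arm"]
    if arm == "C":
        arm = rng.choice(TREATMENT_ARMS)  # notional assignment, used only for placebo outcomes
    assigned = assign_universities(student, arm)
    return int(any(u in applications for u in assigned))
```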

Experimental Design (Public)

Before:
The experimental design consists of two treatment arms and a control group with randomization at the individual student level. All students will complete baseline, midline, and endline surveys before and after the intervention.

After:
The experimental design consists of two cross-randomized treatments and a control group with randomization at the individual student level. All students will complete baseline, midline, and endline surveys before and after the intervention.

Randomization Method

Before:
Randomization will be conducted on a computer. Within each participating school, we will receive a list of participating students, and then randomly assign students from this list to either the control arm C, treatment T1, or treatment T2.

After:
Randomization will be conducted on a computer. We randomize students between arm C and the treatment arms when they complete the baseline survey using randomization in Qualtrics, stratifying by school. As noted above, for Queen Elizabeth’s Grammar School Faversham, randomization in the baseline survey only randomized between arm C and a treatment arm providing both mentors and visits without distinctions between demographically matched and unmatched mentors.
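
For illustration only, the sketch below shows one way to implement school-stratified assignment using the arm shares given in the sample-size field (50% control, 12.5% per treatment arm). The actual randomization is carried out in Qualtrics when students complete the baseline survey, so the list-based allocation, arm labels, and data layout here are assumptions.

```python
import random
from collections import defaultdict

# Target shares from the registration: 50% control, 12.5% per treatment arm.
ARM_SHARES = [("C", 0.50), ("T1a", 0.125), ("T1b", 0.125),
              ("T2a", 0.125), ("T2b", 0.125)]

def assign_arms(students, seed=0):
    """Assign each student to an arm, stratifying by school.

    `students` is a list of dicts with at least "id" and "school" keys;
    returns a {student_id: arm} mapping.
    """
    rng = random.Random(seed)
    by_school = defaultdict(list)
    for s in students:
        by_school[s["school"]].append(s)

    assignment = {}
    for school, group in by_school.items():
        rng.shuffle(group)  # random order within each school (stratum)
        n = len(group)
        arms = []
        for arm, share in ARM_SHARES:  # build labels matching the target shares
            arms.extend([arm] * round(share * n))
        while len(arms) < n:           # top up any rounding shortfall with controls
            arms.append("C")
        for student, arm in zip(group, arms[:n]):
            assignment[student["id"]] = arm
    return assignment
```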

Planned Number of Observations

Before:
Our central estimate is that we will have a sample size of 2000 individuals, but we may get fewer or more as we are still in the process of recruiting schools to the study. We present MDEs as a function of sample size below to give a sense of how power will depend on the sample size.

After:
Our central estimate is that we will have a sample size of 2000 individuals, but we may get fewer or more as we are still in the process of recruiting schools to the study and confirming participation.

Sample size (or number of clusters) by treatment arms

Before:
Given budgetary constraints, we plan to assign 100 students to treatment arm T2, 400 to arm T1, and the remainder to arm C. Under our central estimate of 2000 students, we will thus have 1500 students in arm C.

After:
Subject to budgetary and logistical constraints, we plan to assign 50% of participating students to the active control group, and evenly assign students to each of the four treatment arms, so that 12.5% of students are in each of arms T1a, T1b, T2a and T2b. Given our central estimate of 2000 participating students, this implies that we will have 250 students in each treatment arm and 1000 in the control arm. If we encounter constraints on our budget or the capacity of our mentors, we will reduce assignment to treatment evenly across the four treatment arms. Students in the control group C will be notionally assigned (for the purpose of constructing placebo outcomes) to one of the four treatment groups in even proportions; under our main plan this will mean notionally assigning them to each of the four groups with 12.5% probability.
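
As a quick arithmetic check of the planned allocation under the central sample-size estimate (the variable names are ours; the even notional split of control students follows the description above):

```python
N = 2000                              # central estimate of participating students
control_share, arm_share = 0.50, 0.125

n_control = round(N * control_share)  # 1000 students in arm C
n_per_arm = round(N * arm_share)      # 250 students in each of T1a, T1b, T2a, T2b
n_notional = n_control // 4           # 250 control students notionally assigned to each arm

print(n_control, n_per_arm, n_notional)  # 1000 250 250
```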

Power calculation: Minimum Detectable Effect Size for Main Outcomes

Before:
Our primary outcomes are all binary / probability outcomes, so for power calculations we use the standard formula for the variance of a proportion. The minimum detectable effect size then depends on the baseline probability of the outcome and the sample size in each arm. We have limited information about the baseline probabilities for most primary outcomes, but we expect many to fall between 0.05 and 0.2. Below we present MDEs under baseline probabilities of 0.05, 0.1, and 0.2. We assume assignment of 1500 students to C, 400 to T1, and 100 to T2, as in our central estimate for sample size described above.
For comparisons of arm T1 against C, we have the following MDEs:
- Baseline 0.05: 3.4 pp
- Baseline 0.1: 4.7 pp
- Baseline 0.2: 6.3 pp
For comparisons of arm T2 against C, we have the following MDEs:
- Baseline 0.05: 6.3 pp
- Baseline 0.1: 8.7 pp
- Baseline 0.2: 11.6 pp

After:
We conduct power calculations via simulation for our main binary outcome of applications to university, on the basis of the following assumptions:
- The outcome has a baseline probability of 20% (this seems like a reasonable estimate for the specific universities that students are assigned)
- We have 2000 participating students and assign each treatment with probability 12.5% at each school
- In the data-generating process, we allow for positive spillovers between a student and the three best friends they report at their school
- We regress the outcome on dummies for the treatment (or relevant pooled treatments) and a linear control for the number of friends treated (from 0 to 3)
Under these assumptions, MDEs are as follows for the comparisons of interest, including some pooled comparisons:
- T** vs. C (treatment effect of any given treatment arm): 8.5 pp
- [T2a + T2b] vs. [T1a + T1b] (marginal effect of demographic match): 8.5 pp
- [T1b + T2b] vs. [T1a + T2a] (marginal effect of visits): 8.5 pp
- [T1a + T1b + T2a + T2b] vs. C (pooled effect of any treatment): 5.1 pp
- [T2a + T2b] vs. C (effect of visits relative to control): 7.0 pp
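
For reference, the analytic MDEs quoted in the earlier version can be reproduced with the standard two-sample formula for a difference in proportions. The 80% power and 5% two-sided significance level assumed below are conventional choices and are not stated in the registration.

```python
from math import sqrt
from scipy.stats import norm

def mde_proportion(p, n_treat, n_control, power=0.80, alpha=0.05):
    """MDE (in proportion points) for a difference in proportions, using the
    normal approximation with variance p*(1-p) in both arms."""
    z = norm.ppf(1 - alpha / 2) + norm.ppf(power)  # ~1.96 + ~0.84
    return z * sqrt(p * (1 - p) * (1 / n_treat + 1 / n_control))

# Earlier design: 1500 students in C, 400 in T1, 100 in T2.
for label, n_t in [("T1 vs C", 400), ("T2 vs C", 100)]:
    for p in (0.05, 0.1, 0.2):
        print(label, p, round(100 * mde_proportion(p, n_t, 1500), 1), "pp")
# Reproduces 3.4 / 4.7 / 6.3 pp for T1 and 6.3 / 8.7 / 11.6 pp for T2.
```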

Intervention (Hidden)

Before:
Low-income students in the UK “undermatch”, attending lower-ranked universities despite good academic credentials and uniform tuition fees. This project looks into whether and how social exposure to individuals who attend high-ranked universities shapes university aspirations. Using administrative data, we document that school-to-university pipelines are persistent and dense; on average, 32% of students attend the modal university attended by students in the prior cohort at their school. These effects are more pronounced in economically disadvantaged schools, where students who attend pipeline universities are also more likely to undermatch. Additionally, students are more likely to attend a university their schoolmates have historically attended, even when they have offers from higher-ranked universities. In light of these patterns, we propose learning about university opportunities through schoolmates as one driver of undermatching. To obtain causal evidence and disentangle mechanisms of this type of social learning, we will run an RCT on ~2,000 students in the UK applying to university.

Mechanisms we will test include the effectiveness of social exposure compared to standard information interventions, whether students respond more to information from individuals who are demographically similar, and the importance of coordination within friend groups. The intervention will involve in-person workshops in secondary schools in the UK providing information about the university application process to participants, video clip anecdotes from current university students, matching participants with current university student mentors to discuss life at university and the application process, and subsidized in-person visits to universities. Our treatment arms will vary whether students see videos, are matched to mentors, and are subsidised for visits.

The target population is Year 12 and/or Year 13 secondary school students in the UK in the year before they apply to university (typically January of Year 13). We plan on conducting the intervention in multiple waves in order to maximize the number of schools that can be included in the study. The first wave will take place in October through November 2024, and the second wave will take place from April through July 2025.

After:
Low-income students in the UK “undermatch”, attending lower-ranked universities despite good academic credentials and uniform tuition fees. This project looks into whether and how social exposure to individuals who attend high-ranked universities shapes university aspirations. Using administrative data, we document that school-to-university pipelines are persistent and dense; on average, 32% of students attend the modal university attended by students in the prior cohort at their school. These effects are more pronounced in economically disadvantaged schools, where students who attend pipeline universities are also more likely to undermatch. Additionally, students are more likely to attend a university their schoolmates have historically attended, even when they have offers from higher-ranked universities. In light of these patterns, we propose learning about university opportunities through schoolmates as one driver of undermatching. To obtain causal evidence and disentangle mechanisms of this type of social learning, we will run an RCT on ~2,000 students in the UK applying to university.

Mechanisms we will test include the effectiveness of social exposure compared to standard information interventions and whether students respond more to information from individuals who are demographically similar. The intervention will involve in-person workshops in secondary schools in the UK providing information about the university application process to participants, video clip anecdotes from current university students, matching participants with current university student mentors to discuss life at university and the application process, and subsidized in-person visits to universities. Our treatment arms will vary whether students see videos, are matched to mentors, and are subsidised for visits. Each treatment arm will then be cross-randomized between mentors who are demographically similar to the student (share at least one of the characteristics of gender, ethnicity, and region with the student) and mentors who are dissimilar (share none of these characteristics with the student).

The target population is Year 12 and/or Year 13 secondary school students in the UK in the year before they apply to university (typically January of Year 13). We plan on conducting the intervention in multiple waves in order to maximize the number of schools that can be included in the study. The first wave will take place in October through November 2024, and the second wave will take place primarily from May through July 2025, with some earlier interventions (detailed in the “experimental details” section below).