
Factors affecting university application decisions

Last registered on July 01, 2024

Pre-Trial

Trial Information

General Information

Title
Factors affecting university application decisions
RCT ID
AEARCTR-0013680
Initial registration date
June 27, 2024

First published
July 01, 2024, 12:14 PM EDT

First published corresponds to when the trial was first made public on the Registry after being reviewed.

Locations

There is information in this trial that is unavailable to the public; access can be requested through the Registry.

Primary Investigator

Affiliation
MIT

Other Primary Investigator(s)

PI Affiliation
MIT

Additional Trial Information

Status
In development
Start date
2024-07-03
End date
2026-06-01
Secondary IDs
Prior work
This trial does not extend or rely on any prior RCTs.
Abstract
This study investigates the factors that influence students' decisions about university applications and attendance.
External Link(s)

Registration Citation

Citation
Tadjfar, Nagisa and Kartik Vira. 2024. "Factors affecting university application decisions." AEA RCT Registry. July 01. https://doi.org/10.1257/rct.13680-1.0
Sponsors & Partners

There is information in this trial that is unavailable to the public; access can be requested through the Registry.
Experimental Details

Interventions

Intervention(s)
The intervention will involve providing information and mentorship to secondary school students in the UK, with the treatment arms varying the content and nature of each component.
Intervention Start Date
2024-07-03
Intervention End Date
2026-01-01

Primary Outcomes

Primary Outcomes (end points)
Short-run outcomes (collected at endline survey):
- Interest in mentorship / further exposure to researcher-assigned universities*
- Stated interest in applying to researcher-assigned universities*

Long-run outcomes (collected by schools after university applications are submitted):
- Applications / attendance to researcher-assigned universities*
- Applications / attendance to mentor's university

*Note: researcher-assigned universities are sets of universities that are suitable for the student given their grade profile (universities that students with similar grades have attended, based on national data). Researcher-assigned universities will exclude universities already familiar to the student, such as those attended by their parents and/or siblings, as well as universities commonly attended by students at their school.
Primary Outcomes (explanation)

Secondary Outcomes

Secondary Outcomes (end points)
We will examine heterogeneity of treatment effects on the primary outcomes by demographic similarity between participating students and their mentors (where demographics include socioeconomic status, ethnicity, gender, and location). For instance, we will test whether female students matched with female mentors were more likely to apply to their mentor's university than female students matched with male mentors.
Secondary Outcomes (explanation)

Experimental Design

Experimental Design
The experimental design consists of two treatment arms and a control group with randomization at the individual student level. All students will complete baseline and endline surveys before and after the intervention.
Experimental Design Details
Not available
Randomization Method
Randomization will be conducted on a computer. Within each participating school, we will receive a list of participating students, and then randomly assign students to either the control arm, treatment 1, or treatment 2. Students in treatment 2 will then be randomized into 2A and 2B following the workshop and assigned mentors.
Randomization Unit
Individual, stratified by school
Was the treatment clustered?
No
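The stratified, individual-level assignment described above can be sketched as follows. The arm labels, the ¼/¼/½ first-stage proportions (with treatment 2 later split into 2A and 2B), and the seed are illustrative assumptions, not details taken from the registration:

```python
import random

def assign_arms(students_by_school, arms=("control", "T1", "T2", "T2"), seed=0):
    """Stratified individual-level randomization.

    Within each school (the stratum), shuffle the list of participating
    students and deal them out cyclically across arms. Listing "T2" twice
    assigns half the sample to treatment 2, matching quarter shares for
    control, T1, and (after the later 2A/2B split) each sub-arm.
    """
    rng = random.Random(seed)  # fixed seed for a reproducible assignment
    assignment = {}
    for school in sorted(students_by_school):  # sort for determinism
        shuffled = list(students_by_school[school])
        rng.shuffle(shuffled)
        for i, student in enumerate(shuffled):
            assignment[student] = arms[i % len(arms)]
    return assignment
```

Because students are dealt out cyclically after shuffling, arm sizes within each school are balanced to within one student, whatever the school's size.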

Experiment Characteristics

Sample size: planned number of clusters
As the randomization is at the individual level, the number of clusters at the unit of randomization level is the same as the number of individuals. However, we will be sampling particular schools, and our central estimate is that we will recruit 20 schools.
Sample size: planned number of observations
Our central estimate is that we will have a sample size of 2000 individuals, but we may get fewer or more as we are still in the process of recruiting schools to the study. We present MDEs as a function of sample size below to give a sense of how power will depend on the sample size.
Sample size (or number of clusters) by treatment arms
We plan to randomize students to workshops in fixed proportions: one quarter of the sample in each school that we recruit will be randomized to the control group and to treatments 1, 2A, and 2B respectively, so we will have 500 students in each arm under our target sample size of 2000 individuals. However, we may not be able to find mentor matches for as many as a quarter of the sample, in which case we will reduce the assignment of students to arm 2B and correspondingly increase the assignment of students to 2A.
Minimum detectable effect size for main outcomes (accounting for sample design and clustering)
Our primary outcomes are all binary / probability outcomes, so for power calculations we use the standard formula for the variance of a proportion. The minimum detectable effect size then depends on the baseline probability of the outcome and the sample size in each arm. We have limited information about the baseline probabilities for most primary outcomes, but we expect many to fall between 0.05 and 0.2. Below we present minimum detectable effects under baseline probabilities of 0.05, 0.1, and 0.2.

Our primary comparison pools the control group with arm 1 and compares it against arm 2 (pooling 2A and 2B). For this comparison, at our central sample size estimate of 2000 individuals, with individual-level randomization, and given the baseline probabilities defined above, our minimum detectable effect sizes are:
- Baseline 0.2: 5.0 percentage points
- Baseline 0.1: 3.8 percentage points
- Baseline 0.05: 2.7 percentage points

We will also compare each of the individual arms against each other. Power for these comparisons is lower, as we are comparing the 500 students in each arm against each other rather than the 1000 students in each group that result once we pool arms as pre-specified above. For these comparisons, the minimum detectable effect sizes are:
- Baseline 0.2: 7.1 percentage points
- Baseline 0.1: 5.3 percentage points
- Baseline 0.05: 3.9 percentage points

The figure in the attached supporting documents and materials illustrates power under different assumptions about sample size and the baseline outcome level.
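The calculation above can be reproduced with the standard two-sample proportion formula, MDE = (z_{1-α/2} + z_power) · sqrt(p(1-p)(1/n₁ + 1/n₂)). The registration does not state the test size or power, but the reported figures are consistent with the conventional 5% two-sided test at 80% power (z-values 1.96 and 0.84), which this sketch assumes:

```python
from math import sqrt

def mde(p, n1, n2, z_alpha=1.96, z_power=0.84):
    """Minimum detectable effect for a binary outcome, two-sample comparison.

    Uses the baseline proportion p for the variance in both arms:
    MDE = (z_{1-a/2} + z_power) * sqrt(p*(1-p)*(1/n1 + 1/n2)).
    Defaults assume a 5% two-sided test with 80% power.
    """
    return (z_alpha + z_power) * sqrt(p * (1 - p) * (1 / n1 + 1 / n2))

# Pooled comparison (control + arm 1 vs arm 2): 1000 students per group
for p in (0.2, 0.1, 0.05):
    print(f"baseline {p}: {100 * mde(p, 1000, 1000):.1f} pp")
# Arm-vs-arm comparison: 500 students per group
for p in (0.2, 0.1, 0.05):
    print(f"baseline {p}: {100 * mde(p, 500, 500):.1f} pp")
```

With n₁ = n₂ = 1000 this yields 5.0, 3.8, and 2.7 percentage points at baselines of 0.2, 0.1, and 0.05, and with 500 per arm it yields 7.1, 5.3, and 3.9 percentage points, matching the figures reported above.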
IRB

Institutional Review Boards (IRBs)

IRB Name
MIT Committee on the Use of Humans as Experimental Subjects (COUHES)
IRB Approval Date
2024-01-26
IRB Approval Number
2306001025