The Impact of a Personal Development Plan Program on Learning Outcomes: Evidence from a Randomized Evaluation

Last registered on October 28, 2019


Trial Information

General Information

The Impact of a Personal Development Plan Program on Learning Outcomes: Evidence from a Randomized Evaluation
Initial registration date
October 25, 2019


First published
October 28, 2019, 1:28 PM EDT




Primary Investigator

Tilburg University

Other Primary Investigator(s)

PI Affiliation
Tilburg University
PI Affiliation
Tilburg University
PI Affiliation
Tilburg University

Additional Trial Information

Ongoing
Start date
End date
Secondary IDs
We use an encouragement design to study the effect of a mentoring program on university students' grades and dropout rates. We further investigate potential mechanisms by surveying students about their study habits and strategies, mood, and personality.
External Link(s)

Registration Citation

Dalton, Patricio et al. 2019. "The Impact of a Personal Development Plan Program on Learning Outcomes: Evidence from a Randomized Evaluation." AEA RCT Registry. October 28.
Sponsors & Partners

There is information in this trial unavailable to the public.
Experimental Details


We use stratified randomization to allocate students into encouraged and non-encouraged groups for take-up of a mentoring program. While both groups are allowed to subscribe, the encouraged group receives additional promotional mailings and their commitment is nudged via a survey.

The students then follow a mentoring program designed by the university, which consists of group and individual meetings with mentors, homework, and workshops. The program runs for the entire bachelor program, and we use data to evaluate the first two years of the first cohort. We further use data from the second cohort to deepen our analysis, with the caveat that the second cohort received only one "pampering" letter, so its encouragement was not program-specific and was weaker.
Intervention Start Date
Intervention End Date

Primary Outcomes

Primary Outcomes (end points)
1. Grades, GPA
2. Drop-outs
Primary Outcomes (explanation)
Grades and GPA will be obtained directly from university records, taking into consideration the final grades students obtain (i.e., after resits). We weight grades by course credits, so that courses carrying more credits in the curriculum receive higher weight.
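As an illustration, the credit weighting described above can be sketched in a few lines of Python; the course grades and credit values below are hypothetical, not taken from the study:

```python
# Hypothetical final grades (after resits) and credits per course
courses = [
    {"grade": 7.5, "credits": 6},
    {"grade": 6.0, "credits": 12},
    {"grade": 8.0, "credits": 6},
]

# Credit-weighted GPA: courses carrying more credits get higher weight
total_credits = sum(c["credits"] for c in courses)
weighted_gpa = sum(c["grade"] * c["credits"] for c in courses) / total_credits
```

Here the 12-credit course pulls the average toward its grade of 6.0, so the weighted GPA (6.875) is below the unweighted mean of 7.17.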
Students are considered to have dropped out if they (a) formally unenrolled from the program, (b) did not obtain the minimum number of credits required to be legally allowed to continue their studies, or (c) are missing all grades for compulsory courses for at least one semester and have not taken any exams since. We differentiate between forced (type b) and voluntary (types a and c) dropout.
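A minimal sketch of this classification in Python, assuming the three conditions arrive as boolean flags and are checked in the order listed (a, then b, then c); how ties are broken for students meeting several conditions at once is our assumption, not stated in the registration:

```python
def dropout_status(unenrolled, below_credit_minimum, missing_compulsory_semester):
    """Classify dropout per the registration's typology: forced = type (b),
    voluntary = types (a) and (c). Check order (a, b, c) is our assumption."""
    if unenrolled:                     # (a) formally unenrolled
        return "voluntary"
    if below_credit_minimum:           # (b) below the legal credit minimum
        return "forced"
    if missing_compulsory_semester:    # (c) missing all compulsory grades >= 1 semester
        return "voluntary"
    return None                        # still enrolled
```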

Secondary Outcomes

Secondary Outcomes (end points)
1. Exercising (gym data)
2. Personality traits
3. Goals and aspirations
4. Mood
5. Learning strategies
Secondary Outcomes (explanation)
1. Exercising: data on ownership of university sports card and gym attendance
2. Personality traits: Big Five, locus of control, grit, self-confidence, self-control, patience
3. Goals and aspirations: whether students set goals, whether they find goal setting useful, whether they have a role model, their aspirations regarding job and earnings prospects, confidence in graduating in three years, and grade expectations
4. Mood: depression scale
5. Learning strategies: competition vs. groupwork, how structured students are when they encounter a study problem

Experimental Design

Experimental Design
1. Stratified randomization at baseline on demographic and academic characteristics
2. Encouragement of treatment group to take up mentoring program
3. Subscribed students follow the program (meetings, workshops, homework); unsubscribed students have no replacement activities
4. Evaluation after exam waves: 4 waves of exam data, 2 waves of midterms (midterms only in first year of the study)

Experimental Design Details
Randomization Method
Stratified on gender, age (older vs. younger), high school profile (Dutch only), English test scores (non-native speakers only), time of application to university (early vs. late), nationality (Dutch vs. international), university application score (higher vs. lower)
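The mechanics of stratified assignment can be sketched as follows. The roster and stratum variables below are hypothetical and simplified (two of the seven stratification variables), purely to illustrate within-stratum randomization into encouraged and non-encouraged groups:

```python
import random
from collections import defaultdict

def stratified_assign(records, strat_keys, seed=0):
    """Randomly split each stratum in half: encouraged ('T') vs. control ('C')."""
    rng = random.Random(seed)
    strata = defaultdict(list)
    for r in records:
        strata[tuple(r[k] for k in strat_keys)].append(r)
    assignment = {}
    for members in strata.values():
        rng.shuffle(members)
        half = len(members) // 2
        for j, r in enumerate(members):
            assignment[r["id"]] = "T" if j < half else "C"
    return assignment

# Toy roster: 10 students per gender x age-group stratum (hypothetical data)
students = [
    {"id": f"{g}-{a}-{i}", "gender": g, "age_group": a}
    for g in ("F", "M")
    for a in ("younger", "older")
    for i in range(10)
]
assignment = stratified_assign(students, ["gender", "age_group"])
```

Because the split is done within each stratum, treatment and control are balanced on every stratification variable by construction.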
Randomization Unit
Was the treatment clustered?

Experiment Characteristics

Sample size: planned number of clusters
350 (we do not cluster in randomization)
Sample size: planned number of observations
350 students
Sample size (or number of clusters) by treatment arms
Half of 350 in control, half in treatment (encouragement); actual numbers after initial no-shows: 147 control, 138 treatment
Minimum detectable effect size for main outcomes (accounting for sample design and clustering)
Ex-post calculation, once the effect of the encouragement on take-up was known. Power is calculated using the following inputs: the standard deviation of grades is 2.0; the required minimum detectable effect size is 1 standard deviation; p(Type I error) < 0.10; the number of observations is 284 (the first even number below the actual sample size); the difference in take-up between arms is 0.23; and the control variables explain 20% of the variation in outcomes (R2 = 0.20). Power is calculated with the Optimal Design software. This software cannot handle encouragement designs directly, but for a given number of observations an encouragement design inflates the minimum detectable effect size on compliers by the inverse of the difference in take-up rates, in our case a factor of about 4.3 (= 1/0.23). Equivalently, a complier effect of 1 standard deviation corresponds to an intention-to-treat effect of 0.23 standard deviations (= 0.23 times the minimum detectable effect size of 1 standard deviation). Choosing the option "Person Randomized Trials / single level trial", setting α = 0.10, N = 284, and R2 = 0.20, and plotting power as a function of the minimum detectable effect size, we find a power of 0.7.
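The arithmetic above can be reproduced with a normal-approximation power calculation in Python (a rough stand-in for the Optimal Design software, not the software itself), using only the numbers stated in the registration:

```python
from math import erf, sqrt

def normal_cdf(x):
    """Standard normal CDF via the error function."""
    return 0.5 * (1.0 + erf(x / sqrt(2.0)))

target_late = 1.0    # required MDE on compliers, in SD units (registration)
takeup_diff = 0.23   # difference in take-up between arms
n = 284              # first even number below the actual sample size
r2 = 0.20            # share of outcome variance explained by controls
z_crit = 1.6449      # z_{1 - alpha/2} for two-sided alpha = 0.10

# A 1 SD complier effect corresponds to a 0.23 SD intention-to-treat effect
mde_itt = takeup_diff * target_late

# Two equal arms of n/2; controls remove r2 of the outcome variance,
# so the SE of the standardized difference in means is 2*sqrt((1-r2)/n)
se = 2.0 * sqrt((1.0 - r2) / n)
power = normal_cdf(mde_itt / se - z_crit)  # approx. 0.70
```

The 0.23 SD intention-to-treat effect is the 1 SD complier effect scaled by the 0.23 take-up difference, and the normal approximation recovers the reported power of roughly 0.7.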

Institutional Review Boards (IRBs)

IRB Name
IRB Approval Date
IRB Approval Number


Post Trial Information

Study Withdrawal

There is information in this trial unavailable to the public.


Is the intervention completed?
Data Collection Complete
Data Publication

Data Publication

Is public data available?

Program Files

Program Files
Reports, Papers & Other Materials

Relevant Paper(s)

Reports & Other Materials