Using behavioural economic insights to overcome student procrastination
Last registered on June 17, 2020


Trial Information
General Information
Initial registration date
June 16, 2020
Last updated
June 17, 2020 10:33 AM EDT

Primary Investigator
Queensland University of Technology
Other Primary Investigator(s)
PI Affiliation
University of Technology Sydney
PI Affiliation
University of Vienna
PI Affiliation
University of Sydney
Additional Trial Information
Ongoing
Start date
End date
Secondary IDs
The ability to complete academic work in a timely manner is critical to students’ success at school. This project builds on insights from behavioural economics to investigate the mechanisms responsible for students’ tendency to procrastinate in their studies and to test simple interventions to overcome procrastination problems. We randomise a large sample of senior secondary students in Australia into a baseline and three treatment groups, in which we observe the timing and success of completion of a sustained real-effort task. Treatments vary based on the length of the deadline given to complete the task and on whether students receive a reminder (unanticipated or anticipated) to do the task. In a second step, we elicit from these students experimental measures of impatience and present bias, which are theoretically linked to procrastination, to investigate how empirically observed procrastination behaviour correlates with students’ time preferences. Finally, we link our behavioural measures to administrative data to examine whether present-biased preferences and procrastination behaviour are predictive of students’ university entrance scores.
External Link(s)
Registration Citation
Albrecht, Sabina et al. 2020. "Using behavioural economic insights to overcome student procrastination." AEA RCT Registry. June 17. https://doi.org/10.1257/rct.4136-1.0.
Experimental Details
The first part of our study is designed to observe procrastination behaviour of high school students in their regular school setting. This part contains an RCT with four interventions, described below. (The second part of our study is an experimental elicitation of time preferences and does not contain any intervention as such. Details about that part can be found under Experimental Design.)
We give the students a real-effort task that is commonly used in laboratory experiments as an incentivised homework to be completed by a set deadline. We observe the students’ working patterns on the homework task, from which we derive measures of procrastination. By randomising classes into different interventions aimed at overcoming procrastination behaviour, we study the effectiveness of these interventions in terms of overcoming procrastination behaviour in a naturalistic setting. The interventions vary the length of the deadline to complete the homework task and whether students receive a reminder to complete the task before the end of the deadline. Specifically, the four interventions/treatment conditions are:
1) Baseline: Four-week deadline, no reminder
2) Short deadline: One-week deadline, no reminder
3) Anticipated reminder: Four-week deadline, anticipated reminder
4) Unanticipated reminder: Four-week deadline, unanticipated reminder
Intervention Start Date
Intervention End Date
Primary Outcomes
Primary Outcomes (end points)
The primary outcomes are based on the recorded attempts to complete the task.
- Dichotomous: Starting to complete the task
- Continuous: Time of starting to complete the task
- Dichotomous: Success of completion
- Continuous: Time of completion
- Continuous: Number of attempts at completion
- Continuous: Time of attempts
- Continuous: Progression before the reminder (anticipated/unanticipated)
Primary Outcomes (explanation)
All primary outcomes can be observed directly from the data.
Secondary Outcomes
Secondary Outcomes (end points)
No secondary outcomes considered at this time.
Secondary Outcomes (explanation)
No secondary outcomes considered at this time.
Experimental Design
Experimental Design
Our study design comprises two experimental parts, which are both artefactual field experiments in the classification of Harrison and List (2004)*. The first part is designed to observe procrastination – a behavioural implication of present bias – in high school students in an unobtrusive and naturalistic setting. In this part, we give the students an incentivised homework, consisting of a real-effort task requiring sustained attention, to be completed within a set deadline. We make use of the described interventions aimed at improving the timeliness of goal completion. The second part is designed to measure time preferences (i.e. an individual’s preference for when to do tasks and his or her bias for the present) for each student. For this part, we follow the experimental methodology of Augenblick, Niederle and Sprenger (2015)**.

Reflecting the two-part design, the study is executed over two separate periods. The first part takes place over the course of five weeks. At a first visit to the school, students fill out a background questionnaire, receive instructions and get to practice the homework task for the first part of the study. Two days later, the period to work on the homework task begins, which is up to four weeks long. Subsequently, we return to the school to introduce the second part of the study, which takes place over the course of three weeks.

*Harrison, G.W. and List, J.A. (2004). Field Experiments, Journal of Economic Literature, 1009-1055.
**Augenblick, N., Niederle, M. and Sprenger, C. (2015). Working Over Time: Dynamic Inconsistency in Real Effort Tasks, Quarterly Journal of Economics, 1067-1115.
Experimental Design Details
Not available
Randomization Method
Randomization is done by a computer prior to visiting the first school. We use the Research Randomizer (Version 4.0)*, a software provided by randomizer.org, which allows within-school random assignment of classes to one of the four treatment conditions (random assignment with blocked design). This ensures better balance in sample sizes across the treatment conditions and aligns with our empirical strategy, which includes school fixed effects.

*Urbaniak, G. C., & Plous, S. (2013). Research Randomizer (Version 4.0) [Computer software]. Retrieved on April 23, 2018, from http://www.randomizer.org/
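The blocked design described above can be sketched in a few lines of code. This is a minimal illustration with hypothetical school and class labels (the actual assignment was produced with Research Randomizer): within each school, classes are shuffled and allocated across the four treatment conditions in rotation, so arm sizes stay as balanced as each block allows.

```python
import random

TREATMENTS = ["Baseline", "Short deadline",
              "Anticipated reminder", "Unanticipated reminder"]

def assign_classes(classes_by_school, seed=2018):
    """Blocked random assignment: within each school (the block),
    classes are randomly allocated across the four treatment
    conditions, keeping arm sizes as balanced as the block allows."""
    rng = random.Random(seed)
    assignment = {}
    for school, classes in classes_by_school.items():
        shuffled = list(classes)
        rng.shuffle(shuffled)
        order = TREATMENTS[:]
        rng.shuffle(order)  # randomise which arm fills first in small blocks
        for i, cls in enumerate(shuffled):
            assignment[(school, cls)] = order[i % len(order)]
    return assignment

# Example: two hypothetical schools with four classes each --
# each school contributes exactly one class to every arm.
alloc = assign_classes({"School A": ["A1", "A2", "A3", "A4"],
                        "School B": ["B1", "B2", "B3", "B4"]})
```

Because assignment rotates through a shuffled treatment order within each block, a school with exactly four classes contributes one class per arm, mirroring the balance property that motivates the blocked design.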
Randomization Unit
We randomise all treatments at the class level, within school.
Was the treatment clustered?
Experiment Characteristics
Sample size: planned number of clusters
Initial target: 60 classes in 30 schools

Due to unanticipated difficulty in data collection on account of COVID-19 restrictions, we revise our target down to 40 classes in 15 schools.
Sample size: planned number of observations
Initial target: 900 students

Due to unanticipated difficulty in data collection on account of COVID-19 restrictions, we revise our target down to 600 students.
Sample size (or number of clusters) by treatment arms
Initial target: 15 classes Baseline, 15 classes Short Deadline treatment, 15 classes Anticipated Reminder treatment, 15 classes Unanticipated Reminder treatment

Due to unanticipated difficulty in data collection on account of COVID-19 restrictions, we revise our target down to 10 classes per treatment arm.
Minimum detectable effect size for main outcomes (accounting for sample design and clustering)
We base our power calculations on List et al. (2008)* and abstract from higher efficiency through control variables, multiple hypothesis testing, and optimal allocation between treatment arms in this initial calculation. We focus on the dichotomous outcome measure of task completion, as this is the only measure for which we were able to find guiding estimates in the literature. With 900 students in 60 classes, the cluster size is 15. We assume an intra-cluster correlation coefficient of 0.1, a significance level of 0.05, and power of 0.8 in two-sided statistical tests. With approximately 225 students per treatment arm, the minimum detectable effect size for our dichotomous main outcome is approximately 0.2. This is a plausible expected difference between the Baseline and Short Deadline treatments, and between the Baseline and Anticipated Reminder treatments, based on the estimates reported in Taubinsky (2014)**. With our revised targets for sample size and number of clusters, the minimum detectable effect size is 0.25, keeping all other parameter values constant.

*List, J.A., Sadoff, S. and Wagner, M. (2008). So You Want to Run an Experiment, Now What? Some Simple Rules of Thumb for Optimal Experimental Design, Experimental Economics, DOI 10.1007/s10683-011-9275-7.
**Taubinsky, D. (2014). From Intentions to Actions: A Model and Experimental Evidence of Inattentive Choice, mimeo.
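The reported effect sizes can be reproduced with a standard two-proportion MDE formula inflated by the cluster design effect. This is a sketch under the stated assumptions (maximum-variance proportion p = 0.5, z-values for a two-sided 5% test and 80% power), not the exact calculation in List et al. (2008):

```python
from math import sqrt

def mde_two_proportions(n_per_arm, cluster_size, icc, p=0.5,
                        z_alpha=1.96, z_beta=0.84):
    """Approximate minimum detectable effect for a two-sided test
    comparing two proportions, with the variance inflated by the
    cluster design effect 1 + (m - 1) * ICC."""
    deff = 1 + (cluster_size - 1) * icc   # design effect for cluster size m
    n_eff = n_per_arm / deff              # effective sample size per arm
    return (z_alpha + z_beta) * sqrt(2 * p * (1 - p) / n_eff)

# Initial target: 900 students in 60 classes -> ~225 students per arm
print(round(mde_two_proportions(225, 15, 0.1), 2))  # -> 0.2
# Revised target: 600 students in 40 classes -> ~150 students per arm
print(round(mde_two_proportions(150, 15, 0.1), 2))  # -> 0.25
```

With a cluster size of 15 and an ICC of 0.1, the design effect is 2.4, which is what pushes the MDE from roughly 0.14 (unclustered) up to the 0.2 and 0.25 figures reported above.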
IRB Name
QUT University Human Research Ethics Committee (UHREC)
IRB Approval Date
IRB Approval Number
Analysis Plan
