Present Bias and Nudges - Field Evidence from a MOOC
Last registered on November 10, 2016


Trial Information
General Information
Present Bias and Nudges - Field Evidence from a MOOC
Initial registration date
November 10, 2016
Last updated
November 10, 2016 10:16 AM EST
Primary Investigator
Potsdam University
Other Primary Investigator(s)
PI Affiliation
Hasso Plattner Institute, University of Potsdam
PI Affiliation
RWI - Leibniz Institute for Economic Research, Ruhr-University Bochum
PI Affiliation
RWI - Leibniz Institute for Economic Research
Additional Trial Information
Ongoing
Start date
End date
Secondary IDs
Massive open online courses (MOOCs) have very low completion rates: often no more than 15% of those who sign up finish the course with a certificate. In this paper, we argue that present bias helps explain this phenomenon. In a randomized field experiment, we test whether prompting enrollees to schedule their next study sessions increases MOOC engagement and completion. Additionally, we elicit time inconsistency and examine how awareness of it influences treatment effects.
External Link(s)
Registration Citation
Andor, Mark et al. 2016. "Present Bias and Nudges - Field Evidence from a MOOC." AEA RCT Registry. November 10. https://doi.org/10.1257/rct.1780-1.0.
Former Citation
Andor, Mark et al. 2016. "Present Bias and Nudges - Field Evidence from a MOOC." AEA RCT Registry. November 10. http://www.socialscienceregistry.org/trials/1780/history/11742.
Experimental Details
We set up a randomized field experiment to test the following research questions empirically:
1. Does prompting individuals to plan ahead increase MOOC completion rates?
2. To what extent can time-inconsistent preferences explain low MOOC completion rates?
3. Which individuals are most strongly influenced by the planning tools?

We conduct a field experiment with openHPI and openSAP, MOOC providers in the fields of internet technology, computer science, and software usage and development. We test the effect of two different planning tools with which MOOC participants can either schedule their next study session or set up a plan for the entire course duration. Additionally, participants are reminded of their scheduled time via email.
Intervention Start Date
Intervention End Date
Primary Outcomes
Primary Outcomes (end points)
The main outcome variable is course completion, which we define as earning a certificate. We will focus especially on participants who intend to earn a certificate, because participants' initial motivation may vary (Koller et al. 2013).

In addition, we analyze the effect of our treatments on course activity, such as video visits, number of sessions, session duration, and performance in quizzes, assignments, and the final exam.
Primary Outcomes (explanation)
Secondary Outcomes
Secondary Outcomes (end points)
Secondary Outcomes (explanation)
Experimental Design
Experimental Design
The experiment consists of five treatments, all of which are versions of a planning tool. These treatments are delivered via additional pop-ups embedded in the MOOC interface. This ensures that participants perceive the treatment as part of the course design rather than as an add-on. The control group views a pop-up with supportive feedback on their course progress. In addition to this supportive feedback, the treatment groups are exposed to one of two different planning tools. Treatment groups 1, 2, and 3 have the opportunity to schedule their next study session. Treatment groups 4 and 5 can set up a workload schedule for the entire course duration.
Experimental Design Details
Randomization Method
Randomization is done by a computer.
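As an illustration only (the registry does not specify the algorithm), balanced computerized assignment to the control group and the five treatment arms could look like the following sketch; the arm labels, seed, and function name are hypothetical.

```python
import random

# Hypothetical arm labels: control plus five treatment versions of the planning tool.
ARMS = ["control", "T1", "T2", "T3", "T4", "T5"]

def assign_arms(participant_ids, seed=2016):
    """Randomly assign each participant to one of six equally sized arms.

    Shuffling the IDs with a fixed seed and then cycling through the arm
    list yields a reproducible, balanced assignment.
    """
    rng = random.Random(seed)  # fixed seed for reproducibility
    ids = list(participant_ids)
    rng.shuffle(ids)
    return {pid: ARMS[i % len(ARMS)] for i, pid in enumerate(ids)}

# Example: 9,000 enrollees split into six arms of 1,500 each.
groups = assign_arms(range(9000))
```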
Randomization Unit
Was the treatment clustered?
Experiment Characteristics
Sample size: planned number of clusters
Sample size: planned number of observations
At least 1,500 per treatment arm; the control group is the same size.
Sample size (or number of clusters) by treatment arms
Minimum detectable effect size for main outcomes (accounting for sample design and clustering)
The minimum detectable effect size depends on how many participants opt out of or into the treatment; therefore, we cannot determine it ex ante.
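Once realized arm sizes are known, the minimum detectable effect could be approximated ex post with a standard two-proportion power formula. The sketch below is illustrative only: the baseline completion rate (15%, from the abstract), the two-sided 5% significance level, and 80% power are assumptions, not values stated in the registration.

```python
import math

def mde_two_proportions(p0, n_per_arm, z_alpha=1.96, z_beta=0.84):
    """Approximate minimum detectable difference in completion rates.

    Two-sided test at alpha = 0.05 with 80% power, equal arm sizes,
    and the variance evaluated at the baseline rate p0.
    """
    se = math.sqrt(2 * p0 * (1 - p0) / n_per_arm)  # SE of the difference in proportions
    return (z_alpha + z_beta) * se

# With the planned minimum of 1,500 participants per arm and an assumed
# 15% baseline completion rate:
mde = mde_two_proportions(p0=0.15, n_per_arm=1500)
```

Under these assumptions the detectable difference comes out to a few percentage points; opt-out and opt-in shrink the effective arm sizes and enlarge it, which is why the registration defers the calculation.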
IRB Name
IRB Approval Date
IRB Approval Number
Analysis Plan

There are documents in this trial that are unavailable to the public.
Post Trial Information
Study Withdrawal
Is the intervention completed?
Is data collection complete?
Data Publication
Data Publication
Is public data available?
Program Files
Program Files
Reports, Papers & Other Materials
Relevant Paper(s)