Present Bias and Nudges - Field Evidence from a MOOC

Last registered on November 10, 2016

Pre-Trial

Trial Information

General Information

Title
Present Bias and Nudges - Field Evidence from a MOOC
RCT ID
AEARCTR-0001780
Initial registration date
November 10, 2016

First published
November 10, 2016, 10:16 AM EST

Locations

Region

Primary Investigator

Affiliation
Potsdam University

Other Primary Investigator(s)

PI Affiliation
Hasso Plattner Institute, University of Potsdam
PI Affiliation
RWI - Leibniz Institute for Economic Research, Ruhr-University Bochum
PI Affiliation
RWI - Leibniz Institute for Economic Research

Additional Trial Information

Status
Ongoing
Start date
2016-09-14
End date
2017-07-31
Secondary IDs
Abstract
Massive open online courses (MOOCs) have very low completion rates: often no more than 15% of those who sign up finish the course with a certificate. In this paper, we argue that present bias helps explain this phenomenon. In a randomized field experiment, we test whether prompting enrollees to schedule their next study sessions increases MOOC engagement and completion. Additionally, we elicit time inconsistency and examine how awareness of it influences treatment effects.
External Link(s)

Registration Citation

Citation
Andor, Mark et al. 2016. "Present Bias and Nudges - Field Evidence from a MOOC." AEA RCT Registry. November 10. https://doi.org/10.1257/rct.1780-1.0
Former Citation
Andor, Mark et al. 2016. "Present Bias and Nudges - Field Evidence from a MOOC." AEA RCT Registry. November 10. https://www.socialscienceregistry.org/trials/1780/history/11742
Experimental Details

Interventions

Intervention(s)
We set up a randomized field experiment to address the following research questions empirically:
1. Does prompting individuals to plan ahead increase MOOC completion rates?
2. To what extent can time-inconsistent preferences explain low MOOC completion rates?
3. Which individuals are most strongly influenced by the planning tools?

We conduct a field experiment with openHPI and openSAP, two MOOC providers in the fields of internet technology, computer science, and software usage and development. We test the effects of two different planning tools with which MOOC participants can either schedule their next study session or set up a plan for the entire course duration. Participants are then reminded of their scheduled study times via email.
Intervention Start Date
2016-09-14
Intervention End Date
2017-07-31

Primary Outcomes

Primary Outcomes (end points)
The main outcome variable is course completion, which we define as earning a certificate. We will focus especially on participants who intend to earn a certificate, because participants' initial motivation may vary (Koller et al. 2013).

In addition, we analyze the effects of our treatments on course activity, such as video visits, number of sessions, session duration, and performance in quizzes, assignments, and the final exam.
Primary Outcomes (explanation)

Secondary Outcomes

Secondary Outcomes (end points)
Secondary Outcomes (explanation)

Experimental Design

Experimental Design
The experiment consists of five treatments, all of which are versions of a planning tool. The treatments are delivered via additional pop-ups embedded in the MOOC interface, which ensures that participants perceive the treatment as part of the course design rather than as an add-on. The control group sees a pop-up with supportive feedback on their course progress. In addition to this supportive feedback, the treatment groups are exposed to one of two different planning tools: treatment groups 1, 2, and 3 have the opportunity to schedule their next study session, while treatment groups 4 and 5 can set up a workload schedule for the entire course duration.
Experimental Design Details
Randomization Method
Randomization is done by a computer.
Randomization Unit
Individual
Was the treatment clustered?
No
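
As a purely illustrative sketch, not part of the registration: computerized, individual-level assignment into six equally sized arms (control plus the five planning-tool treatments) could look as follows in Python. The arm labels, seed, and function name are hypothetical.

    import random

    # Control group plus the five planning-tool treatments described above.
    ARMS = ["control"] + [f"treatment_{i}" for i in range(1, 6)]

    def assign_arms(enrollee_ids, seed=2016):
        """Shuffle enrollees, then deal them round-robin into the six arms.
        This yields (near-)equal group sizes, matching the plan of equally
        sized treatment and control groups."""
        rng = random.Random(seed)  # fixed seed only so the sketch is reproducible
        ids = list(enrollee_ids)
        rng.shuffle(ids)
        return {eid: ARMS[i % len(ARMS)] for i, eid in enumerate(ids)}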

Experiment Characteristics

Sample size: planned number of clusters
n/a
Sample size: planned number of observations
At least 1,500 participants per treatment arm, with a control group of the same size.
Sample size (or number of clusters) by treatment arms
n/a
Minimum detectable effect size for main outcomes (accounting for sample design and clustering)
The minimum detectable effect size depends on how many participants opt out of or into the treatment. Therefore, we cannot determine it ex ante.
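
Although the realized MDE cannot be fixed ex ante, the following sketch (an illustration under stated assumptions, not part of the registration) shows how the minimum detectable difference in completion rates between one treatment arm and the control group scales with group size. The roughly 15% baseline completion rate is taken from the abstract and the planned 1,500 participants per arm from above; the function name is hypothetical.

    from math import sqrt
    from statistics import NormalDist

    def mde_two_proportions(p0, n_per_arm, alpha=0.05, power=0.80):
        """Approximate minimum detectable difference in completion rates for a
        two-sided test comparing one equally sized treatment arm with control."""
        z = NormalDist().inv_cdf
        z_alpha = z(1 - alpha / 2)                 # critical value, two-sided test
        z_power = z(power)                         # quantile for the target power
        se = sqrt(2 * p0 * (1 - p0) / n_per_arm)   # pooled-variance approximation
        return (z_alpha + z_power) * se

    print(round(mde_two_proportions(0.15, 1500), 3))  # ~0.037, i.e. about 3.7 pp

Under these assumptions, effects smaller than roughly four percentage points would likely go undetected; opt-out reduces the effective sample size and raises this threshold.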
IRB

Institutional Review Boards (IRBs)

IRB Name
IRB Approval Date
IRB Approval Number
Analysis Plan

The analysis plan for this trial is not publicly available. Access can be requested through the Registry.

Post-Trial

Post Trial Information

Study Withdrawal

Study withdrawal information for this trial is not publicly available. Access can be requested through the Registry.

Intervention

Is the intervention completed?
No
Data Collection Complete
Data Publication

Is public data available?
No

Program Files
Reports, Papers & Other Materials

Relevant Paper(s)

Reports & Other Materials