The Effect of Priming on Retention and Learning in a MOOC

Last registered on November 07, 2018

Pre-Trial

Trial Information

General Information

Title
The Effect of Priming on Retention and Learning in a MOOC
RCT ID
AEARCTR-0000679
Initial registration date
April 01, 2015

First published
April 01, 2015, 12:51 PM EDT

Last updated
November 07, 2018, 3:39 AM EST

Locations

Primary Investigator

Affiliation
IDinsight

Other Primary Investigator(s)

PI Affiliation
J-PAL Global
PI Affiliation
J-PAL Global

Additional Trial Information

Status
Completed
Start date
2015-03-30
End date
2015-12-31
Secondary IDs
Abstract
Massive Open Online Courses (MOOCs) are characterized by lower retention and pass rates than equivalent in-person courses. One type of dropout (Type 1) consists of enrollees for whom the course may not be appropriate. For these students, the optimal set-up might be one that allows them to "shop": enroll in a course, discover whether the content or level is appropriate, and then drop out if need be after feeling sufficiently informed, but before too much time has been invested in making the choice. Another type of dropout (Type 2) may consist of enrollees for whom the course is appropriate, and who intend to complete it, but who face small barriers that inhibit them from doing so. Discouragement could be one of those barriers. If learning objectives are imprecise, students may underestimate the relevance of the course or focus on the wrong teaching points. This could lead to poor performance, which in turn feeds discouragement and ultimately fuels dropout. Therefore, a model that makes learning objectives clear upfront could reduce attrition over time.

Our research proposes to identify a simple intervention that allows potential dropouts (Type 1 and Type 2) to self-select into either type as soon as possible. In addition, it seeks to help students who self-select into the second type (i.e., those who want to complete the course ex ante) by reducing some of the barriers they may face, ultimately leading to higher course completion rates. The intervention we propose involves giving students a Baseline Diagnostic Test at the start of the course. This diagnostic may help Type 1 students understand exactly what the course covers, what measurable outcomes they are expected to achieve, and whether or not the content is right for them. It may also spark early engagement among Type 2 students, giving them a preview of the concepts covered in the course, allowing them to focus on the right teaching points, and motivating them to complete the course. Using our online course JPAL101x: Evaluating Social Programs, we aim to run a randomized evaluation of this intervention.
External Link(s)

Registration Citation

Citation
Naimpally, Rohit, Marc Shotland and Hira Siddiqui. 2018. "The Effect of Priming on Retention and Learning in a MOOC." AEA RCT Registry. November 07. https://doi.org/10.1257/rct.679-4.0
Former Citation
Naimpally, Rohit, Marc Shotland and Hira Siddiqui. 2018. "The Effect of Priming on Retention and Learning in a MOOC." AEA RCT Registry. November 07. https://www.socialscienceregistry.org/trials/679/history/36905
Sponsors & Partners

There is information in this trial unavailable to the public.
Experimental Details

Interventions

Intervention(s)
Intervention 1: Baseline Diagnostic Quiz with answers
The intervention will test whether taking a “Baseline Diagnostic Quiz” yields better retention and performance in the course.

The Baseline Diagnostic Quiz is placed in the first unit of the course. It is ungraded and responses are optional. It asks technical questions about the different concepts that are covered throughout the course. In this intervention, the correct answers to the questions will be shown to the students, allowing them to assess their baseline understanding of the concepts covered in the course.

Intervention 2: Baseline Diagnostic Quiz without answers
The intervention will test whether early exposure to key course concepts yields better retention and performance. Furthermore, it will test whether leaving the diagnostic questions unanswered immediately after the baseline encourages students to remain in the course to learn the answers.

The Baseline Diagnostic Quiz is placed in the first unit of the course. It is ungraded and responses are optional. It asks the same questions as the diagnostic quiz in Intervention 1; however, in this case, the answers will not be shown to the students.

Control/Placebo:
The control group will receive a placebo survey that is non-technical and asks about demographics, aspirations, and professional background. The Placebo Survey is placed in the first unit of the course. It is ungraded and responses are optional.
Intervention Start Date
2015-04-01
Intervention End Date
2015-04-30

Primary Outcomes

Primary Outcomes (end points)
Percent of course completed
Number of people who open module 2 (post-baseline)
Number of people who complete module 2 (post-baseline)
Total score
Primary Outcomes (explanation)
Percent of course completed: percent of videos watched and problems attempted
Total score: automatically generated by edX course
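
The percent-completed measure pools engagement across videos and problems. A minimal sketch of this metric, assuming an unweighted pooling of the two item types (the registration does not specify the exact weighting; the function and argument names are hypothetical):

```python
def percent_completed(videos_watched, total_videos,
                      problems_attempted, total_problems):
    """Share of course material engaged with: videos watched plus
    problems attempted, pooled over all course items.

    Assumed definition -- the registration does not spell out how
    videos and problems are weighted, so items are pooled 1:1 here.
    """
    done = videos_watched + problems_attempted
    total = total_videos + total_problems
    return 100.0 * done / total

# A student who watched 30 of 40 videos and attempted 15 of 20 problems:
percent_completed(30, 40, 15, 20)  # 75.0
```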

Secondary Outcomes

Secondary Outcomes (end points)
Secondary Outcomes (explanation)

Experimental Design

Experimental Design
Upon entering the very first module, participants (online students) are randomly assigned by the edX software into one of three groups:
Group A receives: Intervention 2: Baseline Diagnostic Quiz without answers
Group B receives: Intervention 1: Baseline Diagnostic Quiz with answers
Group C is the control group
Experimental Design Details
Randomization Method
edX software automatically conducts the randomization.
Randomization Unit
The randomization unit is the individual participant (student). Each participant is independently randomly assigned to one of the three groups.
Was the treatment clustered?
No
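
The edX platform performs this assignment automatically. As a rough sketch, independent participant-level randomization into three arms can be reproduced as follows (the arm labels, seed, and function name are illustrative assumptions, not details of the edX implementation):

```python
import random

# Arm labels are hypothetical; they mirror the three study groups.
ARMS = ["quiz_without_answers", "quiz_with_answers", "control"]

def assign_arm(student_id, seed=42):
    """Independently assign one student to an arm, reproducibly.

    Each student is randomized on entering the first module, so
    assignment depends only on the student and a fixed seed.
    """
    rng = random.Random(f"{seed}-{student_id}")
    return rng.choice(ARMS)

# With the planned ~4,200 enrollees, each arm receives roughly 1,400.
assignments = {sid: assign_arm(sid) for sid in range(4200)}
```

Because each draw is independent, realized arm sizes fluctuate around 1,400 rather than being exactly balanced, which is consistent with the registration's note that the planned arm sizes are estimates.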

Experiment Characteristics

Sample size: planned number of clusters
4200
Sample size: planned number of observations
4200
Sample size (or number of clusters) by treatment arms
Planned: 1,400 receive Intervention 1 (baseline with answers), 1,400 receive Intervention 2 (baseline without answers), and 1,400 control.
Because each student is randomly assigned when they enter the first module, these numbers are estimates.
Minimum detectable effect size for main outcomes (accounting for sample design and clustering)
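
The registration leaves the MDE field blank. For illustration only, a standard normal-approximation power calculation with roughly 1,400 students per arm gives a sense of scale; the significance level, power, and unit-SD outcome below are assumptions, not values from the trial:

```python
from statistics import NormalDist

def mde_two_arm(n_per_arm, sigma=1.0, alpha=0.05, power=0.80):
    """Minimum detectable effect (in outcome-SD units when sigma=1)
    for a two-sided test comparing two equally sized arms, using the
    standard normal approximation. All parameters are assumed values.
    """
    z_alpha = NormalDist().inv_cdf(1 - alpha / 2)  # ~1.96
    z_power = NormalDist().inv_cdf(power)          # ~0.84
    return (z_alpha + z_power) * sigma * (2 / n_per_arm) ** 0.5

# With the planned 1,400 students per arm:
mde = mde_two_arm(1400)  # ~0.106 standard deviations
```

Under these assumed parameters, a pairwise comparison of two arms of 1,400 could detect a difference of roughly a tenth of a standard deviation.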
IRB

Institutional Review Boards (IRBs)

IRB Name
Massachusetts Institute of Technology Committee on the Use of Humans as Experimental Subjects
IRB Approval Date
2015-04-03
IRB Approval Number
1504007042

Post-Trial

Post Trial Information

Study Withdrawal

There is information in this trial unavailable to the public.

Intervention

Is the intervention completed?
Yes
Intervention Completion Date
April 30, 2015, 12:00 +00:00
Data Collection Complete
Yes
Data Collection Completion Date
June 30, 2015, 12:00 +00:00
Final Sample Size: Number of Clusters (Unit of Randomization)
6,662 online learners
Was attrition correlated with treatment status?
No
Final Sample Size: Total Number of Observations
6,662 online learners
Final Sample Size (or Number of Clusters) by Treatment Arms
Group that received Baseline Assessment without answers given after submission: 1,129
Group that received Baseline Assessment with answers given after submission: 1,111
Placebo expectations survey: 1,097
Data Publication

Data Publication

Is public data available?
No

Program Files

Program Files
No
Reports, Papers & Other Materials

Relevant Paper(s)

Reports & Other Materials