Structured Study Time, Self-Efficacy, and Tutoring

Last registered on April 12, 2018

Pre-Trial

Trial Information

General Information

Title
Structured Study Time, Self-Efficacy, and Tutoring
RCT ID
AEARCTR-0000172
Initial registration date
January 14, 2014

Initial registration date is when the trial was registered. It corresponds to when the registration was submitted to the Registry to be reviewed for publication.

First published
January 14, 2014, 5:14 PM EST

First published corresponds to when the trial was first made public on the Registry after being reviewed.

Last updated
April 12, 2018, 3:59 PM EDT

Last updated is the most recent time when changes to the trial's registration were published.

Locations

Region

Primary Investigator

Affiliation
MIT

Other Primary Investigator(s)

PI Affiliation
J-PAL, MIT

Additional Trial Information

Status
Completed
Start date
2014-02-01
End date
2014-12-31
Secondary IDs
Abstract
Using a massive open online course (MOOC), we implement three interventions designed to test scalable methods for improving student retention and performance in online courses: a soft commitment to set aside a regular study time, tutoring, and self-efficacy messaging.
External Link(s)

Registration Citation

Citation
Banerjee, Abhijit and Esther Duflo. 2018. "Structured Study Time, Self-Efficacy, and Tutoring." AEA RCT Registry. April 12. https://doi.org/10.1257/rct.172-8.0
Former Citation
Banerjee, Abhijit and Esther Duflo. 2018. "Structured Study Time, Self-Efficacy, and Tutoring." AEA RCT Registry. April 12. https://www.socialscienceregistry.org/trials/172/history/28138
Experimental Details

Interventions

Intervention(s)
Massive open online courses have the potential to make quality higher education accessible to a much larger public, but they have been plagued by low retention rates. Using the online course “The Challenges of Global Poverty” as a test bed, we implement a series of interventions designed to test scalable methods for improving student retention and performance in online courses, with the goal of improving meaningful access to this resource.

We implement three interventions that can provide insight into how to boost engagement and performance in online courses. Our test bed is the Spring 2014 offering of the online edX course “The Challenges of Global Poverty.” The interventions include encouraging students to set aside a regularly scheduled time for interacting with the course, providing information on who performs well in order to boost self-efficacy and self-expectations of performance, and providing personalized one-on-one tutoring. Our three main research questions are:

1) Does blocking out regular study time to interact with courseware yield better retention and performance?
2) Does providing information on who performs well in the course boost marginalized groups’ performance?
3) Does extra tutoring from staff result in cost-effective learning gains?
Intervention Start Date
2014-02-05
Intervention End Date
2014-06-30

Primary Outcomes

Primary Outcomes (end points)
Completion rates, final exam performance, overall course grades, course activity
Primary Outcomes (explanation)
The prime outcomes that are comparable across all interventions are:
1) Completion versus drop out
2) Performance on final exam

In addition we will also consider:
1) Overall course grades. Course grades are based on a combination of lecture sequence questions, 9 homework assignments, 1 final project, and 1 final exam.
2) Activity in the course (time spent watching content, exercises attempted, etc.)

Secondary Outcomes

Secondary Outcomes (end points)
Secondary Outcomes (explanation)

Experimental Design

Experimental Design
We implement the three interventions, with some variations within each treatment, and measure their impact on student retention and performance.
Experimental Design Details
In this research study, we test whether committing to a regular study time enhances student performance, and whether various enforcement mechanisms can further strengthen this effect. We also test whether self-efficacy messaging can boost marginalized students’ self-expectations of performance and, in turn, their eventual performance. Finally, we implement a tutoring program to test whether supplementing online instruction with personalized, virtual tutoring results in cost-effective learning gains.

The first of the interventions is designed to answer the question of whether committing to a regular and structured study time will encourage students to stick to that committed time, and whether this consistency in turn translates into higher eventual performance. To that end, we will provide a randomly chosen subset of students with the option to commit to a regular study time (RST). We will ask those students that opt in to record the time or times that they plan to dedicate to the course each week. However, it is not immediately obvious whether asking students to commit to a regular study time will result in them doing so in practice. For this reason, we also plan to test the impact of various enforcement mechanisms (EM). One enforcement mechanism will be a message, sent to a random subset of students, noting that the course staff can monitor usage by looking at timestamps. The second enforcement mechanism will be email reminders sent either one third of the way through the course, two thirds of the way through the course, or at both times. These reminders will encourage students to stick to their committed study time and provide an indication of how closely they have been adhering to that time. Again, the option of receiving these reminders will be randomly assigned (students will have to opt in). We plan to compare ultimate performance in the course between the control and treatment groups.

The second intervention is designed to test whether providing self-efficacy messages can improve self-expectation of performance and eventual performance, particularly among marginalized populations such as non-native English speakers and female students. To that end, we will include in the entrance survey self-efficacy messages that provide factual information on who did well in spring 2013. Self-efficacy categories include gender and primary language spoken at home. Exposure to these messages will be randomly assigned; some students will receive no message as a control. The first stage will be captured in a subsequent question in the entrance survey that measures students’ self-expectation of performance. If this first stage is strong, then we can measure the impact of receiving a self-efficacy message on eventual performance.

The final intervention is designed to test whether students would make use of personalized, virtual tutoring provided on top of the course content, and in turn whether having access to personalized, virtual tutoring has an impact on eventual performance. All students will be offered the opportunity to enter a lottery for tutoring. Of those that sign up, 500 will be randomly selected to receive tutoring with a group of 20 other students. Tutoring services will consist of weekly online group review sessions, availability for individual questions over email (on assignments or on lectures) on a weekly basis, and a final exam group review session. The tutor will in effect play the role that teaching assistants play in residential education. We plan to monitor the level of engagement between tutors and tutees and to examine the effect of having access to a tutor on eventual performance.
Randomization Method
Randomization done in office by a computer
Randomization Unit
Individual
Was the treatment clustered?
No
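As a concrete illustration of computer-based, individual-level randomization of the kind described above, the sketch below assigns individuals to arms using a seeded random number generator. This is a minimal sketch, not the study's actual code: the function name assign_arms, the seed, and the example counts (such as drawing 500 tutoring slots from a hypothetical pool of lottery sign-ups) are illustrative assumptions.

```python
import random

def assign_arms(student_ids, arm_sizes, default="control", seed=20140201):
    """Shuffle individuals with a seeded RNG, fill each arm to its target
    size in order, and place everyone left over in the default arm."""
    rng = random.Random(seed)          # fixed seed: assignment is reproducible
    ids = list(student_ids)
    rng.shuffle(ids)
    assignment, start = {}, 0
    for arm, n in arm_sizes.items():
        for sid in ids[start:start + n]:
            assignment[sid] = arm
        start += n
    for sid in ids[start:]:            # remainder becomes the control group
        assignment[sid] = default
    return assignment

# Illustrative only: draw 500 tutoring slots from a hypothetical
# 4,600 lottery sign-ups; the unselected sign-ups serve as a control group.
lottery = assign_arms(range(4600), {"tutoring": 500}, default="not selected")
```

Fixing the seed makes the draw reproducible, which matters when randomization is performed once, in office, before the course begins.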

Experiment Characteristics

Sample size: planned number of clusters
It will depend on enrollment in the course
Sample size: planned number of observations
It will depend on enrollment in the course
Sample size (or number of clusters) by treatment arms
We hope to have a population sufficient to reach the following:
a) Intervention 1: 1,000 in control group; 3,600 in 5 treatment groups
b) Intervention 2: 2,300 in control group; 2,300 in 2 treatment groups
c) Intervention 3: 4,100 in 2 control groups; 500 in treatment group
Minimum detectable effect size for main outcomes (accounting for sample design and clustering)
With the sample sizes above:
a) Interventions 1 & 2: effect size of 0.20 standard deviations; 80% power; 5% significance
b) Intervention 3: effect size of 0.14 standard deviations; 80% power; 5% significance
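For reference, figures like these can be checked against the standard normal-approximation formula for the minimum detectable effect of a two-sample mean comparison. The sketch below is illustrative, not the study's actual power calculation: the function name mde and the use of SciPy are assumptions. Under these assumptions it approximately recovers the registered 0.14 for Intervention 3 (500 treated vs. 4,100 controls).

```python
from math import sqrt
from scipy.stats import norm

def mde(n_treat, n_control, alpha=0.05, power=0.80):
    """Minimum detectable effect, in standard deviations, for a two-sided
    test comparing means of two independent groups (normal approximation):
    MDE = (z_{1 - alpha/2} + z_{power}) * sqrt(1/n_t + 1/n_c)."""
    z_alpha = norm.ppf(1 - alpha / 2)  # 1.96 at 5% significance
    z_power = norm.ppf(power)          # 0.84 at 80% power
    return (z_alpha + z_power) * sqrt(1 / n_treat + 1 / n_control)

# Intervention 3: 500 tutored students vs. 4,100 controls
print(round(mde(500, 4100), 2))  # ~0.13, close to the registered 0.14
```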
IRB

Institutional Review Boards (IRBs)

IRB Name
Committee on the Use of Humans as Experimental Subjects
IRB Approval Date
2013-12-04
IRB Approval Number
1311006015
Analysis Plan

Analysis Plan Documents

Analysis Plan

MD5: e805df3c5004cce7da228a8d1f32367e

SHA1: d1497394183a0c672ea26bed8f6f19148d9f1b03

Uploaded At: August 15, 2014

Post-Trial

Post Trial Information

Study Withdrawal

There is information in this trial unavailable to the public.

Intervention

Is the intervention completed?
Yes
Intervention Completion Date
June 30, 2014, 12:00 +00:00
Data Collection Complete
No
Data Publication

Data Publication

Is public data available?
No

Program Files

Program Files
No
Reports, Papers & Other Materials

Relevant Paper(s)

Reports & Other Materials

Description
Structured Study Time, Self-Efficacy, and Tutoring

Esther Duflo (MIT and J-PAL) and Abhijit Banerjee (MIT and J-PAL)



In this evaluation we implemented a series of interventions during the spring 2014 version of the online course “14.73x: The Challenges of Global Poverty.”
Citation
Banerjee, Abhijit, and Esther Duflo. "Structured Study Time, Self-Efficacy, and Tutoring." J-PAL Evaluation Summary, January 01, 2014.