Structured Study Time, Self-Efficacy, and Tutoring
Last registered on April 12, 2018

Pre-Trial

Trial Information
General Information
Title
Structured Study Time, Self-Efficacy, and Tutoring
RCT ID
AEARCTR-0000172
Initial registration date
January 14, 2014
Last updated
April 12, 2018 3:59 PM EDT
Location(s)
Region
Primary Investigator
Affiliation
MIT
Other Primary Investigator(s)
PI Affiliation
J-PAL, MIT
Additional Trial Information
Status
Completed
Start date
2014-02-01
End date
2014-12-31
Secondary IDs
Abstract
Using a massive open online course (MOOC), we implement three interventions designed to test scalable methods for improving student retention and performance in online courses: a soft commitment to set aside a regular study time, tutoring, and self-efficacy messaging.
External Link(s)
Registration Citation
Citation
Banerjee, Abhijit and Esther Duflo. 2018. "Structured Study Time, Self-Efficacy, and Tutoring." AEA RCT Registry. April 12. https://www.socialscienceregistry.org/trials/172/history/28138
Experimental Details
Interventions
Intervention(s)
Massive open online courses have the potential to make quality higher education accessible to a much larger public, but they have been plagued by low retention rates. Using the online course “The Challenges of Global Poverty” as a test bed, we implement a series of interventions designed to test scalable methods for improving student retention and performance in online courses, with the goal of improving meaningful access to this resource.

We implement three interventions that can provide insight into how to boost engagement and performance in online courses. Our test bed is the Spring 2014 run of the online edX course “The Challenges of Global Poverty.” The interventions include encouraging students to set aside a regularly scheduled time for interacting with the course, providing information on who performs well in order to boost self-efficacy and self-expectations of performance, and providing personalized one-on-one tutoring. Our three main research questions are:

1) Does blocking out regular study time to interact with courseware yield better retention and performance?
2) Does providing information on who performs well in the course boost marginalized groups’ performance?
3) Does extra tutoring from staff result in cost-effective learning gains?
Intervention Start Date
2014-02-05
Intervention End Date
2014-06-30
Primary Outcomes
Primary Outcomes (end points)
Completion rates, final exam performance, overall course grades, course activity
Primary Outcomes (explanation)
The primary outcomes that are comparable across all interventions are:
1) Completion versus dropout
2) Performance on final exam

In addition we will also consider:
1) Overall course grades. Course grades are based on a combination of lecture sequence questions, 9 homework assignments, 1 final project, and 1 final exam.
2) Activity in the course (time spent watching content, exercises attempted, etc.)
Secondary Outcomes
Secondary Outcomes (end points)
Secondary Outcomes (explanation)
Experimental Design
Experimental Design
We implement the three interventions, with some variations within each treatment, and measure their impact on student retention and performance.
Experimental Design Details
In this research study, we test whether committing to a regular study time enhances student performance, and whether various enforcement mechanisms can further strengthen this effect. We also test whether self-efficacy messaging can boost marginalized students’ self-expectations of performance and, in turn, their eventual performance. Finally, we implement a tutoring program to test whether supplementing online instruction with personalized, virtual tutoring results in cost-effective learning gains.

The first intervention is designed to answer the question of whether committing to a regular, structured study time will encourage students to stick to that time, and whether this consistency in turn translates into higher eventual performance. To that end, we will provide a randomly chosen subset of students with the option to commit to a regular study time (RST). We will ask those students who opt in to record the time or times they plan to dedicate to the course each week. However, it is not immediately obvious whether asking students to commit to a regular study time will result in them doing so in practice. For this reason, we also plan to test the impact of various enforcement mechanisms (EM). One enforcement mechanism will be a message, provided to a random subset of students, that the course staff can monitor usage by looking at timestamps. The second will be email reminders sent one third of the way through the course, two thirds of the way through, or at both times. These reminders will encourage students to stick to their committed study time and indicate how closely they have been adhering to it. Again, the option of receiving these reminders will be randomly assigned (students will have to opt in). We plan to compare ultimate performance in the course between the control and treatment groups.
The second intervention is designed to test whether providing self-efficacy messages can improve self-expectation of performance and eventual performance, particularly among marginalized populations such as non-native English speakers and female students. To that end, we will include in the entrance survey self-efficacy messages that provide factual information on who did well in spring 2013. Self-efficacy categories include gender and primary language spoken at home. Exposure to these messages will be randomly assigned; some students will receive no message as a control. The first stage will be captured by a subsequent entrance-survey question that measures students’ self-expectation of performance. If this first stage is strong, we can then measure the impact of receiving a self-efficacy message on eventual performance.

The final intervention is designed to test whether students would make use of personalized, virtual tutoring provided on top of the course content, and in turn whether having access to such tutoring has an impact on eventual performance. All students will be offered the opportunity to enter a lottery for tutoring. Of those who sign up, 500 will be randomly selected to receive tutoring in a group with 20 other students. Tutoring services will consist of weekly online group review sessions, availability for individual questions over email (on assignments or lectures) on a weekly basis, and a final exam group review session. The tutor will in effect play the role that teaching assistants play in residential education. We plan to monitor the level of engagement between tutors and tutees and to examine the effect of having access to a tutor on eventual performance.
Randomization Method
Randomization done in office by a computer
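The registry does not specify the software used; as a purely hypothetical sketch, computer-based randomization at the individual level into control and treatment arms (arm names illustrative) could look like this:

```python
import random

def assign_arms(student_ids, arms, seed=20140205):
    """Randomly assign each individual (the randomization unit) to one arm."""
    rng = random.Random(seed)  # fixed seed so the assignment is reproducible
    ids = list(student_ids)
    rng.shuffle(ids)
    # deal shuffled individuals round-robin into arms for balanced group sizes
    return {sid: arms[i % len(arms)] for i, sid in enumerate(ids)}

assignment = assign_arms(range(12), ["control", "RST", "RST+enforcement", "RST+reminders"])
```

Shuffling then dealing round-robin guarantees arm sizes differ by at most one, which simple independent coin flips would not.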
Randomization Unit
Individual
Was the treatment clustered?
No
Experiment Characteristics
Sample size: planned number of clusters
It will depend on enrollment in the course
Sample size: planned number of observations
It will depend on enrollment in the course
Sample size (or number of clusters) by treatment arms
We hope to have a population sufficient to reach the following:
a) Intervention 1: 1,000 in control group; 3,600 in 5 treatment groups
b) Intervention 2: 2,300 in control group; 2,300 in 2 treatment groups
c) Intervention 3: 4,100 in 2 control groups; 500 in treatment group
Minimum detectable effect size for main outcomes (accounting for sample design and clustering)
With the sample sizes above:
a) Interventions 1 & 2: effect size of 0.20 standard deviations; 80% power; 5% significance
b) Intervention 3: effect size of 0.14 standard deviations; 80% power; 5% significance
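These figures are consistent with the standard two-sample minimum detectable effect formula for a standardized outcome, MDE = (z_{1-α/2} + z_{1-β}) · sqrt(1/n_T + 1/n_C). A quick check in Python (a sketch only; it ignores multiple treatment arms and any covariate adjustment):

```python
from math import sqrt
from statistics import NormalDist

def mde(n_t, n_c, alpha=0.05, power=0.80):
    """Minimum detectable effect, in standard-deviation units, for a
    two-arm individual-level comparison with no clustering."""
    z = NormalDist()
    return (z.inv_cdf(1 - alpha / 2) + z.inv_cdf(power)) * sqrt(1 / n_t + 1 / n_c)

# Intervention 3: 500 treated vs. roughly 4,100 control
print(round(mde(500, 4100), 2))  # 0.13, in line with the registered 0.14
```

The treated group of 500 dominates the calculation: even a very large control group cannot push the MDE much below (z_{1-α/2} + z_{1-β}) / sqrt(500).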
IRB
INSTITUTIONAL REVIEW BOARDS (IRBs)
IRB Name
Committee on the Use of Humans as Experimental Subjects
IRB Approval Date
2013-12-04
IRB Approval Number
1311006015
Analysis Plan
Analysis Plan Documents
Analysis Plan

MD5: e805df3c5004cce7da228a8d1f32367e

SHA1: d1497394183a0c672ea26bed8f6f19148d9f1b03

Uploaded At: August 15, 2014

Post-Trial
Post Trial Information
Study Withdrawal
Intervention
Is the intervention completed?
Yes
Intervention Completion Date
June 30, 2014, 12:00 AM +00:00
Is data collection complete?
No
Data Publication
Data Publication
Is public data available?
No
Program Files
Program Files
No
Reports and Papers
Preliminary Reports
Abstract
Structured Study Time, Self-efficacy, and Tutoring
Esther Duflo (MIT and J-PAL) and Abhijit Banerjee (MIT and J-PAL)

In this evaluation we implemented a series of interventions during the spring 2014 version of the online course “14.73x: The Challenges of Global Poverty.” Through these interventions, we aimed to test various methods that might encourage students to stick with the course and boost performance. These consisted of: (1) a tutoring intervention, in which groups of about 25 students each were assigned to a tutor for the duration of the course, (2) a self-efficacy intervention, in which students were randomly shown one of several messages about who is successful in the course to test any impacts on performance among various subgroups, and (3) a regular study time intervention, in which students could opt in to commit to up to three regular weekly study times, some of whom were randomly sent enforcement reminders along with an indication of how closely they were sticking to their study times. At this point, the experiment is considered complete; all data were collected during the 14 weeks of the course and the analysis was completed shortly thereafter. No substantial changes to the research design, timeline, or data collection plan were implemented before or during the interventions. 19,694 students registered for the course, of whom 4,704 were active. 1,148 students earned a certificate by achieving a grade of at least 50%. 1,250 students were active on the forum.

Tutoring: During the second and third weeks of the course, students were able to enter the tutoring lottery. 775 students entered, of whom half were randomly assigned to receive a tutor and placed into one of 14 groups based on scheduling preferences submitted at the time of the lottery. Tutors worked with their groups on a weekly basis for the remaining 10 weeks of the course and hosted a dedicated final exam review. We collected feedback from tutors and tutees at the end of the course.

Self-Efficacy: Self-efficacy messages were displayed to students who completed the entrance survey, which was released during the first week of the course. In an effort to increase participation, we offered a random drawing for one of five signed copies of Poor Economics to all students that completed the entrance survey by a certain date. We considered in our sample 3,834 students who took the entrance survey within the first half of the course. These students were randomly allocated to see either no message or one of three messages: (1) a generic message (2) a message related to females performing well in the course, (3) a message related to non-native English speakers performing well in the course. Of the initial 3,834 students assigned to one of the self-efficacy treatments, 2,872 students, or about 75%, were active at some point during the course.

Regular Study Time: Students who completed the entrance survey within the first half of the course were also randomly assigned to one of four treatment groups: (1) control, (2) option for regular study time, (3) option for regular study time with simple enforcement message, and (4) option for regular study time with reminder emails indicating how many times the student had logged on within 30 minutes of one of their designated study times. 3,510 students were included in this intervention, of whom 2,724 were active at some point during the course. Across the three regular study time groups, 1,240 active students, about 46% of those given the option, opted in to sign up for a regular study time. Reminder emails were sent to 333 students, of whom 254 were active.
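The reminder emails counted logins falling within 30 minutes of a designated study time. As an illustration only (a hypothetical helper, not the study's actual code; it ignores week-boundary wraparound), adherence for a single login could be checked like this:

```python
from datetime import datetime, timedelta

def adhered(login, study_times, window=timedelta(minutes=30)):
    """Did a login fall within 30 minutes of any committed weekly study time?
    Study times are (weekday, hour, minute) tuples, Monday = 0."""
    for weekday, hour, minute in study_times:
        slot = login.replace(hour=hour, minute=minute, second=0, microsecond=0)
        # shift to the committed weekday within the login's week
        slot += timedelta(days=weekday - login.weekday())
        if abs(login - slot) <= window:
            return True
    return False

# Wednesday 18:20 login against a committed Wednesday 18:00 slot
print(adhered(datetime(2014, 3, 5, 18, 20), [(2, 18, 0)]))  # True
```

Counting such matches per student over the course would yield the adherence figure reported back in the reminder emails.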

Results: For the self-efficacy and tutoring interventions, we examined the treatment effect on eight primary outcomes: course completion, certificate earned, fraction of the course completed, hours spent, overall grade, final exam grade, whether the final exam was attempted, and final exam grade conditional on attempting. We find no statistically significant treatment effect for either intervention on any of the eight outcomes. The gender-based self-efficacy message has a positive point estimate for female students on seven of the eight outcomes; however, none of these estimates is statistically significant.

For the tutoring intervention, we also measured the impact of being assigned to a tutor on intensity of interaction with the staff and the course. Those assigned to a tutoring group were significantly more likely to have any interaction with staff (with tutor or on forum) and any engagement on the forum or interaction with a tutor, as we would expect since only those assigned to the tutoring treatment could interact with a tutor. However, we find no impact of treatment on other measures of engagement, including number of interactions with the staff on the forum, any interaction with staff on the forum, any activity on the forum, and number of posts on the forum.

For the regular study time (RST) intervention, we used assignment to RST as an instrument for opting in to RST to estimate the treatment effect on our eight primary outcomes. We find a small and positive but insignificant impact of opting in on seven of the eight outcomes. Similarly, we find a small and positive impact of opting in to email reminders on five of the eight outcomes, again insignificant. We find no significant impact of assignment to RST on various measures of engagement with the course, including hours spent on course, total activity count, and total sessions (all three estimated separately for the full course, the first half only, and the second half only).
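Because assignment to RST is randomized and opt-in is voluntary, the just-identified IV estimate reduces to the Wald ratio: the intent-to-treat effect on the outcome divided by the first-stage effect of assignment on opt-in. A hypothetical simulation (made-up effect size and data, not the study's estimates) in Python:

```python
import random

def wald_iv(z, d, y):
    """Just-identified IV (Wald) estimate: ITT effect of assignment z on
    outcome y, divided by the first-stage effect of z on take-up d."""
    def diff_by_z(v):
        treat = [vi for vi, zi in zip(v, z) if zi == 1]
        ctrl = [vi for vi, zi in zip(v, z) if zi == 0]
        return sum(treat) / len(treat) - sum(ctrl) / len(ctrl)
    return diff_by_z(y) / diff_by_z(d)

# Simulate: 46% of assigned students opt in (as observed in the study),
# and opting in raises the outcome by a hypothetical 0.1 SD.
rng = random.Random(0)
n = 100_000
z = [rng.randrange(2) for _ in range(n)]
d = [zi * (rng.random() < 0.46) for zi in z]
y = [0.1 * di + rng.gauss(0, 1) for di in d]
print(wald_iv(z, d, y))  # close to the simulated effect of 0.1
```

Scaling the ITT by the roughly 46% take-up rate is exactly why the opt-in (treatment-on-the-treated) estimates are about twice the reduced-form ones.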

Data Publication: All data were cleaned and de-identified prior to analysis. The complete set of de-identified raw and clean/coded data, as well as all Stata programs that clean and analyze the data, are prepared for publication. According to the MOU set up with MITx at the outset of our research, we do not have permission to share “learner data” (scores and activity tracking data collected) outside of our research team. Depending on the permission granted by MITx, we will make all or some portion of the de-identified data available on the J-PAL Dataverse.
Completion Date
January 01, 2014 12:00 AM +00:00
Url
https://www.povertyactionlab.org/evaluation/structured-study-time-self-efficacy-and-tutoring
Relevant Papers