Motivation and monitoring in modern higher education
Last registered on January 25, 2017


Trial Information
General Information
Motivation and monitoring in modern higher education
Initial registration date
January 25, 2017
Last updated
January 25, 2017 2:02 PM EST
Primary Investigator
Other Primary Investigator(s)
PI Affiliation
Baruch CUNY and NBER
Additional Trial Information
In development
Start date
End date
Secondary IDs
Today's students face many pressures vying for their attention, and undergraduate curricula increasingly rely on self-directed content exposure outside of the traditional classroom. This combination may exacerbate behavioral failures that inhibit human capital production. In this study, we propose to randomize over 500 students, across several sections of a standardized, mixed-format (hybrid) introductory economics course taught at a large, public, urban university, into treatment arms that receive different types of motivation or monitoring interventions throughout the semester. These interventions will be delivered by e-mail, sent individually to students with information pertinent to their study habits and learning process. We will use administrative data, standardized test outcomes, and detailed microdata on student study habits captured by the online content publisher to measure the effect of the interventions on study habits as well as the return to online study.
External Link(s)
Registration Citation
Joyce, Theodore and Stephen O'Connell. 2017. "Motivation and monitoring in modern higher education." AEA RCT Registry. January 25. https://doi.org/10.1257/rct.1957-2.0.
Former Citation
Joyce, Theodore and Stephen O'Connell. 2017. "Motivation and monitoring in modern higher education." AEA RCT Registry. January 25. http://www.socialscienceregistry.org/trials/1957/history/13476.
Experimental Details
We will send e-mails to students registered in our experimental sections throughout the semester. These will contain content intended to motivate students proactively to study particular concepts, and/or indicate that their online study habits are being monitored, along with a recommended amount of weekly study time.
Intervention Start Date
Intervention End Date
Primary Outcomes
Primary Outcomes (end points)
Exam scores (standardized); total time (natural log) and visits (integer) online; total time and visits on content activities (all); total time and visits on direct-from-text questions; tries (integer) and grades (percent correct) on quiz and direct-from-text questions; and time-to-next-access (natural log) measured from e-mail sending times.
Primary Outcomes (explanation)
Secondary Outcomes
Secondary Outcomes (end points)
Secondary Outcomes (explanation)
Experimental Design
Experimental Design
The study design will use within- and across-classroom variation to separately identify the effects of motivation and monitoring nudges. Within each class, we will block-randomize students into two treatment arms, referred to as 1 and 2 below, stratifying on whether a student is above or below the class's median GPA at baseline. Students in one set of sections (group A sections) will be randomized to receive either monitoring-only e-mails (A1) or e-mails with both a monitoring and a motivation component (A2). Students in the other sections (group B) will receive either nothing (B1) or a motivation e-mail (B2). In the second half of the semester, we will switch the assignment, so that students who received the A1/B1 treatment will receive the A2/B2 treatment for the remainder of the semester, and vice versa. This way, within any class, every group receives the same total amount of treatment over the whole semester (a condition required by our IRB).

The design allows us to compare the effect of different types of treatments both within the same class as well as across classes. Ultimately, we can compare the effect of motivation via the difference between (A1-A2) and (B1-B2); and the effect of monitoring by (A2-B2) and (A1-B1).
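The crossover schedule described above can be sketched in a few lines; the labels for the e-mail content below are hypothetical shorthand, not the registry's own terminology:

```python
# Crossover schedule: treatment arms swap at midsemester so that every
# group receives the same total amount of treatment over the semester.
# Content labels ("monitoring", "motivation", etc.) are hypothetical.
FIRST_HALF = {
    "A1": "monitoring",
    "A2": "monitoring+motivation",
    "B1": "none",
    "B2": "motivation",
}
SWAP = {"A1": "A2", "A2": "A1", "B1": "B2", "B2": "B1"}

def treatment(arm, half):
    """Return the e-mail content an arm receives in half-semester 1 or 2."""
    return FIRST_HALF[arm if half == 1 else SWAP[arm]]
```

For example, `treatment("A1", 2)` returns the monitoring-plus-motivation content, since A1 students cross over to the A2 treatment at midsemester.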
Experimental Design Details
Randomization Method
Block randomization performed by a Stata program using beginning-of-semester registration records.
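The Stata program itself is not public; a minimal Python sketch of the same procedure (splitting each class at its median baseline GPA and randomizing within each block, with hypothetical field names) might look like:

```python
import random

def block_randomize(students, seed=1957):
    """Assign arms 1 and 2 within GPA blocks of each class section.

    `students` is a list of dicts with hypothetical keys 'id', 'class',
    and 'gpa', standing in for beginning-of-semester registration records.
    """
    rng = random.Random(seed)
    assignment = {}
    # Group students by class section.
    sections = {}
    for s in students:
        sections.setdefault(s["class"], []).append(s)
    for section in sections.values():
        gpas = sorted(s["gpa"] for s in section)
        median = gpas[len(gpas) // 2]
        # Two strata per class: at/above vs. below the median baseline GPA.
        for block in (
            [s for s in section if s["gpa"] >= median],
            [s for s in section if s["gpa"] < median],
        ):
            rng.shuffle(block)
            # Alternate arms within each shuffled block for balance.
            for i, s in enumerate(block):
                assignment[s["id"]] = 1 if i % 2 == 0 else 2
    return assignment
```

Alternating arms within each shuffled block guarantees near-equal arm sizes within every stratum, which is the usual point of block randomization.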
Randomization Unit
Was the treatment clustered?
Experiment Characteristics
Sample size: planned number of clusters
0 clusters
Sample size: planned number of observations
Approximately 600 students, although this may depend on registration and course drop patterns after the semester begins. In some estimations we will use two observations per student (for midterm and final exams) or estimate directly on question-level outcomes (as detailed in the full analysis plan).
Sample size (or number of clusters) by treatment arms
150 individuals per treatment arm, with 4 treatment arms (A1, A2, B1, B2)
Minimum detectable effect size for main outcomes (accounting for sample design and clustering)
0.03 standard deviations in test score in reduced form estimates
IRB Name
CUNY University Integrated Institutional Review Board
IRB Approval Date
IRB Approval Number
Analysis Plan

There are documents in this trial that are unavailable to the public. Access to this information may be requested through the registry.
Post Trial Information
Study Withdrawal
Is the intervention completed?
Is data collection complete?
Data Publication
Data Publication
Is public data available?
Program Files
Program Files
Reports, Papers & Other Materials
Relevant Paper(s)