Professor will this be on the Exam?
Last registered on May 17, 2021


Trial Information
General Information
Initial registration date
January 12, 2018
Last updated
May 17, 2021 11:09 AM EDT
Primary Investigator
Baruch CUNY and NBER
Other Primary Investigator(s)
PI Affiliation
The Georgia Institute of Technology
Additional Trial Information
Start date
End date
Secondary IDs
Today's students face many pressures vying to capture their attention, and undergraduate curricula increasingly rely on self-directed content exposure that occurs outside of the traditional classroom. This combination may exacerbate behavioral failures that inhibit human capital production. In a previous experiment, we found that email nudges to focus on certain ungraded problems available online had little impact on student performance on questions very similar to those on the exam. We realized that students focused on the graded assignments, with relatively few attempting the ungraded “nudged” problems. In this study, we test whether nudges to focus on graded assignments, carrying the message that such problems are likely to appear on the exam and sent to a random subset of students within each class, help students focus on core concepts more than when graded assignments are not nudged. In addition, we test whether nudges to practice ungraded problems, also with the hint that “questions like these are likely to appear on the exams,” increase students' time spent online with the ungraded problems and enhance exam performance. Nudges to focus on practice problems will also be sent to a random sub-sample within each class. Lastly, we will vary the amount each graded assignment counts toward the final grade to see whether exercises that count more garner more online effort and improve performance on exams.
External Link(s)
Registration Citation
Dench, Daniel and Theodore Joyce. 2021. "Professor will this be on the Exam?." AEA RCT Registry. May 17. https://doi.org/10.1257/rct.2669-2.0.
Former Citation
Dench, Daniel and Theodore Joyce. 2021. "Professor will this be on the Exam?." AEA RCT Registry. May 17. http://www.socialscienceregistry.org/trials/2669/history/91889.
Experimental Details
Intervention Start Date
Intervention End Date
Primary Outcomes
Primary Outcomes (end points)
Probability that a problem set was completed (binary), correct answer to questions corresponding to intervention problems (binary), combined score on questions corresponding to intervention problems (integer), Activity access corresponding to intervention (binary), combined activity access corresponding to intervention (integer), total time accessing activities related to intervention (log).
Primary Outcomes (explanation)
Secondary Outcomes
Secondary Outcomes (end points)
Secondary Outcomes (explanation)
Experimental Design
Experimental Design
The study design will use within-classroom variation in graded problems to identify the effect of online practice problems in teaching basic economic concepts to students. The study will also use within-person variation, from week to week, in the percentage of the final grade that each question is worth.
Experimental Design Details
Randomization occurs at the classroom level, into two equal groups; call them A and B. In several weeks of the semester, students can complete online questions that count toward their final grade. In half of these weeks, group A will receive problem set X, measuring important basic economic concepts covered that week, as a graded assignment. Group B will have problem set X available, but as ungraded practice problems. Students in both groups A and B will be nudged to focus on problem set X. During the same weeks, group B will be assigned problem set Y, measuring different but equally challenging economic concepts covered that week, as a graded assignment. Group A will have problem set Y available as ungraded practice problems. Neither group A nor group B will be nudged regarding problem set Y.

This structure allows us to contrast the effect on exam performance of graded versus ungraded problems that are nudged. We can compare those differences with exam performance on questions related to problem set Y, which was graded for one group and ungraded for the other, with nudges for neither. We hypothesize that nudging will narrow the gap in the probability of completing a graded versus an ungraded problem set. We can also isolate the effect of nudging more generally by estimating the difference in differences in the probability of completing problem sets X and Y, as well as in performance on exam questions similar to those in sets X and Y. Lastly, we can test whether problem sets that count more toward students' grades are more likely to be completed and more likely to improve student performance on exams.
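The week-by-week treatment structure described above can be sketched as a small lookup table. This is an illustrative sketch, not the authors' code: the week labels are invented, and the mirror week in which roles swap is an assumption, since the design text only spells out the weeks in which set X is graded for group A.

```python
def weekly_status(group, week_type):
    """Status of problem sets X and Y for a student group in a given week.

    week_type 'X-graded-for-A' matches the weeks described in the design:
    X is nudged for everyone but graded only for group A, while Y is graded
    only for group B and never nudged. 'X-graded-for-B' is an assumed
    mirror week in which the groups' roles swap.
    """
    graded_x = "A" if week_type == "X-graded-for-A" else "B"
    graded_y = "B" if week_type == "X-graded-for-A" else "A"
    return {
        "X": {"graded": group == graded_x, "nudged": True},
        "Y": {"graded": group == graded_y, "nudged": False},
    }

# In an 'X-graded-for-A' week, both groups are nudged on X, but only A is graded on it.
a = weekly_status("A", "X-graded-for-A")
b = weekly_status("B", "X-graded-for-A")
```

The key identifying contrast is visible in the table: within the same week, X varies in graded status while held constant in nudge status, and Y varies in graded status with no nudges at all.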
Randomization Method
Done by a program in Stata at the classroom level, using the set seed command and the runiform() function.
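As a rough illustration (not the authors' actual Stata code), the described procedure can be sketched in Python: draw a seeded uniform number per classroom, then split at the median into two equal groups. The seed value, the classroom names, and the rank-based split rule are all assumptions for illustration.

```python
import random

def randomize_classrooms(classrooms, seed=20180112):
    """Assign classrooms to two equal groups, A and B, using seeded uniform
    draws (a Python analogue of Stata's set seed + runiform())."""
    rng = random.Random(seed)  # hypothetical seed; the actual seed is not reported
    draws = {c: rng.random() for c in classrooms}
    # Rank classrooms by their uniform draw; bottom half -> A, top half -> B.
    ordered = sorted(classrooms, key=draws.get)
    half = len(ordered) // 2
    return {c: ("A" if i < half else "B") for i, c in enumerate(ordered)}

assignment = randomize_classrooms(["C1", "C2", "C3", "C4", "C5", "C6"])
```

Seeding makes the assignment reproducible: rerunning the program with the same seed yields the same A/B split.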
Randomization Unit
Was the treatment clustered?
Experiment Characteristics
Sample size: planned number of clusters
0 clusters
Sample size: planned number of observations
Approximately 660 students, although this may depend on registration and course drop patterns after the semester begins. In some estimations we will use two observations per student (for midterm and final exams) or estimate directly on question-level outcomes (as detailed in the full analysis plan).
Sample size (or number of clusters) by treatment arms
Approximately 330 individuals per treatment arm, with 2 treatment arms (A,B)
Minimum detectable effect size for main outcomes (accounting for sample design and clustering)
0.2 standard deviations on overall exam score, and 0.1 standard deviations in number of attempts of online practice problems.
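As a back-of-the-envelope check on the exam-score figure, the standard two-sample formula gives a minimum detectable effect of roughly 0.22 standard deviations for about 330 students per arm. The 5% two-sided significance level and 80% power are assumed here (the registry does not state them), and this simple calculation ignores clustering, covariate adjustment, and repeated observations per student.

```python
from math import sqrt
from statistics import NormalDist

def mde_two_sample(n_per_arm, alpha=0.05, power=0.80):
    """Minimum detectable effect, in standard-deviation units, for a
    two-sided, equal-arm, two-sample comparison of means."""
    z = NormalDist().inv_cdf
    # (z_{1-alpha/2} + z_{power}) * SE of the difference in standardized means
    return (z(1 - alpha / 2) + z(power)) * sqrt(2 / n_per_arm)

mde = mde_two_sample(330)  # about 0.22 SD
```

This is close to the 0.2 SD figure above; design features such as covariate adjustment would tend to shrink the MDE, while clustering would inflate it.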
Supporting Documents and Materials

There are documents in this trial unavailable to the public.
IRB Name
CUNY University Integrated Institutional Review Board
IRB Approval Date
IRB Approval Number
Post Trial Information
Study Withdrawal
Is the intervention completed?
Intervention Completion Date
May 09, 2018, 12:00 AM +00:00
Is data collection complete?
Data Collection Completion Date
May 20, 2018, 12:00 AM +00:00
Final Sample Size: Number of Clusters (Unit of Randomization)
833 students
Was attrition correlated with treatment status?
Final Sample Size: Total Number of Observations
9,996 student-problem set pairs.
Final Sample Size (or Number of Clusters) by Treatment Arms
Treatment Arm 1, specific problem set assignment: 423. Treatment Arm 2, specific problem set assignment: 410.
Data Publication
Data Publication
Is public data available?
Program Files
Program Files
Reports, Papers & Other Materials
Relevant Paper(s)