Professor will this be on the Exam?

Last registered on May 17, 2021

Pre-Trial

Trial Information

General Information

Title
Professor will this be on the Exam?
RCT ID
AEARCTR-0002669
Initial registration date
January 12, 2018

Initial registration date is when the trial was registered.

It corresponds to when the registration was submitted to the Registry to be reviewed for publication.

First published
January 12, 2018, 5:56 PM EST

First published corresponds to when the trial was first made public on the Registry after being reviewed.

Last updated
May 17, 2021, 11:09 AM EDT

Last updated is the most recent time when changes to the trial's registration were published.

Locations

Primary Investigator

Affiliation
Baruch CUNY and NBER

Other Primary Investigator(s)

PI Affiliation
The Georgia Institute of Technology

Additional Trial Information

Status
Completed
Start date
2017-12-27
End date
2018-08-30
Secondary IDs
Abstract
Today's students face many pressures vying for their attention, and undergraduate curricula increasingly rely on self-directed content exposure outside of the traditional classroom. This combination may exacerbate behavioral failures that inhibit human capital production. In a previous experiment we found that email nudges to focus on certain ungraded problems available online had little impact on student performance on questions very similar to those on the exam. We realized that students focused on the graded assignments, with relatively few attempting the ungraded “nudged” problems. In this study we test whether nudges to focus on graded assignments, sent to a random subset of students within each class with the message that such problems are likely to appear on the exam, help students focus on core concepts more than when graded assignments are not nudged. In addition, we test whether nudges to practice ungraded problems, also carrying the hint that “questions like these are likely to appear on the exams,” increase the time students spend online with the ungraded problems and enhance performance on the test. These nudges to focus on practice problems will also be sent to a random sub-sample within each class. Lastly, we will vary the amount each graded assignment counts toward the final grade to see whether exercises that count more garner more online effort and improve performance on exams.
External Link(s)

Registration Citation

Citation
Dench, Daniel and Theodore Joyce. 2021. "Professor will this be on the Exam?." AEA RCT Registry. May 17. https://doi.org/10.1257/rct.2669-2.0
Former Citation
Dench, Daniel and Theodore Joyce. 2021. "Professor will this be on the Exam?." AEA RCT Registry. May 17. https://www.socialscienceregistry.org/trials/2669/history/91889
Experimental Details

Interventions

Intervention(s)
Intervention (Hidden)
Electronic nudges encouraging students to pay particular attention to specific problem sets, on the grounds that they cover core concepts and that similar problems are likely to appear on the exams. Randomization determines which students receive which sets of equally difficult problems.
Intervention Start Date
2018-01-27
Intervention End Date
2018-05-20

Primary Outcomes

Primary Outcomes (end points)
Probability that a problem set was completed (binary), correct answer to questions corresponding to intervention problems (binary), combined score on questions corresponding to intervention problems (integer), activity access corresponding to intervention (binary), combined activity access corresponding to intervention (integer), total time accessing activities related to intervention (log).
Primary Outcomes (explanation)

Secondary Outcomes

Secondary Outcomes (end points)
Secondary Outcomes (explanation)

Experimental Design

Experimental Design
The study design will use within-classroom variation in graded problems to identify the effect of online practice problems in teaching basic economic concepts to students. The study will also use within-person variation in the percentage of the final grade that each question is worth from week to week.
Experimental Design Details
Randomization occurs within each classroom, splitting students into two equal groups, call them A and B. In several weeks of the semester, students can complete online questions that count toward their final grade. In half of these weeks, group A will receive problem set X, measuring important basic economic concepts covered that week, as a graded assignment. Problem set X will also be available to group B, but as ungraded practice problems. Students in both groups A and B will be nudged to focus on problem set X. During the same weeks, group B will be assigned problem set Y, measuring different but equally challenging economic concepts covered that week, as a graded assignment. Problem set Y will be available to group A as ungraded practice problems. Neither group A nor group B will be nudged regarding problem set Y.

This structure allows us to contrast the effect on exam performance of graded versus ungraded problems when both are nudged. We can compare that difference with the corresponding graded-versus-ungraded difference for problem set Y, which receives no nudges. We hypothesize that nudging will narrow the gap in the probability of completing a graded versus an ungraded problem set. We can also isolate the effect of nudging more generally by estimating the difference in differences in the probability of completing problem sets X and Y, as well as in performance on exam questions similar to those in sets X and Y, as sketched below. Lastly, we can test whether problem sets that count more toward students' grades are more likely to be completed and more likely to improve student performance on exams.
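One way to make this contrast concrete (the notation here is ours, not taken from the registration): let p denote the probability that a student completes a problem set, with a superscript for the nudged set X or the un-nudged set Y and a subscript for graded (G) versus ungraded (U) status. The difference-in-differences described above is

\[ \text{DiD} \;=\; \big(p^{X}_{G} - p^{X}_{U}\big) \;-\; \big(p^{Y}_{G} - p^{Y}_{U}\big), \]

and the hypothesis that nudging narrows the graded-versus-ungraded gap corresponds to DiD < 0. The same contrast can be formed for scores on exam questions similar to those in sets X and Y.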
Randomization Method
Done by a program in Stata, within each classroom, using the set seed command and the runiform() function; a minimal sketch follows.
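A minimal Stata sketch of this kind of within-classroom randomization (the seed value and variable names are illustrative, not taken from the study's actual program):

set seed 20171227                                  // fix the seed so the draw is reproducible
generate double u = runiform()                     // one uniform draw per student
sort classroom u                                   // order students randomly within each classroom
by classroom: generate byte group = (_n <= _N/2)   // first half of each classroom to group A (1), rest to group B (0)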
Randomization Unit
Individual/Student
Was the treatment clustered?
No

Experiment Characteristics

Sample size: planned number of clusters
0 clusters
Sample size: planned number of observations
Approximately 660 students, although this may depend on registration and course drop patterns after the semester begins. In some estimations we will use two observations per student (for midterm and final exams) or estimate directly on question-level outcomes (as detailed in the full analysis plan).
Sample size (or number of clusters) by treatment arms
Approximately 330 individuals per treatment arm, with 2 treatment arms (A, B)
Minimum detectable effect size for main outcomes (accounting for sample design and clustering)
0.2 standard deviations on overall exam score, and 0.1 standard deviations in number of attempts of online practice problems.
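As a rough consistency check (assuming 80% power, a 5% two-sided test, and two equal arms of about 330 students; these assumptions are not stated in the registration), the standard two-sample formula gives

\[ \text{MDE} \approx (1.96 + 0.84)\,\sigma\,\sqrt{\tfrac{1}{330} + \tfrac{1}{330}} \approx 0.22\,\sigma, \]

which is in line with the 0.2 standard deviation figure reported above for the overall exam score.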
Supporting Documents and Materials

There is information in this trial unavailable to the public.
IRB

Institutional Review Boards (IRBs)

IRB Name
CUNY University Integrated Institutional Review Board
IRB Approval Date
2017-12-01
IRB Approval Number
#2015-1310

Post-Trial

Post Trial Information

Study Withdrawal

There is information in this trial unavailable to the public.

Intervention

Is the intervention completed?
Yes
Intervention Completion Date
May 09, 2018, 12:00 +00:00
Data Collection Complete
Yes
Data Collection Completion Date
May 20, 2018, 12:00 +00:00
Final Sample Size: Number of Clusters (Unit of Randomization)
833 students
Was attrition correlated with treatment status?
No
Final Sample Size: Total Number of Observations
9,996 student-problem set pairs.
Final Sample Size (or Number of Clusters) by Treatment Arms
Treatment Arm 1, specific problem set assignment: 423. Treatment Arm 2, specific problem set assignment: 410.
Data Publication


Is public data available?
No

Program Files

Reports, Papers & Other Materials

Relevant Paper(s)

Reports & Other Materials