Nudging to Curb Online Cheating

Last registered on May 17, 2021


Trial Information

General Information

Nudging to Curb Online Cheating
Initial registration date
January 15, 2019


First published
February 01, 2019, 3:15 AM EST


Last updated
May 17, 2021, 11:12 AM EDT



Primary Investigator

Theodore Joyce, Baruch College, CUNY and NBER

Other Primary Investigator(s)

Additional Trial Information

Start date
End date
Secondary IDs
The internet greatly facilitates the sharing of course work in college. Students can send screenshots of their answers, copy homework, and share the files they turn in. This behavior, however, is counterproductive to learning the skills required for productivity in the workforce. Using an online Excel module, students learn skills necessary for work in accounting, finance, management, and computer science. We have the ability to detect when students share files using this module. The course syllabus has a clear statement regarding cheating, but previous semesters suggest that 10 percent of students cheat on assignments by sharing all or part of their Excel files. To lessen cheating, we propose to send a series of nudges to two randomized groups. We will test how nudging students via email affects subsequent cheating episodes and what role sanctions play in these episodes.
External Link(s)

Registration Citation

Joyce, Theodore. 2021. "Nudging to Curb Online Cheating." AEA RCT Registry. May 17.
Experimental Details


Email nudges will be sent to a randomized group of students to inform them of the consequences of cheating. If a student then cheats, a follow-up nudge will be sent informing them of the consequences of their cheating.
Intervention Start Date
Intervention End Date

Primary Outcomes

Primary Outcomes (end points)
Using someone else's Excel file as a submission, or sharing one's own file to be used by others for their submission.
Primary Outcomes (explanation)

Secondary Outcomes

Secondary Outcomes (end points)
Cheating in future courses.
Secondary Outcomes (explanation)

Experimental Design

Experimental Design
Across five courses, students are required to build three to four spreadsheets. The syllabus for each course includes the following statement: "Academic Integrity. The Excel software detects files shared with other students and generates a report for the instructors with the names of plagiarizing students and all parties involved. Students caught cheating will be put on a watch list pending further action with their instructor." Despite this clear warning, we have discovered that more than 10 percent of students are using another student's spreadsheet. In an effort to deter cheating, we will send an email to a randomized group of students prior to each assignment reminding them that the system can detect cheating and that they will be put on a watch list pending further action. Students who receive the email warning and who are subsequently caught cheating will be sent a second email informing them that the system has flagged them for sharing work with another student or using the work of another student.
Experimental Design Details
Each class that uses the online system assigns three to four projects per semester. Note that some students will be in multiple courses. To balance the warnings, we will randomly divide each course into two groups, A and B. Group A will receive the warning for the first project; Group B will receive the warning for the second project. Thus, Group B serves as the control group for Group A on the first assignment, and we can test the effect of email warnings on cheating by comparing Group A to Group B on that assignment. Group B, and not Group A, will be warned about cheating for the second project. This compares the effect of being warned recently (Group B) to the effect of being warned earlier in the semester (Group A). In courses that assign only three projects there will be no additional warnings; this ensures that both groups receive the same number of warnings. In classes that assign four projects, we will send Group A a warning on the third project and Group B a warning on the fourth project. In this phase, Group B will serve as the control for Group A. Both groups will have already received one nudge, so this phase tests for persistence of the warning's effect.

We will also examine the effect of sanctions on students who are caught cheating. In courses with three projects, we will send an email to students who have cheated on either project 1 or project 2, informing them that they have been put on a watch list. There is no obvious comparison group in this case, as this aspect of the experiment is not randomized. Nevertheless, we are interested in whether students who have been caught, and told they will be reported to their professor, cheat again on the third project.

In courses with four projects, we will again send an email to all students caught cheating on projects 1, 2, or both, informing them that they have been put on a watch list and that any further evidence of cheating on projects 3 or 4 will result in their name being sent to their professor. Again, this does not use any randomization for identification, as cheating is self-selected. However, we are very interested in student behavior after they have been caught cheating.
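The warning schedule described above can be summarized in a short sketch (Python; the function name and encoding are ours, for illustration only):

```python
def warned(group, project, n_projects):
    """Return True if a student in `group` is warned before `project`.

    Encodes the schedule described above: in all courses, Group A is
    warned on project 1 and Group B on project 2; in four-project
    courses, Group A is also warned on project 3 and Group B on project 4.
    """
    if group == "A":
        return project == 1 or (n_projects == 4 and project == 3)
    if group == "B":
        return project == 2 or (n_projects == 4 and project == 4)
    return False
```

On any given project, the unwarned group serves as the contemporaneous control for the warned group.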
Randomization Method
Done by a program in Stata using set seed and the runiform() function. We will randomize within each course.
Randomization Unit
Individual/Student stratified by course.
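A minimal sketch of the stratified randomization, in Python rather than Stata (the actual assignment was done in Stata as noted above; the seed value here is a placeholder):

```python
import random

def assign_groups(students, seed=20190115):
    """Randomly split students within each course into Groups A and B.

    `students` is a list of (student_id, course_id) pairs. The seed is
    a placeholder, not the one used in the trial.
    """
    rng = random.Random(seed)
    by_course = {}
    for student_id, course_id in students:
        by_course.setdefault(course_id, []).append(student_id)
    assignment = {}
    for course_id, roster in by_course.items():
        rng.shuffle(roster)                   # random order within course
        half = len(roster) // 2
        for s in roster[:half]:
            assignment[(s, course_id)] = "A"  # nudged before project 1
        for s in roster[half:]:
            assignment[(s, course_id)] = "B"  # nudged before project 2
    return assignment
```

Shuffling within each course and splitting at the midpoint yields the course-stratified 50/50 split described above; fixing the seed makes the assignment reproducible.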
Was the treatment clustered?

Experiment Characteristics

Sample size: planned number of clusters
Approximately 5,000 students
Sample size: planned number of observations
Approximately 17,500 problem sets
Sample size (or number of clusters) by treatment arms
Approximately 8,750 observations per treatment arm with two treatment arms (A, B).
Minimum detectable effect size for main outcomes (accounting for sample design and clustering)
With an alpha of 0.05 and power of 0.9, and assuming 50% treatment and 50% control, 5,000 clusters with 3.5 observations per cluster, and an intracluster correlation of 0.785, MDES = (3.24 / (0.5 * sqrt(5,000))) * sqrt(0.785 + (1 - 0.785)/3.5) ≈ 0.085 standard deviations. With a baseline cheating rate of 0.1, we would be able to detect a shift in cheating of 0.77 percentage points.
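The registered MDES formula can be transcribed directly (a Python sketch; the parameter names are ours):

```python
import math

M = 3.24      # multiplier for alpha = 0.05 (two-sided) and power = 0.90
P = 0.5       # proportion treated (50/50 split)
J = 5000      # number of clusters (students)
n = 3.5       # observations (problem sets) per cluster
rho = 0.785   # intracluster correlation

# MDES = M / (P * sqrt(J)) * sqrt(rho + (1 - rho) / n), in standard deviations
mdes = (M / (P * math.sqrt(J))) * math.sqrt(rho + (1 - rho) / n)
print(round(mdes, 3))  # -> 0.084
```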

Institutional Review Boards (IRBs)

IRB Name
CUNY University Integrated Institutional Review Board
IRB Approval Date
IRB Approval Number


Post Trial Information

Study Withdrawal

There is information in this trial unavailable to the public.


Is the intervention completed?
Intervention Completion Date
May 05, 2019, 12:00 +00:00
Data Collection Complete
Data Collection Completion Date
May 10, 2019, 12:00 +00:00
Final Sample Size: Number of Clusters (Unit of Randomization)
Sample Size: 3,888 student-class combinations
Was attrition correlated with treatment status?
Final Sample Size: Total Number of Observations
Sample Size: 10,267 student-class-problem set combinations
Final Sample Size (or Number of Clusters) by Treatment Arms
Treatment Arm 1, early email nudge: 1,703 student-class combinations. Treatment Arm 2, late email nudge: 1,671 student-class combinations.
Data Publication

Data Publication

Is public data available?

Program Files

Program Files
Reports, Papers & Other Materials

Relevant Paper(s)

Reports & Other Materials