Do students cheat?

Last registered on April 06, 2026

Pre-Trial

Trial Information

General Information

Title
Do students cheat?
RCT ID
AEARCTR-0018279
Initial registration date
April 02, 2026

Initial registration date is when the trial was registered.

It corresponds to when the registration was submitted to the Registry to be reviewed for publication.

First published
April 06, 2026, 8:16 AM EDT

First published corresponds to when the trial was first made public on the Registry after being reviewed.

Locations

Primary Investigator

Affiliation
University of Warwick

Other Primary Investigator(s)

PI Affiliation
University of Warwick

Additional Trial Information

Status
Completed
Start date
2023-09-01
End date
2025-12-31
Secondary IDs
Prior work
This trial does not extend or rely on any prior RCTs.
Abstract
We test for peer-to-peer cheating effects on academic performance. Exploiting systematic variation in quiz question reuse across randomly assigned classes, we find that each additional 10 students with prior question exposure increases scores by 0.06 to 0.08 points (0.03 to 0.04 SD). Students tested last, when all peers know the questions, score 1.2 points (0.6 SD) higher than those tested first. These findings show that advance information transmission through social networks meaningfully distorts performance measurement, which is relevant to any setting where evaluations aim to assess individual capabilities.
External Link(s)

Registration Citation

Citation
Boedts, Tristan and Joshua Fullard. 2026. "Do students cheat?" AEA RCT Registry. April 06. https://doi.org/10.1257/rct.18279-1.0
Experimental Details

Interventions

Intervention(s)
The data we use come from three cohorts of postgraduate students at a UK business school. In a compulsory module, students complete three in-class quizzes designed to assess understanding of course material. Each 15-minute quiz consists of multiple-choice and short-answer questions.

Students are randomly assigned to classes by the university before the start of term. Within each class, students answer a series of quiz questions. In this paper we exploit systematic variation in question reuse across cohorts and quizzes.
Intervention (Hidden)
More details:

The data we use come from three cohorts of postgraduate students (n = 861) at a UK business school. In a compulsory module, students complete three in-class quizzes designed to assess understanding of course material. Each 15-minute quiz consists of multiple-choice and short-answer questions.

The university randomly assigned students to classes before the term started, with each class containing no more than 50 students. Within each class, students self-selected into groups of 5 to 7 members for the quiz. The quiz format was collaborative: the class teacher displayed each question, groups discussed briefly and recorded answers on whiteboards, then revealed answers simultaneously. Phone and laptop use was prohibited and monitored by teaching faculty. To incentivise participation, the highest-scoring group in each class received a small prize (often sweets/candy).

Several features make this setting ideally suited to studying peer-to-peer information transmission. Group-level incentives create competition among teams while encouraging within-group collaboration, and student self-selection into groups means teams likely reflect actual friendship networks. This is important because peer-to-peer cheating occurs through trusted social connections, and our design captures these network structures.

Our identification strategy exploits systematic variation in question reuse across cohorts and quizzes. In three quizzes (cohort 1 quizzes 1-2 and cohort 2 quiz 1) all students answered identical questions; in three quizzes (cohort 1 quiz 3, cohort 2 quiz 2, and cohort 3 quiz 1) the questions varied by day; and in three quizzes (cohort 2 quiz 3 and cohort 3 quizzes 2-3) the questions varied by class.

Students were explicitly told not to share questions or answers with peers in later classes. Better performance in later classes when questions are reused indicates academic misconduct via peer-to-peer information transmission. Our key explanatory variable counts how many students at the start of each class had previously been exposed to the same questions.
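The exposure-count variable described above can be sketched in code. This is a minimal illustration under assumed names and data structures (`Session`, `question_set`, `start_time`, `n_students` are all hypothetical, not the authors' actual dataset fields): for each class session, it sums the students in every earlier session that used the same question set.

```python
# Hypothetical sketch of the key explanatory variable: the number of
# students already exposed to a class's quiz questions when that class
# starts. All field names are illustrative assumptions.
from dataclasses import dataclass

@dataclass
class Session:
    question_set: str   # identifier for the set of quiz questions used
    start_time: float   # when this class sat the quiz (any ordered scale)
    n_students: int     # number of students taking the quiz in this class

def prior_exposure_counts(sessions):
    """For each session, count students in strictly earlier sessions
    that used the same question set."""
    counts = []
    for s in sessions:
        exposed = sum(
            other.n_students
            for other in sessions
            if other.question_set == s.question_set
            and other.start_time < s.start_time
        )
        counts.append(exposed)
    return counts

sessions = [
    Session("setA", 1.0, 40),
    Session("setA", 2.0, 45),
    Session("setB", 2.0, 38),  # different questions: no prior exposure
    Session("setA", 3.0, 42),
]
print(prior_exposure_counts(sessions))  # [0, 40, 0, 85]
```

When questions vary by day or by class (as in six of the nine quizzes), sessions simply share no `question_set` with earlier sessions, so the count is zero by construction.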
Intervention Start Date
2023-09-01
Intervention End Date
2025-12-31

Primary Outcomes

Primary Outcomes (end points)
Quiz performance: whether students in later classes perform better when the same questions are reused.
Primary Outcomes (explanation)

Secondary Outcomes

Secondary Outcomes (end points)
Secondary Outcomes (explanation)

Experimental Design

Experimental Design
Exploit variation in question reuse to evaluate the impact of peer effects on academic performance.
Experimental Design Details
Randomization Method
Randomisation of students to classrooms is performed by the university.
Randomization Unit
Students to classes.
Was the treatment clustered?
No

Experiment Characteristics

Sample size: planned number of clusters
3 cohorts of postgraduate students
Sample size: planned number of observations
861
Sample size (or number of clusters) by treatment arms
861 students across 3 student cohorts. Students take part in three quizzes over the term.
Minimum detectable effect size for main outcomes (accounting for sample design and clustering)
IRB

Institutional Review Boards (IRBs)

IRB Name
IRB Approval Date
IRB Approval Number

Post-Trial

Post Trial Information

Study Withdrawal

There is information in this trial unavailable to the public.

Intervention

Is the intervention completed?
No
Data Collection Complete
Data Publication

Data Publication

Is public data available?
No

Program Files

Program Files
Reports, Papers & Other Materials

Relevant Paper(s)

Reports & Other Materials