Conduct and Consequences: Behavioral Rank and Academic Outcomes

Last registered on April 29, 2026

Pre-Trial

Trial Information

General Information

Title
Conduct and Consequences: Behavioral Rank and Academic Outcomes
RCT ID
AEARCTR-0018419
Initial registration date
April 22, 2026


First published
April 29, 2026, 3:28 PM EDT


Locations

Some information in this trial is unavailable to the public.

Primary Investigator

Affiliation
Monash University

Other Primary Investigator(s)

PI Affiliation
Monash University

Additional Trial Information

Status
In development
Start date
2026-04-27
End date
2026-12-31
Secondary IDs
Prior work
This trial does not extend or rely on any prior RCTs.
Abstract
Using administrative data from secondary schools in Greece, we document that students who stand out relative to their classmates in terms of disruptive behavior—measured by unexcused absences, i.e., the number of class hours for which a student is removed from the classroom by the teacher—experience worse behavioral, academic, and higher education outcomes. However, once we account for students’ relative academic performance within the classroom, academic outcomes and subsequent exam results are more strongly associated with academic rank than with behavioral rank. This pattern raises the question of which mechanisms translate relative standing into long-run outcomes.

To shed light on these mechanisms, we conduct a survey experiment with teachers. Participants are asked to evaluate anonymized student work based on short vignettes. In the first stage, teachers receive information about a student’s behavioral record and relative standing within the classroom. In a second stage, they are provided with additional information about the student’s academic performance and relative academic rank, with these dimensions experimentally varied across participants. We elicit grading decisions and expectations about students’ future academic and career outcomes, as well as whether teachers revise their evaluations when new information is introduced.

This design allows us to assess whether and how teachers’ evaluations and expectations respond to students’ relative behavioral and academic standing. By linking these responses to the patterns observed in the administrative data, the study aims to identify whether teacher behavior constitutes a key channel through which classroom rank effects shape student outcomes.
External Link(s)

Registration Citation

Citation
Megalokonomou, Rigissa and Tommaso Sartori. 2026. "Conduct and Consequences: Behavioral Rank and Academic Outcomes." AEA RCT Registry. April 29. https://doi.org/10.1257/rct.18419-1.0
Experimental Details

Interventions

Intervention(s)
The intervention consists of an online survey experiment administered to teachers. Participants are presented with short, anonymized vignettes describing a student and a piece of student work containing a mistake, and are asked to evaluate the work.

In the first stage, all participants receive identical student work and baseline profile information, along with information about the student’s GPA and behavioral record, including the number of unexcused absences and the student’s relative standing within their classroom (e.g., top or bottom of the class in terms of such behavior). Participants are asked to assign a grade, report their confidence in the evaluation, and provide expectations regarding the student’s future academic and career outcomes.

In the second stage, participants are shown the same student profile and work, augmented with additional information about the student’s relative academic rank within the classroom (e.g., top or bottom). The pairing between behavioral rank and academic rank is experimentally varied across participants. After receiving this information, participants are asked whether they would revise their initial evaluation and, if so, to provide an updated grade. Additional questions elicit the reasoning behind their decisions.

The intervention varies the information available to teachers about students’ relative behavioral and academic standing, allowing us to identify how such information affects grading decisions, confidence, and expectations.
Intervention Start Date
2026-04-27
Intervention End Date
2026-12-31

Primary Outcomes

Primary Outcomes (end points)
The primary outcomes are teachers’ evaluations and expectations about the student, measured before and after additional information on relative academic standing is revealed.

Stage 1: outcomes measured after the initial vignette

Participants are first shown a student profile containing baseline information, including the student’s number of unexcused absences, prior academic performance (GPA), and the student’s relative disruptiveness rank within the classroom (high or low). The primary outcomes at this stage are:

the grade assigned to the student’s work on a 0–10 scale;
whether the teacher reports that the mistake in the exercise is minor or substantial;
whether the teacher reports that the student’s classroom behavior was an important factor in determining the grade;
whether the teacher believes the student’s behavior will affect the student’s future academic performance;
whether the teacher believes the student’s behavior will affect the student’s future career path;
whether the teacher would encourage the student to pursue a competitive university degree;
whether the teacher would encourage the student to pursue a challenging professional path;
whether the teacher would encourage the student’s parents to push the student toward a competitive university degree;
whether the teacher would encourage the student’s parents to push the student toward a challenging professional path.
Stage 2: outcomes measured after academic rank information is added

Participants are then shown the same student profile, augmented with information on the student’s relative academic standing within the classroom (high or low), based on the previously reported GPA. No additional information on absolute academic performance is introduced.

The primary outcomes at this stage are:

whether the teacher chooses to revise the initial grade;
the updated grade assigned to the student’s work;
whether the teacher states that prior student performance should be taken into account when grading;
whether the teacher states that student performance relative to classmates should be taken into account when grading;
whether the teacher believes the student’s behavior will affect the student’s future academic performance;
whether the teacher believes the student’s behavior will affect the student’s future career path;
whether the teacher would encourage the student to pursue a competitive university degree;
whether the teacher would encourage the student to pursue a challenging professional path;
whether the teacher would encourage the student’s parents to push the student toward a competitive university degree;
whether the teacher would encourage the student’s parents to push the student toward a challenging professional path.

The main comparisons of interest are how these outcomes vary with the student’s disruptiveness rank in the first stage, and how revision decisions and updated beliefs vary when relative academic standing is revealed while holding constant absolute academic performance.
Primary Outcomes (explanation)
Most outcomes are directly measured survey responses. The main constructed outcomes are as follows.

Assigned grade: continuous variable ranging from 0 to 10 based on the score assigned by the teacher through the survey slider.
Grade revision indicator: binary variable equal to 1 if the teacher states that they wish to change the initial grade after academic rank information is revealed, and 0 otherwise.
Updated grade: continuous variable ranging from 0 to 10, measured only for teachers who choose to revise their initial grade.
Minor-error indicator / substantial-error indicator: binary variables equal to 1 if the teacher reports that the mistake in the exercise is minor or substantial, respectively.
Behavior-considered indicator: binary variable equal to 1 if the teacher states that the student’s classroom behavior was an important factor in determining the initial grade.
Future academic impact belief: binary variable equal to 1 if the teacher reports that the student’s behavior is likely to affect future academic performance.
Future career impact belief: binary variable equal to 1 if the teacher reports that the student’s behavior is likely to affect future career outcomes.
Competitive-degree recommendation: binary variable equal to 1 if the teacher reports that they would encourage the student to pursue a competitive university degree.
Challenging-career recommendation: binary variable equal to 1 if the teacher reports that they would encourage the student to pursue a challenging professional path.
Parental competitive-degree recommendation: binary variable equal to 1 if the teacher reports that they would encourage the student’s parents to push the student toward a competitive university degree.
Parental challenging-career recommendation: binary variable equal to 1 if the teacher reports that they would encourage the student’s parents to push the student toward a challenging professional path.
Prior-performance grading belief: binary variable based on whether, after academic rank information is revealed, the teacher reports that prior student performance should be taken into account when grading.
Relative-performance grading belief: binary variable based on whether, after academic rank information is revealed, the teacher reports that performance relative to classmates should be taken into account when grading.

For the post-information recommendation outcomes, we will also examine changes relative to the teacher’s initial responses by constructing indicators for whether the teacher switches their answer after academic rank information is revealed.
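As an illustration of how the constructed outcomes above could be coded from raw survey responses, the following is a hypothetical sketch; all column names (`initial_grade`, `wants_revision`, `revised_grade`, `rec_degree_s1`, `rec_degree_s2`) are illustrative assumptions, not the registered variable names.

```python
# Hypothetical coding of the main constructed outcomes from raw survey
# responses. Column names are illustrative assumptions only.
import pandas as pd

raw = pd.DataFrame({
    "initial_grade": [7.5, 4.0, 6.0],        # 0-10 slider, stage 1
    "wants_revision": ["Yes", "No", "Yes"],  # stage 2 revision question
    "revised_grade": [8.0, None, 5.5],       # asked only if revising
    "rec_degree_s1": ["Yes", "No", "No"],    # stage 1 recommendation
    "rec_degree_s2": ["Yes", "Yes", "No"],   # stage 2 recommendation
})

out = pd.DataFrame()
out["assigned_grade"] = raw["initial_grade"]
# Grade revision indicator: 1 if the teacher chooses to revise, else 0
out["revises"] = (raw["wants_revision"] == "Yes").astype(int)
# Updated grade: defined only for teachers who revise the initial grade
out["updated_grade"] = raw["revised_grade"].where(out["revises"] == 1)
# Switching indicator: recommendation changes after rank info is revealed
out["switches_rec"] = (raw["rec_degree_s1"] != raw["rec_degree_s2"]).astype(int)

print(out)
```

The same `where`-based pattern would apply to the other post-information switching indicators.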

Secondary Outcomes

Secondary Outcomes (end points)
Secondary Outcomes (explanation)

Experimental Design

Experimental Design
The study uses an online survey experiment with teachers, in which participants evaluate anonymized student work based on short vignettes.

Each participant is shown a student profile and a piece of student work containing a mistake and is asked to assign a grade and provide related evaluations. In the first stage, the student profile includes information about the student’s behavioral record (number of unexcused absences), prior academic performance (GPA), and the student’s relative standing within the classroom in terms of behavior (high or low disruptiveness rank).

In a second stage, participants are shown the same student profile augmented with information about the student’s relative academic standing within the classroom (high or low academic rank), based on the previously reported GPA. No additional information on absolute academic performance is introduced.

The information about behavioral rank and academic rank is experimentally varied across participants.

The design allows us to examine how teachers’ evaluations and expectations respond to students’ relative behavioral and academic standing, holding constant absolute academic performance, as well as whether teachers revise their evaluations when new information on relative standing is provided.
Experimental Design Details
Not available
Randomization Method
Randomization is conducted automatically by the Qualtrics survey platform using its built-in randomization functions. Participants are randomly assigned to treatment conditions at the individual level by a computer algorithm at the time they access the survey link. Based on the broad field they choose (sciences or humanities), teachers will be asked to review either the mathematics or the literature test.
Randomization Unit
The unit of randomization is the individual participant (teacher). Each participant is independently assigned to a treatment condition by the survey platform.

There is a single level of randomization, and no clustering is used.
Was the treatment clustered?
No

Experiment Characteristics

Sample size: planned number of clusters
Randomization is at the level of the individual teacher, so each teacher constitutes a single cluster. The planned number of clusters is therefore between 600 and 800 teachers, with the final number potentially exceeding this range depending on the success of the recruitment process.
Sample size: planned number of observations
The planned number of observations is between 600 and 800 teachers, with the final number potentially exceeding this range depending on the success of the recruitment process. Since the design is not clustered, the number of observations coincides with the number of clusters.
Sample size (or number of clusters) by treatment arms
Participants will be randomized across the four treatment arms of the 2×2 design:

High disruptiveness rank × High academic rank
High disruptiveness rank × Low academic rank
Low disruptiveness rank × High academic rank
Low disruptiveness rank × Low academic rank

Given the target sample size, this implies approximately 150 to 200 teachers per treatment arm overall, with the final number depending on recruitment success. Because the experiment also includes two subject-specific versions of the vignette (scientific fields and humanities), the effective number of observations per treatment arm within each field is expected to be approximately 75 to 100 teachers, assuming a roughly balanced composition across fields.
Minimum detectable effect size for main outcomes (accounting for sample design and clustering)
Because the unit of randomization is the individual teacher and no clustering is used, power calculations are based on individual-level assignment with equal allocation across the four treatment cells of the 2×2 design. Calculations assume a two-sided test with significance level 0.05 and power 0.80.

For the main pooled comparisons on continuous outcomes, such as the grade assigned to the student’s work, a total sample of 600 teachers implies a minimum detectable effect size of approximately 0.23 standard deviations, while a total sample of 800 teachers implies a minimum detectable effect size of approximately 0.20 standard deviations. For analyses conducted separately by teaching field, assuming a roughly balanced split between scientific and humanities teachers, the effective sample size is approximately 300–400 teachers per field. This implies a minimum detectable effect size of approximately 0.32 standard deviations with 600 total teachers and 0.28 standard deviations with 800 total teachers.

For binary outcomes, the corresponding minimum detectable effect is approximately 10–12 percentage points for pooled comparisons and 14–16 percentage points for field-specific comparisons, assuming a baseline proportion of 0.5. These calculations are based on the main comparisons of interest and are intended as benchmarks; actual precision will depend on realized sample size and the variance of each outcome.
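The minimum detectable effects quoted above follow from the standard two-sample formula, MDE = (z_{1−α/2} + z_{power}) · √(1/n₁ + 1/n₂) in standard-deviation units. A minimal Python check, assuming equal allocation across the two compared groups (the helper `mde_sd` is illustrative, not part of the registered analysis plan):

```python
# Minimal sketch of the registry's MDE benchmarks, assuming a two-sided
# two-sample test with equal group sizes. Uses only the standard library.
from statistics import NormalDist

def mde_sd(n_per_group, alpha=0.05, power=0.80):
    """Minimum detectable effect in standard-deviation units."""
    z = NormalDist().inv_cdf(1 - alpha / 2) + NormalDist().inv_cdf(power)
    return z * (2 / n_per_group) ** 0.5

# Pooled comparisons: 600 or 800 teachers split into two groups
print(round(mde_sd(300), 2))  # ~0.23 SD with 600 teachers
print(round(mde_sd(400), 2))  # ~0.20 SD with 800 teachers

# Field-specific comparisons: 300-400 teachers per field
print(round(mde_sd(150), 2))  # ~0.32 SD
print(round(mde_sd(200), 2))  # ~0.28 SD

# Binary outcome at baseline proportion 0.5: SD = 0.5, so in percentage
# points the pooled MDE is roughly
print(round(mde_sd(300) * 0.5 * 100, 1))  # ~11.4 pp, within the 10-12 range
```

The field-specific binary MDEs of 14–16 percentage points follow the same way, multiplying the 0.28–0.32 SD figures by 0.5.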
IRB

Institutional Review Boards (IRBs)

IRB Name
Monash University Human Research Ethics Committee
IRB Approval Date
2026-02-17
IRB Approval Number
51031
Analysis Plan

Some information in this trial is unavailable to the public.