Nudging Citizenship: Evidence from a Field Experiment on Course Evaluations

Last registered on April 24, 2026

Pre-Trial

Trial Information

General Information

Title
Nudging Citizenship: Evidence from a Field Experiment on Course Evaluations
RCT ID
AEARCTR-0017972
Initial registration date
April 15, 2026

First published
April 24, 2026, 8:09 AM EDT


Locations

There is information in this trial that is not available to the public.

Primary Investigator

Affiliation
ESSEC Business School

Other Primary Investigator(s)

PI Affiliation
ESSEC Business School

Additional Trial Information

Status
In development
Start date
2026-04-16
End date
2026-08-01
Secondary IDs
Prior work
This trial does not extend or rely on any prior RCTs.
Abstract
Many actions in organizations do not directly benefit the individuals who take them, yet they create meaningful value for others or for the organization as a whole. Such behaviors are often described as citizenship behaviors. Despite their importance for learning and improvement, it remains unclear how organizations can encourage individuals to engage in them at scale. We examine this question in the setting of student course evaluations at a major French business school. Completing an evaluation is a voluntary contribution that can support teaching and program improvements, yet it offers limited immediate private benefits to the individual student. As a result, participation is often lower than institutions would like, even though evaluations are an important input into improvement processes. Our field experiment is cluster randomized at the student cohort level. Cohorts are defined by program and year, so all students within a given cohort are assigned to the same condition. Our intervention is delivered through the school’s internal platform during the evaluation window. In the control condition, students receive the usual reminder to complete evaluations through the standard notification system. In the first treatment condition, students see a banner that summarizes their own evaluation participation history for the current academic year, making their individual past behavior salient and easy to track. In the second treatment condition, students see course-specific information that reports the peer participation rate for each course they are taking, making participation more socially visible in aggregate. Our main outcome is whether students complete course evaluations. We test whether either prompt increases completion relative to business as usual, and whether the two prompts differ in their effects.
External Link(s)

Registration Citation

Citation
Kshatriya, Anil and Maren Mickeler. 2026. "Nudging Citizenship: Evidence from a Field Experiment on Course Evaluations." AEA RCT Registry. April 24. https://doi.org/10.1257/rct.17972-1.0
Experimental Details

Interventions

Intervention(s)
The intervention consists of light-touch informational nudges delivered through the school’s internal platform during the course evaluation window. The intervention does not change the evaluation questionnaire or the evaluation process. It only changes what information students see on the platform when evaluations are open.

In the control condition, students receive the standard reminder through the usual notification system, with no additional nudge on the platform beyond business as usual. In the first treatment condition, students are shown a banner on the internal platform summarizing their own evaluation participation history during the current academic year. The banner provides a simple overview of completion to date, intended to make the student’s participation history salient. In the second treatment condition, students are shown course-specific information on the platform indicating the peer participation rate for each course they are taking. This information is presented in aggregate and does not identify individual students.
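For concreteness, the sketch below shows one way the aggregate statistic displayed in the second treatment condition could be computed from eligibility and completion records. The table layout and the column names (course_id, student_id, completed) are illustrative assumptions, not the school's actual data schema.

```python
# Illustrative sketch (assumed schema): aggregate peer participation rate
# per course, of the kind displayed in the Treatment 2 banner.
import pandas as pd

# Hypothetical records: one row per eligible student-course pair.
records = pd.DataFrame({
    "course_id": ["MKT101", "MKT101", "MKT101", "FIN200", "FIN200"],
    "student_id": [1, 2, 3, 1, 4],
    "completed": [True, False, True, False, True],
})

# Share of eligible students who completed the evaluation, by course.
# Only the aggregate rate is reported; no individual student is identified.
participation = (
    records.groupby("course_id")["completed"]
    .mean()
    .rename("peer_participation_rate")
    .reset_index()
)
print(participation)
```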


Intervention Start Date
2026-04-16
Intervention End Date
2026-08-01

Primary Outcomes

Primary Outcomes (end points)
Our primary outcome is the number of course evaluations a student completes during the evaluation window. We operationalize this as a student-level count variable that sums the evaluations submitted across all courses in which the student is eligible to provide an evaluation within the relevant period.
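A minimal sketch of how this count could be constructed is shown below, assuming a hypothetical table with one row per eligible student-course pair; the column names are illustrative and not the actual platform export.

```python
# Illustrative sketch (assumed schema): primary outcome as a student-level
# count of completed evaluations across all eligible courses in the window.
import pandas as pd

# Hypothetical records: one row per eligible student-course pair.
records = pd.DataFrame({
    "student_id": [1, 1, 2, 2, 3],
    "course_id": ["MKT101", "FIN200", "MKT101", "FIN200", "MKT101"],
    "completed": [True, True, False, True, False],
})

# Sum completions per student; a student who completes nothing still gets a
# zero because every eligible student-course pair appears as a row.
evals_completed = (
    records.groupby("student_id")["completed"]
    .sum()
    .rename("n_evaluations_completed")
    .reset_index()
)
print(evals_completed)
```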
Primary Outcomes (explanation)

Secondary Outcomes

Secondary Outcomes (end points)
We also collect the number of course evaluations that are started but not finished. In addition, we capture the average evaluation outcomes for each course.
Secondary Outcomes (explanation)

Experimental Design

Experimental Design
We conduct a three-arm field experiment at a French business school that is cluster randomized at the cohort level. Cohorts are defined by program and year, and all students within a cohort are assigned to the same condition. The intervention is implemented during the course evaluation window through the school’s internal platform and does not change the evaluation questionnaire or the evaluation process. In the control condition, students receive the usual evaluation reminder through the standard notification system. In Treatment 1, students see a banner on the internal platform that summarizes their own evaluation participation during the current academic year. In Treatment 2, students see course-specific information on the internal platform reporting the peer participation rate for each course they take, presented in aggregate.
Experimental Design Details
Not available
Randomization Method
Full randomization at the cohort level.
Randomization Unit
We randomize at the cohort level, where cohorts are defined by program and year. All students within a given cohort are assigned to the same experimental condition. There is no individual level random assignment.
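As an illustration, the sketch below assigns hypothetical cohort labels to the three conditions. The labels, the seed, and the permutation-based procedure are assumptions for exposition, not the registered assignment code.

```python
# Illustrative sketch (assumed cohort labels): cohort-level assignment to
# the three experimental conditions.
import numpy as np

rng = np.random.default_rng(seed=2026)  # fixed seed only to make the example reproducible

cohorts = ["ProgramA-Year1", "ProgramA-Year2", "ProgramA-Year3"]  # hypothetical labels
arms = ["Control", "Treatment 1", "Treatment 2"]

# With three clusters and three arms, a random permutation gives each cohort
# a distinct condition; every student inherits the condition of their cohort.
assignment = dict(zip(rng.permutation(cohorts), arms))
print(assignment)
```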
Was the treatment clustered?
Yes

Experiment Characteristics

Sample size: planned number of clusters
There are three clusters (cohorts) in the program that participates in the experiment.
Sample size: planned number of observations
Overall, 1,994 students will participate in the experiment.
Sample size (or number of clusters) by treatment arms
Treatment 1: 619 students; Treatment 2: 507 students; Control: 868 students
Minimum detectable effect size for main outcomes (accounting for sample design and clustering)
IRB

Institutional Review Boards (IRBs)

IRB Name
ESSEC Business School Ethics Lab
IRB Approval Date
2026-02-23
IRB Approval Number
2026-016