Performance Observability and Feedback Avoidance among Pupils

Last registered on November 18, 2025

Trial Information

General Information

Title
Performance Observability and Feedback Avoidance among Pupils
RCT ID
AEARCTR-0017152
Initial registration date
October 31, 2025

First published
October 31, 2025, 9:27 AM EDT

Last updated
November 18, 2025, 7:25 AM EST

Locations

Some information in this trial is unavailable to the public; access can be requested through the Registry.

Primary Investigator

Affiliation
University of St Andrews

Other Primary Investigator(s)

PI Affiliation
Azim Premji University

Additional Trial Information

Status
In development
Start date
2025-11-01
End date
2026-04-30
Secondary IDs
Prior work
This trial does not extend or rely on any prior RCTs.
Abstract
This preregistration describes a field experiment implemented on the Maidaan educational platform in India. The study examines how feedback observability (anonymous vs. public) and task difficulty (easy vs. hard) jointly affect students’ willingness to engage in an additional learning opportunity. The experiment follows a 2×2 factorial design, implemented over four tournament waves between November 2025 and January 2026. This updated version integrates referee feedback, removes the prior control condition, and adds placeholders for pending operational details.
This section draws on the original preregistration’s discussion of ego-utility and feedback avoidance mechanisms. In many learning and selection environments, performance is publicly observable. Ego-utility theories predict that when performance feedback is public and diagnostic, individuals self-protect by avoiding tasks or reducing effort to shield beliefs about their ability (Kőszegi, 2006; Möbius et al., 2022). These mechanisms are especially relevant for school-aged children in competitive digital environments, such as the Maidaan platform.
Understanding how feedback visibility affects motivation is crucial for digital learning environments. Publicly observable performance can enhance social learning but may also create ego concerns that reduce participation, especially after poor results. This experiment explores these mechanisms in a naturalistic setting, leveraging the Maidaan platform’s large student base and regular quiz tournaments.
External Link(s)

Registration Citation

Citation
Agarwal, Garima and Gerhard Riener. 2025. "Performance Observability and Feedback Avoidance among Pupils." AEA RCT Registry. November 18. https://doi.org/10.1257/rct.17152-1.1
Sponsors & Partners

Some information in this trial is unavailable to the public; access can be requested through the Registry.
Experimental Details

Interventions

Intervention(s)
Students participating in academic quiz tournaments on the Maidaan digital learning platform are invited to take part in an additional voluntary “practice” round. The invitation message and feedback display vary in how performance information is presented (publicly or anonymously) and in how the difficulty of the practice round is labeled (easy or hard). The study examines how different formats of feedback and task framing affect students’ willingness to engage in further learning activities.
Intervention Start Date
2025-11-01
Intervention End Date
2026-01-31

Primary Outcomes

Primary Outcomes (end points)
Opt-in to the bonus round (categorical: 3 = finished, 2 = started, 1 = clicked to play, 0 = did not opt in). The main outcome of interest is binary (3 vs. 0).
Primary Outcomes (explanation)

Secondary Outcomes

Secondary Outcomes (end points)
Conditional on opt-in: bonus-round performance (score and accuracy), engagement with leaderboards (the ‘Leaderboard’ and ‘Your Stats’ tabs), and the timing of opt-in (minutes/hours after the invitation message). Engagement data are derived from Maidaan backend logs where available.
Secondary Outcomes (explanation)

Experimental Design

Experimental Design
The study is a 2×2 randomized field experiment implemented on the Maidaan educational platform in India with students in grades 3–9. After completing a regular academic quiz tournament, students receive a standardized WhatsApp message inviting them to play a short, optional “bonus practice round.” The experiment varies two features of this invitation:
(1) Feedback observability — whether performance on the bonus round is displayed publicly on the leaderboard or anonymously, and
(2) Task difficulty label — whether the bonus round is described as easy or hard.
Participation in the bonus round is entirely voluntary. The main outcome is whether students choose to play. The study examines how different ways of presenting feedback and difficulty influence students’ willingness to re-engage with learning tasks.
Experimental Design Details
Not available
Randomization Method
Randomization occurs at the individual level within each wave, using stratified block randomization (block size = 4; one student per arm). Stratification variables: school, grade, gender, performance tier (Final Rank, DNQ narrow, DNQ large, DNP), school type (contract/non-contract), and pool. Returning students are re-randomized independently in each wave; prior assignments are not carried forward. We record the treatment sequence each participant was exposed to and control for previous treatments.
Randomization Unit
Individual student.
Randomization is conducted at the individual level within each tournament wave. Stratified block randomization (block size = 4; one student per treatment arm) is implemented within strata defined by school, grade, gender, performance tier (Final Rank, DNQ narrow, DNQ large, DNP), school type (contract/non-contract), and pool.

Returning students are re-randomized independently in each wave (November 1 and November 15, 2025; January 10 and January 31, 2026). Analyses account for repeated participation by clustering standard errors at the individual level and using school fixed effects.
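The stratified permuted-block scheme described above can be sketched in Python. This is a minimal illustration, not the study's actual implementation; the function name, field names, and arm labels are hypothetical, and an incomplete final block within a stratum is simply left partial.

```python
import random
from collections import defaultdict

# Hypothetical labels for the four 2x2 arms (observability x difficulty).
ARMS = ["public_easy", "public_hard", "anon_easy", "anon_hard"]

def assign_blocks(students, strata_key, seed=0):
    """Stratified permuted-block randomization (block size 4, one student per arm).

    `students` is a list of dicts; `strata_key` maps a student to their
    stratum (e.g. school, grade, gender, performance tier, school type, pool).
    Returns a dict mapping student id -> assigned arm.
    """
    rng = random.Random(seed)
    by_stratum = defaultdict(list)
    for s in students:
        by_stratum[strata_key(s)].append(s)

    assignment = {}
    for members in by_stratum.values():
        rng.shuffle(members)  # random order within the stratum
        block = []
        for i, s in enumerate(members):
            if i % 4 == 0:
                block = ARMS[:]      # start a fresh block of the 4 arms
                rng.shuffle(block)   # permute arm order within the block
            assignment[s["id"]] = block[i % 4]
    return assignment
```

Within every complete block of four students in a stratum, each arm appears exactly once, which keeps arms balanced on all stratification variables.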
Was the treatment clustered?
Yes

Experiment Characteristics

Sample size: planned number of clusters
67 schools.

The experiment includes approximately 67 distinct schools (7 contract schools and 60 non-contract schools) participating across four tournament waves, with an average of 80–120 students per school per wave.
Sample size: planned number of observations
Approximately 20,000 student-wave observations (≈ 5,000 unique students × 4 waves). Each student is randomized in up to four tournament waves between November 2025 and January 2026. The expected total includes both contract and non-contract school participants from 67 schools. Analyses will pool all wave-level observations and cluster standard errors at the school and individual levels.
Sample size (or number of clusters) by treatment arms
Public × Easy: ≈ 5,000 student-wave observations (≈1,250 per wave; students from all 67 schools)
Public × Hard: ≈ 5,000 student-wave observations (≈1,250 per wave; students from all 67 schools)
Anonymous × Easy: ≈ 5,000 student-wave observations (≈1,250 per wave; students from all 67 schools)
Anonymous × Hard: ≈ 5,000 student-wave observations (≈1,250 per wave; students from all 67 schools)
Minimum detectable effect size for main outcomes (accounting for sample design and clustering)
Main outcome: opt-in to bonus round (binary).
Baseline rate (p₀): 0.25 (pilot-based, delivered subsample).
Standard deviation (Bernoulli): √[p₀(1−p₀)] = 0.433.
Unit for effect size: percentage-point (pp) difference in opt-in rates.
Sample design assumptions (used for MDES): 2×2 factorial with balanced arms; ≈5,000 student-wave observations per arm (≈20,000 total); 67 school clusters; individual randomization, but SEs clustered by school; average cluster size across waves ≈300 students/school (≈75 per school per wave × 4 waves).
α = 0.025 (Bonferroni across 2 primary tests); power = 0.80.
ICC (school) assumed 0.01–0.02 → design effect ≈ 1.8–2.5.
Minimum detectable effect size (MDES): ≈6.5 pp (if ICC = 0.01; DE ≈ 1.8); ≈8.0 pp (if ICC = 0.02; DE ≈ 2.5).
In SD units: 6.5 pp / 0.433 ≈ 0.15 SD; 8.0 pp / 0.433 ≈ 0.18 SD.
Relative change vs. baseline: 6.5 pp on 25% = +26%; 8.0 pp on 25% = +32%.
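A textbook normal-approximation MDES for a two-sample comparison of proportions, inflated by a design effect, can be computed as below. This is a sketch under the registration's stated inputs (p₀ = 0.25, ≈5,000 observations per arm, α = 0.025, power = 0.80, DE ≈ 1.8–2.5); the simplified formula need not reproduce the registration's quoted figures, which may reflect additional clustering adjustments, and the function name is hypothetical.

```python
from statistics import NormalDist

def mdes_proportion(p0, n_per_arm, alpha=0.025, power=0.80, design_effect=1.0):
    """MDES (in proportion points) for a two-sided two-sample test of
    proportions, with variance inflated by a design effect for clustering."""
    z_alpha = NormalDist().inv_cdf(1 - alpha / 2)   # critical value, two-sided
    z_power = NormalDist().inv_cdf(power)           # power quantile
    var = 2 * p0 * (1 - p0) / n_per_arm             # variance of the difference
    return (z_alpha + z_power) * (design_effect * var) ** 0.5

# Inputs taken from the registration's assumptions:
for de in (1.8, 2.5):
    print(f"DE={de}: MDES = {100 * mdes_proportion(0.25, 5000, design_effect=de):.1f} pp")
```

Raising the assumed ICC (and hence the design effect) inflates the variance and pushes the detectable effect upward, which is why the registration reports an MDES range rather than a point value.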
Supporting Documents and Materials

Some information in this trial is unavailable to the public; access can be requested through the Registry.
IRB

Institutional Review Boards (IRBs)

IRB Name
Azim Premji University IRB
IRB Approval Date
2025-10-21
IRB Approval Number
N/A
Analysis Plan

Some information in this trial is unavailable to the public; access can be requested through the Registry.