Understanding Performance Declines

Last registered on August 22, 2025

Pre-Trial

Trial Information

General Information

Title
Understanding Performance Declines
RCT ID
AEARCTR-0016565
Initial registration date
August 12, 2025

First published
August 22, 2025, 5:38 AM EDT

Locations

Not available to the public (access may be requested through the Registry).

Primary Investigator

Affiliation
Stanford

Other Primary Investigator(s)

PI Affiliation
Middlebury College
PI Affiliation
University of California, Santa Barbara

Additional Trial Information

Status
In development
Start date
2025-08-18
End date
2026-10-01
Secondary IDs
Prior work
This trial does not extend or rely on any prior RCTs.
Abstract
Performance declines within individuals over the course of cognitively demanding tasks have been documented across diverse settings—from students performing worse on later questions of high-stakes exams to physicians making more diagnostic errors later in their shifts—yet our understanding of why individual performance deteriorates remains limited. We study a potential driver of performance declines in an online experiment.
External Link(s)

Registration Citation

Citation
Meyer, Carl, German Reyes and Jason Somerville. 2025. "Understanding Performance Declines." AEA RCT Registry. August 22. https://doi.org/10.1257/rct.16565-1.0
Experimental Details

Interventions

Intervention(s)
Participants complete a 30-question, 50-minute online math exam. We randomly vary the order of the questions and the timing of a higher bonus rate that can apply to one third of the exam (the first, middle, or final ten questions). We also elicit participants' preferred timing for the higher bonus rate and their valuation of the timing options.
Intervention Start Date
2025-08-18
Intervention End Date
2025-10-01

Primary Outcomes

Primary Outcomes (end points)
1. Test performance: indicator for correct answer at the question level.
2. Performance decline: slope of performance with respect to question position over the exam.
3. Bonus-boost (BB) preferences: (a) stated preferred timing (first/middle/final third); (b) willingness to pay (WTP) for timing options.
Primary Outcomes (explanation)
Performance decline: regress correctness on normalized position NormPos = (q - 1)/29, where q is the question's position (1 to 30) in the participant's randomized order; use the randomized item order (and, as needed, question fixed effects) to separate time-on-task effects from question difficulty. We will report shrinkage estimates for individual-level slopes.
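A minimal sketch of this regression in Python (the file name and column names are hypothetical; estimation via statsmodels with standard errors clustered by participant):

```python
import pandas as pd
import statsmodels.formula.api as smf

# Hypothetical question-level data: one row per participant x question.
# Assumed columns: 'correct' (0/1), 'position' (1..30 in the participant's
# randomized order), 'question_id', 'participant_id'.
df = pd.read_csv("exam_responses.csv")
df["norm_pos"] = (df["position"] - 1) / 29  # NormPos in [0, 1]

# Baseline: linear performance decline, SEs clustered by participant.
baseline = smf.ols("correct ~ norm_pos", data=df).fit(
    cov_type="cluster", cov_kwds={"groups": df["participant_id"]}
)

# Question fixed effects absorb item difficulty; the randomized order makes
# norm_pos orthogonal to difficulty in expectation.
fe = smf.ols("correct ~ norm_pos + C(question_id)", data=df).fit(
    cov_type="cluster", cov_kwds={"groups": df["participant_id"]}
)
print(baseline.params["norm_pos"], fe.params["norm_pos"])
```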

BB preferences: preferred timing is a categorical choice; WTP is constructed from the switching point in a multiple price list (MPL) for a randomly selected timing option.
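For illustration only (the price-list amounts below are made up; the actual values are set in the pre-analysis plan), a sketch of bracketing WTP from an MPL switching point:

```python
# Illustrative only: MPL row values are hypothetical, not the study's.
# Each row offers a fixed cash amount instead of the participant's preferred
# bonus-boost timing; WTP is bracketed by the first switch to cash.
mpl_amounts = [0.25, 0.50, 0.75, 1.00, 1.25, 1.50]  # dollars, assumed

def wtp_from_choices(chose_timing: list[bool]) -> tuple[float, float]:
    """Return the (lower, upper) WTP bracket implied by the first row
    at which the participant switches from the timing option to cash."""
    for i, kept_timing in enumerate(chose_timing):
        if not kept_timing:  # first row where cash is taken over timing
            lower = mpl_amounts[i - 1] if i > 0 else 0.0
            return (lower, mpl_amounts[i])
    return (mpl_amounts[-1], float("inf"))  # never switched

# Example: keeps the timing option for three rows, then switches to cash.
print(wtp_from_choices([True, True, True, False, False, False]))  # (0.75, 1.0)
```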

Secondary Outcomes

Secondary Outcomes (end points)
1. Effort allocation: time per question.
2. Beliefs about performance: pre- and post-exam forecasts for each third.
3. Cognitive fatigue: pre- and post-exam self-reports.
4. Cross-price elasticities of effort/performance across boosted vs. non-boosted thirds; Slutsky-symmetry test following Bronchetti et al. (2023) (a sketch follows at the end of this subsection).
5. Self-reported effort/time allocation (actual vs. ideal) post-exam.
6. Qualitative measures: open-ended BB rationale and advice to future participants (to be categorized with LLMs, exploratory).
7. Heterogeneity: outcomes by gender, income, education.
Secondary Outcomes (explanation)
For further explanations, see the pre-analysis plan (PAP).
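Purely illustrative (the file, column names, and the simple mean-difference estimator are assumptions; the registered test follows Bronchetti et al. 2023), a sketch of checking symmetry in the cross effects of the bonus boost on time allocated to each third:

```python
import pandas as pd

# Hypothetical participant-level data: 'time_first', 'time_middle',
# 'time_final' (minutes spent per third) and 'bb_arm' in
# {'first', 'middle', 'final'} indicating which third was boosted.
df = pd.read_csv("participants.csv")

# Effect of boosting third k on effort in third j, as a mean difference
# between arms where k is boosted and arms where it is not.
thirds = ["first", "middle", "final"]
effects = {}
for j in thirds:
    for k in thirds:
        boosted = df.loc[df["bb_arm"] == k, f"time_{j}"]
        other = df.loc[df["bb_arm"] != k, f"time_{j}"]
        effects[(j, k)] = boosted.mean() - other.mean()

# Slutsky symmetry predicts equal compensated cross-price effects; with
# small stakes (income effects assumed negligible), raw cross effects
# should be approximately symmetric: b_jk ~ b_kj.
for j, k in [("first", "middle"), ("first", "final"), ("middle", "final")]:
    print(j, k, effects[(j, k)], effects[(k, j)])
```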

Experimental Design

Experimental Design
Individual online experiment involving a math test with randomized question order and randomized timing of a higher bonus rate applied to one third of the test. We elicit beliefs and preferences about performance and bonus timing. Question order is randomized to separate question difficulty from time-on-task effects.
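A minimal sketch of the computer randomization described above, assuming the three BB placements as arms (the actual design may include additional arms, e.g. a no-boost control; the seed and labels are hypothetical):

```python
import random

QUESTIONS = list(range(1, 31))          # 30 exam questions
BB_ARMS = ["first", "middle", "final"]  # third receiving the higher bonus rate

def assign(participant_seed: int) -> dict:
    """Per-participant randomization: question order and BB timing."""
    rng = random.Random(participant_seed)  # seeded for reproducibility
    order = QUESTIONS[:]
    rng.shuffle(order)                     # randomized order, within participant
    return {"question_order": order, "bb_timing": rng.choice(BB_ARMS)}

print(assign(participant_seed=42))
```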
Experimental Design Details
Not available
Randomization Method
Randomization done by computer
Randomization Unit
Individual participant (BB timing). Question order is randomized within participant.
Was the treatment clustered?
No

Experiment Characteristics

Sample size: planned number of clusters
Approximately 475 individuals
Sample size: planned number of observations
Approximately 475 individuals
Sample size (or number of clusters) by treatment arms
Approximately 100 individuals per treatment arm
Minimum detectable effect size for main outcomes (accounting for sample design and clustering)
IRB

Institutional Review Boards (IRBs)

IRB Name
University of California, Santa Barbara Human Subjects Committee
IRB Approval Date
2025-08-11
IRB Approval Number
Analysis Plan

Not available to the public (access may be requested through the Registry).