The American Economic Association's registry for randomized controlled trials
Time Scarcity and Optimization
Last registered on March 16, 2016
Initial registration date
January 30, 2015
Last updated
March 16, 2016 11:44 PM EDT
United States of America
State University of New York (Binghamton) & Harvard University
Other Primary Investigator(s)
State University of New York (at Binghamton)
Additional Trial Information
This project seeks to assess the impact of time scarcity on an individual's optimizing behavior through a lab experiment. Participants will be given an assortment of homogeneous cognitive tasks with arbitrary point values, subject to a randomized time limit. Their optimizing behavior will be measured by the order in which they choose to complete the tasks, and this order will be compared across assigned time limits to assess the impact of time scarcity.
The project will examine both static (choice between anticipated tasks) and dynamic (choice between anticipated and unanticipated tasks) optimization by randomly selecting participants to receive additional “interruption” tasks during the experiment. Since optimization is itself a cognitive task, and previous studies have demonstrated that cognitive function declines as scarcity intensifies, we expect that an individual's propensity to optimize declines as the time constraint tightens, resulting in inefficient use of time and suboptimal outcomes.
The project will build on recent analyses of poverty and the scarcity mindset, proposing failure to optimize as an additional mechanism by which scarcity perpetuates poverty.
Nikolov, Plamen. 2016. "Time Scarcity and Optimization." AEA RCT Registry. March 16.
Sponsors & Partners
Intervention Start Date
Intervention End Date
Primary Outcomes (end points)
Through this experiment, we hope to demonstrate that as the scarcity of a resource intensifies, the propensity of an individual to use that resource optimally decreases, leading to suboptimal outcomes. Optimizing is itself a cognitive task that carries a cost: for any set of options, there is a nonzero cost to weighing each option's marginal benefit against its marginal cost and comparing this measure across options to choose the best. The burden of scarcity makes this process more difficult, both through its negative impact on cognitive function and through the increased weight of the fixed cost of optimizing relative to the budget constraint. This provides an explanation for why a highly constrained individual may not behave rationally, especially in contexts where the marginal benefit of any particular option is unclear or difficult to measure, such as the returns to education. If the consequences of suboptimal choice are severe enough, such an individual may be ensnared in a “poverty trap.”
Primary Outcomes (explanation)
Secondary Outcomes (end points)
Secondary Outcomes (explanation)
To assess individuals’ optimizing behavior under time scarcity, we administer a randomized controlled trial in which study participants are given an assortment of homogeneous cognitive tasks to complete subject to a randomized time constraint. The range of time constraints is selected so that it is impossible to complete all tasks within any time limit, simulating varying degrees of scarcity. The tasks are assigned arbitrary point values to simulate heterogeneous marginal benefits, and these point values are selected so that the tasks can be clearly ranked (i.e., completing one task is strictly preferred to completing another task in all foreseeable cases). To capture the ability to optimize independently, we partially obscure these points through their placement on each page.
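The design above can be sketched in a few lines. The specific time limits and point values below are illustrative assumptions, not figures from the registry; the registry only specifies that limits are randomized and too short to finish every task, and that point values admit a strict ranking.

```python
import random

# Illustrative parameters (assumed, not from the registry):
TIME_LIMITS_SEC = [120, 180, 240, 300]  # all too short to finish every task
TASK_POINTS = [10, 8, 6, 4, 2]          # arbitrary values with a strict ranking

def draw_time_limit(rng):
    """Randomly assign one of the candidate time limits to a participant."""
    return rng.choice(TIME_LIMITS_SEC)

rng = random.Random(42)
limits = [draw_time_limit(rng) for _ in range(5)]  # five example participants
```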
Experimental Design Details
Within the scope of this study, we define “optimizing” as starting with the task assigned the highest point value and completing successive tasks in descending order of points, ending with the task assigned the lowest point value. Given that the tasks are of homogeneous difficulty and can be clearly ranked, we believe this definition is consistent with economic theory. We define the “optimal score ratio” as the participant's total points attained divided by the score they would have attained had they optimized; higher values (approaching the maximum value of 1) indicate a higher level of optimization.

We examine both static (choosing between anticipated tasks) and dynamic (choosing between anticipated and unanticipated tasks) optimization by randomly selecting some participants to receive additional “interruption” tasks during the experiment. These interruptions have all of the same characteristics as the original tasks except for anticipation, allowing us to isolate dynamic optimization. To measure the effect of information, we also randomly select some participants to receive an additional instruction telling them ahead of time that tasks yield different point values. This simulates reducing the fixed cost of optimization by calling attention to the measure of marginal benefit.

At the end of each session, participants are given a short survey asking for basic demographic information (e.g., age, gender). We also ask questions about their approach to the tasks in order to compare actual optimization to participants' perception. We examine the relationship between optimizing behavior and perception by dividing participants into subgroups based on survey responses.
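The “optimal score ratio” defined above can be computed as follows; this is a minimal sketch, and the task point values in the example are assumptions for illustration.

```python
def optimal_score_ratio(completed_points, all_task_points):
    """Points attained divided by the points a participant would have
    attained by completing the same number of tasks in descending
    order of point value (the study's definition of optimizing)."""
    n = len(completed_points)
    attained = sum(completed_points)
    best_possible = sum(sorted(all_task_points, reverse=True)[:n])
    return attained / best_possible if best_possible else 1.0

# A participant who finished 3 of 5 tasks, but not the 3 highest-valued:
tasks = [10, 8, 6, 4, 2]
done = [8, 4, 2]
ratio = optimal_score_ratio(done, tasks)  # 14 / 24, about 0.583
```

A fully optimizing participant earns a ratio of exactly 1 regardless of how many tasks the time limit allowed them to complete.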
Randomization done by computer
Was the treatment clustered?
Sample size: planned number of clusters
Sample size: planned number of observations
Sample size (or number of clusters) by treatment arms
All individuals are given a randomized time limit; the number of individuals at each time limit is not predetermined.
300 individuals control, 250 individuals additional information, 250 individuals interruption task, 200 individuals both treatments
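The computer randomization into the four arms listed above can be sketched as follows. The arm sizes come from the registry; the assignment code itself (fixed slots, shuffled and paired with participant IDs) is an assumed implementation, not the study's actual procedure.

```python
import random

# Arm slot counts from the registry entry (1,000 participants total).
ARMS = (["control"] * 300 + ["information"] * 250
        + ["interruption"] * 250 + ["both"] * 200)

def assign_arms(participant_ids, seed=0):
    """Shuffle the fixed arm slots and pair each with a participant ID."""
    rng = random.Random(seed)
    slots = ARMS[:]
    rng.shuffle(slots)
    return dict(zip(participant_ids, slots))

assignment = assign_arms(range(1000))
```

Shuffling a fixed list of slots guarantees the planned arm sizes exactly, unlike drawing each participant's arm independently.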
Minimum detectable effect size for main outcomes (accounting for sample design and clustering)
Supporting Documents and Materials
INSTITUTIONAL REVIEW BOARDS (IRBs)
State University of New York (at Binghamton) Office of Research Compliance
IRB Approval Date
IRB Approval Number
Post Trial Information
Is the intervention completed?
Is data collection complete?
Is public data available?
Reports, Papers & Other Materials
REPORTS & OTHER MATERIALS