Dynamic Effort Allocation in Information Processing

Last registered on April 26, 2025

Pre-Trial

Trial Information

General Information

Title
Dynamic Effort Allocation in Information Processing
RCT ID
AEARCTR-0015356
Initial registration date
April 09, 2025

First published
April 22, 2025, 9:24 AM EDT

Last updated
April 26, 2025, 5:53 PM EDT

Locations

Primary Investigator

Affiliation

Other Primary Investigator(s)

PI Affiliation
University of Tennessee, Knoxville
PI Affiliation
Marist University

Additional Trial Information

Status
In development
Start date
2025-04-14
End date
2026-03-01
Secondary IDs
Prior work
This trial does not extend or rely on any prior RCTs.
Abstract
Human attention and memory are costly and complex processes. We propose a stylized model of how cognitive effort is allocated to different tasks across time. Devoting more effort (to attention, memory maintenance, and memory recall) is costly, but translates into a more faithful representation of the task (word recall) and better performance upon recall. In our experimental task, individuals face a verbal recall problem: lists of words that they are invited to study and, later, recall. More correct answers translate into higher payments. Rewards for completing the task are randomized (either high or low); also randomized is the timing of when the reward information is revealed. In this setting, theory suggests that people should react more strongly to early reward revelation than to late. By using random reward levels and revelation times, we are able to test whether individuals allocate their effort in a dynamically rational way. We are also able to determine the source of mistakes (attention, memory, or recall) and bound the utility costs of processing information at each stage. There are two treatments in the experiment. In the first, there are twenty memory tasks as described above. The second treatment consists of two blocks of twenty tasks each; the difference between the blocks is that the "high" reward is larger in one block than in the other.

External Link(s)

Registration Citation

Citation
Kofoed, Michael, Andrew Kosenko and Nathaniel Neligh. 2025. "Dynamic Effort Allocation in Information Processing." AEA RCT Registry. April 26. https://doi.org/10.1257/rct.15356-2.0
Experimental Details

Interventions

Intervention(s)
In this experiment we study memory and cognition, and how they respond to rewards.
Intervention Start Date
2025-04-14
Intervention End Date
2025-11-01

Primary Outcomes

Primary Outcomes (end points)
The key variables of interest are the proportions of correct responses on each question, for each trial and for each participant (conditional on the true answer in experiment two, and unconditional in experiment one). Using these variables, we will test the hypothesis of supermodularity and bound the costs of information processing for each participant.
Primary Outcomes (explanation)
The unconditional probabilities of correct answers will be used to test the main prediction of a model of supermodular dynamic effort allocation. This prediction states that the overall performance should respond less to differences in incentive when the incentive level is revealed later in the information handling process. High reward will lead to less of an increase in performance and low reward will lead to less of a decrease when revealed later.
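The prediction can be stated compactly (in hypothetical notation not used in the registration itself): let p(r, t) denote the probability of a correct response when reward level r (high H or low L) is revealed at time t. The supermodularity prediction is then

```latex
% p(r,t): probability of a correct response at reward level r \in \{H, L\}
% when the reward is revealed at time t \in \{\text{early}, \text{late}\}.
% The high-vs-low performance gap shrinks when revelation comes later:
p(H,\text{early}) - p(L,\text{early}) \;>\; p(H,\text{late}) - p(L,\text{late})
```

that is, both the performance increase from a high reward and the decrease from a low reward are attenuated by later revelation.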
The conditional probabilities of correct responses will be used to construct matrices (with the probabilities of correct responses on the diagonal). Given these matrices in the various conditions (reward level, timing) and two additional possibilities (rounds without delay, and non-remunerated responses to the same question, requested again right after the remunerated response), we will estimate the objects of choice (cognitive error matrices) and bound the utility costs of these objects for the participants, first at the aggregate level and then at the participant level.
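As an illustration of the matrix construction, the following is a minimal Python sketch (the tuple-based data format, example words, and function name are assumptions for exposition, not the study's actual data export): for each true answer, it tabulates the share of trials on which each response was given, so the diagonal entries are the conditional probabilities of a correct response.

```python
from collections import defaultdict

def conditional_response_matrix(responses):
    """Build a dict-of-dicts 'matrix': entry [a][g] is the proportion of
    trials with true answer a on which answer g was given. Diagonal
    entries [a][a] are the conditional probabilities of a correct
    response. `responses` is a list of (true_answer, given_answer)
    pairs -- an illustrative format, not the study's actual export."""
    counts = defaultdict(lambda: defaultdict(int))
    totals = defaultdict(int)
    for true_ans, given_ans in responses:
        counts[true_ans][given_ans] += 1
        totals[true_ans] += 1
    # Normalize each row by the number of trials with that true answer.
    return {a: {g: c / totals[a] for g, c in row.items()}
            for a, row in counts.items()}

data = [("cat", "cat"), ("cat", "dog"), ("dog", "dog"), ("dog", "dog")]
m = conditional_response_matrix(data)
print(m["cat"]["cat"], m["dog"]["dog"])  # 0.5 1.0
```

In the actual analysis these row-stochastic matrices would be estimated separately by condition (reward level, revelation timing, delay, and repetition) before the cost-bounding step.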

Secondary Outcomes

Secondary Outcomes (end points)
Secondary Outcomes (explanation)

Experimental Design

Experimental Design
In this experiment people will complete a series of memory-based cognitive tasks with varying reward levels.
Experimental Design Details
Not available
Randomization Method
Randomization is performed using the built-in randomization features in Qualtrics.
Randomization Unit
The randomization unit is a round; in each round, the level of the reward and the timing of the reward revelation are randomized. In experiment 2, the additional conditions and the order of blocks are also randomized.
Was the treatment clustered?
No

Experiment Characteristics

Sample size: planned number of clusters
Subjects may be clustered by session if strong evidence of session effects is found; otherwise, observations will not be clustered.
Sample size: planned number of observations
500 participants. 20 memory tasks per person in experiment one and 40 tasks per person in experiment two. Each task results in 10 responses. Aggregate analysis will treat a participant as an observation. Individual analysis will treat a response as an observation.
Sample size (or number of clusters) by treatment arms
200 individuals in experiment one and 300 for experiment two (note that we are using a within-subjects design).
Minimum detectable effect size for main outcomes (accounting for sample design and clustering)
Sample size: 500 individuals. We use a within-subjects design, which significantly reduces the number of participants needed. We consider testing Corollary 1 (our "experiment 1") as one experiment with a within-subjects design, and the cost-bounding exercise (our experiment two block 1 and experiment two block 2, the only difference between which is the level of the "high" reward) as another experiment with a within-subjects design. For each of these two experiments, for a small effect size (Cohen's d_z, not Cohen's d) of 0.2, a Type I error rate (alpha) of 0.05, a power level of 0.8, and a two-sided test ("difference is greater than 0" for experiment 2, or, for testing Corollary 1, that the difference under earlier revelation is greater than the difference under later revelation), the R package "pwr" yields that 199 participants are necessary, which we round up to 200. Thus, 200 individuals are necessary for each of the two experiments. We allow participants to take part in both experiments; given the randomness in the experiment (which implies that not all of the 200 participants will have all of the necessary data), and assuming (conservatively) that about 100 participants across both experiments together will not have enough data, we arrive at the necessary number: 200 (for the first experiment) + 200 (for the second experiment) + 100 (to account for missing data) = 500 participants. Note that the effect size used for this computation is small (Cohen (1988) defines "small" to be 0.2 or less), and thus this setup can detect quite small differences in behavior.
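The sample-size figure above can be reproduced approximately in standard-library Python (a sketch, not the registered analysis code; the registration used R's pwr package). The normal approximation below slightly understates the exact t-based requirement, which is 199 for these inputs:

```python
import math
from statistics import NormalDist

def paired_n_normal_approx(d_z: float, alpha: float, power: float) -> int:
    """Approximate sample size for a two-sided paired t-test via the
    normal approximation n ~= ((z_{alpha/2} + z_beta) / d_z)^2.
    The exact t-based calculation (e.g. R's pwr.t.test with
    type='paired') adds a small correction and yields 199 here."""
    z_alpha = NormalDist().inv_cdf(1 - alpha / 2)  # two-sided critical value
    z_beta = NormalDist().inv_cdf(power)           # quantile for target power
    return math.ceil(((z_alpha + z_beta) / d_z) ** 2)

n = paired_n_normal_approx(d_z=0.2, alpha=0.05, power=0.8)
print(n)  # 197 under the normal approximation; the t-correction raises this to 199
```

With the exact computation rounded up to 200 per experiment and a 100-participant buffer for missing data, the total is 500.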
Supporting Documents and Materials

Documents

Document Name
Experimental Paradigm and Task
Document Type
other
Document Description
File
Experimental Paradigm and Task

MD5: ad79f88e050c843c159cddfea1db1c85

SHA1: e10f37d286d6fa1a2b7b8a6226ad333d862e8a3f

Uploaded At: February 19, 2025

IRB

Institutional Review Boards (IRBs)

IRB Name
UTK - Haslam College of Business - Economics
IRB Approval Date
2025-04-08
IRB Approval Number
UTK IRB-25-08748-XM
Analysis Plan

There is information in this trial that is unavailable to the public; access can be requested through the Registry.