Algorithms and the Reduction of Information Overload

Last registered on April 29, 2026

Pre-Trial

Trial Information

General Information

Title
Algorithms and the Reduction of Information Overload
RCT ID
AEARCTR-0018277
Initial registration date
April 22, 2026

First published
April 29, 2026, 3:29 PM EDT

Locations

Primary Investigator

Affiliation
WHU - Otto Beisheim School of Management

Other Primary Investigator(s)

PI Affiliation
University of Cologne
PI Affiliation
Frankfurt School of Finance & Management
PI Affiliation
University of Cologne
PI Affiliation
University of Cologne

Additional Trial Information

Status
In development
Start date
2026-05-01
End date
2027-01-31
Secondary IDs
Prior work
This trial does not extend or rely on any prior RCTs.
Abstract
This study investigates whether algorithmically generated performance benchmarks and attention-focusing cues reduce information overload and improve decision quality in a large retail chain. Using a supervised machine-learning approach (“Nowcasting”) developed and trained prior to the experiment, we generate weekly store-level benchmarks for key performance indicators (KPIs) across retail stores. These benchmarks are embedded in a modified sales and inventory report, and an additional Focus List highlights the KPIs with the largest deviations from the benchmark. In a randomized field experiment at the district-manager level, we compare (A) the status quo report, (B) the report augmented with algorithmic benchmarks, and (C) the benchmark report plus the Focus List. Outcomes include administrative performance metrics and survey-based measures of perceived information overload and information usefulness.
External Link(s)

Registration Citation

Citation
Opitz, Saskia et al. 2026. "Algorithms and the Reduction of Information Overload." AEA RCT Registry. April 29. https://doi.org/10.1257/rct.18277-1.0
Experimental Details

Interventions

Intervention(s)
District managers receive weekly reporting packages. Random assignment determines which version of the reporting package each manager receives during the intervention period:
• Group A (Control): Standard (sales and inventory) report (status quo).
• Group B (Benchmark report): Standard report plus algorithmic benchmark values for key KPIs and the deviation of actual performance from the benchmark.
• Group C (Benchmark report + Focus List): Group B report plus a Focus List summarizing the KPIs with the largest benchmark deviations (intended to focus attention on the most anomalous metrics).
Intervention Start Date
2026-05-01
Intervention End Date
2026-10-31

Primary Outcomes

Primary Outcomes (end points)
- Benchmarked KPIs (revenue and inventory shrinkage) for a previously agreed-upon subset of product categories (i.e., the subset selected ex ante by our partner company for benchmarking due to its actionability)

Primary Outcomes (explanation)
We examine differences in the benchmarked KPIs included in the reports between the treatment and control groups.

Secondary Outcomes

Secondary Outcomes (end points)
- Contribution Margin
- Perceived information overload as measured in the survey
- Perceived usefulness (survey)
- Use and attention allocation (survey)
- Trust in AI (survey)
Secondary Outcomes (explanation)

Experimental Design

Experimental Design
We conduct a field experiment in a large retail chain to examine the causal impact of algorithmic benchmarks and focus lists on managerial decision-making. The intervention is based on a supervised machine learning algorithm (“Nowcasting”) that was developed and trained prior to the experiment. The algorithm predicts weekly store-level performance indicators (e.g., revenue and inventory shrinkage) using historical data and comparative information from other stores within the chain. Based on these predictions, the system generates weekly updated “virtual benchmarks,” which provide target values for each KPI and form the basis of the experimental manipulation.

District managers are randomly assigned to one of three experimental groups. The intervention is implemented from 2026-05-01 to 2026-10-31. Managers of treated districts receive the modified reporting packages on a weekly basis during this period.

• Group A (Control): Standard (sales and inventory) report (status quo).
• Group B (Benchmark report): Standard report plus algorithmic benchmark values for key KPIs and the deviation of actual performance from the benchmark.
• Group C (Benchmark report + Focus List): Group B report plus a Focus List summarizing the KPIs with the largest benchmark deviations (intended to focus attention on the most anomalous metrics).
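As a minimal sketch of the Focus List logic described above (the relative-deviation measure, the KPI names, and the list length are our own illustrative assumptions; the registered design does not specify these details):

```python
def focus_list(actuals, benchmarks, top_k=3):
    """Return the top_k KPIs with the largest absolute relative
    deviation of actual performance from the algorithmic benchmark.

    actuals, benchmarks: dicts mapping KPI name -> value for one
    store-week. top_k is an illustrative choice of list length.
    """
    deviations = {
        kpi: (actuals[kpi] - benchmarks[kpi]) / benchmarks[kpi]
        for kpi in benchmarks
        if benchmarks[kpi] != 0  # skip KPIs with a zero benchmark
    }
    # Rank by magnitude of deviation, most anomalous first.
    ranked = sorted(deviations, key=lambda k: abs(deviations[k]), reverse=True)
    return [(kpi, deviations[kpi]) for kpi in ranked[:top_k]]
```

In this sketch, a 20% shrinkage overshoot would rank above a 10% revenue shortfall, so the Focus List surfaces the most anomalous metrics regardless of sign.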

Importantly, treatment is assigned at the district level. Once a district enters the treatment condition, it remains treated for the remainder of the study period. This holds even if (i) a new district manager takes over responsibility for the district or (ii) the district is restructured within the organizational hierarchy, provided that the district continues to comprise mainly the same set of stores. In other words, treatment status follows the district as an organizational unit defined by its constituent stores, not by managerial personnel.
If a store is transferred to a different district during the study period, its treatment status follows that of the district to which it is transferred from the time of the transfer onward.

If a new district is created during the study period, it is not included in the experiment and does not receive a treatment assignment. Districts that permanently close during the sample period are excluded from the analytic sample. The rationale is that districts entering a closure process may already operate atypically before the formal closure date, making their outcomes not comparable to those of “business-as-usual” districts. Additionally, to prevent cross-treatment contamination, we exclude observations from a district starting from the date on which a newly appointed district manager takes over, if that manager has previously been exposed to a different experimental treatment condition.

To examine potential mechanisms, we also conduct a survey among district managers during the experimental period. Participation in the survey is voluntary and anonymous, and respondents provide informed consent before participation. The survey includes measures of report use, perceived usefulness of the reports and lists, perceived information overload, perceived influence on KPIs, and trust in AI-based systems.
Experimental Design Details
Not available
Randomization Method
Stratified randomization: Let s = 1, …, S index strata and let n_s denote the size of stratum s. For each stratum s, we generate a random permutation of the managers in that stratum. We then concatenate the permuted strata in a pre-specified order and assign conditions sequentially following a cyclic sequence {1, 2, 3} that is not reset between strata. Randomness comes from the within-stratum permutations; the cyclic assignment enforces near-equal allocation.
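Under our reading of the procedure above, the assignment can be sketched as follows (the manager identifiers, stratum ordering, and seed are illustrative, not part of the registered design):

```python
import random

def assign_conditions(strata, n_conditions=3, seed=42):
    """Stratified randomization with a cyclic assignment sequence.

    strata: list of manager-ID lists, one per stratum, already in the
    pre-specified stratum order. Returns {manager: condition} with
    conditions 1..n_conditions. The cyclic counter is deliberately NOT
    reset between strata, which enforces near-equal overall allocation;
    randomness enters only through the within-stratum permutations.
    """
    rng = random.Random(seed)
    assignment = {}
    position = 0  # cyclic counter carried across strata
    for stratum in strata:
        permuted = list(stratum)
        rng.shuffle(permuted)  # within-stratum random permutation
        for manager in permuted:
            assignment[manager] = position % n_conditions + 1
            position += 1
    return assignment
```

Because the cycle carries over across strata, group sizes can differ by at most one overall, consistent with the reported split of 79/78/78 managers.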
Randomization Unit
Individual Manager
Was the treatment clustered?
No

Experiment Characteristics

Sample size: planned number of clusters
Not applicable (randomization is at the manager level)
Sample size: planned number of observations
235 managers, approx. 2100 stores
Sample size (or number of clusters) by treatment arms
79 managers in the control group, 78 managers in the benchmark report group, and 78 managers in the benchmark report + Focus List group
Minimum detectable effect size for main outcomes (accounting for sample design and clustering)
IRB

Institutional Review Boards (IRBs)

IRB Name
University of Cologne Ethical Review Board
IRB Approval Date
2026-01-15
IRB Approval Number
260002DS