AI Predictions, Strategic Disclosure, and Optimal Coarse Bracketing

Last registered on April 27, 2026

Pre-Trial

Trial Information

General Information

Title
AI Predictions, Strategic Disclosure, and Optimal Coarse Bracketing
RCT ID
AEARCTR-0018400
Initial registration date
April 20, 2026

First published
April 27, 2026, 11:02 AM EDT

Locations

Region

Primary Investigator

Affiliation
Peking University

Other Primary Investigator(s)

PI Affiliation
Yale University
PI Affiliation
Peking University

Additional Trial Information

Status
Ongoing
Start date
2025-09-30
End date
2026-12-31
Secondary IDs
Prior work
This trial does not extend or rely on any prior RCTs.
Abstract
This study examines how the presentation of delivery-time predictions affects user behavior and satisfaction on a large digital logistics platform. Online platforms often provide estimated delivery times, but these predictions are inherently uncertain. Platforms must decide how precisely to present this information—for example, as a specific time, a broader time window, or with additional shipment checkpoint updates. In collaboration with a large logistics information platform, we conduct a randomized controlled field experiment in which users are randomly assigned to different versions of the shipment-tracking interface. The experiment varies three main dimensions: the type of prediction model used, whether intermediate shipment nodes are displayed, and the precision of the predicted delivery time. The study aims to understand how information precision and algorithmic framing influence user beliefs, query behavior, satisfaction, and engagement. In particular, we examine whether more precise predictions improve user experience, whether coarser information increases repeated checking behavior, and how these trade-offs affect short-run platform revenue and long-run user retention. The findings will contribute to research on information design, behavioral responses to uncertainty, and the economic implications of AI-supported digital services.
External Link(s)

Registration Citation

Citation
Han, Xu, Yan Li and Junjian Yi. 2026. "AI Predictions, Strategic Disclosure, and Optimal Coarse Bracketing." AEA RCT Registry. April 27. https://doi.org/10.1257/rct.18400-1.0
Sponsors & Partners

There is information in this trial unavailable to the public.
Experimental Details

Interventions

Intervention(s)
Participants will be exposed to different versions of a package-tracking interface on a digital logistics platform. The intervention consists of randomized variation in how predicted delivery information is presented to users.
The study varies three dimensions of the interface:
1. Prediction model type:
Users are assigned to one of four prediction systems: a traditional statistical prediction model, a higher-accuracy AI-based model, a faster but lower-cost AI model, or a baseline non-model estimate.
2. Information display structure:
Users are randomly assigned to view either only the final predicted delivery time or additional intermediate shipment checkpoint information during the delivery process.
3. Time precision of the prediction:
The estimated delivery time is displayed at one of four levels of precision: day-level, a 6-hour window, a 1-hour window, or a 30-minute window.

After the initial randomized assignment, users may choose to adjust the displayed precision level according to their own preference. These choices are recorded as part of the study. The intervention is embedded in the platform’s normal shipment-tracking service and does not affect the actual delivery process.
Intervention Start Date
2026-05-01
Intervention End Date
2026-06-01

Primary Outcomes

Primary Outcomes (end points)
1. User satisfaction / evaluation outcome:
- Overall user rating of the package-tracking experience
- Indicator for negative feedback or complaint submission
2. Tracking query behavior:
- Total number of tracking queries per shipment
- Average time interval between consecutive queries
3. User precision choice:
- Indicator for whether the user changes the default prediction precision
- Time spent by the user changing the prediction precision
- Final selected precision level
Primary Outcomes (explanation)

Secondary Outcomes

Secondary Outcomes (end points)
1. Platform economic outcomes:
- Estimated advertising impressions per shipment
- Estimated advertising revenue per tracking session
- Subsequent platform usage within the observation window
2. Feedback detail outcomes:
- Specific reasons selected for negative evaluations
- User response conditional on realized prediction
Secondary Outcomes (explanation)

Experimental Design

Experimental Design
This study is a large-scale randomized field experiment embedded in the shipment-tracking interface of a digital logistics platform. The experiment uses a multi-arm factorial design that varies three dimensions of the user interface and prediction system:

Prediction model type (4 arms)
Users are assigned to one of four prediction approaches: a traditional model, a high-accuracy AI model, a fast AI model, or a baseline non-model estimate.
Intermediate shipment information (2 arms)
Users are randomly assigned to either view intermediate shipment checkpoint information or only the final estimated delivery time.
Prediction time precision (4 arms)
Estimated delivery times are displayed at one of four levels of precision: day-level, 6-hour window, 1-hour window, or 30-minute window.

These three dimensions generate a factorial treatment structure with multiple treatment combinations. Eligible users are randomly assigned at first exposure to one treatment condition by the platform’s backend experimental system.

After the initial assignment, users may choose to adjust the displayed precision level according to their own preferences. This feature allows the study to observe both the causal effects of default information presentation and users’ endogenous preferences for information granularity. The intervention affects only the presentation of predictive information and does not alter the actual logistics or delivery process.
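The factorial structure described above can be sketched in a few lines of Python. Note that the full cross of the three dimensions yields 4 × 2 × 4 = 32 candidate cells, whereas the registration lists 21 implemented treatment arms, so the deployed design presumably uses a subset of the cross (the exact subset is not specified in this registration). The arm labels below are illustrative shorthand, not the platform's internal names.

```python
from itertools import product

# The three randomized dimensions as described in the experimental design.
MODEL_TYPES = ["traditional", "ai_high_accuracy", "ai_fast", "baseline_non_model"]
NODE_DISPLAY = ["final_time_only", "with_intermediate_nodes"]
TIME_PRECISION = ["day", "6_hour", "1_hour", "30_min"]

# Full factorial cross of the three dimensions.
cells = list(product(MODEL_TYPES, NODE_DISPLAY, TIME_PRECISION))
print(len(cells))  # 4 * 2 * 4 = 32 candidate cells
```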
Experimental Design Details
Not available
Randomization Method
Randomization is performed automatically by the platform’s backend computer system at the time of a user’s first shipment-tracking interaction during the study period. Upon first exposure, each eligible user is assigned by a computer-generated pseudo-random algorithm to one treatment cell in a factorial design defined by three dimensions: prediction model type, intermediate-node display, and time-precision level. The assignment procedure is implemented server-side and does not involve manual intervention, public lottery, or any physical randomization device.

The randomization algorithm uses uniform random assignment across all treatment arms and is logged in the platform’s internal experimental framework to ensure reproducibility and auditability.
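A common server-side way to implement logged, reproducible uniform assignment is deterministic hash-based bucketing, sketched below. This is only an illustration of the general technique; the platform's actual algorithm is not disclosed, and the function name, salt, and arm labels are hypothetical.

```python
import hashlib

ARMS = [f"arm_{i:02d}" for i in range(21)]  # 21 registered treatment arms

def assign_arm(user_id: str, experiment_salt: str = "rct-18400") -> str:
    """Deterministically map a user ID to one arm with approximately
    uniform probability. Hashing (rather than drawing a fresh random
    number) makes the assignment reproducible and auditable: the same
    user always maps to the same arm for a given salt."""
    digest = hashlib.sha256(f"{experiment_salt}:{user_id}".encode()).hexdigest()
    bucket = int(digest, 16) % len(ARMS)
    return ARMS[bucket]
```

Because the mapping is a pure function of the user ID and the experiment salt, re-running it on logged IDs reproduces the original assignment, which supports the auditability property described above.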
Randomization Unit
The primary unit of randomization is the individual user.

Each user is randomly assigned to one treatment combination at the time of first exposure to the shipment-tracking interface during the experimental period, and this initial assignment remains the default treatment condition for that user throughout the observation window. Because multiple shipment queries may be observed for the same user, outcome data will be recorded at both the user level and the shipment-query level, but treatment assignment occurs at the individual-user level.

There is no higher-level cluster randomization (e.g., by city, courier company, or shipment route) in the baseline design.
Was the treatment clustered?
No

Experiment Characteristics

Sample size: planned number of clusters
Not clustered; 1,000,000 individual users (randomization units).
Sample size: planned number of observations
Approximately 1,000,000 user-level shipment-tracking sessions.
Sample size (or number of clusters) by treatment arms
21 treatment arms with approximately 47,620 observations in each arm under equal random assignment.
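As a quick arithmetic check of the per-arm figure, using only the planned totals stated in this registration:

```python
# Back-of-envelope check of the per-arm sample size.
TOTAL_USERS = 1_000_000   # planned randomization units
N_ARMS = 21               # registered treatment arms

per_arm = TOTAL_USERS / N_ARMS
print(f"{per_arm:,.0f} users per arm")  # → 47,619 users per arm
```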
Minimum detectable effect size for main outcomes (accounting for sample design and clustering)
Supporting Documents and Materials

There is information in this trial unavailable to the public.
IRB

Institutional Review Boards (IRBs)

IRB Name
National School of Development (Peking University) Institutional Review Board
IRB Approval Date
2026-04-16
IRB Approval Number
CZY2026001