Competing Behavioral Mechanisms for Notification Timing in a Digital Health Experiment

Last registered on December 01, 2025

Pre-Trial

Trial Information

General Information

Title
Competing Behavioral Mechanisms for Notification Timing in a Digital Health Experiment
RCT ID
AEARCTR-0017273
Initial registration date
November 30, 2025


First published
December 01, 2025, 11:59 AM EST


Locations

Region

Primary Investigator

Affiliation

Other Primary Investigator(s)

PI Affiliation
NUS Business School
PI Affiliation
Simon Business School, University of Rochester

Additional Trial Information

Status
In development
Start date
2025-12-11
End date
2026-01-08
Secondary IDs
Prior work
This trial does not extend or rely on any prior RCTs.
Abstract
This study examines whether sending app reminders at the same time every day helps people use a health app more consistently. Before starting the main experiment, we first run a short pilot to make sure everything works smoothly. After that, participants are randomly assigned to one of several groups: some receive no reminders, while others receive one reminder each day either in the morning, afternoon, or evening. A final group receives reminders at different times each day, chosen at random. For everyone, the reminders contain the same simple, neutral message about the benefits of daily monitoring.

The study lasts several weeks and includes a period before and after the reminders so we can compare changes in app usage over time. By tracking how often people use the app in each group, we aim to understand whether timing consistency—receiving reminders at the same time every day—leads to better engagement than reminders sent at varying times or not at all. This evidence can help improve how digital health apps design their notification systems to support healthier habits.
External Link(s)

Registration Citation

Citation
Deolankar, Varad, Ali Goli, and Anuj Kapoor. 2025. "Competing Behavioral Mechanisms for Notification Timing in a Digital Health Experiment." AEA RCT Registry. December 01. https://doi.org/10.1257/rct.17273-1.0
Sponsors & Partners

There is information in this trial unavailable to the public.
Experimental Details

Interventions

Intervention(s)
The study evaluates a simple question: does the timing of a daily reminder shape how regularly people engage with a health-monitoring app? To answer this, we vary only when a single neutral reminder is delivered, holding the content and frequency constant.

After a brief quiet period with no notifications, participants are randomly assigned to one of five groups:

No-notification control, to benchmark natural usage.

Morning, afternoon, or evening fixed-time reminders, where the reminder arrives at the same time every day.

A variable-time group, where the reminder time shifts daily and is randomly drawn from the three fixed-time options.

Each reminder contains the same short, neutrally phrased message encouraging daily monitoring—nothing tailored or behaviorally charged. Every participant in the reminder conditions receives exactly one message per day for four weeks.

We then return to a period without notifications to see how engagement patterns persist once the reminders stop. This structure allows us to isolate whether consistency or variability in timing plays a meaningful role in sustaining healthy app habits.
Intervention (Hidden)
The study is designed to isolate the causal effect of timing consistency in daily push notifications on sustained engagement with a health-monitoring app. The intervention proceeds in three phases, with all events passively logged throughout.

Phase 0: Pilot
Before the full experiment, we run a small pilot: 2 days of baseline observation, 4 days of timed notifications using the same structure planned for the main study, and 2 days of follow-up. Daily data dumps during this period allow us to identify any operational issues before scaling to the full sample.

Phase 1: Pre-experimental quiet period
All out-of-app push notifications are suspended for the entire user base. This establishes each user’s natural usage pattern free from recent notification exposure. The app continues to record all usage events.

Phase 2: Randomized timing intervention
At the end of the quiet period, users are randomly assigned at the individual level to one of five arms:

Control (No nudges) – Receives no notifications during the full 4-week window.

Fixed Morning – Receives one push notification daily at the same morning time (e.g., 10:00 AM IST).

Fixed Afternoon – Receives one notification daily at the same afternoon time (e.g., 2:00 PM IST).

Fixed Evening – Receives one notification daily at the same evening time (e.g., 9:00 PM IST).

Variable Timing – Each night at midnight (or shortly before the morning send), the system draws a random assignment for the next day from the three fixed-timing slots. The user then receives the notification at whichever time slot is assigned for that day.
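The nightly draw for the variable-timing arm can be sketched as follows. This is a minimal illustration, not the platform's actual implementation: the slot labels, the seeding scheme over (study seed, user, day), and the function name are all assumptions.

```python
import random

# Illustrative slot labels based on the example times in the registration (IST).
SLOTS = ["10:00", "14:00", "21:00"]

def draw_next_day_slot(user_id: int, day: int, seed: str = "rct-17273") -> str:
    """Draw tomorrow's notification slot for a variable-timing user.

    Seeding on (seed, user_id, day) makes each nightly draw independent
    across users and days, yet fully reproducible for auditing.
    """
    rng = random.Random(f"{seed}:{user_id}:{day}")
    return rng.choice(SLOTS)

# A user's full schedule over the 4-week (28-day) intervention:
schedule = [draw_next_day_slot(user_id=42, day=d) for d in range(28)]
```

Because the draw is a pure function of the seed, user, and day, the full 28-day schedule can be regenerated exactly after the fact, which is useful when verifying that delivered send times match the intended assignment.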

Across the four reminder arms, nudge load is strictly held constant: every treated user receives exactly 28 notifications, one per day, without exception. Message content is also held constant and intentionally neutral, for example: “Take a reading to improve your daily control” or “Take a reading to spot spikes and keep your sugar in check.” These messages avoid any temporal cues (e.g., references to meals) that could confound the timing manipulation.

Phase 3: Post-experimental washout
After the 4-week intervention, all out-of-app notifications are again paused. This window allows us to observe whether behavior established during the experimental period persists once reminders stop. We continue monitoring engagement in real time and may extend the washout up to six weeks, though a minimum of two weeks is required.

This structure ensures that only the consistency of timing varies, not content, frequency, or total exposure. The design allows us to test whether predictable timing supports habit formation more effectively than unpredictable timing—an important question for digital health applications seeking to build durable engagement patterns.
Intervention Start Date
2025-12-11
Intervention End Date
2026-01-08

Primary Outcomes

Primary Outcomes (end points)
The main outcome of interest is daily engagement with the health-monitoring app, measured through objective behavioral traces recorded by the app. Specifically, we focus on:

Probability of taking at least one reading on a given day.
This is the core behavioral metric used in the prior experiment (e.g., the daily counts of readings shown in the charts on pp. 1–2 of the descriptives report) and provides a clear, interpretable measure of engagement.

Number of readings taken per day.
This captures the intensity of engagement, complementing the extensive-margin measure above. The earlier analysis similarly relied on daily reading counts to detect lift during interventions and decay afterward.

Timing consistency of user engagement.
For users exposed to fixed-time versus variable-time nudges, we track whether readings cluster more tightly around the assigned nudge time, relative to baseline patterns.

All outcomes are observed continuously throughout the pre-intervention quiet period, the 4-week randomized timing intervention, and the post-intervention washout window. This structure allows us to test whether predictable versus unpredictable reminder timing differentially supports habitual, sustainable app use.
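One way to operationalize the timing-consistency outcome is the share of a user's readings falling within a fixed window of the assigned nudge time, with times measured in minutes since midnight and distances wrapped around the day boundary. The 60-minute window and function name below are illustrative assumptions, not part of the registered analysis plan.

```python
def share_near_nudge(reading_minutes, nudge_minute, window=60):
    """Fraction of readings within +/- `window` minutes of the nudge time.

    Times are minutes since midnight (0-1439). The distance wraps around
    midnight, so a 23:50 reading counts as 20 minutes from a 00:10 nudge.
    """
    def circ_dist(a, b):
        d = abs(a - b) % 1440
        return min(d, 1440 - d)

    if not reading_minutes:
        return 0.0
    hits = sum(1 for m in reading_minutes if circ_dist(m, nudge_minute) <= window)
    return hits / len(reading_minutes)

# Two of three readings fall within an hour of a 10:00 AM (minute 600) nudge:
share_near_nudge([595, 640, 1200], 600)  # -> 2/3
```

Comparing this share between fixed-time and variable-time arms, relative to each user's baseline clustering, would indicate whether readings concentrate around a predictable nudge time.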
Primary Outcomes (explanation)

Secondary Outcomes

Secondary Outcomes (end points)
Secondary Outcomes (explanation)

Experimental Design

Experimental Design
This study examines whether delivering a single daily reminder at a consistent time of day, versus a time that varies from day to day, influences how regularly people use a health-monitoring app.

After a short period with no notifications, users are randomly assigned to one of several groups. One group receives no reminders, while the others receive one neutral, non-personalized push notification each day, either at a fixed time (morning, afternoon, or evening) or at a time that changes daily. All users in reminder groups receive the same number of notifications over the intervention period, and the message content remains identical across groups.

The experiment runs for several weeks and includes observation windows before and after the reminder period. This structure allows us to measure baseline behavior, immediate responses to reminder timing, and whether any effects persist once reminders stop. Because the intervention varies only the timing of a simple daily reminder, the design cleanly isolates the role of timing consistency in shaping digital-health engagement.
Experimental Design Details
Randomization Method
Users are randomly assigned to experimental groups using a computerized, reproducible randomization procedure. The allocation is automated within the platform’s backend, ensuring that each eligible user has an equal chance of being placed into any of the study arms. No manual assignment is involved.
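A minimal sketch of such a computerized, reproducible individual-level assignment, assuming a balanced shuffle under a fixed seed (the arm labels, seed value, and function name are illustrative, not the platform's actual backend code):

```python
import random
from collections import Counter

ARMS = ["control", "fixed_morning", "fixed_afternoon", "fixed_evening", "variable"]

def assign_arms(user_ids, seed=2025):
    """Reproducibly assign each user to one of the five arms.

    A balanced list of arm labels is shuffled with a fixed seed, so every
    eligible user has an equal chance of any arm, the split stays as even
    as possible, and the allocation can be regenerated exactly.
    """
    rng = random.Random(seed)
    labels = [ARMS[i % len(ARMS)] for i in range(len(user_ids))]
    rng.shuffle(labels)
    return dict(zip(user_ids, labels))

assignment = assign_arms(range(100))
print(Counter(assignment.values()))  # 20 users per arm
```

Fixing the seed makes the allocation auditable: rerunning the procedure on the same eligible-user list reproduces the identical assignment.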
Randomization Unit
The randomization is conducted at the individual user level. Each user is independently assigned to one of the experimental arms. There are no group- or cluster-level assignments in this study.
Was the treatment clustered?
No

Experiment Characteristics

Sample size: planned number of clusters
The study randomizes at the individual user level. We therefore expect approximately 50,000 users to serve as individual clusters.
Sample size: planned number of observations
Randomization occurs at the individual user level, and we anticipate approximately 50,000–150,000 participating users. Because each user contributes daily engagement data throughout the study period, the resulting dataset is at the user-day level.
Sample size (or number of clusters) by treatment arms
Randomization occurs at the individual-user level. We plan to divide the sample evenly across the five arms:

Control (no notifications): ~20% of users

Fixed Morning Reminder: ~20% of users

Fixed Afternoon Reminder: ~20% of users

Fixed Evening Reminder: ~20% of users

Variable-Timing Reminder: ~20% of users

Exact counts will depend on the final pool of eligible active users at the start of the experiment, but the allocation will be balanced across the five groups.
Minimum detectable effect size for main outcomes (accounting for sample design and clustering)
IRB

Institutional Review Boards (IRBs)

IRB Name
IRB Approval Date
IRB Approval Number

Post-Trial

Post Trial Information

Study Withdrawal

There is information in this trial unavailable to the public.

Intervention

Is the intervention completed?
No
Data Collection Complete
Data Publication

Data Publication

Is public data available?
No

Program Files

Program Files
Reports, Papers & Other Materials

Relevant Paper(s)

Reports & Other Materials