Reducing Nonresponse: Evidence from an Experimental Study

Last registered on December 26, 2025

Pre-Trial

Trial Information

General Information

Title
Reducing Nonresponse: Evidence from an Experimental Study
RCT ID
AEARCTR-0017162
Initial registration date
December 10, 2025

First published
December 26, 2025, 1:51 AM EST

Locations

There is information in this trial that is not available to the public.

Primary Investigator

Affiliation
Duesseldorf Institute for Competition Economics (DICE)

Other Primary Investigator(s)

Additional Trial Information

Status
In development
Start date
2025-09-01
End date
2026-05-31
Secondary IDs
Prior work
This trial does not extend or rely on any prior RCTs.
Abstract
Nonresponse in panel surveys reduces sample sizes and can decrease sample representativeness, particularly during the initial waves following recruitment. Maintaining participation immediately after recruitment is therefore crucial for ensuring long-term panel quality.
This experimental study investigates how personalized communication and monetary incentives affect nonresponse in a probability-based, bimonthly online panel. The experiment consists of the following three parts. In Experiment 1, newly recruited panelists are randomly assigned to one of four treatment groups: a printed “Thank-you” postcard, a handwritten “Thank-you” postcard, a printed “Thank-you” postcard with an unconditional €5 incentive, or a control group without intervention. Using data from the subsequent panel wave, we estimate an optimal treatment allocation via a policy tree that maximizes response rates under a fixed budget constraint. In Experiment 2, we evaluate the learned optimal policy by comparing its performance to random assignment. In Experiment 3, we replicate Experiment 2 with non-treated panel members from an earlier recruitment.
We explore the results in two separate papers. Paper 1 investigates the effect of personalized communication and monetary incentives on nonresponse, comparing the average treatment effects across the different treatment conditions. Furthermore, we analyze treatment effect heterogeneity across sociodemographic subgroups, Big Five personality traits, and participants’ motivation to participate. Overall, we expect personalized communication and monetary incentives to reduce nonresponse. Moreover, we expect the printed “Thank-you” postcard with a monetary incentive to have the strongest effect, and the printed “Thank-you” postcard without a monetary incentive to have the smallest effect on nonresponse. Further, we expect the effectiveness of the treatments to vary systematically across sociodemographic subgroups, Big Five personality traits, and participants’ motivation to participate. Specifically, we hypothesize that the printed “Thank-you” postcard with a monetary incentive has the strongest effect among participants who indicate monetary incentives as their primary motivation for survey participation. The handwritten “Thank-you” postcard is expected to have the strongest effect among those who indicate support for the research team as their primary motivation. We expect all treatments to have the strongest effects in subgroups known to be at the highest risk of nonresponse: individuals with lower educational attainment and non-German participants.
In Paper 2, we explore whether optimal treatment allocation of personalized communication and monetary incentives in an adaptive survey design yields lower nonresponse than random assignment and whether this effect is generalizable to existing panelists.
External Link(s)

Registration Citation

Citation
Mohr, Aline. 2025. "Reducing Nonresponse: Evidence from an Experimental Study." AEA RCT Registry. December 26. https://doi.org/10.1257/rct.17162-1.0
Experimental Details

Interventions

Intervention(s)
Intervention Start Date
2025-12-15
Intervention End Date
2026-02-16

Primary Outcomes

Primary Outcomes (end points)
The main outcome variable is the participant’s response in a panel wave, measured as a binary indicator (1 = response, 0 = nonresponse).
Primary Outcomes (explanation)

Secondary Outcomes

Secondary Outcomes (end points)
Another outcome of interest is sample representativeness, defined as the deviation of the realized sample from the population distribution.
Secondary Outcomes (explanation)

Experimental Design

Experimental Design
The experiment consists of three parts and is conducted prior to the first two panel waves of 2026 (the panel’s second and third waves since recruitment) in a probability-based, bimonthly online panel. The online panel studies individual attitudes and preferences relevant to political and economic decision-making processes.
For each survey wave, an email invitation is sent to all panelists on the first day of every odd month. The survey remains open for one month. Panelists receive an incentive based on the length of the questionnaire (about 20-25 minutes at €0.25 per minute, i.e., roughly €5.00-€6.25 per wave), which is credited to their study account. In addition, participants receive a conditional yearly bonus of €10 if they complete all surveys, or €5 if they miss only one. Incentives can be cashed out, paid out as an Amazon voucher, or donated to charity twice a year.
In addition to the standard recruitment interview questions (sociodemographic characteristics, Big Five personality traits, need for cognition), we ask participants about their motivations to participate (monetary incentive, interest in study content, desire to express opinion, sample representativeness, support for social science research, support for the research team, or other). Participants first select all motivations that apply and then indicate their primary motivation among these. We plan to use these interview questions, particularly the motivation question, to explore treatment effect heterogeneity.
The study follows a between-subjects design with one factor comprising four levels.
Between September and November 2025, new panel members are recruited based on two different sampling frames (population register vs. commercial address provider). After recruitment, participants are invited to participate in their first panel wave starting on 1 November 2025.
In Experiment 1, we randomly assign the newly recruited panelists to one of four equally sized treatment groups within each stratum. Participants in the first treatment group T1.1 receive a “Thank-you” postcard with printed text, while those in the second group T1.2 receive the same postcard but with handwritten text. Participants in the third group T1.3 receive the “Thank-you” postcard with printed text along with an unconditional monetary incentive of €5. The fourth group T1.4 serves as a control group and therefore receives no treatment.
The “Thank-you” postcards for treatment groups T1.1-T1.3 are sent on 15 December 2025, and the second panel wave opens on 1 January 2026.
After completion of the second panel wave in January, we use the resulting data to determine the optimal treatment allocation based on participants’ available characteristics. To do so, we implement a policy tree that maximizes survey response conditional on treatment-specific costs, subject to an overall budget constraint equal to the cost of random assignment across the four treatments.
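For illustration, the following minimal sketch shows one way such a budget-constrained policy could be learned. It deliberately simplifies the planned analysis: a single-split tree stands in for the full policy tree, treatment effects are estimated with inverse-probability-weighted (IPW) scores, and all data, covariates, and per-arm costs are simulated placeholders rather than study parameters.

```python
# Minimal sketch of a budget-constrained policy tree (illustrative only).
# Simplifications: a single-split tree instead of the full policy tree,
# IPW scores, and simulated data. COSTS and covariates are hypothetical.
import numpy as np

rng = np.random.default_rng(0)

# --- Simulated Experiment 1 data: n panelists, 4 arms (T1.1-T1.4) ------
n = 1100
X = rng.normal(size=(n, 3))                 # stand-in covariates
arm = rng.integers(0, 4, size=n)            # random assignment, P(a) = 1/4
lift = np.array([0.02, 0.04, 0.08, 0.0])    # hypothetical arm effects
p_resp = np.clip(0.75 + 0.05 * (X[:, 0] > 0) + lift[arm], 0, 1)
response = rng.binomial(1, p_resp)

COSTS = np.array([1.0, 3.0, 6.0, 0.0])      # assumed per-arm cost (EUR)
BUDGET = COSTS.mean()                       # cost of random assignment

# IPW scores: unbiased per-arm value estimates under randomization
scores = np.zeros((n, 4))
scores[np.arange(n), arm] = response / 0.25

def fit_depth1(lam):
    """Exhaustively search one-covariate threshold splits; each leaf gets
    the arm with the highest mean cost-penalized score."""
    pen = scores - lam * COSTS               # Lagrangian cost adjustment
    best_val, best_assign = -np.inf, None
    for j in range(X.shape[1]):
        for t in np.quantile(X[:, j], np.linspace(0.1, 0.9, 9)):
            left = X[:, j] <= t
            assign = np.empty(n, dtype=int)
            for mask in (left, ~left):
                if mask.any():
                    assign[mask] = pen[mask].mean(axis=0).argmax()
            val = pen[np.arange(n), assign].mean()
            if val > best_val:
                best_val, best_assign = val, assign
    return best_assign

# Raise the cost penalty until the learned policy respects the budget
for lam in np.linspace(0.0, 0.2, 41):
    assign = fit_depth1(lam)
    value = scores[np.arange(n), assign].mean()   # est. response rate
    cost = COSTS[assign].mean()                   # expected cost/person
    if cost <= BUDGET:
        break

print(f"lambda={lam:.3f}, est. response={value:.3f}, cost={cost:.2f} EUR")
```

In practice, dedicated policy-learning tools (e.g., the policy tree approach of Athey and Wager) would replace this exhaustive single-split search; the sketch only conveys the mechanics of trading off response gains against treatment costs under a budget.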
In Experiment 2, we split the control group T1.4 into two equally sized subgroups to evaluate the learned optimal policy. In treatment group T2.1, we implement the optimal policy derived from wave 2. In treatment group T2.2, we repeat Experiment 1 and randomly assign participants to one of the four equally sized treatment groups. The treatment allocation in T2.1 and T2.2 is constrained to ensure equal overall costs. The “Thank-you” postcards are sent on 15 February 2026 and the third panel wave opens on 1 March 2026. Panel members in treatment groups T1.1-T1.3 do not receive an additional intervention.
We use data from the third panel wave in March to evaluate whether the optimal treatment allocation yields higher response rates than random assignment.
For additional evidence on the efficiency of the optimal policy, we replicate Experiment 2 with non-treated panel members from an earlier recruitment in Experiment 3 (T3.1 and T3.2).
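The comparisons in Experiments 2 and 3 amount to tests of two proportions (response rate under the learned policy vs. under random assignment). A minimal sketch of such a test, using statsmodels and made-up counts, is given below; the actual estimation strategy may differ.

```python
# Hypothetical evaluation of the learned policy (T2.1) against random
# assignment (T2.2) as a one-sided two-sample proportions test. The
# responder counts below are invented placeholders, not study data.
from statsmodels.stats.proportion import proportions_ztest

responders = [125, 110]   # hypothetical responders in T2.1, T2.2
group_sizes = [138, 138]  # expected group sizes from the design

# alternative="larger": does T2.1 have the higher response rate?
z, p = proportions_ztest(responders, group_sizes, alternative="larger")
print(f"z = {z:.2f}, one-sided p = {p:.4f}")
```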
Experimental Design Details
Not available
Randomization Method
The 2025 recruitment of the GIP (German Internet Panel) is based on two different samples (population register vs. commercial address provider). The treatment groups will be randomly drawn within these strata in Experiment 1 and within the control group in Experiment 2. Within each stratum, we use block randomization, randomly assigning each participant to one of the four equally sized, predetermined groups. Randomization is conducted using statistical software.
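A minimal sketch of such stratified block randomization, written in Python with placeholder column names and a fixed seed, might look as follows; the study's actual implementation and software may differ.

```python
# Stratified block randomization sketch (illustrative). "stratum" and
# "id" are placeholder column names; the seed and sizes are arbitrary.
import numpy as np
import pandas as pd

rng = np.random.default_rng(2025)  # fixed seed for reproducibility

def block_randomize(df, arms=("T1.1", "T1.2", "T1.3", "T1.4")):
    """Within each stratum, assign arms in randomly permuted blocks of
    len(arms) so that group sizes stay balanced throughout."""
    out = df.copy()
    out["arm"] = ""
    for stratum, idx in out.groupby("stratum").groups.items():
        shuffled = rng.permutation(idx)          # shuffle participants
        n_blocks = int(np.ceil(len(shuffled) / len(arms)))
        labels = np.concatenate(
            [rng.permutation(arms) for _ in range(n_blocks)]
        )[: len(shuffled)]
        out.loc[shuffled, "arm"] = labels
    return out

panel = pd.DataFrame({
    "id": range(1100),
    "stratum": rng.choice(["register", "commercial"], size=1100),
})
assigned = block_randomize(panel)
print(assigned.groupby(["stratum", "arm"]).size())
```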
Randomization Unit
Randomization is conducted at the individual level.
Was the treatment clustered?
No

Experiment Characteristics

Sample size: planned number of clusters
The sample size depends on the recruitment success of the GIP 2025. We expect to observe approximately 1,100 newly recruited members.
Additionally, we plan to treat up to 1,000 existing panelists from the 2018 GIP recruitment.
Sample size: planned number of observations
Throughout the main experiment (Experiments 1 and 2), we observe each newly recruited participant’s response behavior in two survey waves. Experiment 1, corresponding to survey wave 2, is expected to yield 1,100 observations, in line with the anticipated number of participants. Experiment 2, corresponding to survey wave 3, is expected to yield 275 observations, consistent with the expected number of participants in T1.4. In Experiment 3, among the 1,000 existing panelists from the 2018 GIP recruitment, we observe one survey wave, resulting in up to 1,000 observations.
Sample size (or number of clusters) by treatment arms
In Experiment 1, we expect to allocate approximately 275 participants to each treatment group (T1.1-T1.4).
In Experiment 2, we expect approximately 138 participants in each of the treatment groups T2.1 and T2.2. In T2.2, this corresponds to approximately 34 participants per randomly allocated subgroup, whereas the subgroup sizes in T2.1 will be entirely determined by the optimal treatment allocation.
In Experiment 3, we expect up to 500 participants per treatment group (T3.1 and T3.2).
Minimum detectable effect size for main outcomes (accounting for sample design and clustering)
Experiment 1 (comparison between two treatment groups): If the response rate in one treatment group is 80%, the other treatment would need to yield a response rate at least 8.5 percentage points higher (88.5%) for the effect to be statistically significant (alpha = 0.05, power = 0.8), given a sample size of 275 per treatment group.
Experiment 2 (optimal policy vs. random assignment): If the response rate in the random-assignment group is 80%, the optimal policy would need to yield a response rate at least 11 percentage points higher (91%) for the effect to be statistically significant (alpha = 0.05, power = 0.8), given a sample size of 138 per treatment group.
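These minimum detectable effects can be approximately reproduced with a standard two-sample proportions power calculation. The sketch below (Python, statsmodels) solves for the detectable Cohen's h and converts it back into a response-rate difference from the assumed 80% baseline; results may deviate slightly from the registered numbers depending on the approximation used.

```python
# Approximate MDE check for a two-sided two-sample proportions test
# (alpha = 0.05, power = 0.80, assumed baseline response rate 80%).
import numpy as np
from statsmodels.stats.power import NormalIndPower

p1 = 0.80                              # assumed baseline response rate
analysis = NormalIndPower()
for label, n in [("Experiment 1", 275), ("Experiment 2", 138)]:
    # Solve for the detectable effect size (Cohen's h) at this n
    h = analysis.solve_power(nobs1=n, alpha=0.05, power=0.80,
                             ratio=1.0, alternative="two-sided")
    # Invert Cohen's h: h = 2*arcsin(sqrt(p2)) - 2*arcsin(sqrt(p1))
    p2 = np.sin(np.arcsin(np.sqrt(p1)) + h / 2) ** 2
    print(f"{label}: n = {n} per group -> MDE = {p2 - p1:.3f} "
          f"(detectable rate = {p2:.3f})")
```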
IRB

Institutional Review Boards (IRBs)

IRB Name
German Association for Experimental Economic Research e.V.
IRB Approval Date
2025-12-04
IRB Approval Number
gQjenStJ