Social Norms and Online Survey Participation

Last registered on April 28, 2022


Trial Information

General Information

Social Norms and Online Survey Participation
Initial registration date
April 24, 2022


First published
April 28, 2022, 6:05 PM EDT




Primary Investigator

ifo Institute

Other Primary Investigator(s)

PI Affiliation
Nuremberg Institute of Technology

Additional Trial Information

Ongoing
Start date
End date
Secondary IDs
Prior work
This trial does not extend or rely on any prior RCTs.
Researchers often rely on data from voluntary (online) surveys, for instance, to investigate effects on outcomes that are not available in administrative data (e.g., well-being and non-cognitive skills) or to study potential mechanisms behind treatment effects (e.g., behavioral changes that occur because individuals update their beliefs). However, survey response rates are often low, which reduces the statistical power of subsequent analyses. In addition, survey participation may be selective, which can introduce nonresponse bias. Using a large sample of university students, we investigate whether social norms can be used to increase participation rates and to reduce nonresponse bias in a voluntary online survey.
External Link(s)

Registration Citation

Brade, Raphael and Robert Jäckle. 2022. "Social Norms and Online Survey Participation." AEA RCT Registry. April 28.
Experimental Details


The intervention will be implemented as part of the invitations to two online surveys conducted in the 2022 summer semester with students of all bachelor's programs, spanning 27 different degrees, at a university of applied sciences. In all experimental groups, students will receive an email invitation and four email reminders asking for their participation in the surveys.
Intervention Start Date
Intervention End Date

Primary Outcomes

Primary Outcomes (end points)
Survey participation
Primary Outcomes (explanation)

Secondary Outcomes

Secondary Outcomes (end points)
An increase in survey participation can, but need not, increase survey quality. To assess how survey quality is affected by the treatments and study potential mechanisms, we will also investigate effects along the following dimensions:

1. Nonresponse bias: We will assess nonresponse bias by, for example, analyzing differences in observable administrative background variables (such as age, sex, features of the university entrance qualification, and prior academic performance) between respondents and nonrespondents.

2. Quality of survey responses: We will further approximate the quality of the survey responses using variables based on dimensions such as answering speed, item nonresponse, survey dropout, and item nondifferentiation.

3. Beliefs: Finally, to study potential mechanisms, we will investigate effects on students' beliefs about the participation rate among all students and their confidence in those beliefs (both elicited in the surveys).
Secondary Outcomes (explanation)

Experimental Design

Experimental Design
Students were randomly allocated to a control group and seven treatment groups.
Experimental Design Details
Experimental groups of the 2x4 factorial design are:

Group 1: No injunctive norm + No descriptive norm
Group 2: No injunctive norm + Qualitative descriptive norm
Group 3: No injunctive norm + Absolute descriptive norm
Group 4: No injunctive norm + Relative descriptive norm

Group 5: Injunctive norm + No descriptive norm
Group 6: Injunctive norm + Qualitative descriptive norm
Group 7: Injunctive norm + Absolute descriptive norm
Group 8: Injunctive norm + Relative descriptive norm
Randomization Method
Blocked randomization using Stata
Randomization Unit
Was the treatment clustered?
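The registration states that the blocked randomization was carried out in Stata but gives no further details. As an illustration only, here is a minimal Python sketch of blocked (stratified) randomization into the eight experimental groups; the block variable, student IDs, and round-robin assignment are assumptions, not the registered procedure:

```python
import random

def blocked_randomize(students, n_groups=8, seed=2022):
    """Assign each student to one of `n_groups` experimental groups,
    balancing assignments within blocks.

    `students` is a list of (student_id, block) pairs; a block could be,
    e.g., the degree program. Within each block, students are shuffled
    and then dealt to groups round-robin, so per-block group sizes
    differ by at most one.
    """
    rng = random.Random(seed)  # fixed seed for reproducibility
    by_block = {}
    for sid, block in students:
        by_block.setdefault(block, []).append(sid)
    assignment = {}
    for block in sorted(by_block):
        members = by_block[block]
        rng.shuffle(members)
        for i, sid in enumerate(members):
            assignment[sid] = i % n_groups + 1  # groups numbered 1..8
    return assignment

# Hypothetical example: 240 students spread evenly over 3 blocks.
students = [(f"s{i}", i % 3) for i in range(240)]
groups = blocked_randomize(students)
```

Blocking on a pre-treatment variable such as degree program keeps the experimental groups balanced on that variable by construction, which typically improves precision relative to simple randomization.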

Experiment Characteristics

Sample size: planned number of clusters
9,276 students
Sample size: planned number of observations
9,276 students
Sample size (or number of clusters) by treatment arms
Group 1: 1,161
Group 2: 1,162
Group 3: 1,159
Group 4: 1,163
Group 5: 1,153
Group 6: 1,162
Group 7: 1,156
Group 8: 1,160
Minimum detectable effect size for main outcomes (accounting for sample design and clustering)
MDES for effects in percentage points range from 1.42 pp to 5.2 pp. MDES for standardized effect sizes range from d = 0.046 to d = 0.116.
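The registration does not spell out how these MDES were computed. As a rough illustration, the standard normal-approximation formula for detecting a difference in two proportions can be evaluated at the registered arm sizes. The 0.5 baseline participation rate, the 5% significance level, the 80% power, and the specific comparisons below are all assumptions, so the resulting numbers need not match the registered range:

```python
from statistics import NormalDist

def mdes_proportion(n1, n2, p=0.5, alpha=0.05, power=0.8):
    """Minimum detectable difference in proportions (as a fraction)
    for a two-sided test at significance `alpha` with the given `power`.

    Uses the normal-approximation formula
        MDES = (z_{1-alpha/2} + z_{power}) * sqrt(p(1-p) * (1/n1 + 1/n2)),
    evaluated at a conservative baseline rate p.
    """
    z = NormalDist()
    z_alpha = z.inv_cdf(1 - alpha / 2)
    z_power = z.inv_cdf(power)
    se = (p * (1 - p) * (1 / n1 + 1 / n2)) ** 0.5
    return (z_alpha + z_power) * se

# One treatment arm (~1,160) vs. the control arm (~1,161):
print(round(mdes_proportion(1160, 1161) * 100, 2), "pp")  # → 5.82 pp
# Injunctive-norm arms pooled (4,631) vs. no-injunctive-norm arms (4,645):
print(round(mdes_proportion(4631, 4645) * 100, 2), "pp")  # → 2.91 pp
```

Pooling across one factor of the 2x4 design roughly quadruples the per-comparison sample, which is why pooled comparisons have substantially smaller MDES than single-arm comparisons.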

Institutional Review Boards (IRBs)

IRB Name
IRB Approval Date
IRB Approval Number


Post Trial Information

Study Withdrawal



Is the intervention completed?
Data Collection Complete
Data Publication

Data Publication

Is public data available?

Program Files

Program Files
Reports, Papers & Other Materials

Relevant Paper(s)

Reports & Other Materials