Incentives and Answer Quality for Open-Ended Questions

Last registered on January 10, 2023


Trial Information

General Information

Incentives and Answer Quality for Open-Ended Questions
Initial registration date
January 10, 2023

Initial registration date is when the trial was registered.

It corresponds to when the registration was submitted to the Registry to be reviewed for publication.

First published
January 10, 2023, 1:17 PM EST

First published corresponds to when the trial was first made public on the Registry after being reviewed.



Primary Investigator

University of Cologne

Other Primary Investigator(s)

PI Affiliation
University of Cologne
PI Affiliation
Frankfurt School of Finance and Management
PI Affiliation
University of Cologne

Additional Trial Information

Ongoing
Start date
End date
Secondary IDs
Prior work
This trial does not extend or rely on any prior RCTs.
We investigate to what extent incentives affect the quality of answers to open-ended survey questions. To this end, we conduct an online survey study on Prolific, asking participants open-ended questions as well as items from established scales. Moreover, we investigate whether giving participants an additional incentive to provide high-quality answers, i.e., informing them that we will hire some of the participants again for a well-paid task depending on the quality of their answers, can increase answer quality for open-ended questions. To do so, we randomly provide half of the participants with this additional incentive.
External Link(s)

Registration Citation

Opitz, Saskia et al. 2023. "Incentives and Answer Quality for Open-Ended Questions." AEA RCT Registry. January 10.
Experimental Details


Depending on the quality of their answers, some participants in the treatment group are hired again for a well-paid task.
Intervention Start Date
Intervention End Date

Primary Outcomes

Primary Outcomes (end points)
The primary outcome variable is the quality of the given answers, i.e., clarity and answer fit as rated by two human evaluators.
Primary Outcomes (explanation)

Secondary Outcomes

Secondary Outcomes (end points)
Secondary Outcomes (explanation)

Experimental Design

Experimental Design
Participants have to answer three open-ended questions either connected to performance feedback (e.g., “Imagine the following: You receive negative feedback from your boss knowing that you did not perform at your best. How do you think you would react?”) or unrelated to performance feedback (e.g., “Describe what you did last Sunday.”). We randomly allocate half of the participants to a treatment group, which receives an additional incentive that participants are informed about before answering the open-ended questions: depending on the quality of their answers to the open-ended questions (clear description and answer fit), some participants are hired again for a well-paid task. The control group does not receive this incentive.

Two human evaluators read the submitted answers to the open-ended questions and rate their quality based on clarity of the description as well as answer fit to the question. The two best-performing participants in this respect will be hired to perform a real-effort task, paid at about 150% of the usual hourly wage on Prolific.

After the open-ended questions, we further elicit conscientiousness, social comparison and feedback orientation using established scales. Additionally, we elicit education level, age and gender.
Experimental Design Details
Randomization Method
We will randomize participants into the treatment or control group using a random number generator, such that there is an equal number of subjects in each group.
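As a minimal sketch (the registration does not specify the implementation), balanced random assignment of this kind can be done by shuffling a label list with exactly half "control" and half "treatment" entries. The function name and seed below are hypothetical, not taken from the study materials:

```python
import random

def assign_balanced(n_participants, seed=None):
    """Randomly assign participants to control/treatment with equal group sizes."""
    assert n_participants % 2 == 0, "balanced design requires an even sample size"
    half = n_participants // 2
    labels = ["control"] * half + ["treatment"] * half
    random.Random(seed).shuffle(labels)  # seeded shuffle for reproducibility
    return labels

groups = assign_balanced(300, seed=42)  # 150 control, 150 treatment, random order
```

Shuffling a fixed label list (rather than drawing a coin flip per participant) guarantees exactly equal group sizes, matching the stated design.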
Randomization Unit
Individual subject
Was the treatment clustered?

Experiment Characteristics

Sample size: planned number of clusters
The number of clusters is the same as the number of observations (please see below).
Sample size: planned number of observations
300 individual subjects
Sample size (or number of clusters) by treatment arms
150 individual subjects control, 150 individual subjects treatment
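The registration leaves the minimum detectable effect size blank, but with 150 subjects per arm it can be approximated for a two-sample comparison of means under standard assumptions; the two-sided α = 0.05 and 80% power used below are conventional defaults I am assuming, not values from the registration:

```python
from statistics import NormalDist

def mde_standardized(n_per_arm, alpha=0.05, power=0.80):
    """Minimum detectable effect in standard-deviation units for a
    two-sample comparison of means with equal group sizes."""
    z_alpha = NormalDist().inv_cdf(1 - alpha / 2)  # critical value, two-sided test
    z_beta = NormalDist().inv_cdf(power)           # quantile for the desired power
    return (z_alpha + z_beta) * (2 / n_per_arm) ** 0.5

effect = mde_standardized(150)  # roughly 0.32 standard deviations
```

Under these assumptions the design can detect differences of about a third of a standard deviation in answer quality between the two arms.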
Minimum detectable effect size for main outcomes (accounting for sample design and clustering)

Institutional Review Boards (IRBs)

IRB Name
University of Cologne Ethics Board
IRB Approval Date
IRB Approval Number


Post Trial Information

Study Withdrawal

Some information in this trial is unavailable to the public and must be requested from the Registry.


Is the intervention completed?
Data Collection Complete
Data Publication

Data Publication

Is public data available?

Program Files

Program Files
Reports, Papers & Other Materials

Relevant Paper(s)

Reports & Other Materials