Honesty Interventions in Online Surveys

Last registered on March 30, 2023

Pre-Trial

Trial Information

General Information

Title
Honesty Interventions in Online Surveys
RCT ID
AEARCTR-0011138
Initial registration date
March 22, 2023

Initial registration date is when the trial was registered, i.e., when the registration was submitted to the Registry to be reviewed for publication.

First published
March 30, 2023, 3:17 PM EDT

First published corresponds to when the trial was first made public on the Registry after being reviewed.

Locations

Region

Primary Investigator

Affiliation
Munich School of Politics & Public Policy; TUM School of Management

Other Primary Investigator(s)

PI Affiliation
Munich School of Politics & Public Policy; Technical University of Munich
PI Affiliation
Munich School of Politics & Public Policy; Technical University of Munich
PI Affiliation
Munich School of Politics & Public Policy; TUM School of Management

Additional Trial Information

Status
Ongoing
Start date
2023-03-15
End date
2023-12-31
Secondary IDs
Prior work
This trial does not extend or rely on any prior RCTs.
Abstract
The goal of this study is to gain a better understanding of the extent to which (and possibly the conditions under which) low-cost honesty interventions can be used to address dishonesty and shirking among survey participants in online surveys.
External Link(s)

Registration Citation

Citation
Büthe, Tim et al. 2023. "Honesty Interventions in Online Surveys." AEA RCT Registry. March 30. https://doi.org/10.1257/rct.11138-1.0
Sponsors & Partners

There is information in this trial unavailable to the public.
Experimental Details

Interventions

Intervention(s)
The goal of this study is to gain a better understanding of the extent to which (and possibly the conditions under which) low-cost honesty interventions can be used to address dishonesty and shirking among survey participants in online surveys.
Intervention (Hidden)
Dishonest responses are a major concern in survey research, where they may entail over-reporting socially desirable or (for the respondent) materially or otherwise rewarding choices and outcomes, as well as under-reporting undesirable or costly choices and outcomes. In addition, survey researchers are concerned about careless and hence low-quality responses, a form of shirking that implies dishonesty in the sense of giving the appearance of answering a question without actually having read it or without having fully considered the response options. Such concerns might be expected to be especially prevalent in online (internet-based) surveys.
Concerns about dishonest or low-quality responses have contributed to favoring "revealed" rather than stated preferences. Yet many scholars still seek to understand opinions and sentiments as such, or consider the reliance on stated preferences useful or even necessary in order to address a broad range of economic and social science research questions. For them, concerns about dishonest responses have contributed to the increasing use of "incentivized" behavioral research. Alternatively, or in addition, survey researchers have devised a range of methods to detect dishonest and shirking survey participants.
Designing and implementing measures to detect and/or overcome participant dishonesty and shirking can be time-consuming and expensive. Moreover, questions designed to identify dishonest or shirking participants can easily be perceived as attempts to "trick" participants, damaging trust or destroying the rapport between survey researchers and survey participants. The goal of our research is to gain a better understanding of the extent to which (and possibly the conditions under which) simpler, less costly honesty interventions can be used to address dishonesty and shirking among survey participants in online surveys.
Towards this end, we plan to conduct a series of experiments to examine the effectiveness of honesty nudges and similar interventions, such as online-suitable versions of an "honesty oath", which combines elements of moral reminder ("educational") nudges and internal reward experiments with external commitment. The overarching empirical strategy is to gather individual-level data through surveys with embedded experiments, which can then be analyzed using statistical/econometric techniques.
In a first study, intended as a pilot, we plan to sample about 1,000 respondents living in Germany, recruited by the survey research firm Respondi/Bilendi to be representative of the German population with respect to gender, age brackets, and education, as well as with respect to living in the Eastern vs. the Western German states. For this study, we will split participants into three treatment groups and one control group of about 250 participants each. At the beginning of the survey, participants in the treatment groups are asked (in German) whether they are willing to commit to (1) answer all the questions truthfully (we call this the honesty treatment), (2) read all the questions attentively (the attentiveness treatment), or (3) read all the questions attentively and answer them truthfully/honestly (the honestyXattentiveness treatment). [We motivate this question/request by telling respondents: "The quality of our survey data is of great significance, and we therefore hope to capture your assessments as accurately as possible. It is therefore important to us that you ...*..."] The control group is not presented with any request or question regarding reading carefully and/or answering honestly ("null control"). Respondents in the treatment groups are then given three response options:
• I am willing to ...*... (in German: "Ich bin bereit, ...*...")
• I cannot promise to ...*... (in German: "Ich kann nicht versprechen ...*...")
• I am not willing to ...*... (in German: "Ich bin nicht bereit, ...*...")
... where the experimentally varied treatment phrase replaces the "...*...", i.e., for treatment group 1: "answer all questions truthfully" ("alle Fragen ehrlich zu beantworten"); for treatment group 2: "read all questions attentively" ("alle Fragen aufmerksam zu lesen"); and for treatment group 3: "read all questions attentively and answer them truthfully" ("alle Fragen aufmerksam zu lesen und ehrlich zu beantworten"). Respondents must select one of the three response options before they can proceed and start the actual survey.
In the first study, we only ask participants to select one of the statements by clicking on it. In subsequent iterations we intend to vary the strength of the commitment, but we start with this minimally invasive, low-cost version.
Respondents are then presented with several sets of questions, games, and tasks (most of them substantively unrelated to honesty and/or attentiveness). Outcomes (possible effects of the experimental treatments) will be measured in seven ways.
Intervention Start Date
2023-03-17
Intervention End Date
2023-03-23

Primary Outcomes

Primary Outcomes (end points)
Responses/Behavior in Game Designed to Elicit Individual-Level Measure of (Dis)Honesty: To measure individual-level dishonesty directly, the survey includes a game with a material incentive to behave dishonestly. Every respondent is provided with a screen introducing the opportunity to earn a bonus. At the top of the screen, the respondent is told: "You have now answered approximately half of the questions in this survey. You now have the opportunity to earn a bonus [reward]" (emphasis in the original). Below this statement, participants are shown an image of two tumbling dice, underneath which they are told: "After you click 'Next', you will be shown an image of six dice. If two of the dice show the same number of eyes, you will receive a bonus as follows: If 1 is the highest number that appears twice, you will get the basic bonus of €0.50; if 2 is the highest number that shows twice, you get double the basic bonus: €1.00; if 3 is the highest number that shows twice, you will get triple the basic bonus: €1.50; etc."
On the next page, respondents are shown an image consisting of the surfaces of six dice. 95% of respondents are shown an image containing two dice showing the number 1 (no other number twice; see Analysis Plan for details). Underneath the image, all participants are asked: "Each of the six dice shows a number of eyes. You get a bonus if a number appears exactly twice. Does a number appear twice on your screen?" [In German: "Jeder der sechs Würfel zeigt eine Zahl von Augen an. Sie erhalten einen Bonus, wenn eine Zahl genau zweimal vorkommt. Kommt bei Ihnen eine Zahl zweimal vor?"] This statement and question is supposed to heighten the ambiguity about whether we have any knowledge of the number shown twice, allowing for the possibility that respondents can behave dishonestly (see below) without detection. The question is followed by a simple Yes – No choice.
If the participant clicks "Yes", s/he gets the immediate (on-page) feedback: "Congratulations! Please remember the number that appears twice." If the respondent selects No, the statement "What bad luck! We are sorry that you did not roll a bonus." appears. Participants who click "No" skip the next page.
On the next page, participants are asked: "Which number appeared twice on the dice? To claim your bonus, select that number from the drop-down menu:" The drop-down menu for this question contains the options 1 through 6. As participants select a number from the menu, a statement appears underneath: "The bonus to which you are entitled is €0.50. This bonus will be credited to you by Mingle/Meinungsplatz after completion of the project (in approx. 10-14 days)" if they select "1"; "The bonus to which you are entitled is €1.00" if they select "2"; etc., such that the specific amount is filled in depending on the number selected, so as to remind participants of the monetary reward available to them for being dishonest. While €3 might be only a modest material incentive, it is twice the "incentive pay" awarded to respondents by the survey company for participating in the survey, suggesting that participants who might deviate from the social norm of honest behavior in response to material rewards should thus be incentivized to behave dishonestly.
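The bonus schedule described above is linear in the reported number: the basic bonus of €0.50 is multiplied by the die face claimed to appear twice. A minimal sketch of that payoff rule (function and constant names are ours, for illustration only, not part of the survey instrument):

```python
BASE_BONUS_EUR = 0.50  # basic bonus, paid if "1" is the highest number shown twice


def bonus_eur(reported_number: int) -> float:
    """Bonus implied by the die face a respondent reports as appearing twice.

    Linear schedule: 1 -> EUR 0.50, 2 -> EUR 1.00, ..., 6 -> EUR 3.00.
    """
    if not 1 <= reported_number <= 6:
        raise ValueError("a die face must be between 1 and 6")
    return BASE_BONUS_EUR * reported_number
```

Reporting "6" instead of the shown "1" thus yields €3.00 rather than €0.50, which is the maximal material reward for dishonesty in this design.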
Primary Outcomes (explanation)
For all respondents who were shown the number "1" twice, any deviance from "1" constitutes a direct, individually attributable measure of dishonesty with a range from 1 (low) to 5 (high/maximum level of dishonesty), which can be compared across individuals. We expect respondents who receive the honesty treatment (groups T1 or T3) to exhibit a lower likelihood of behaving dishonestly (reporting that they are entitled to a higher bonus than suggested by the image of the six dice shown to them) and/or lower level of dishonesty (C > T3 ≈ T1). See analysis plan for details.
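Because 95% of respondents are shown "1" as the only repeated face, the dishonesty measure reduces to the deviation of the reported number from the number actually shown twice. A sketch under that assumption (the function name is ours):

```python
def dishonesty_level(reported_number: int, shown_twice: int = 1) -> int:
    """Individual-level dishonesty for a respondent shown `shown_twice` as the
    only repeated die face: 0 means an honest report; 1 through 5 are
    increasing levels of dishonesty (the registration labels these 1 = low
    to 5 = high/maximum). Sketch, not the registered analysis code."""
    return reported_number - shown_twice
```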

Secondary Outcomes

Secondary Outcomes (end points)
Outcomes (possible effects of the experimental treatments) will also be measured in six other ways (see Analysis Plan for details):
(1) Time spent on the survey as a whole;
(2) Time spent on each page;
(3) Responses to attention check questions;
(4) Responses to factual attention questions;
(5) Responses to honesty check questions;
(6) Participants' level of (dis)agreement with the items of the Social Desirability–Gamma Short Scale.
Secondary Outcomes (explanation)
See analysis plan for detail on each of the outcome variables. For example, for the first measure, we expect respondents who, at the beginning of the survey, received the attentiveness treatment (T2) to spend more time on average to complete the survey than respondents in the control group (C). As honesty implies thoughtful consideration of the questions and response options, we also expect respondents who received the honesty treatment (T1) to spend more time taking the survey than respondents in the control group, though this effect might be less strong than the effect of specifically committing to attentiveness. Theory and prior work do not allow us to develop a clear expectation as to whether, in the combined treatment (T3), the effect of the honesty treatment should be a reinforcing complement to or a substitute for the attentiveness treatment; overall, we would expect the pattern of time spent to be: T3 ≥ T2 > T1 > C.

Experimental Design

Experimental Design
For the initial ("pilot") study, we split participants into three treatment groups and one control group of about 250 participants each. The treatment groups receive (1) an honesty treatment, (2) an attentiveness treatment, or (3) an honestyXattentiveness treatment.
Experimental Design Details
The experimental design is described in some detail under "Intervention" and "Outcomes" above and in more detail in the Analysis Plan. Additional self-reported outcome measures or conditioning variables are discussed in the Analysis Plan.
Randomization Method
Selection into one of the three treatment groups or the control group will be done via a "randomizer" in the survey software Qualtrics (with equal distribution across the four groups). Within each treatment group and the control group, participants are further assigned to subgroups for substantively unrelated questions via nested randomizers.
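The equal-probability assignment performed by such a randomizer can be sketched as follows (illustrative Python, not the actual Qualtrics implementation; group labels are ours):

```python
import random


def assign_groups(n: int, groups: list, seed: int = 42) -> list:
    """Balanced random assignment: each group receives n // len(groups)
    participants, with any remainder assigned at random, then the sequence
    is shuffled (a sketch of an 'evenly present elements' randomizer)."""
    per_group, remainder = divmod(n, len(groups))
    labels = [g for g in groups for _ in range(per_group)]
    rng = random.Random(seed)
    labels += rng.sample(groups, remainder)
    return rng.sample(labels, len(labels))  # random presentation order


# e.g. 1,000 pilot participants across the four groups:
arms = assign_groups(
    1000, ["control", "honesty", "attentiveness", "honesty_x_attentiveness"]
)
```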
Randomization Unit
individual
Was the treatment clustered?
No

Experiment Characteristics

Sample size: planned number of clusters
1 country
Sample size: planned number of observations
For initial pilot study: 1000 (+ up to 10%) inhabitants of Germany (Respondi/Bilendi online sample)
Sample size (or number of clusters) by treatment arms
ca. 250
Minimum detectable effect size for main outcomes (accounting for sample design and clustering)
to be identified via pilot study
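For orientation while the pilot is pending, a conventional two-proportion normal-approximation calculation gives a rough MDE for two arms of ~250 each (the alpha and power values below are standard conventions we assume, not figures from this registration):

```python
import math


def mde_two_proportions(n_per_arm: int, p_baseline: float,
                        z_alpha: float = 1.96,   # two-sided alpha = 0.05
                        z_beta: float = 0.84) -> float:  # power = 0.80
    """Approximate minimum detectable difference in proportions between two
    equal-sized arms, using the normal approximation. Illustrative only;
    the registration defers the actual MDE to the pilot study."""
    se = math.sqrt(2.0 * p_baseline * (1.0 - p_baseline) / n_per_arm)
    return (z_alpha + z_beta) * se
```

With 250 per arm and a worst-case 50% baseline rate, this yields an MDE of roughly 12.5 percentage points for a pairwise treatment-control comparison.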
IRB

Institutional Review Boards (IRBs)

IRB Name
IRB of the German Association for Experimental Economic Research
IRB Approval Date
2023-03-22
IRB Approval Number
zKIZDfSf
Analysis Plan

There is information in this trial unavailable to the public.

Post-Trial

Post Trial Information

Study Withdrawal

There is information in this trial unavailable to the public.

Intervention

Is the intervention completed?
No
Data Collection Complete
Data Publication

Data Publication

Is public data available?
No

Program Files

Program Files
Reports, Papers & Other Materials

Relevant Paper(s)

Reports & Other Materials