Experimental Design
Procedure:
The experiment is conducted online via Prolific and programmed in oTree. Participants are recruited on a rolling basis, so we cannot state a precise end date for the trial. Upon arrival, participants solve a Captcha to verify they are human and enter their Prolific ID. After consenting to the data regulations, they answer socio-demographic questions before entering the first round of the real-effort task in one of the baseline environments. The instructions for their respective environment are displayed, and participants must confirm that they have read them carefully and understood the payment scheme. They then proceed to the real-effort task described below. After the task, a diagnostic survey (see below) is administered before the second round starts. Here, participants again read the instructions and confirm that they have understood them before entering the real-effort task. After the second round, another survey including the diagnostic questions is displayed; then the payment is shown and participants are redirected to Prolific.
Real-effort task:
Participants work on a real-effort task for a 6-minute work period. A 3-digit number appears on the participants' screens and changes every 1.2 seconds. At random moments, participants are asked to enter the last displayed number into an entry field within 7 seconds. Within the 6 minutes, this query pops up 30 times. There is one additional rule: if the last displayed number contains a "3", participants must enter "0" into the entry field instead of the number itself. Three of the 30 queries are randomly chosen to be payoff relevant, and participants are paid for each correct response among these. Participants can skip the task at any time by pressing a "Skip" button displayed on their screens; after pressing this button, they are sent to a screen with a video, which runs until the remainder of the 6-minute work period is over. After the 6 minutes, participants are redirected to a final survey.
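The scoring rules above (the "3"-to-"0" rule and the random choice of 3 payoff-relevant queries out of 30) can be sketched in Python, the language oTree apps are written in. This is an illustrative sketch only, not the actual experiment code; the function names are hypothetical.

```python
import random


def correct_entry(displayed: str) -> str:
    # Extra rule: if the displayed number contains a "3",
    # the required entry is "0" instead of the number itself.
    return "0" if "3" in displayed else displayed


def score_round(responses, displayed_numbers, n_paid=3, rng=None):
    """Count correct responses among the randomly chosen payoff-relevant
    queries (3 out of 30 in the design); participants are paid per
    correct response among these."""
    rng = rng or random.Random()
    paid_indices = rng.sample(range(len(responses)), n_paid)
    return sum(
        responses[i] == correct_entry(displayed_numbers[i])
        for i in paid_indices
    )
```

For example, a participant who answers every query correctly earns for all payoff-relevant queries, regardless of which ones are drawn.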
Three environments:
We induce three environments, each of which makes a particular behavioral barrier more pronounced. In the first environment, the extra rule is made less salient. In the second environment, participants do not receive a monetary incentive for correct responses. The third environment makes the real-effort task more difficult: it uses a 5-digit number instead of a 3-digit number, and the "Skip" button does not lead to the video but immediately ends the task.
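The three environments can be summarized as deviations from a common baseline parameterization. The following sketch uses hypothetical parameter names, not the actual oTree session config:

```python
# Baseline task parameters (names are illustrative).
BASELINE = {
    "digits": 3,               # length of the displayed number
    "rule_salient": True,      # the extra "3" -> "0" rule is shown saliently
    "paid": True,              # correct responses are incentivized
    "skip_plays_video": True,  # the Skip button leads to a waiting video
}

# Each environment changes exactly the features described in the text.
ENVIRONMENTS = {
    "env1_less_salient_rule": {**BASELINE, "rule_salient": False},
    "env2_no_incentive": {**BASELINE, "paid": False},
    "env3_more_difficult": {**BASELINE, "digits": 5,
                            "skip_plays_video": False},
}
```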
Interventions:
In the second round, participants either receive an intervention or stay with the baseline setting. Each intervention described in the "Interventions" section is conducted only within its respective environment, alongside a baseline condition. Thus, we have a total of 6 experimental conditions (3 environments, each with a baseline and an intervention arm).
Diagnostic survey:
After each round of the real-effort task, we ask participants how many numbers they intended to enter correctly. Then, we ask how many numbers they think they entered correctly (beliefs). Additionally, we ask qualitative questions after the second round: first, whether they are aware of the actual number of mistakes they made (5-point Likert scale ranging from Yes to No) and whether they forgot about the extra rule at any point (binary). Second, we ask whether they planned to enter the numbers correctly and whether they were determined to do so (5-point Likert scale from Yes to No). Lastly, we ask whether they consciously entered fewer numbers correctly than they initially intended, whether they had problems implementing their plan, and whether they anticipated this (5-point Likert scale from Yes to No). Finally, we elicit participants' economic preferences after an attention check as part of the final survey following the second round.
Exclusion criteria:
We apply the following exclusion criteria: a Captcha on the first page; a hidden input field that leads to exclusion if it contains any input (bots will fill it in, humans will not because the field is not visible); and a check for repeated participation (the same Prolific ID cannot be used twice). Additionally, we implement an attention check during the final survey and measure how quickly participants answer the questions.
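The honeypot and repeated-participation checks can be sketched as a simple server-side screening function. This is a minimal illustration with hypothetical field names, not the actual implementation:

```python
def passes_screening(form: dict, seen_ids: set) -> bool:
    """Screen one submission: reject bots (honeypot filled) and
    repeated Prolific IDs; record IDs that pass."""
    # Honeypot: the field is invisible to humans, so any value flags a bot.
    if form.get("hidden_field", ""):
        return False
    # Repeated participation: the same Prolific ID cannot be used twice.
    pid = form.get("prolific_id", "")
    if not pid or pid in seen_ids:
        return False
    seen_ids.add(pid)
    return True
```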
Hypotheses:
(1) A higher share of individually diagnosed awareness barriers predicts higher effectiveness of the intervention in environment 1.
(2) A higher share of individually diagnosed intention barriers predicts higher effectiveness of the intervention in environment 2.
(3) A higher share of individually diagnosed implementation barriers predicts higher effectiveness of the intervention in environment 3.