
Fields Changed

Registration

Field: Trial Start Date
Before: September 01, 2022
After: September 21, 2022

Field: Last Published
Before: August 25, 2022 02:26 PM
After: September 16, 2022 11:02 AM

Field: Intervention (Public)
Before:
We measure participants' performance in a real-effort-task. Participants have to remember a frequently changing number on the screen and enter the last shown number, if prompted to do so. Additionally, we implemented an extra rule where participants have to enter only the number "0" if one of the digits of the last shown number is a "3". It is possible to skip this task at any time by pressing a "Skip"-button on screen and watch a video instead. We test the effectiveness of three interventions across three environments:
Environment 1: The extra-rule is less salient in the instructions.
Environment 2: Participants do not receive a monetary incentive for correct responses.
Environment 3: The real-effort task is more difficult. It uses a 4-digit number instead of 3-digit number. Additionally, the "Skip"-Button does not lead to the video, but immediately ends the task.
Intervention 1: During the real-effort-task, there is a reminder-note for the extra rule on the screen which says: Reminder: If the 3-digit number contains a "3", type in "0" only.
Intervention 2: The incentive is increased/introduced for a correct response.
Intervention 3: The "Skip"-button is removed and the numbers are reduced to two digits.
After:
We measure participants' performance in a real effort task. Participants have to remember a frequently changing number on the screen and enter the last shown number if prompted to do so. Additionally, we implemented an extra rule where participants have to enter only the number "0" if one of the digits of the last shown number is a "3". It is possible to skip this task at any time by pressing a "Skip"-button and watch a video instead. We test the effectiveness of three interventions across three environments:
Environment 1: The extra rule is less salient in the instructions.
Environment 2: Participants do not receive a monetary incentive for correct responses.
Environment 3: The real-effort task is more difficult. It uses a 5-digit number instead of a 3-digit number. Additionally, the "Skip"-Button does not lead to the video but immediately ends the task.
Intervention 1: During the real effort task, there is a reminder note for the extra rule on the screen which says: Reminder: If the 3-digit number contains a "3", type in "0" only.
Intervention 2: The incentive is increased/introduced for a correct response.
Intervention 3: The "Skip"-button is removed and the numbers are reduced to two digits.

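The extra rule above determines what counts as a correct entry for each query. A minimal sketch of that rule, assuming the 3-digit baseline task (function and variable names are illustrative and not taken from the experiment's oTree code):

```python
def correct_entry(last_displayed: str) -> str:
    """Return the entry that counts as correct for the last displayed number."""
    # Extra rule: if any digit of the last displayed number is a "3",
    # the only correct entry is "0"; otherwise the number itself must be entered.
    if "3" in last_displayed:
        return "0"
    return last_displayed

# Examples
print(correct_entry("527"))  # "527" -> enter the number as shown
print(correct_entry("431"))  # "0"   -> the extra rule applies
```
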
Field: Intervention Start Date
Before: September 01, 2022
After: September 21, 2022

Field: Primary Outcomes (Explanation)
Before:
1. Number of mistakes during the real-effort task: A mistake is coded when a participant does not enter the last displayed number correctly, does not answer, or does not apply the extra rule.
2. Awareness barrier: Difference between the number of correct responses and beliefs about correct responses.
3. Intention barrier: Difference between the initially intended (planned) number of correct responses and 50.
4. Implementation barrier: Difference between beliefs about correct responses and initially intended number of correct responses.
In addition to the measures above, we use qualitative statements to diagnose barriers.
Awareness barrier: We use two questions about whether the participants are aware of their actual number of mistakes during the task (5-point Likert scale) and whether they forgot to apply the extra rule at some point (Yes/No).
Intention barrier: We use two questions about whether the participants planned and were determined to answer all 50 queries correctly (5-point Likert scale each).
Implementation barrier: We use three questions about whether the participants consciously decided to answer fewer queries correctly than they initially intended, whether they had difficulties answering as many queries correctly as they planned, and whether they anticipated to end up answering fewer queries correctly than planned (5-point Likert scale each).
After:
1. Number of mistakes during the real-effort task: A mistake is coded when a participant does not enter the last displayed number correctly, does not answer, or does not apply the extra rule.
2. Awareness barrier: Difference between the number of correct responses and beliefs about correct responses.
3. Intention barrier: Difference between the initially intended (planned) number of correct responses and 50.
4. Implementation barrier: Difference between beliefs about correct responses and initially intended number of correct responses.
In addition to the measures above, we use qualitative statements to diagnose barriers.
Awareness barrier: We use two questions about whether the participants are aware of their actual number of mistakes during the task (5-point Likert scale) and whether they forgot to apply the extra rule at some point (Yes/No).
Intention barrier: We use two questions about whether the participants planned and were determined to answer all 50 queries correctly (5-point Likert scale each).
Implementation barrier: We use three questions about whether the participants consciously decided to answer fewer queries correctly than they initially intended, whether they had difficulties answering as many queries correctly as they planned, and whether they anticipated ending up answering fewer queries correctly than planned (5-point Likert scale each).

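The three quantitative barrier measures are plain differences between the actual, believed, and intended numbers of correct responses out of the 50 queries. A minimal sketch of how they could be computed per participant (the variable names and the sign convention follow my reading of the definitions above and are not taken from the study's analysis code):

```python
def barrier_measures(actual_correct: int, believed_correct: int, intended_correct: int) -> dict:
    """Compute the three quantitative barrier measures for one participant."""
    return {
        # Awareness barrier: actual vs. believed number of correct responses
        "awareness": actual_correct - believed_correct,
        # Intention barrier: intended number of correct responses vs. the 50 queries
        "intention": intended_correct - 50,
        # Implementation barrier: believed vs. intended number of correct responses
        "implementation": believed_correct - intended_correct,
    }

# Example: 32 actually correct, believes 40 were correct, planned to get all 50 right.
print(barrier_measures(actual_correct=32, believed_correct=40, intended_correct=50))
# {'awareness': -8, 'intention': 0, 'implementation': -10}
```
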
Field: Experimental Design (Public)
Before:
Procedure: The experiment is conducted online via Prolific and is programmed in oTree. Participants are recruited on a rolling basis so we cannot state a precise end date of the trial. Upon arrival, the participants have to solve a Captcha to verify they are human and enter their Prolific-ID. Upon agreement with the data regulations, participants are asked for socio-demographic characteristics before the instructions of their respective treatment group are displayed. After confirming to have read the instructions carefully and to have understood the payment scheme, participants proceed to the real-effort task described below. After the task, there is a final survey (see below) before the payment is shown and the participants are redirected to Prolific.
Real-effort task: Participants work on a real-effort task for a 10 minute work period. A 3-digit number appears on the participants' screens and changes every 1.2 seconds. At random moments, the participants are asked to enter the last displayed number into an entry-field within 7 seconds. Within the 10 minutes, this query pops up 50 times. There is one additional rule: If the last displayed number included a "3", participants have to enter "0" into the entry-field instead of the last displayed number. 5 out of 50 queries are randomly chosen to be payoff relevant. Participants are paid for each correct response. Participants can skip the task at any time by pressing a "Skip"-button, which is displayed on their screens. After pressing this button, participants are sent to a screen with a video, which runs until the remainder of the 10 minute work period is over. After the 10 minutes, participants are redirected to a final survey.
Three environments: We induce three environments in which a certain behavioral barrier is more pronounced. In a first environment, the extra-rule is made less salient. In the second environment, participants do not receive a monetary incentive for correct responses. The third environment makes the real-effort task more difficult. It uses a 4-digit number instead of a 3-digit number. Additionally, the "Skip"-Button does not lead to the video but immediately ends the task.
Treatment groups: Every intervention (in addition to a baseline) described in the "Interventions"-section is conducted within each environment. Thus, we have a total of 12 experimental conditions.
Final survey: After the real-effort task, we ask the participants how many numbers they intended to enter correctly. Then, we ask how many numbers they think they entered correctly (beliefs). Additionally, we ask qualitative questions: First, whether they are aware of the actual number of mistakes they made, elicited on a 5-point Likert scale ranging from Yes to No, and whether they forgot about the extra rule at any point (binary). Second, we ask participants whether they planned to enter the numbers correctly and whether they were determined to do so (5-point Likert scale from Yes to No). Lastly, we ask participants whether they consciously entered fewer numbers correctly than they initially intended, whether they had problems with implementing their plan, and whether they anticipated this (5-point Likert scale from Yes to No). Finally, we elicit participants' economic preferences after an attention check.
Exclusion criteria: We have three exclusion criteria: A Captcha on the first page, a hidden input field that leads to exclusion once there is some input value (bots will provide input, humans will not because the field is not visible), and a test for repeated participation (the same Prolific ID cannot be used twice). Additionally, we implement an attention check during the final survey.
Hypotheses: 1. Diagnostic questions: (i) A higher share of diagnosed awareness barriers correlates with a higher effectiveness of intervention 1. (ii) A higher share of diagnosed intention barriers correlates with a higher effectiveness of intervention 2. (iii) A higher share of diagnosed implementation barriers correlates with a higher effectiveness of intervention 3. 2. The effectiveness of treatments is context-dependent: (i) Intervention 1 is more effective in environment 1. (ii) Intervention 2 is more effective in environment 2. (iii) Intervention 3 is more effective in environment 3.
After:
Procedure: The experiment is conducted online via Prolific and is programmed in oTree. Participants are recruited on a rolling basis so we cannot state a precise end date of the trial. Upon arrival, the participants have to solve a Captcha to verify they are human and enter their Prolific-ID. Upon agreement with the data regulations, participants are asked for socio-demographic characteristics before the instructions of their respective treatment group are displayed. After confirming to have read the instructions carefully and to have understood the payment scheme, participants proceed to the real-effort task described below. After the task, there is a final survey (see below) before the payment is shown and the participants are redirected to Prolific.
Real-effort task: Participants work on a real-effort task for a 10-minute work period. A 3-digit number appears on the participants' screens and changes every 1.2 seconds. At random moments, the participants are asked to enter the last displayed number into an entry field within 7 seconds. Within the 10 minutes, this query pops up 50 times. There is one additional rule: If the last displayed number included a "3", participants have to enter "0" into the entry field instead of the last displayed number. 5 out of 50 queries are randomly chosen to be payoff relevant. Participants are paid for each correct response. Participants can skip the task at any time by pressing a "Skip"-button, which is displayed on their screens. After pressing this button, participants are sent to a screen with a video, which runs until the remainder of the 10-minute work period is over. After the 10 minutes, participants are redirected to a final survey.
Three environments: We induce three environments in which a certain behavioral barrier is more pronounced. In a first environment, the extra rule is made less salient. In the second environment, participants do not receive a monetary incentive for correct responses. The third environment makes the real-effort task more difficult. It uses a 5-digit number instead of a 3-digit number. Additionally, the "Skip"-Button does not lead to the video but immediately ends the task.
Treatment groups: Every intervention (in addition to a baseline) described in the "Interventions"-section is conducted within each environment. Thus, we have a total of 12 experimental conditions.
Final survey: After the real-effort task, we ask the participants how many numbers they intended to enter correctly. Then, we ask how many numbers they think they entered correctly (beliefs). Additionally, we ask qualitative questions: First, whether they are aware of the actual number of mistakes they made, elicited on a 5-point Likert scale ranging from Yes to No, and whether they forgot about the extra rule at any point (binary). Second, we ask participants whether they planned to enter the numbers correctly and whether they were determined to do so (5-point Likert scale from Yes to No). Lastly, we ask participants whether they consciously entered fewer numbers correctly than they initially intended, whether they had problems with implementing their plan, and whether they anticipated this (5-point Likert scale from Yes to No). Finally, we elicit participants' economic preferences after an attention check.
Exclusion criteria: We have the following exclusion criteria: A Captcha on the first page, a hidden input field that leads to exclusion once there is some input value (bots will provide input, humans will not because the field is not visible), and a test for repeated participation (the same Prolific ID cannot be used twice). Additionally, we implement an attention check during the final survey and measure how fast participants answer the questions.
Hypotheses: 1. Diagnostic questions: (i) A higher share of diagnosed awareness barriers predicts higher effectiveness of intervention 1. (ii) A higher share of diagnosed intention barriers predicts higher effectiveness of intervention 2. (iii) A higher share of diagnosed implementation barriers predicts higher effectiveness of intervention 3. 2. The effectiveness of treatments is context-dependent: (i) Intervention 1 is more effective in environment 1. (ii) Intervention 2 is more effective in environment 2. (iii) Intervention 3 is more effective in environment 3.

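The "Treatment groups" paragraph implies a 3 x 4 between-subjects design: each of the three environments is crossed with a baseline and the three interventions. A minimal sketch of the resulting twelve cells (the labels are illustrative, not the condition names used in the experiment):

```python
from itertools import product

# Three environments crossed with a baseline plus three interventions = 12 cells.
environments = ["env1_rule_less_salient", "env2_no_incentive", "env3_harder_task"]
conditions = ["baseline", "int1_reminder_note", "int2_higher_incentive", "int3_no_skip_two_digits"]

cells = list(product(environments, conditions))
print(len(cells))  # 12 experimental conditions
for env, cond in cells:
    print(env, cond)
```
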
Field: Planned Number of Clusters
Before: We plan to have 8280 participants that complete the experiment.
After: We plan to have 7500 participants that complete the experiment.

Field: Planned Number of Observations
Before: We plan to have 8280 participants that complete the experiment.
After: We plan to have 7500 participants that complete the experiment.

Field: Sample size (or number of clusters) by treatment arms
Before: We plan to have 690 participants that complete the experiment per treatment arm.
After: We plan to have 625 participants that complete the experiment per treatment arm.

Field: Power calculation: Minimum Detectable Effect Size for Main Outcomes
Before: Given a sample size of N=8280 and mean mistakes of 20 and a standard deviation of 17 mistakes, we can detect an effect size of 4.5 mistakes between two experimental groups with 99% statistical power and 1% significance level using a two-sided t-test. We can detect an effect size of 2.57 mistakes with 80% statistical power and at a 5% significance level. The actual sample size in the experiment may vary due to imperfect compliance or violation of the experimental rules.
After: Given a sample size of N=7500 and an N=625 per treatment arm, we have a minimum detectable effect size (MDE) of 0.159 standard deviations with 80% statistical power and a 5% significance level using a two-sided t-test. The MDE is 0.278 standard deviations with 99% statistical power and 1% significance level. The actual sample size in the experiment may vary due to imperfect compliance or violation of the experimental rules.

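As a rough cross-check of the figures above, the MDEs can be reproduced with a standard two-sample t-test power calculation, here sketched with statsmodels (the registration does not state which tool was used, so treat this as an approximate verification rather than the authors' own computation):

```python
from statsmodels.stats.power import TTestIndPower

analysis = TTestIndPower()
n_per_arm = 625  # planned completers per arm (7500 participants / 12 conditions)

# MDE in standard deviations, 80% power, 5% significance, two-sided test
mde_80 = analysis.solve_power(effect_size=None, nobs1=n_per_arm,
                              alpha=0.05, power=0.80, ratio=1.0,
                              alternative="two-sided")
# MDE in standard deviations, 99% power, 1% significance, two-sided test
mde_99 = analysis.solve_power(effect_size=None, nobs1=n_per_arm,
                              alpha=0.01, power=0.99, ratio=1.0,
                              alternative="two-sided")

print(f"MDE at 80% power / 5% alpha: {mde_80:.3f} SD")   # approx. 0.159
print(f"MDE at 99% power / 1% alpha: {mde_99:.3f} SD")   # approx. 0.278

# The earlier registration (690 per arm, SD of 17 mistakes) maps to the same
# calculation in absolute terms: about 2.57 mistakes at 80%/5% and 4.5 at 99%/1%.
mde_old_80 = analysis.solve_power(effect_size=None, nobs1=690, alpha=0.05, power=0.80) * 17
mde_old_99 = analysis.solve_power(effect_size=None, nobs1=690, alpha=0.01, power=0.99) * 17
print(f"Before-text MDEs: {mde_old_80:.2f} and {mde_old_99:.2f} mistakes")
```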