Anamnesis, Diagnosis, and Prescription for Economists - Heterogeneous Treatment Effects

Last registered on January 27, 2023

Pre-Trial

Trial Information

General Information

Title
Anamnesis, Diagnosis, and Prescription for Economists - Heterogeneous Treatment Effects
RCT ID
AEARCTR-0010778
Initial registration date
January 24, 2023

First published
January 27, 2023, 2:27 AM EST

Locations

Region

Primary Investigator

Affiliation
Max Planck Institute for Research on Collective Goods

Other Primary Investigator(s)

PI Affiliation
Max Planck Institute for Research on Collective Goods
PI Affiliation
Max Planck Institute for Research on Collective Goods

Additional Trial Information

Status
In development
Start date
2023-02-06
End date
2023-02-28
Secondary IDs
Prior work
This trial does not extend or rely on any prior RCTs.
Abstract
We conduct an online experiment to test whether a simple diagnostic survey framework can predict when interventions are likely to succeed and when they are likely to fail to change behavior. Using a real-effort task, we induce three types of environments (contexts) in which there is a behavioral barrier to "optimal" behavior: (i) lack of awareness, (ii) lack of intention, and (iii) failure to implement the intended choice. For each of these environments, we then test the effectiveness of three types of interventions that are commonly used to address the behavioral barriers (reminders, incentives, simplification). Using a within-subject design, we study whether our diagnostic questions predict treatment effects at the individual level.

The experiment was planned after data collection for a similar experiment that used a between-subject design to show that the effectiveness of an intervention depends on the underlying environment and to study the average predictability of the diagnostic survey. The link to the original experiment's preregistration is provided below.

Registration Citation

Citation
Riedmiller, Sebastian, Matthias Sutter and Sebastian Tonke. 2023. "Anamnesis, Diagnosis, and Prescription for Economists - Heterogeneous Treatment Effects." AEA RCT Registry. January 27. https://doi.org/10.1257/rct.10778-1.0
Experimental Details

Interventions

Intervention(s)
We measure participants' performance in a real-effort task.
Participants have to remember a frequently changing number on the screen and enter the last shown number when prompted to do so. Additionally, we implement an extra rule: participants have to enter only the number "0" if one of the digits of the last shown number is a "3". Participants can skip this task at any time by pressing a "Skip" button and watching a video instead.
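
To make the scoring rule concrete, here is a minimal sketch in Python of how a response could be checked; the function names are ours and this is not the experiment's actual code.

    def correct_answer(last_shown: str) -> str:
        """Return the response that counts as correct for the last shown number."""
        # Extra rule: if any digit of the last shown number is a "3",
        # the only correct response is "0".
        if "3" in last_shown:
            return "0"
        return last_shown

    def is_mistake(last_shown: str, response: str | None) -> bool:
        """A mistake: wrong entry, no entry, or extra rule not applied."""
        return response is None or response.strip() != correct_answer(last_shown)

For example, is_mistake("431", "431") returns True, because the extra rule requires entering "0" here.
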
We test the effectiveness of three interventions in their respective environment:

Environment 1:
The extra rule is less salient in the instructions.
Intervention: During the real effort task, there is a reminder note for the extra rule on the screen which says: Reminder: If the 3-digit number contains a "3", type in "0" only.

Environment 2:
Participants do not receive a monetary incentive for correct responses.
Intervention: A monetary incentive for a correct response is introduced (or increased).

Environment 3:
The real-effort task is more difficult: it uses a 5-digit number instead of a 3-digit number. Additionally, the "Skip" button does not lead to the video but immediately ends the task.
Intervention: The "Skip" button is removed and the numbers are reduced to two digits.

Using a within-subject design, participants first perform the real-effort task in one of the three environments without the intervention. In the second round, participants are randomly assigned either to receive their environment's intervention or not.
Intervention (Hidden)
Intervention Start Date
2023-02-06
Intervention End Date
2023-02-28

Primary Outcomes

Primary Outcomes (end points)
1. Difference in the number of mistakes during the real-effort task between the first and second round;
2. Diagnosis of an awareness barrier;
3. Diagnosis of an intention barrier;
4. Diagnosis of an implementation barrier
Primary Outcomes (explanation)
1. Difference in the number of mistakes during the real-effort task between the first and second round:
A mistake is coded when a participant does not enter the last displayed number correctly, does not answer, or does not apply the extra rule.

2. Awareness barrier:
Difference between the number of correct responses and beliefs about correct responses.

3. Intention barrier:
Difference between the initially intended (planned) number of correct responses and 50.

4. Implementation barrier:
Difference between beliefs about correct responses and initially intended number of correct responses.

We will adjust our diagnostic measures to account for other barriers that appear simultaneously and for the difference in mistakes in the groups that do not receive the intervention in the second round.
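
A minimal sketch of how the three raw diagnostic scores could be computed from the survey answers, following the differences defined above; the sign conventions and the maximum of 50 queries reflect our reading, and the adjustment step mentioned above is omitted.

    def diagnostic_scores(correct: int, beliefs: int, intended: int,
                          max_queries: int = 50) -> dict:
        return {
            # Awareness: gap between actual and believed performance.
            "awareness": correct - beliefs,
            # Intention: gap between the initial plan and the maximum.
            "intention": intended - max_queries,
            # Implementation: gap between believed performance and the plan.
            "implementation": beliefs - intended,
        }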

Secondary Outcomes

Secondary Outcomes (end points)
1. Diagnosis of an awareness barrier (qualitative);
2. Diagnosis of an intention barrier (qualitative);
3. Diagnosis of an implementation barrier (qualitative)
Secondary Outcomes (explanation)
In addition to the measures above, we use qualitative statements to diagnose barriers. These qualitative questions are asked only after the second round, hence at the end of the experiment.

Awareness barrier: We use two questions about whether the participants are aware of their actual number of mistakes during the task (5-point Likert scale) and whether they forgot to apply the extra rule at some point (Yes/No).

Intention barrier: We use two questions about whether the participants planned and were determined to answer all 50 queries correctly (5-point Likert scale each).

Implementation barrier: We use three questions about whether the participants consciously decided to answer fewer queries correctly than they initially intended, whether they had difficulties answering as many queries correctly as they planned, and whether they anticipated ending up answering fewer queries correctly than planned (5-point Likert scale each).

Experimental Design

Experimental Design
Procedure:
The experiment is conducted online via Prolific and is programmed in oTree. Participants are recruited on a rolling basis, so we cannot state a precise end date of the trial. Upon arrival, participants have to solve a Captcha to verify that they are human and enter their Prolific ID. After agreeing to the data regulations, participants are asked for socio-demographic characteristics before they enter the first round of the real-effort task in one baseline environment. The instructions of their respective environment are displayed, and participants have to confirm that they have read the instructions carefully and understood the payment scheme. Then they proceed to the real-effort task described below. After the task, there is a diagnostic survey (see below) before the second round starts. Here, participants again read the instructions and confirm that they have understood them before they enter the real-effort task. After the second round, another survey is displayed, including the diagnostic questions, before the payment is shown and participants are redirected to Prolific.

Real-effort task:
Participants work on a real-effort task for a 6-minute work period. A 3-digit number appears on the participants' screens and changes every 1.2 seconds. At random moments, the participants are asked to enter the last displayed number into an entry field within 7 seconds. Within the 6 minutes, this query pops up 30 times. There is one additional rule: If the last displayed number includes a "3", participants have to enter "0" into the entry field instead of the last displayed number. Three of the 30 queries are randomly chosen to be payoff-relevant. Participants are paid for each correct response. Participants can skip the task at any time by pressing a "Skip" button, which is displayed on their screens. After pressing this button, participants are sent to a screen with a video, which runs until the remainder of the 6-minute work period is over. After the 6 minutes, participants are redirected to a final survey.
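
The timing and payoff structure can be summarized in a short simulation sketch; the parameter names are ours, not the experiment's oTree code, and uniform sampling of the query moments is an assumption.

    import random

    WORK_SECONDS = 6 * 60   # 6-minute work period
    TICK = 1.2              # the displayed number changes every 1.2 seconds
    N_QUERIES = 30          # queries per round
    N_PAID = 3              # payoff-relevant queries per round

    ticks = int(WORK_SECONDS / TICK)                              # 300 number displays
    query_ticks = sorted(random.sample(range(ticks), N_QUERIES))  # random query moments
    paid = random.sample(range(N_QUERIES), N_PAID)                # drawn among the 30 queries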

Three environments:
We induce three environments in which a certain behavioral barrier is more pronounced. In the first environment, the extra rule is made less salient. In the second environment, participants do not receive a monetary incentive for correct responses. The third environment makes the real-effort task more difficult: it uses a 5-digit number instead of a 3-digit number. Additionally, the "Skip" button does not lead to the video but immediately ends the task.

Interventions:
Participants either receive an intervention in the second round or stay with the baseline setting. Each intervention described in the "Interventions" section (in addition to a baseline) is conducted only within its respective environment. Thus, we have a total of six experimental conditions.

Diagnostic survey:
After each round of the real-effort task, we ask the participants how many numbers they intended to enter correctly. Then, we ask how many numbers they think they entered correctly (beliefs). Additionally, we ask qualitative questions after the second round: First, whether they are aware of the actual number of mistakes they made, elicited on a 5-point Likert scale ranging from Yes to No, and whether they forgot about the extra rule at any point (binary). Second, we ask participants whether they planned to enter the numbers correctly and whether they were determined to do so (5-point Likert scale from Yes to No). Lastly, we ask participants whether they consciously entered fewer numbers correctly than they initially intended, whether they had problems implementing their plan, and whether they anticipated this (5-point Likert scale from Yes to No). Finally, following an attention check, we elicit participants' economic preferences as part of the final survey after the second round.

Exclusion criteria:
We apply the following exclusion criteria: a Captcha on the first page; a hidden input field that leads to exclusion once it contains any input (bots will provide input, humans will not, because the field is not visible); and a check for repeated participation (the same Prolific ID cannot be used twice). Additionally, we implement an attention check during the final survey and measure how quickly participants answer the questions.
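
As an illustration, the exclusion criteria could be applied to the collected data roughly as follows; the column names (captcha_passed, honeypot, prolific_id) are hypothetical, not the study's actual variable names.

    import pandas as pd

    def apply_exclusions(df: pd.DataFrame) -> pd.DataFrame:
        df = df[df["captcha_passed"]]                  # failed Captcha: exclude
        df = df[df["honeypot"].fillna("") == ""]       # hidden field filled: likely a bot
        df = df.drop_duplicates(subset="prolific_id")  # repeated participation
        return df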

Hypotheses:
(1) A higher share of individually diagnosed awareness barriers predicts higher effectiveness of the intervention in environment 1.
(2) A higher share of individually diagnosed intention barriers predicts higher effectiveness of the intervention in environment 2.
(3) A higher share of individually diagnosed implementation barriers predicts higher effectiveness of the intervention in environment 3.
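
One plausible way to formalize these hypotheses (our notation, not necessarily the authors' analysis plan) is an interaction regression, estimated separately within each environment:

    Δmistakes_i = α + β·T_i + γ·D_i + δ·(T_i × D_i) + ε_i

where Δmistakes_i is the change in mistakes between rounds, T_i indicates that participant i receives the intervention in the second round, and D_i is the diagnostic score for the barrier targeted in i's environment. Hypotheses (1)-(3) then correspond to the interaction coefficient δ: a stronger diagnosed barrier should predict a larger reduction in mistakes under the intervention.
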
Experimental Design Details
Randomization Method
Treatment assignment is stratified by age, gender, and education using least-fill bins.
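
A minimal sketch of least-fill assignment within age × gender × education strata; the data structures and random tie-breaking are our assumptions, not the study's implementation.

    import random
    from collections import defaultdict

    ARMS = ["baseline", "intervention"]
    counts = defaultdict(lambda: {arm: 0 for arm in ARMS})  # per-stratum arm counts

    def assign(age_group: str, gender: str, education: str) -> str:
        stratum = (age_group, gender, education)
        # Send the new participant to the least-filled arm in their stratum,
        # breaking ties at random.
        fewest = min(counts[stratum].values())
        arm = random.choice([a for a in ARMS if counts[stratum][a] == fewest])
        counts[stratum][arm] += 1
        return arm
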
Randomization Unit
We randomize at the individual level.
Was the treatment clustered?
No

Experiment Characteristics

Sample size: planned number of clusters
We plan to have 1350 participants who complete the experiment.
Sample size: planned number of observations
We plan to have 1350 participants who complete the experiment.
Sample size (or number of clusters) by treatment arms
We plan to have 225 participants who complete the experiment per treatment arm.
Minimum detectable effect size for main outcomes (accounting for sample design and clustering)
Given a sample size of N=1350 and N=225 per treatment arm, we have a minimum detectable effect size (MDE) for a linear regression slope coefficient of 0.188 with 80% statistical power at a 5% significance level, using a two-sided t-test. The MDE is 0.329 with 99% statistical power at a 1% significance level. The actual sample size may vary due to imperfect compliance or violations of the experimental rules. The slope coefficient measures how much the individual treatment effect changes per unit change in the diagnostic score, which is constructed from the answers to the diagnostic survey questions.
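
The stated MDEs can be approximately reproduced with a normal approximation for a standardized slope, assuming SE(β) ≈ 1/√n with n = 225 per arm (our reading; the small discrepancies may come from using a t rather than normal distribution):

    from scipy.stats import norm

    def mde(n: int, alpha: float, power: float) -> float:
        # MDE = (z_{1 - alpha/2} + z_{power}) * SE, with SE ≈ 1 / sqrt(n)
        return (norm.ppf(1 - alpha / 2) + norm.ppf(power)) / n ** 0.5

    print(round(mde(225, 0.05, 0.80), 3))  # 0.187, vs. the stated 0.188
    print(round(mde(225, 0.01, 0.99), 3))  # 0.327, vs. the stated 0.329
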
IRB

Institutional Review Boards (IRBs)

IRB Name
Ethics Committee of the Faculty of Economic and Social Sciences (ERC-FMES) at the University of Cologne
IRB Approval Date
2022-01-28
IRB Approval Number
220006MS

Post-Trial

Post Trial Information

Study Withdrawal

Intervention

Is the intervention completed?
No
Data Collection Complete
Data Publication

Data Publication

Is public data available?
No

Program Files

Program Files
Reports, Papers & Other Materials

Relevant Paper(s)

Reports & Other Materials