The American Economic Association's registry for randomized controlled trials
Ignoring Good Advice
Last registered on July 28, 2020
Initial registration date
July 28, 2020
July 28, 2020 9:33 AM EDT
United States of America
University of Warwick
Other Primary Investigator(s)
University of Oxford
Additional Trial Information
sunk cost effect
sunk cost fallacy
Advice should (rationally) be followed by an advisee if accepting it leads to a higher expected payoff than ignoring it. Whether, when, and why people deviate from this benchmark are important social science questions. We wish to explore the behavioral factors (i.e., those outside the standard rational economic paradigm) that may lie behind such deviations from rationality.
Ronayne, David and Daniel Sgroi. 2020. "Ignoring Good Advice." AEA RCT Registry. July 28.
Sponsors & Partners
In the main wave of the study (wave 2), subjects will undertake two (incentivized) tasks: a random (coin-flipping) task and a real-effort (number-counting) task, which provide within-subject variation. Subjects will then be offered the chance to take advice from an adviser (another subject drawn from wave 1 of the study, which we will run in advance of wave 2). We vary the remuneration of advisers, which provides randomized between-subject variation.
Intervention Start Date
Intervention End Date
Primary Outcomes (end points)
binary variable indicating whether (good) advice is followed, and its interactions with: adviser remuneration; task type (random or real-effort); task performance; subject envy; susceptibility to the sunk cost effect; and (general) stubbornness.
Primary Outcomes (explanation)
Each subject decides whether to accept or ignore advice, once for each task. Advice is said to be good if accepting it increases expected payoff. We vary the remuneration of advisers, which provides randomized between-subject variation. We will also measure the role of the task itself (luck- or effort-based) by looking at the behavior of the same subjects across tasks: a within-subject measure. Subject envy is measured with a scale drawn from the psychology literature. Susceptibility to the sunk cost effect is measured with a new scale (the "SCE-8") taken from one of our other papers ("Evaluating the Sunk Cost Effect" by Ronayne, Sgroi & Tuckwell, CAGE WP 475, 2020). General stubbornness is measured using a series of questions taken from the existing literature, converted to a Likert scale.
Secondary Outcomes (end points)
intelligence test; questions relating to medical advice (linked to the COVID-19 pandemic); demographics
Secondary Outcomes (explanation)
We will use two questions on advice-taking behavior in the COVID-19 pandemic, taken together, to measure recent real-life advice taking. We will use a short Raven's Progressive Matrices test as a control for intelligence, and standard demographics to add further controls for our analysis.
WAVE 1: In wave 1, subjects will be recruited via Amazon's Mechanical Turk (MTurk) and asked to complete two tasks. In one, they guess the results of five coin tosses; in the other, they count the number of 1s in five matrices of 0s and 1s. Both tasks will be incentivized. We will reveal their performance in the two tasks and ask whether they are willing to advise later subjects (advisees, i.e., those in wave 2 of the experiment) to submit their answers instead of those chosen by the wave 2 subject. We will select some of the consenting wave 1 subjects as advisers for wave 2. Each selected wave 1 subject will be randomized into one of three treatments that differ in how the adviser is paid. In one treatment, the adviser receives a payment at the same level as the subjects in wave 2; in a second, a larger payment; in a third, a payment awarded only if wave 2 subjects follow their advice, which allows wave 2 subjects to directly influence the adviser's remuneration. Note that in each case wave 1 subjects will receive the same bonus (incentive) payment instructions as seen by wave 2 subjects, which specify the minimum bonus they will receive: earning a larger bonus (for instance, in the second treatment) cannot influence the adviser's behavior, since it only becomes apparent after the tasks are completed.
WAVE 2: In wave 2, subjects (different MTurk subjects) will take the same two tasks as in wave 1 (which provides within-subject variation). After taking the tasks, they will be offered the chance to follow the advice of one of the wave 1 advisers (the between-subject treatment). When making this choice, they will also be told: (1) their own performance; (2) the adviser's performance; and (3) how the adviser was remunerated. Note that in wave 2 it will be made clear to subjects that the adviser received the same payment instructions as they did, and hence did not know the size of the bonus before completing the tasks, so this could not have influenced the adviser's behavior more than it did their own. The different levels of adviser remuneration provide randomized between-subject variation, since each wave 2 subject is randomly allocated to one of the three wave 1 adviser-remuneration treatments. All wave 2 subjects will then complete a Raven's Progressive Matrices test, the Dispositional Envy Scale (Smith, Parrott, Diener, Hoyle & Kim, 1999), a test of general stubbornness (Wilkins, 2015), the SCE-8 scale (Ronayne, Sgroi & Tuckwell, 2020), some general demographic questions, and two questions on advice-taking in the COVID-19 pandemic.
Experimental Design Details
Randomization is undertaken by the Qualtrics survey software.
Was the treatment clustered?
Sample size: planned number of clusters
Sample size: planned number of observations
25-75 in wave 1; 1500-1800 in wave 2.
Sample size (or number of clusters) by treatment arms
500-600 for each of the three (wave 2) adviser-remuneration treatments
Minimum detectable effect size for main outcomes (accounting for sample design and clustering)
Measure: the absolute difference in two proportions (of subjects who take good advice). Our budget is fixed, but our payment per subject can vary significantly depending on subjects' choices. Pilots suggest the cost per subject is such that a total of 1500-1800 subjects can be recruited for wave 2, i.e., 500-600 subjects per between-subject treatment. With 500 subjects per condition, the minimum detectable difference in two proportions is 0.0885 regardless of the absolute values of the two proportions; this worst case occurs when the midpoint of the two proportions is 0.5, and smaller differences are detectable at other midpoints. With 600 subjects per condition, the analogous figure is 0.0808.
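The worst-case figures above can be approximated with a standard normal-approximation power calculation for the difference in two proportions. This is a sketch, not the authors' calculation: the registration does not state the significance level or power, so two-sided alpha = 0.05 and 80% power are assumptions.

```python
# Minimum detectable effect (MDE) for the difference in two proportions,
# using the normal approximation with the worst-case midpoint p = 0.5.
# Alpha = 0.05 (two-sided) and power = 0.80 are assumed, not stated in the registration.
from math import sqrt
from statistics import NormalDist

def mde_two_proportions(n_per_arm: int, midpoint: float = 0.5,
                        alpha: float = 0.05, power: float = 0.80) -> float:
    """Smallest detectable |p1 - p2| with n_per_arm subjects per condition."""
    z = NormalDist()
    z_alpha = z.inv_cdf(1 - alpha / 2)   # ~1.96 for alpha = 0.05
    z_beta = z.inv_cdf(power)            # ~0.84 for 80% power
    var_sum = 2 * midpoint * (1 - midpoint) / n_per_arm
    return (z_alpha + z_beta) * sqrt(var_sum)

print(round(mde_two_proportions(500), 4))  # ~0.0886, close to the registered 0.0885
print(round(mde_two_proportions(600), 4))  # ~0.0809, close to the registered 0.0808
```

An exact (iterative) solution of the power equation gives marginally smaller values, consistent with the registered 0.0885 and 0.0808.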
Supporting Documents and Materials
INSTITUTIONAL REVIEW BOARDS (IRBs)
Department’s Research Ethics Committee (DREC), University of Oxford
IRB Approval Date
IRB Approval Number
Centre for Experimental Social Sciences (CESS), University of Oxford
IRB Approval Date
IRB Approval Number
Post Trial Information
Is the intervention completed?
Is data collection complete?
Is public data available?
Reports, Papers & Other Materials
REPORTS & OTHER MATERIALS