Ignoring Good Advice

Last registered on July 28, 2020


Trial Information

General Information

Ignoring Good Advice
Initial registration date
July 28, 2020

Initial registration date is when the trial was registered.

It corresponds to when the registration was submitted to the Registry to be reviewed for publication.

First published
July 28, 2020, 9:33 AM EDT

First published corresponds to when the trial was first made public on the Registry after being reviewed.



Primary Investigator

University of Warwick

Other Primary Investigator(s)

PI Affiliation
University of Oxford

Additional Trial Information

In development
Start date
End date
Secondary IDs
Advice should (rationally) be followed by an advisee if accepting it leads to a higher expected payoff than ignoring it. Whether, when and why people deviate from this are important social science questions. We wish to explore the behavioral factors (i.e., those outside the standard rational economic paradigm) that may lie behind such deviations from rationality.
External Link(s)

Registration Citation

Ronayne, David and Daniel Sgroi. 2020. "Ignoring Good Advice." AEA RCT Registry. July 28. https://doi.org/10.1257/rct.6229
Experimental Details


In the main wave of the study (wave 2), subjects will undertake two (incentivized) tasks: a random (coin-flipping) task and a real-effort (number-counting) task, which provide within-subject variation. Subjects will then be offered the chance to take advice from an adviser (another subject drawn from wave 1 of the study, which we will run in advance of wave 2). We vary the remuneration of the adviser, which provides randomized between-subject variation.
Intervention Start Date
Intervention End Date

Primary Outcomes

Primary Outcomes (end points)
binary variable indicating whether (good) advice is followed or not, and its interactions with: adviser remuneration; task type (random or real-effort); task performance; subject envy; susceptibility to the sunk cost effect; (general) stubbornness.
Primary Outcomes (explanation)
Each subject decides whether to accept or ignore advice, once for each task. Advice is said to be good if accepting it increases expected payoff. We vary the remuneration of advisers, which provides randomized between-subject variation. We will also measure the role of the task itself (luck or effort-based) by looking at the behavior of the same subjects across tasks; a within-subject measure. Subject envy is a measure of envy drawn from the psychology literature. We will use a new measure of susceptibility to the sunk cost effect (the "SCE-8") taken from one of our other papers ("Evaluating the Sunk Cost Effect" by Ronayne, Sgroi & Tuckwell, CAGE WP 475, 2020). General stubbornness is measured using a series of questions taken from the existing literature converted to a Likert scale.

Secondary Outcomes

Secondary Outcomes (end points)
intelligence test; questions relating to medical advice (linked to the COVID-19 pandemic); demographics
Secondary Outcomes (explanation)
We will use two questions related to advice-taking behavior in the COVID-19 pandemic together to measure recent real-life advice taking. We will use a short (Raven's progressive matrix) test to provide a control for intelligence, and standard demographics to add further controls for our analysis.

Experimental Design

Experimental Design
WAVE 1: In wave 1, subjects will be recruited using Amazon's Mechanical Turk (MTurk). We will ask them to complete 2 tasks. In one they guess the result of 5 coin tosses. In the second they count the number of 1s in 5 matrices of 0s and 1s. Both tasks will be incentivized. We will reveal their performance in the 2 tasks and ask if they are willing to advise later subjects (advisees; those in wave 2 of the experiment) to submit their answers instead of those chosen by the wave 2 subject. We will select some of the consenting wave 1 subjects as advisers for wave 2. Each selected wave 1 subject will be randomized into a different treatment in which they are paid differently. In one treatment the adviser will receive a payment at the same level as the subjects in wave 2. In a second treatment they will receive a larger payment. A third treatment involves a payment that is only awarded if wave 2 subjects follow their advice, which allows wave 2 subjects to directly influence the remuneration of the adviser. Note that in each case wave 1 subjects will receive the same bonus (incentive) payment instructions as seen by wave 2 subjects, which specify the minimum bonus they will receive: earning a larger bonus (for instance in the second treatment) will not influence the adviser's behavior since this will only become apparent after they have completed the task.

WAVE 2: In wave 2, subjects (different MTurk subjects) will take the same 2 tasks as in wave 1 (which will provide within-subject variation). This time, after taking the tasks, they will be offered the chance to follow the advice of one of the wave 1 advisers (the between-subject treatment). When making this choice, they will also be told: (1) their own performance; (2) an adviser's performance; and (3) how the adviser was remunerated. Note that in wave 2 it will be made clear to subjects that the adviser received the same instructions as they did regarding payment, and hence the adviser did not know the size of the bonus before completing the tasks, so this would not have influenced the adviser's behavior more than it did their own. The different levels of adviser remuneration provide between-subject randomized variation since each wave 2 subject is randomly allocated to one of the three wave 1 remuneration treatments for advisers. All wave 2 subjects will then complete a Raven's progressive matrix test, the Dispositional Envy Scale (Smith, Parrott, Diener, Hoyle & Kim, 1999), a test of general stubbornness (Wilkins, 2015), the SCE-8 scale (Ronayne, Sgroi & Tuckwell, 2020) and some general demographic questions, as well as two questions on advice-taking in the COVID-19 pandemic.
Experimental Design Details
Randomization Method
The randomization is undertaken by the software Qualtrics.
Randomization Unit
Individual level
Was the treatment clustered?

Experiment Characteristics

Sample size: planned number of clusters
No clustering
Sample size: planned number of observations
25-75 in wave 1; 1500-1800 in wave 2.
Sample size (or number of clusters) by treatment arms
500-600 for each of the three (wave 2) adviser-remuneration treatments
Minimum detectable effect size for main outcomes (accounting for sample design and clustering)
Measure: The absolute difference in two proportions (of subjects who take good advice). Our budget is fixed but our payment per subject can vary significantly depending on subjects' choices. Pilots suggest the cost per subject is such that a total of 1500-1800 subjects can be recruited for wave 2. That gives 500-600 subjects per between-subject treatment. With 500 subjects per condition, the minimum detectable difference in two proportions is 0.0885. (This is the worst case, arising when the midpoint of the two proportions is 0.5; if the proportions have a different midpoint, smaller differences are detectable.) If we collect 600 per condition, the analogous figure is 0.0808.
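The registered figures can be reproduced with the standard normal-approximation power formula for a two-sample test of proportions. The sketch below assumes a two-sided test at significance level 0.05 with 80% power (conventional values, not stated explicitly in the registration), evaluated at the worst-case midpoint of 0.5, where the variance p(1-p) is largest:

```python
from statistics import NormalDist

def mde_two_proportions(n_per_arm, p_mid=0.5, alpha=0.05, power=0.80):
    """Minimum detectable absolute difference between two proportions,
    each estimated from n_per_arm subjects (normal approximation).

    alpha and power are assumed conventional values; p_mid = 0.5 is the
    worst case, since the variance p*(1-p) is maximized at p = 0.5.
    """
    z_alpha = NormalDist().inv_cdf(1 - alpha / 2)  # two-sided critical value
    z_beta = NormalDist().inv_cdf(power)           # power requirement
    # MDE = (z_alpha + z_beta) * sqrt(2 * p(1-p) / n)
    return (z_alpha + z_beta) * (2 * p_mid * (1 - p_mid) / n_per_arm) ** 0.5

print(round(mde_two_proportions(500), 4))  # → 0.0886
print(round(mde_two_proportions(600), 4))  # → 0.0809
```

These values agree with the registered 0.0885 and 0.0808 to within rounding (the registry figures may come from a slightly different approximation or software default).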

Institutional Review Boards (IRBs)

IRB Name
Department’s Research Ethics Committee (DREC), University of Oxford
IRB Approval Date
IRB Approval Number
IRB Name
Centre for Experimental Social Sciences (CESS), University of Oxford
IRB Approval Date
IRB Approval Number


Post Trial Information

Study Withdrawal

There are documents in this trial unavailable to the public.


Is the intervention completed?
Data Collection Complete
Data Publication

Data Publication

Is public data available?

Program Files

Program Files
Reports, Papers & Other Materials

Relevant Paper(s)

Reports & Other Materials