When Good Advice is Ignored: The Role of Adviser Remuneration

Last registered on June 28, 2017


Trial Information

General Information

When Good Advice is Ignored: The Role of Adviser Remuneration
Initial registration date
April 27, 2017


First published
April 27, 2017, 10:35 AM EDT


Last updated
June 28, 2017, 9:54 AM EDT



Primary Investigator

University of Warwick

Other Primary Investigator(s)

PI Affiliation
University of Oxford

Additional Trial Information

In development
Start date
End date
Secondary IDs
If a decision-maker receives good advice, they should probably take it. The advisor's remuneration should not matter when it bears no relation to the probability that the advice is good. However, people may nonetheless ignore good advice because of how the advisor is paid. Through a series of experiments, we explore the behavioral factors that may lie behind such deviations from rationality.
External Link(s)

Registration Citation

, and Daniel Sgroi. 2017. "When Good Advice is Ignored: The Role of Adviser Remuneration." AEA RCT Registry. June 28. https://doi.org/10.1257/rct.2022
Former Citation
, and Daniel Sgroi. 2017. "When Good Advice is Ignored: The Role of Adviser Remuneration." AEA RCT Registry. June 28. https://www.socialscienceregistry.org/trials/2022/history/19017
Experimental Details


Subjects undertake two tasks and are then offered the opportunity to replace their own decisions with those suggested by an advisor.
Intervention Start Date
Intervention End Date

Primary Outcomes

Primary Outcomes (end points)
(good) advice followed or not; subject envy; stubbornness (in general and relating to the sunk-cost fallacy); whether the nature of the task matters (luck or ability-based) in determining if advice is followed; demographics
Primary Outcomes (explanation)
Whether good advice is followed is determined in the experiment: advice is good if following it increases expected payoff. Remuneration is determined by treatment (low or high; either influenceable by the subject or not). We can also measure the role of the task itself (luck-based or ability-based) by comparing the same individuals' behaviour across tasks. Subject envy is a measure of envy drawn from the psychology literature. Stubbornness associated with the sunk-cost fallacy is measured using scenarios derived from the relevant social science literature, converted to a Likert scale. General stubbornness is measured using a series of questions taken from the existing literature, again converted to a Likert scale.
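Since the bonus is paid when one randomly selected answer proves correct, expected payoff is proportional to the fraction of correct answers, so switching to the advisor's answers is "good" exactly when the advisor got more items right. A minimal sketch of that criterion (function name and example counts are illustrative, not from the registration):

```python
def advice_is_good(subject_correct, advisor_correct, n_items=10):
    """Under the bonus rule (one answer drawn at random pays off if
    correct), expected payoff is proportional to the fraction of
    correct answers, so switching to the advisor's answers raises
    expected payoff iff the advisor answered more items correctly."""
    return advisor_correct / n_items > subject_correct / n_items

# e.g. subject got 5/10 coin-toss guesses right, advisor got 7/10
print(advice_is_good(5, 7))  # True: following the advice is "good"
```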

Secondary Outcomes

Secondary Outcomes (end points)
Secondary Outcomes (explanation)

Experimental Design

Experimental Design
MTurk study involving a series of tasks and demographic/psychometric measures.
Experimental Design Details
In wave 1, subjects are recruited using MTurk. They are asked to complete 2 tasks. In one they must guess the result of 10 coin tosses. In the second they are asked to complete a 10-question Raven visual IQ test. We record their performance in the 2 tasks, enabling us to allocate some of them as "advisors" for wave 2. We collect data on their experiences (for instance, on the perceived difficulty of the tasks). We record the distribution of answers so we can choose advisors who performed better than the bulk of subjects in wave 2.

In wave 2, subjects (different subjects, but again recruited through MTurk) take the 2 tasks (which are exactly the same as in wave 1). This time, after taking the tasks, they are offered the chance to switch their own answers for those of an "advisor" determined from wave 1. They are told that they can win a bonus if one of their answers (selected at random) proves to be correct. When making this choice, they are also told: (1) their own success rate; (2) the advisor's success rate; and (3) how the advisor was remunerated. Next the subjects are split into 3 treatments:
(A) In one treatment the advisor receives a bonus payment of the same level as the subjects.
(B) In another, the advisor receives a larger bonus. Subjects are told that the advisor saw the same instructions as they did; hence the advisor did not know the size of the bonus before completing the tasks, so it could not have influenced the advisor's behavior any more than it influenced their own.
(C) A third involves a bonus that is only awarded if the subject explicitly follows the advice, allowing the subject to directly influence the remuneration of the advisor.
All subjects then complete a demographic survey and a psychological test designed to elicit an index of envy (using the Dispositional Envy Scale - Smith, R. H., Parrott, W. G., Diener, E. F., Hoyle, R. H., & Kim, S. H., 1999) and similar tests of stubbornness, both generally and in relation to the sunk-cost fallacy, drawn from various sources in social science.
Randomization Method
The randomization is undertaken by Qualtrics (the software we are using to run the MTurk study).
Randomization Unit
Individual level only
Was the treatment clustered?

Experiment Characteristics

Sample size: planned number of clusters
No clustering
Sample size: planned number of observations
Wave 1: approx. 75 MTurk users; Wave 2: approx. 1,500 MTurk users
Sample size (or number of clusters) by treatment arms
Approx. 500 per condition for Wave 2.
Minimum detectable effect size for main outcomes (accounting for sample design and clustering)
Measure: the absolute difference between two proportions p and q.
Sample size per condition: 500. Power: 80%. Significance level: 5%.
The minimum effect size detectable regardless of the absolute values of p and q is 0.0882, i.e., 8.82 percentage points (attained where p = 0.4559 and q = 0.5441, i.e., where (p+q)/2 = 0.5). For other values of p and q, smaller effect sizes are detectable: in an extreme example with p = 0.0001 and q = 0.0158, the difference of 0.0157 is detectable with sample sizes of 500 in each condition. In general, smaller effect sizes are detectable whenever the empirical values of p and q are such that (p+q)/2 ≠ 1/2.
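The registered figure of 0.0882 can be reproduced under the standard two-sample normal approximation, solving delta = (z_{1-alpha/2} + z_{1-beta}) * SE(p, q) at the least favourable point (p+q)/2 = 0.5. The sketch below is one way to do this (the function name and the fixed-point iteration are ours, not part of the registration) and assumes an unpooled standard error:

```python
from statistics import NormalDist  # stdlib normal quantiles (Python 3.8+)

def mde_two_proportions(n, alpha=0.05, power=0.80, pbar=0.5):
    """Minimum detectable absolute difference delta between two
    proportions p = pbar - delta/2 and q = pbar + delta/2, for two
    independent samples of size n each (two-sided test, normal
    approximation, unpooled standard error)."""
    z_a = NormalDist().inv_cdf(1 - alpha / 2)  # critical value, 2-sided
    z_b = NormalDist().inv_cdf(power)          # power quantile
    delta = 0.1  # starting guess; the fixed-point iteration converges fast
    for _ in range(50):
        p, q = pbar - delta / 2, pbar + delta / 2
        se = ((p * (1 - p) + q * (1 - q)) / n) ** 0.5
        delta = (z_a + z_b) * se
    return delta

print(round(mde_two_proportions(500), 4))  # approx. 0.0882, as registered
```

Larger samples shrink the standard error, so the detectable difference falls roughly with the square root of n.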

Institutional Review Boards (IRBs)

IRB Name
Centre for Experimental Social Sciences (CESS), University of Oxford
IRB Approval Date
IRB Approval Number
IRB Name
Department’s Research Ethics Committee (DREC), University of Oxford
IRB Approval Date
IRB Approval Number


Post Trial Information

Study Withdrawal

There are documents in this trial unavailable to the public. Use the button below to request access to this information.



Is the intervention completed?
Data Collection Complete
Data Publication

Data Publication

Is public data available?

Program Files

Program Files
Reports, Papers & Other Materials

Relevant Paper(s)

Reports & Other Materials