Experimenter Demand Effect in the Lab

Last registered on December 03, 2020

Pre-Trial

Trial Information

General Information

Title
Experimenter Demand Effect in the Lab
RCT ID
AEARCTR-0006847
First published
December 03, 2020, 12:42 AM EST

First published corresponds to when the trial was first made public on the Registry after being reviewed.

Last updated
December 03, 2020, 1:02 AM EST

Last updated is the most recent time when changes to the trial's registration were published.

Locations

Region

Primary Investigator

Affiliation
University of Pittsburgh

Other Primary Investigator(s)

PI Affiliation
University of Pittsburgh
PI Affiliation
University of Pittsburgh

Additional Trial Information

Status
In development
Start date
2020-12-04
End date
2021-01-31
Secondary IDs
Abstract
We use four canonical behavioral predictions and test their sensitivity to experimenter demand effects, using the standard student subject pool of an experimental laboratory. Specifically, we look at probability weighting, present bias, the endowment effect, and charitable giving, and examine whether demand effects can reverse the standard comparative statics found in the literature.
External Link(s)

Registration Citation

Citation
Mustafi, Priyoma, Lise Vesterlund and Alistair Wilson. 2020. "Experimenter Demand Effect in the Lab." AEA RCT Registry. December 03. https://doi.org/10.1257/rct.6847-1.1
Former Citation
Mustafi, Priyoma, Lise Vesterlund and Alistair Wilson. 2020. "Experimenter Demand Effect in the Lab." AEA RCT Registry. December 03. https://www.socialscienceregistry.org/trials/6847/history/80954
Experimental Details

Interventions

Intervention(s)
Using four standard results from the experimental economics literature, we test their sensitivity to experimenter demand effects. Following de Quidt et al. (2018), we test whether and how demand effects affect the standard predictions of probability weighting, hyperbolic discounting, the endowment effect, and charitable giving in a dictator game. Specifically, we ask whether applying positive and negative experimenter demand appropriately can reverse the standard predictions observed in the literature.
Intervention Start Date
2020-12-04
Intervention End Date
2021-01-31

Primary Outcomes

Primary Outcomes (end points)
sooner payment, WTP for lottery, WTA for lottery, giving to charity
Primary Outcomes (explanation)
Hyperbolic discounting: amount chosen to be received as the sooner payment out of the $10 endowment
Probability weighting: WTP for two lotteries, one with a 10% chance of winning $10 and another with a 90% chance of winning $10
Endowment effect: WTA and WTP for each of the lotteries
Charitable giving: amount allocated from the $10 endowment to send to charity

Secondary Outcomes

Secondary Outcomes (end points)
Secondary Outcomes (explanation)

Experimental Design

Experimental Design
We use a within-subject design in an individual decision-making study. Each participant is asked to make eight decisions; for each, they begin with a $10 endowment. They are told they will be paid for only one of these decisions.

For probability weighting, the participants are endowed with $10 to purchase a lottery ticket, once for a lottery with a 10% chance of winning $10 and once for a lottery with a 90% chance of winning $10, while a random price for the lottery is drawn between $0.01 and $10. The participant is asked to report their highest willingness to pay (WTP) for each lottery. Under the Becker-DeGroot-Marschak (BDM) mechanism, if the lottery price is greater than the WTP, the participant does not buy the lottery and earns the $10.00.
If the lottery price is less than or equal to the WTP, the participant buys the lottery and earns $10.00 minus the lottery price, plus the outcome of the lottery.

For the endowment effect (in addition to the WTP elicitations described above), the participants are also asked to report their lowest willingness to accept (WTA) for both types of lotteries; for these decisions they again receive a $10 endowment along with the lottery, and a lottery price is drawn at random. If the lottery price is less than the reported WTA, the participant does not sell the lottery and earns $10.00 plus the outcome of the lottery. If the lottery price is greater than or equal to their WTA, they sell the lottery and earn $10.00 plus the lottery price.
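The buy (WTP) and sell (WTA) payoff rules above can be sketched as follows. This is an illustrative reconstruction; the function names and the price-draw procedure are assumptions, not the study's actual implementation.

```python
# Illustrative reconstruction of the BDM payoff rules described above.
# Function names and the draw procedure are assumptions, not the study's code.
import random

ENDOWMENT = 10.00

def buy_payoff(wtp: float, price: float, lottery_outcome: float) -> float:
    """WTP elicitation: the participant buys the lottery iff price <= WTP."""
    if price <= wtp:
        return ENDOWMENT - price + lottery_outcome  # buys at the drawn price
    return ENDOWMENT  # keeps the endowment and does not play the lottery

def sell_payoff(wta: float, price: float, lottery_outcome: float) -> float:
    """WTA elicitation: the participant sells the endowed lottery iff price >= WTA."""
    if price >= wta:
        return ENDOWMENT + price  # sells at the drawn price
    return ENDOWMENT + lottery_outcome  # keeps and plays the lottery

# Example draw for the 90% lottery paying $10, price uniform on [$0.01, $10.00]
price = round(random.uniform(0.01, 10.00), 2)
outcome = 10.00 if random.random() < 0.90 else 0.00
```

For example, a participant who reports a WTP of $5 and faces a drawn price of $4 buys the lottery and earns $6 plus the lottery outcome.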

For hyperbolic discounting, the participants are asked to make two decisions, each asking them to divide an endowment of $10 between a sooner and a later payment. For one decision, individuals choose between today and a week from today, while in the other they choose between tomorrow and a week from tomorrow. For both decisions, the participant earns an additional 20% interest on the amount they decide to receive later.
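The sooner/later payoff rule above can be sketched as follows; this is an illustrative sketch with hypothetical names, not the study's code.

```python
# Illustrative sketch of the sooner/later payoff rule described above:
# the portion of the $10 endowment deferred earns 20% interest.
ENDOWMENT = 10.00
INTEREST = 0.20

def time_payoffs(sooner_amount: float) -> tuple[float, float]:
    """Return (sooner payment, later payment) for a chosen split."""
    later_amount = ENDOWMENT - sooner_amount
    return sooner_amount, later_amount * (1 + INTEREST)
```

For instance, allocating $4 to the sooner date yields $4 sooner and $7.20 later.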

For charitable giving, we use a dictator game with the recipient framed as a charity. The participants are asked to decide how much of a $10 endowment they would like to donate to the charity. For one decision, the charity receives the amount they donate, while in another the donation is matched, i.e., the charity receives twice the donated amount.
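The dictator-game payoffs, with and without matching, can be sketched as follows (an illustrative sketch; names are hypothetical):

```python
# Minimal sketch of the dictator-game payoffs described above; in one
# decision the donation is sent as-is, in the other it is matched 1:1.
ENDOWMENT = 10.00

def dictator_payoffs(donation: float, matched: bool) -> tuple[float, float]:
    """Return (participant earnings, amount the charity receives)."""
    charity_gets = 2 * donation if matched else donation
    return ENDOWMENT - donation, charity_gets
```

A $3 donation thus leaves the participant with $7, and the charity receives $3 unmatched or $6 matched.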

We have three treatments, in which we vary the kind of experimenter demand, following de Quidt et al. (2018). In the positive (negative) treatment, participants are urged to choose higher (lower) numbers, while in the control treatment nothing additional is mentioned.
Experimental Design Details
Thus a positive (negative) demand treatment urges participants to over- (under-)weight the lottery chances, to be more (less) patient in the discounting decisions, and to be more (less) generous in the dictator games.
Randomization Method
computer program
Randomization Unit
individual
Was the treatment clustered?
No

Experiment Characteristics

Sample size: planned number of clusters
800
Sample size: planned number of observations
800
Sample size (or number of clusters) by treatment arms
100 by each treatment arm
Minimum detectable effect size for main outcomes (accounting for sample design and clustering)
We use the effect sizes and variability observed in the data of de Quidt et al. (2018) to calculate the sample size needed to achieve >90% power across all of our hypotheses.
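As one hedged illustration of such a calculation, a standard normal-approximation sample-size formula for a two-sample comparison looks like the following; the effect size used here is a placeholder assumption, not a value taken from de Quidt et al. (2018).

```python
# Hedged sketch of a textbook sample-size calculation for a two-sample
# comparison (normal approximation, two-sided test). The effect size
# d = 0.5 is a placeholder, NOT the value derived from de Quidt et al.
import math
from statistics import NormalDist

def n_per_arm(d: float, alpha: float = 0.05, power: float = 0.90) -> int:
    """Sample size per arm to detect standardized effect d at level alpha
    with the given power."""
    z = NormalDist().inv_cdf
    return math.ceil(2 * (z(1 - alpha / 2) + z(power)) ** 2 / d ** 2)

n_per_arm(0.5)  # placeholder medium effect size
```

For d = 0.5 this gives 85 participants per arm; smaller effects require correspondingly larger samples.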
IRB

Institutional Review Boards (IRBs)

IRB Name
University of Pittsburgh Institutional Review Board
IRB Approval Date
2020-10-21
IRB Approval Number
STUDY20090122

Post-Trial

Post Trial Information

Study Withdrawal

There is information in this trial unavailable to the public.

Intervention

Is the intervention completed?
No
Data Collection Complete
Data Publication

Data Publication

Is public data available?
No

Program Files

Program Files
Reports, Papers & Other Materials

Relevant Paper(s)

Reports & Other Materials