Experimental Aversion

Last registered on July 28, 2023

Pre-Trial

Trial Information

General Information

Title
Experimental Aversion
RCT ID
AEARCTR-0011843
Initial registration date
July 26, 2023


First published
July 28, 2023, 2:05 PM EDT


Locations

Region

Primary Investigator

Affiliation
University of Birmingham

Other Primary Investigator(s)

PI Affiliation
PI Affiliation

Additional Trial Information

Status
In development
Start date
2023-07-27
End date
2023-07-29
Secondary IDs
Prior work
This trial does not extend or rely on any prior RCTs.
Abstract
In this research, we aim to further explore the concept of "experiment aversion": individuals appear more likely to object on moral grounds to a randomisation between two policies affecting others than to each policy alone. Our further exploration is motivated by contrasting findings. Previous work using a vignette design (Meyer et al., 2019) found evidence in favour of experiment aversion. Our own subsequent work, a pre-registered non-hypothetical experiment, found no supporting evidence. We hypothesize that the diverging findings are driven by the fact that in the non-hypothetical experiment, individuals' experimental payoffs are linked to learning from policy evidence. To test this hypothesis, we propose a follow-up study that employs a vignette design similar to Meyer et al. (2019). Each vignette describes a situation in which a decision maker has to decide between implementing policy A, implementing policy B, or conducting a randomized experiment between both policies (A/B) before making a final choice between A and B. In a departure from previous designs, participating individuals are allocated, between subjects, to three different conditions. The baseline condition conceptually replicates Meyer et al. (2019) by asking participants to rate the moral appropriateness of implementing policies A, B, or a randomization of A/B. In the two treatment conditions, we instead ask participants to indicate how the decision maker should act (Treatment 1) or to choose how they would act themselves in lieu of the decision maker (Treatment 2). Examining variations in experimental aversion across these three conditions will help determine whether individuals rate the instrumental value of learning from randomization higher than their possible moral objection.
External Link(s)

Registration Citation

Citation
Chlond, Bettina, Timo Goeschl and Johannes Lohse. 2023. "Experimental Aversion." AEA RCT Registry. July 28. https://doi.org/10.1257/rct.11843-1.0
Experimental Details

Interventions

Intervention(s)
In a vignette study, we vary whether participants are asked to select an action that (i) they find ethically correct, (ii) a third-party decision maker should choose, or (iii) they themselves would choose in lieu of the third party.
Intervention Start Date
2023-07-28
Intervention End Date
2023-07-29

Primary Outcomes

Primary Outcomes (end points)
Each participant will encounter two vignettes and will be asked to choose a preferred outcome that either favours implementing a single policy (A or B) or a 50/50 randomisation between A and B.
Our primary outcome will compare how often the randomisation is chosen relative to each single policy.
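For illustration only, a minimal sketch of how this comparison could be computed, assuming a long-format data file with one row per participant-vignette decision and a hypothetical column `choice` coded as "A", "B", or "AB"; this is not the pre-specified analysis:

```python
# Sketch of the primary outcome comparison: how often the 50/50 randomisation
# ("AB") is chosen relative to each single policy. File name, column name and
# coding are hypothetical placeholders.
import pandas as pd
from scipy.stats import binomtest

df = pd.read_csv("responses.csv")                   # one row per participant-vignette decision
shares = df["choice"].value_counts(normalize=True)  # shares of "A", "B", "AB"
print(shares)

# If participants were indifferent between the three options, "AB" would be
# chosen about 1/3 of the time; experiment aversion would show up as a lower share.
n_ab = int((df["choice"] == "AB").sum())
result = binomtest(k=n_ab, n=len(df), p=1/3, alternative="less")
print(f"share AB = {n_ab / len(df):.3f}, one-sided p = {result.pvalue:.3f}")
```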
Primary Outcomes (explanation)

Secondary Outcomes

Secondary Outcomes (end points)
Secondary Outcomes (explanation)

Experimental Design

Experimental Design
As in Meyer et al. (2019), we will utilize six distinct vignettes spanning various policy areas such as Genetic Testing, Autonomous Vehicles, Social Policy, Retirement Saving, and Hospital Safety. These vignettes depict a decision maker attempting to enact new policies. Each vignette outlines two different policies under the decision maker's consideration. Participants are asked to judge what is most appropriate before final policy implementation: implementing policy A, implementing policy B, or conducting a randomized experiment between policies A and B. We sourced and translated five vignettes directly from the existing literature. The sixth vignette, pertaining to Social Policy, was designed to describe the policies we tested in our previous experiment.
Each participant is randomly presented with two vignettes.
Our main treatment variation pertains to three randomly allocated conditions.
The baseline condition is a conceptual replication of Meyer et al. (2019) in that we ask participants to select the morally most appropriate course of action in a given scenario.
The two treatment conditions vary the type of judgment we elicit from participants. In Treatment 1, we elicit a prescriptive norm by asking participants to judge which course of action the decision maker in the vignette ought to take. In Treatment 2, we elicit a personal norm by asking them which action they would take if they had to decide in lieu of the person in the vignette.
We believe that these variations make the potential instrumentality of information generated from an experiment more salient relative to other considerations.
Experimental Design Details
Randomization Method

Participants are randomized between subjects by a computer program.
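For illustration, a minimal sketch of such an assignment program, assuming assignment at the individual level to one of three conditions and to two of the six vignettes; the labels, seed, and function below are hypothetical, not the program actually used:

```python
# Illustrative between-subjects assignment: each participant is assigned to one
# of three conditions and shown two of six vignettes. All labels are placeholders.
import random

CONDITIONS = ["baseline", "treatment_1", "treatment_2"]
VIGNETTES = ["genetic_testing", "autonomous_vehicles", "social_policy",
             "retirement_saving", "hospital_safety", "sixth_vignette"]

def assign(participant_id: int, seed: str = "2023-07") -> dict:
    """Reproducibly assign a participant to a condition and two distinct vignettes."""
    rng = random.Random(f"{seed}-{participant_id}")   # per-participant, reproducible
    return {
        "participant": participant_id,
        "condition": rng.choice(CONDITIONS),
        "vignettes": rng.sample(VIGNETTES, k=2),      # two vignettes without replacement
    }

print(assign(participant_id=1))
```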
Randomization Unit
Individual
Was the treatment clustered?
No

Experiment Characteristics

Sample size: planned number of clusters
NA
Sample size: planned number of observations
We plan to collect 200 observations for each of the three treatments, for a total of 600 observations. This should provide us with sufficient statistical power to identify small to medium-sized effects. Participants will be recruited via Prolific from among German speakers with a Prolific rating above 95.
Sample size (or number of clusters) by treatment arms
200 participants in each of three between-subjects arms. Each participant will be allocated two random vignettes (out of six).
Minimum detectable effect size for main outcomes (accounting for sample design and clustering)
The planned sample of 200 participants per arm should provide sufficient statistical power to identify small to medium-sized effects.
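As a rough illustration of what "small to medium-sized" could mean here, the sketch below computes power for a two-sample comparison of proportions between two arms of 200 participants each; the baseline share of 0.5 and alpha of 0.05 are assumptions made only for this example, and the clustering of two decisions per participant is ignored:

```python
# Rough power sketch for comparing the share choosing the A/B randomisation
# across two arms of 200 participants each. Baseline share, alpha, and the
# participant-level unit of analysis are illustrative assumptions.
import numpy as np
from statsmodels.stats.power import NormalIndPower
from statsmodels.stats.proportion import proportion_effectsize

analysis = NormalIndPower()
baseline = 0.5
for diff in np.arange(0.05, 0.26, 0.05):
    es = proportion_effectsize(baseline, baseline + diff)   # Cohen's h
    power = analysis.power(effect_size=abs(es), nobs1=200, ratio=1.0, alpha=0.05)
    print(f"difference = {diff:.2f} (h = {abs(es):.2f}) -> power = {power:.2f}")
```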
IRB

Institutional Review Boards (IRBs)

Post-Trial

Post Trial Information

Study Withdrawal

There is information in this trial unavailable to the public.

Intervention

Is the intervention completed?
No
Data Collection Complete
Data Publication

Data Publication

Is public data available?
No

Program Files

Program Files
Reports, Papers & Other Materials

Relevant Paper(s)

Reports & Other Materials