Effects of varying the numbers of choice options and choice questions in discrete choice experiments for valuing public goods

Last registered on September 22, 2023

Pre-Trial

Trial Information

General Information

Title
Effects of varying the numbers of choice options and choice questions in discrete choice experiments for valuing public goods
RCT ID
AEARCTR-0011342
Initial registration date
May 08, 2023

First published
May 17, 2023, 12:24 AM EDT

Last updated
September 22, 2023, 5:23 AM EDT

Locations

Region

Primary Investigator

Affiliation
University of Warsaw

Other Primary Investigator(s)

PI Affiliation
University of Tennessee

Additional Trial Information

Status
Ongoing
Start date
2023-03-03
End date
2023-10-31
Secondary IDs
Prior work
This trial does not extend or rely on any prior RCTs.
Abstract
Discrete choice experiments (surveys) are commonly used in economics, marketing, transportation and other disciplines to understand the preferences that people have for various goods and services. This research seeks to gain a better understanding of whether and how choices and preference information vary according to the design of the choice experiment. The particular focus is on the sensitivity of choices to the number of available choice options in a choice question (two or three), and the number of choice questions included in the survey (one versus many).
External Link(s)

Registration Citation

Citation
Vossler, Christian and Ewa Zawojska. 2023. "Effects of varying the numbers of choice options and choice questions in discrete choice experiments for valuing public goods." AEA RCT Registry. September 22. https://doi.org/10.1257/rct.11342-1.1
Sponsors & Partners

There is information in this trial unavailable to the public.
Experimental Details

Interventions

Intervention(s)
There are four main survey treatments, which vary the voting procedures (i.e., discrete choice experiment design) across experiment participants:
- Single binary choice involving a "small" project. Participants are asked to vote on whether the group will fund a specific project. The option with the most votes is implemented for real.
- Single binary choice involving a "large" project. Participants are asked to vote on whether the group will fund a specific project. The option with the most votes is implemented for real.
- Repeated binary choice. Participants vote on nine voting questions. Each question is a yes or no vote on whether the group will fund a specific project. The projects vary across voting questions. One vote is randomly selected to be binding. The option with the most votes (for the randomly selected question) is implemented for real.
- Repeated trinary choice. Participants vote on nine voting questions. Each question asks participants to vote for one of two projects or no project. The pair of included projects varies across voting questions. One vote is randomly selected to be binding. The option with the most votes (for the randomly selected question) is implemented for real.
Intervention Start Date
2023-03-03
Intervention End Date
2023-10-31

Primary Outcomes

Primary Outcomes (end points)
(1) Individuals' choices (votes) made in the discrete choice experiment. (2) Estimates of willingness to pay (WTP) for the projects.
Primary Outcomes (explanation)
WTP is identified based on experimental variation in the cost of funding the public good.
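To illustrate how random cost variation identifies WTP, here is a purely hypothetical sketch (not the authors' estimator): if participants vote "yes" whenever their latent WTP exceeds the randomly assigned cost, the share of "yes" votes at each cost level traces out the survival function of WTP, and a step-function (Turnbull-style) lower bound on mean WTP can be computed from those shares. The cost points and the uniform WTP distribution below are assumptions for illustration only.

```python
import random
random.seed(1)

# Hypothetical setup: latent WTP ~ Uniform(0, 12); a participant votes
# "yes" iff their WTP exceeds the randomly assigned cost.
costs = [2, 4, 6, 8, 10]  # assumed cost points, not from the registration
true_wtp = [random.uniform(0, 12) for _ in range(5000)]

yes = {c: 0 for c in costs}
n = {c: 0 for c in costs}
for w in true_wtp:
    c = random.choice(costs)  # random cost assignment across participants
    n[c] += 1
    yes[c] += (w > c)

# The "yes" share falls as cost rises; summing share * step width ($2)
# gives a step-function lower bound on mean WTP (tail beyond $10 ignored).
yes_rate = {c: yes[c] / n[c] for c in costs}
lower_bound = sum(yes_rate[c] * 2 for c in costs)
print(round(lower_bound, 2))  # below the true mean of about 6
```

The bound is conservative by construction: it evaluates the survival function at the right endpoint of each cost interval and drops any WTP mass above the highest cost.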

Secondary Outcomes

Secondary Outcomes (end points)
Secondary Outcomes (explanation)

Experimental Design

Experimental Design
Experiment participation is facilitated through an online survey. The survey is about the value that people place on providing services to improve the well-being of farmworkers in New York State and about how they form voting choices. Participants are placed in groups of 50 people who are completing the survey experiment at roughly the same time. Votes are consequential in the sense that they determine whether the group will fund a farmworker assistance project, in which case people will forgo some or all of their experiment earnings. We have partnered with the Cornell Farmworker Program to implement the projects as described in the survey.

The questionnaire starts with several "warm-up" questions to get participants thinking about the survey's topic. Then the survey describes possible assistance projects, explains the voting procedures, and outlines how projects will be carried out if implemented. This is followed by quiz questions to gauge and reinforce participant understanding. Next, participants are asked one or nine voting questions. Each question is presented as a choice set (standard for discrete choice experiments) and framed as a group vote. These votes are over farmworker assistance projects, which vary in the amount of education, clothing, and transportation services they would provide to needy farmworkers living in New York. Several follow-up questions are asked to assess understanding and to gauge possible motives underlying voting choices. The survey ends with socio-demographic questions.

Participants are randomized into one of the four main survey treatments, as described under Interventions above.

There are other important sources of variation:
- For all treatments, the cost of a given project is randomly varied across participants. This allows for identification of willingness to pay.
- Half of the repeated binary choice participants vote on the "small" project first. The other half vote on the "large" project first.
- Half of the repeated trinary choice participants first face a choice set (vote) that includes the "small" project, the "large" project, and "no project" as voting options.
- Aside from deliberate placement of "small"/"large" project voting questions, the voting questions in the nine-question sequences are randomly ordered.
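The ordering logic for the repeated-choice sequences can be sketched as follows. This is a minimal illustration with hypothetical function and question names, assuming (per the design above) that the deliberately placed "small" or "large" question comes first and the remaining questions are shuffled.

```python
import random

def question_order(first_question, other_questions, rng=random):
    """Return a nine-question sequence: the deliberately placed
    question first, the remaining questions in random order."""
    rest = list(other_questions)
    rng.shuffle(rest)
    return [first_question] + rest

# Hypothetical labels: one participant assigned to the "small"-first condition
order = question_order("small", [f"Q{i}" for i in range(2, 10)])
print(order[0])
```

In the actual survey this randomization is handled by the platform (Qualtrics, as noted under Randomization Method), not by custom code.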
Experimental Design Details
Randomization Method
Randomization is computer automated (i.e., we use Qualtrics' "randomizer" tool).
Randomization Unit
Individual
Was the treatment clustered?
Yes

Experiment Characteristics

Sample size: planned number of clusters
1200 respondents
Sample size: planned number of observations
An observation is a single vote from a single participant. With 300 participants per treatment, this yields 300 observations for each of the two single binary choice treatments and 2700 observations for each of the repeated choice treatments. Total number of observations is 6000.
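The planned observation count follows from the arithmetic of votes per treatment, checked here for transparency:

```python
# Planned observation counts per the registration: a vote = one observation.
participants_per_treatment = 300
single_binary_treatments = 2   # one vote each
repeated_treatments = 2        # nine votes each
votes_per_repeated = 9

single_obs = participants_per_treatment * 1 * single_binary_treatments        # 600
repeated_obs = participants_per_treatment * votes_per_repeated * repeated_treatments  # 5400
total = single_obs + repeated_obs
print(total)  # 6000
```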
Sample size (or number of clusters) by treatment arms
300 respondents in each of the four treatments.
Minimum detectable effect size for main outcomes (accounting for sample design and clustering)
Comparison of vote proportions. This includes: (1) the single binary choice "small" versus "large"; and (2) comparing the single binary choice ("small" or "large") to the same vote when placed in the binary choice voting sequence. Based on data from a pilot experiment, the MDE is a 10-percentage-point difference in the share of "yes" (for project) votes.

Comparison of willingness to pay. For the repeated choice treatments, WTP can be estimated either by pooling all the data or by focusing on a single vote. As the cleanest comparison across treatments involves what happened in a single vote, we focus on that case here. Data from a pilot experiment suggest an MDE in the range of $0.89 to $1.16.
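As a rough consistency check (not the authors' pilot-based calculation), the standard normal-approximation formula for the MDE of a two-sample difference in proportions, under assumed values of α = 0.05, power = 0.8, and a worst-case baseline share of 0.5, gives roughly an 11-point MDE with 300 participants per arm, in line with the registered 10-point figure:

```python
from statistics import NormalDist
import math

def mde_two_proportions(n_per_arm, alpha=0.05, power=0.8, p=0.5):
    """MDE for a two-sample test of proportions with equal arms,
    using the normal-approximation formula (assumed alpha/power/p)."""
    z = NormalDist()
    z_alpha = z.inv_cdf(1 - alpha / 2)
    z_beta = z.inv_cdf(power)
    se = math.sqrt(2 * p * (1 - p) / n_per_arm)
    return (z_alpha + z_beta) * se

mde = mde_two_proportions(300)
print(f"MDE ~ {mde:.3f}")  # about an 11-point difference in proportions
```

The pilot-based 10-point figure is plausibly tighter because the pilot baseline share differs from the worst-case p = 0.5 assumed here.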
IRB

Institutional Review Boards (IRBs)

IRB Name
Cornell University IRB
IRB Approval Date
2022-12-15
IRB Approval Number
IRB0145635

Post-Trial

Post Trial Information

Study Withdrawal

There is information in this trial unavailable to the public.

Intervention

Is the intervention completed?
No
Data Collection Complete
Data Publication

Data Publication

Is public data available?
No

Program Files

Program Files
Reports, Papers & Other Materials

Relevant Paper(s)

Reports & Other Materials