Fairness and Legitimacy of Voting Methods in Digital Participatory Budgeting

Last registered on March 13, 2023

Pre-Trial

Trial Information

General Information

Title
Fairness and Legitimacy of Voting Methods in Digital Participatory Budgeting
RCT ID
AEARCTR-0011021
Initial registration date
March 06, 2023

First published
March 13, 2023, 9:00 AM EDT

Locations

Region

Primary Investigator

Affiliation
ETH Zürich

Other Primary Investigator(s)

Additional Trial Information

Status
Completed
Start date
2023-03-08
End date
2023-03-10
Secondary IDs
Prior work
This trial does not extend or rely on any prior RCTs.
Abstract
The past few years have seen the rise of digital Participatory Budgeting (PB) as a popular democratic approach for cities to involve citizens in the allocation of public funds. However, much of the process design is still being formulated, and cities often have to experiment with their designs to ensure fairness and legitimacy. This paper aims to address the lack of empirical research in the field and to provide guidance to city practitioners on the design decisions they can make to optimise the fairness and political legitimacy of digital participatory processes.

From a social choice perspective, fairness in PB often implies a proportional distribution of the public budget. However, most cities currently choose winning projects with a simple rule that can suffer from the negative effects of majority politics, leaving minority interests overlooked. To explore alternative solutions, this paper examines the input and aggregation methods prevalent in recent social choice literature and in real-world PB practice, in particular the Method of Equal Shares, an emerging voting aggregation method that focuses on proportionality in PB.

Using behavioural experiments, this paper investigates the economic and social trade-offs in the voting process of digital PB programs and explores the main challenges from the perspectives of social choice and citizen perception. The objective is to identify the combination of input and aggregation methods that minimises cognitive load, accounts for proportionality, and is perceived as legitimate. The findings highlight the importance of proportionality considerations and provide valuable insights for the design of digital PB systems that prioritise fairness and legitimacy in the real world.
External Link(s)

Registration Citation

Citation
Yang, Joshua C. 2023. "Fairness and Legitimacy of Voting Methods in Digital Participatory Budgeting." AEA RCT Registry. March 13. https://doi.org/10.1257/rct.11021-1.0
Experimental Details

Interventions

Intervention(s)
The experiment on Qualtrics involves two voting rounds where participants are asked to vote in a participatory budgeting (PB) program.

In the first vote, participants are randomly assigned to one of three settings with different population shares (10%, 30%, 60%) to understand how being in the majority or minority affects their perception of the outcome. They then compare two voting aggregation methods and rate them in terms of satisfaction, fairness and trustworthiness on a Likert scale.

In the second vote, participants vote in a more realistic PB with citizen projects that differ in district, project category and cost. They are asked to cast their vote using 6 commonly used voting input methods in PB: (1) Select any number of projects, (2) Select 5 projects, (3) Distribute 5 points, (4) Distribute 10 points, (5) Select 5 projects and rank them, and (6) Distribute 10 points over 5 projects. They then rate these input methods in terms of ease of use and expressiveness.
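As an illustration only, the sketch below shows one way the six ballot formats could be represented and validated. It is not the survey implementation: the format names are invented, and the readings of "Select 5" as exactly five projects and of every selected project receiving at least one point are assumptions.

```python
# Illustrative sketch of the six ballot formats (assumed, not the experiment's code).
from typing import Dict


def validate_ballot(fmt: str, ballot: Dict[str, int]) -> bool:
    """ballot maps project ids to points; for approval-style formats each point is 1."""
    n_projects = len(ballot)
    total_points = sum(ballot.values())
    if fmt == "approve_any":        # (1) select any number of projects
        return all(v == 1 for v in ballot.values())
    if fmt == "approve_5":          # (2) select exactly 5 projects
        return n_projects == 5 and all(v == 1 for v in ballot.values())
    if fmt == "points_5":           # (3) distribute 5 points freely
        return total_points == 5 and all(v >= 1 for v in ballot.values())
    if fmt == "points_10":          # (4) distribute 10 points freely
        return total_points == 10 and all(v >= 1 for v in ballot.values())
    if fmt == "rank_5":             # (5) select 5 projects and rank them (ranks 1..5)
        return n_projects == 5 and sorted(ballot.values()) == [1, 2, 3, 4, 5]
    if fmt == "points_10_over_5":   # (6) distribute 10 points over exactly 5 projects
        return n_projects == 5 and total_points == 10 and all(v >= 1 for v in ballot.values())
    raise ValueError(f"unknown format: {fmt}")


# Example: a cumulative ballot placing 3 points on one project and 1 on each of two others.
print(validate_ballot("points_5", {"bike_lane": 3, "playground": 1, "tree_planting": 1}))  # True
```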

The participants are shown randomly simulated voting results with outcomes calculated using 2 different voting aggregation methods and are asked to rate fairness and trustworthiness on a Likert scale. The participants are then randomly assigned to 3 groups that receive different kinds of explanations (a mechanism explanation, the distribution of individual budget won, and the distribution of budget across districts and categories) and are asked to rate fairness and trustworthiness again.

Finally, the participants are asked to vote for their preferred voting input and aggregation methods. The most supported input method and aggregation method are used to calculate their final compensation for the experiment.
Intervention Start Date
2023-03-08
Intervention End Date
2023-03-10

Primary Outcomes

Primary Outcomes (end points)
The experiment tests how citizens in different relative positions (majority, minority) perceive different voting input methods and voting aggregation methods. The dependent variables are a participant's ratings of the voting input methods in terms of ease of use and expressiveness, and of the voting aggregation methods in terms of satisfaction, fairness and trustworthiness. The other main dependent variable is the incentivised choice of the voting input method and aggregation method a participant wishes to use to calculate their compensation.
Primary Outcomes (explanation)

Secondary Outcomes

Secondary Outcomes (end points)
Secondary Outcomes (explanation)

Experimental Design

Experimental Design
The online experiment on Qualtrics consists of two voting rounds. It is an incentivised experiment, that is, participants receive money depending on their choices in their final round of voting.

First, the participants are introduced to a vignette: they live in a virtual city with 3 districts (Nord, Süd, West) that hold different proportions of the city population (10%, 30%, 60%), and they are asked to cast their vote in a participatory budgeting (PB) program. To understand how being part of the minority or majority affects voters' perception of the outcome, the participants are randomly assigned to one of these three settings with different population shares. The difference in demographics is also reflected in the voting aggregation outcomes at the later stage.

Participants are then shown a table with the realised bundles of a simulated PB outcome, with projects selected using two different aggregation methods over 200 simulated voters. The number of votes on a given project roughly reflects the population of its district. The participants are then asked to compare the two outcomes, one computed with a simple Greedy rule, which selects projects in order of vote count, and one with the Method of Equal Shares, which allocates the budget to projects in proportion to the number of supporting voters, and to rate them in terms of (1) legitimacy, (2) how easy they found the voting task, and (3) how well the assigned input format captured their preferences.
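For concreteness, a minimal Python sketch of the two rules is given below, assuming approval ballots, per-project costs and a fixed total budget. It is illustrative only and not the code used in the study; the bare Method of Equal Shares can leave part of the budget unspent, and deployed versions typically add a completion step that is omitted here.

```python
# Illustrative sketch of the two aggregation rules (assumed setup: approval ballots,
# per-project costs, fixed total budget). Not the study's implementation.
from typing import Dict, List, Set


def greedy(costs: Dict[str, float], approvals: Dict[str, Set[str]], budget: float) -> List[str]:
    """Greedy rule: fund projects in decreasing order of vote count while the budget lasts."""
    selected, remaining = [], budget
    for project in sorted(costs, key=lambda p: len(approvals.get(p, set())), reverse=True):
        if costs[project] <= remaining:
            selected.append(project)
            remaining -= costs[project]
    return selected


def equal_shares(costs: Dict[str, float], approvals: Dict[str, Set[str]],
                 budget: float, voters: List[str]) -> List[str]:
    """Method of Equal Shares: each voter receives budget/n; a project is funded only if
    its supporters can cover its cost from their remaining shares, splitting the payment
    as equally as possible. The project with the lowest per-voter payment is funded first."""
    share = {v: budget / len(voters) for v in voters}
    selected, candidates = [], set(costs)
    while True:
        best, best_rho = None, None
        for p in candidates:
            supporters = sorted((v for v in approvals.get(p, set()) if share[v] > 0),
                                key=lambda v: share[v])
            if sum(share[v] for v in supporters) < costs[p]:
                continue  # supporters cannot jointly afford this project
            # Find rho, the equal payment cap with sum(min(share[v], rho)) == cost.
            # Affordability was checked above, so rho is always found in this loop.
            cost_left, k = costs[p], len(supporters)
            for i, v in enumerate(supporters):
                if share[v] * (k - i) >= cost_left:
                    rho = cost_left / (k - i)
                    break
                cost_left -= share[v]
            if best_rho is None or rho < best_rho:
                best, best_rho = p, rho
        if best is None:
            return selected
        for v in approvals.get(best, set()):
            share[v] -= min(share[v], best_rho)
        selected.append(best)
        candidates.discard(best)
```

The key difference is that the Greedy rule spends the whole budget on the most-voted projects, whereas the Method of Equal Shares limits how much of the common budget any group of voters can direct, which is what produces the proportional outcomes described above.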

In the second vote, participants proceed to a more realistic PB vote with project categories and varying costs in addition to the projects' districts. The projects are adapted from the 2021 Zurich city PB programme, with some abstraction and alteration. In this vote, participants are asked to pick the district of the city they identify with the most, and they act as residents of that district in this realistic PB vote. As the participants are all university students in Zurich, it is assumed that they have a certain level of connection to, or understanding of, the city.

They are asked to cast their vote using 6 commonly used voting input methods in PB: (1) Select any number of projects, (2) Select 5 projects, (3) Distribute 5 points, (4) Distribute 10 points, (5) Select 5 projects and rank them, and (6) Distribute 10 points over 5 projects. They then rate these input methods in terms of ease of use and expressiveness.

The participants are shown different randomly simulated voting results with outcomes calculated using (1) the Greedy method and (2) the Method of Equal Shares. They then assess the different outcomes and rate them in terms of satisfaction, fairness, and the trustworthiness of the methods. The participants are then randomly assigned to 3 groups that receive different kinds of explanations (a mechanism explanation, the distribution of individual budget won, and the distribution of budget across districts and categories) and are asked to rate fairness and trustworthiness again.

They are shown the budget distributions of the two outcomes and have to choose under which distribution they would rather be a random voter. Finally, the participants are asked to vote for their preferred voting input and aggregation methods. The most supported input method and aggregation method are used to calculate their final compensation for the experiment.
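As a purely illustrative reading of the "individual budget won" explanation (the registration does not spell out its exact definition), the sketch below computes, for each voter, the total cost of the funded projects they approved of, assuming approval ballots; the district- and category-level explanation would aggregate the same costs by project attribute instead.

```python
# Hypothetical helper for the "individual budget won" explanation; definitions assumed.
from typing import Dict, List, Set


def budget_won(selected: List[str], costs: Dict[str, float],
               approvals: Dict[str, Set[str]]) -> Dict[str, float]:
    """For each voter, sum the cost of the funded (selected) projects they approved."""
    won: Dict[str, float] = {}
    for project in selected:
        for voter in approvals.get(project, set()):
            won[voter] = won.get(voter, 0.0) + costs[project]
    return won
```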
Experimental Design Details
Randomization Method
Qualtrics online randomisation.
Randomization Unit
Randomisation takes place at the participant level.
Was the treatment clustered?
No

Experiment Characteristics

Sample size: planned number of clusters
No Clusters
Sample size: planned number of observations
180 participants.
Sample size (or number of clusters) by treatment arms
60 participants for each relative position in population (10%, 30%, 60%) in the first stage and 60 participants for each of the 3 explanation groups in the second stage.
Minimum detectable effect size for main outcomes (accounting for sample design and clustering)
IRB

Institutional Review Boards (IRBs)

IRB Name
Ethics Commission of ETH Zürich
IRB Approval Date
2022-08-24
IRB Approval Number
EK 2022-N-143

Post-Trial

Post Trial Information

Study Withdrawal

Intervention

Is the intervention completed?
No
Data Collection Complete
Data Publication

Data Publication

Is public data available?
No

Program Files

Program Files
Reports, Papers & Other Materials

Relevant Paper(s)

Reports & Other Materials