Electoral Rules vs. Voter Psychology: Fiscal Accountability in Theory and Experiment

Last registered on August 18, 2025

Pre-Trial

Trial Information

General Information

Title
Electoral Rules vs. Voter Psychology: Fiscal Accountability in Theory and Experiment
RCT ID
AEARCTR-0016567
Initial registration date
August 13, 2025


First published
August 18, 2025, 6:40 AM EDT


Locations

Region

Primary Investigator

Affiliation
Tianjin University

Other Primary Investigator(s)

PI Affiliation
University of Manchester

Additional Trial Information

Status
Completed
Start date
2024-10-29
End date
2024-12-19
Secondary IDs
Prior work
This trial does not extend or rely on any prior RCTs.
Abstract
We develop and test a model of fiscal accountability in which parties compete over a full fiscal platform: taxes, public goods, and political rents. Our theory predicts that disproportional electoral rules, by raising the stakes of winning, discipline politicians and lead in equilibrium to higher provision of public goods and to lower corruption and taxes. We test this mechanism in a laboratory experiment with human subjects in the roles of candidates and voters. The results provide causal evidence for the core mechanism: disproportional rules significantly reduce corruption and increase the provision of public goods. However, this institutional discipline is attenuated by two behavioral frictions. First, a partisan shield allows voters to tolerate high corruption from ideologically aligned candidates. Second, candidates learn to execute a public-good gambit, compensating voters for high rents with increased public spending funded by higher taxes. Our integrated approach shows that while electoral rules are a powerful disciplining device, their effectiveness is fundamentally constrained by the psychological trade-offs voters are willing to make.
External Link(s)

Registration Citation

Citation
Saporiti, Alejandro and Yizhi Wang. 2025. "Electoral Rules vs. Voter Psychology: Fiscal Accountability in Theory and Experiment." AEA RCT Registry. August 18. https://doi.org/10.1257/rct.16567-1.0
Experimental Details

Interventions

Intervention(s)
The treatments were designed to address distinct dimensions of the model: the relationship between electoral outcomes and power-sharing, the role of transparency in fiscal policy, and the implications of voter behavior. Specifically, the first treatment implemented a proportional power-sharing rule, where policy-making power was directly proportional to the vote shares received by each candidate.

The second treatment replicated the proportional rule of the first treatment but replaced human voters with automated algorithms, in line with the methodology commonly used in the experimental literature on electoral competition. This allowed a closer comparison with existing studies, which often focus exclusively on candidates' decisions, while isolating the effect of automated versus human decision-making on electoral outcomes.

The third treatment, like the first, used a proportional rule but added full transparency by explicitly informing voters about all elements of fiscal policy, including the allocations to both public goods and rents. This treatment was designed to investigate how transparency affects voter accountability and candidate behavior. In the other three treatments, only partial information about taxes and the public good was released to the electorate in each round, leaving voters to infer corruption levels from the budget constraint.

Finally, the fourth treatment was identical in most respects to the first, but with a key modification: policy-making power responded more than proportionally to vote shares, particularly for the candidate receiving a majority of votes. This allowed us to examine how changes in the payoff structure influenced candidates' strategic behavior and voters' decisions.

The selection of these four treatments strikes a balance between theoretical relevance and empirical feasibility. While the theoretical model generates a wide array of hypotheses, these four treatments were chosen to focus on the most salient aspects of the research exercise and to address gaps in the existing literature. They capture critical mechanisms such as power allocation, transparency, and voter behavior, all while ensuring the experimental design remains practical in terms of complexity and session length.
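
For reference, the four arms can be summarized schematically as follows. The field names and the "disproportional" label are our own shorthand for the descriptions above, not terminology from the experimental software.

    # Schematic summary of the four treatment arms described above.
    # Field names are illustrative shorthand, not variables from the
    # experimental software.
    TREATMENTS = {
        1: {"power_rule": "proportional",    "voters": "human",     "fiscal_info": "partial"},
        2: {"power_rule": "proportional",    "voters": "automated", "fiscal_info": "partial"},
        3: {"power_rule": "proportional",    "voters": "human",     "fiscal_info": "full"},
        4: {"power_rule": "disproportional", "voters": "human",     "fiscal_info": "partial"},
    }

Here "partial" means voters observe only the budget (tax) and the spending on the public good, while "full" means rents are disclosed as well.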
Intervention (Hidden)
Intervention Start Date
2024-10-29
Intervention End Date
2024-12-19

Primary Outcomes

Primary Outcomes (end points)
Corruption level, Public good level, Tax level.
Primary Outcomes (explanation)

Secondary Outcomes

Secondary Outcomes (end points)
Secondary Outcomes (explanation)

Experimental Design

Experimental Design

Each experimental treatment consisted of two independent sessions, each lasting 25 periods. Each period simulated an election involving two candidates (labelled A and B) and five voters (denoted as 1, 2, 3, 4, and 5). While longer sessions with 50 or 60 periods are common in experimental studies on elections, we opted for shorter sessions due to the inclusion of human voters instead of computer algorithms.

Human voters increased the time required for each election, as candidates made their decisions first, followed by the voting process, during which all five voters cast their votes independently. To manage session length while ensuring robust data collection, we maintained the smallest meaningful number of voters per election. This constraint was relaxed in treatment 2, where voters were replaced by computer algorithms, allowing a more streamlined process focused on candidates’ behavior.

At the start of each session, 28 participants were randomly and anonymously assigned the role of either "candidate" or "voter." These roles remained fixed throughout all 25 elections. Participants were informed of their own roles but not the identities of others they were matched with. In treatment 2, the number of participants was reduced, as voters were automated, and only 8 human subjects were assigned the roles of candidates A and B (see the lab overview table for further details).

Within each election, participants were randomly assigned to one of four groups, each consisting of seven members. Although group composition changed between elections, the initial role (candidate or voter) of each participant remained fixed. For instance, a participant assigned the role of candidate A in group 1 during the first election might be candidate A in group 3 in the second election, and so forth. The same applied to voters, except in treatment 2, where the electorate was automated. Group re-matching was implemented in each period to mitigate potential collusion among candidates and voters.
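
The following is a minimal sketch of one way the per-period re-matching could be implemented, assuming eight candidates (four in role A, four in role B) and twenty voters with fixed roles reshuffled into four groups of seven each period; the actual procedure in the experimental software may differ.

    import random

    def rematch(candidates_a, candidates_b, voters, n_groups=4, voters_per_group=5):
        # Randomly re-match participants into groups each period, keeping roles fixed.
        # candidates_a, candidates_b: lists of 4 subject IDs each (roles A and B).
        # voters: list of 20 subject IDs whose role is fixed as voter.
        a = random.sample(candidates_a, len(candidates_a))  # shuffle A candidates across groups
        b = random.sample(candidates_b, len(candidates_b))  # shuffle B candidates across groups
        v = random.sample(voters, len(voters))               # shuffle voters across groups
        return [
            {"A": a[g], "B": b[g], "voters": v[g * voters_per_group:(g + 1) * voters_per_group]}
            for g in range(n_groups)
        ]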

The experimental design employed neutral language to avoid introducing unintended positive or negative connotations unrelated to the incentive structure. For example, participants assigned the role of candidates were instructed to make two decisions in each election: (a) a budget, representing a percentage (a number between 1 and 99) of the total voter income, set at 100 Experimental Currency Units (ECUs); and (b) an allocation of the collected income between good 1, which benefits all voters equally, and good 2, which benefits only the candidates.

Voters observed the budget and proposed spending on good 1 from each candidate before independently voting for their preferred candidate (A or B). In treatment 3, voters were also explicitly informed about the spending on good 2. This variation in the information setting enabled an assessment of how voters respond differently to scenarios where fiscal misuse is explicitly disclosed compared to cases where they infer such misuse indirectly from other fiscal policy components. This comparison highlights the role of transparency in fostering accountability and the challenges voters face in less transparent settings.
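
The budget identity that lets voters back out rents in the partial-information treatments can be sketched as follows; this reflects our reading of the design, and the exact on-screen framing may have differed.

    def implied_rents(budget_pct, good1_spending, total_income=100):
        # A budget of b% collects b% of the group's total voter income of 100 ECUs,
        # i.e. exactly b ECUs. Whatever is not spent on good 1 must go to good 2.
        collected = budget_pct / 100 * total_income
        return collected - good1_spending

    # Example: a 32% budget with 20 ECUs spent on good 1 implies 12 ECUs of rents.
    assert implied_rents(32, 20) == 12

In treatment 3 this inference is unnecessary, since spending on good 2 is shown to voters directly.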

Participants were informed about their payoff structure at the beginning of the session. Candidates' payoffs depended on the number of votes their proposed budget and allocation received, which determined their "share of power." Power directly influenced their earnings from the amount allocated to good 2. In most treatments, power was proportional to the number of votes received. For instance, if a candidate received three votes out of five, their power share was 0.6. In treatment 4, however, power increased more than proportionally with the share of votes received, allowing us to examine how non-linear returns affect candidates' strategic decisions and competition for voter support.
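
A sketch of the two power-sharing rules is given below. The proportional rule is exactly as described (votes divided by five); the registration does not state the functional form used in treatment 4, so the disproportional mapping here is only an illustrative, assumed choice that rewards majorities more than proportionally.

    def power_share(votes, rule="proportional", total_votes=5):
        # Map a candidate's vote count to a share of policy-making power.
        s = votes / total_votes
        if rule == "proportional":
            return s                               # e.g. 3 of 5 votes -> power share 0.6
        # Illustrative disproportional rule (assumed, not the actual formula):
        # a contest-style mapping that amplifies majorities and shrinks minorities.
        return s**2 / (s**2 + (1 - s)**2)

    assert power_share(3) == 0.6
    assert power_share(3, rule="disproportional") > 0.6  # the majority winner gets more than 0.6

Under either rule, a candidate's earnings from good 2 scale with the resulting power share.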

Voters' payoffs were determined by their net income, the benefit derived from good 1, and their candidate bias. Initial incomes were randomly assigned, with voters 1–4 receiving values between 18 and 25 ECUs, and voter 5 receiving the remainder of the group's total income (100 ECUs). A candidate's proposed budget affected voters' net incomes. For instance, if a voter's initial income was 25 ECUs and a candidate proposed a budget of 32%, the voter's net income would be (100 − 32)% × 25 = 17 ECUs. Candidate bias, a randomly assigned value in the interval [−0.25, 0.25], influenced voters' payoffs by favouring one candidate over the other, independent of policy proposals.
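
The following is a worked sketch of the voter payoff components just described. The net-income calculation mirrors the example in the text; how the good-1 benefit and the candidate bias enter payoffs is not fully specified in this registration, so the additive form and the benefit weight below are assumptions for illustration only.

    def voter_payoff(initial_income, budget_pct, good1_spending, bias, benefit_weight=1.0):
        # Illustrative voter payoff: net income + benefit from good 1 + candidate bias.
        # The linear benefit from good 1 and the additive bias term (drawn from
        # [-0.25, 0.25]) are assumptions, not the registered functional form.
        net_income = (100 - budget_pct) / 100 * initial_income
        public_benefit = benefit_weight * good1_spending
        return net_income + public_benefit + bias

    # Example from the text: initial income 25 ECUs and a proposed budget of 32%
    # yield a net income of 17 ECUs.
    assert voter_payoff(25, 32, good1_spending=0, bias=0) == 17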

To assist participants in evaluating their decisions, an "expected payoff calculator" was provided on the decision screen. This tool allowed subjects to experiment with different combinations of their own and others' choices to observe the resulting per-period payoffs. Participants could use the calculator as many times as needed before making their decisions. A log-sheet was also provided for participants to track their earnings throughout the experiment, helping them better understand how different decisions impacted their payoffs in previous rounds.

Control questions were administered at the start of each session to ensure participants understood the instructions. After the final election period, and before receiving payment, participants completed a post-experiment survey. The survey included questions about their experience during the experiment, their decision-making process, and related factors. Responses were anonymized and did not affect participants' per-period payments. A copy of the lab instructions, the control questions, and the post-experiment survey is available from the authors upon request.
Experimental Design Details
Randomization Method
Randomization by computer
Randomization Unit
Individual level
Was the treatment clustered?
No

Experiment Characteristics

Sample size: planned number of clusters
32 groups
Sample size: planned number of observations
800 observations at the candidate level, and 4,000 observations at the voter level.
Sample size (or number of clusters) by treatment arms
200 candidate-level and 1,000 voter-level observations in the control treatment; 600 candidate-level and 3,000 voter-level observations in the other three treatments combined.
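These counts are consistent with the session structure described above under the assumption that each group-period election contributes one candidate-level observation and five voter-level observations; the quick check below is only a back-of-the-envelope consistency calculation.

    # Back-of-the-envelope check of the planned sample sizes (assumption:
    # one candidate-level observation per group-period election, five voters per group).
    sessions_per_treatment = 2
    periods_per_session = 25
    groups_per_session = 4
    voters_per_group = 5
    treatments = 4

    elections_per_treatment = sessions_per_treatment * periods_per_session * groups_per_session
    print(elections_per_treatment)                                   # 200 per arm
    print(elections_per_treatment * voters_per_group)                # 1000 per arm
    print(treatments * elections_per_treatment)                      # 800 in total
    print(treatments * elections_per_treatment * voters_per_group)   # 4000 in total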
Minimum detectable effect size for main outcomes (accounting for sample design and clustering)
IRB

Institutional Review Boards (IRBs)

IRB Name
Ma Yinchu School of Economics
IRB Approval Date
2024-09-15
IRB Approval Number
MYSOE-2024003

Post-Trial

Post Trial Information

Study Withdrawal


Intervention

Is the intervention completed?
No
Data Collection Complete
Data Publication

Data Publication

Is public data available?
No

Program Files

Program Files
Reports, Papers & Other Materials

Relevant Paper(s)

Reports & Other Materials