Experimental Design
Each experimental treatment consisted of two independent sessions, each lasting 25 periods. Each period simulated an election involving two candidates (labelled A and B) and five voters (denoted as 1, 2, 3, 4, and 5). While longer sessions with 50 or 60 periods are common in experimental studies on elections, we opted for shorter sessions because our design used human voters rather than computer algorithms.
Human voters increased the time required for each election, as candidates made their decisions first, followed by the voting process, during which all five voters cast their votes independently. To manage session length while ensuring robust data collection, we maintained the smallest meaningful number of voters per election. This constraint was relaxed in treatment 2, where voters were replaced by computer algorithms, allowing a more streamlined process focused on candidates’ behavior.
At the start of each session, 28 participants were randomly and anonymously assigned the role of either ``candidate'' or ``voter.'' These roles remained fixed throughout all 25 elections. Participants were informed of their own roles but not the identities of others they were matched with. In treatment 2, the number of participants was reduced, as voters were automated, and only 8 human subjects were assigned the roles of candidates A and B (see Table~\ref{tab-lab_overview} for further details).
Within each election, participants were randomly assigned to one of four groups, each consisting of seven members. Although group composition changed between elections, the initial role (candidate or voter) of each participant remained fixed. For instance, a participant assigned the role of candidate A in group 1 during the first election might be candidate A in group 3 in the second election, and so forth. The same applied to voters, except in treatment 2, where the electorate was automated. Group re-matching was implemented in each period to mitigate potential collusion among candidates and voters.
The experimental design employed neutral language to avoid introducing unintended positive or negative connotations unrelated to the incentive structure. For example, participants assigned the role of candidate were instructed to make two decisions in each election: (a) a budget, expressed as a percentage (a number between 1 and 99) of the total voter income, which was set at 100 Experimental Currency Units (ECUs); and (b) an allocation of the collected income between good 1, which benefits all voters equally, and good 2, which benefits only the candidates.
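For concreteness (the symbols \(b\), \(g_1\), and \(g_2\) are introduced here purely for exposition and were not shown to participants), a proposed budget of \(b = 32\) raises \(0.32 \times 100 = 32\) ECUs from the group's total voter income, and the candidate's allocation decision splits this amount between the two goods:
\[
g_1 + g_2 = \frac{b}{100} \times 100 = 32, \qquad g_1, g_2 \geq 0,
\]
where \(g_1\) denotes spending on good 1 and \(g_2\) spending on good 2.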
Voters observed the budget and the proposed spending on good 1 from each candidate before independently voting for their preferred candidate (A or B). In treatment 3, voters were also explicitly informed about the spending on good 2. This variation in the information setting allowed us to assess whether voters respond differently when fiscal misuse is explicitly disclosed than when they must infer it from the other components of fiscal policy. This comparison highlights the role of transparency in fostering accountability and the challenges voters face in less transparent settings.
Participants were informed about their payoff structure at the beginning of the session. Candidates’ payoffs depended on the number of votes their proposed budget and allocation received, which determined their ``share of power.'' Power directly influenced their earnings from the amount allocated to good 2. In most treatments, power was proportional to the number of votes received. For instance, if a candidate received three votes out of five, their power share was 0.6. However, in treatment 4, power increased disproportionately with the percentage of votes received, allowing us to examine how non-linear returns affect candidates’ strategic decisions and competition for voter support.
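To fix notation (ours, for exposition only): with \(v \in \{0, 1, \dots, 5\}\) votes received, the proportional-power treatments imply
\[
\text{power} = \frac{v}{5},
\]
so that three votes yield a power share of \(3/5 = 0.6\). Since earnings from good 2 scale with power, a candidate's per-period gain from good 2 can be sketched as \(\text{power} \times g_2\); this multiplicative form is an illustrative assumption, with the exact payoff function given in the instructions.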
Voters’ payoffs were determined by their net income, the benefit derived from good 1, and their candidate bias. Initial incomes were randomly assigned, with voters 1–4 receiving values between 18 and 25 ECUs, and voter 5 receiving the remainder of the group’s total income (100 ECUs). A candidate’s proposed budget affected voters’ net incomes. For instance, if a voter’s initial income was 25 ECUs and a candidate proposed a budget of 32\%, the voter’s net income would be \( (100-32)\% \times 25 = 17 \) ECUs. Candidate bias, a value randomly drawn from the interval \([-0.25, 0.25]\), influenced voters’ payoffs by favouring one candidate over the other, independent of policy proposals.
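Schematically (the additive decomposition below, the benefit function \(u(\cdot)\), and the way the bias term enters are illustrative assumptions; the exact payoff formula appeared in the instructions), voter \(i\)'s payoff under candidate \(j\)'s proposal combines the three components just listed:
\[
\pi_{ij} \;=\; \underbrace{\Bigl(1 - \tfrac{b_j}{100}\Bigr) y_i}_{\text{net income}} \;+\; \underbrace{u\!\left(g_1^{\,j}\right)}_{\text{benefit from good 1}} \;+\; \underbrace{\beta_{ij}}_{\text{candidate bias},\ \beta_{ij}\in[-0.25,\,0.25]},
\]
where \(y_i\) is voter \(i\)'s initial income, \(b_j\) is candidate \(j\)'s proposed budget (in percent), and \(g_1^{\,j}\) is the amount candidate \(j\) allocates to good 1. With \(y_i = 25\) and \(b_j = 32\), the net-income term equals \(0.68 \times 25 = 17\) ECUs, as in the example above.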
To assist participants in evaluating their decisions, an `expected payoff calculator' was provided on the decision screen. This tool allowed subjects to experiment with different combinations of their own and others’ choices to observe the resulting per-period payoffs. Participants could use the calculator as many times as needed before making their decisions. A log-sheet was also provided for participants to track their earnings throughout the experiment, helping them better understand how different decisions impacted their payoffs in previous rounds.
Control questions were administered at the start of each session to ensure participants understood the instructions. After the final election period, and before receiving payment, participants completed a post-experiment survey. The survey included questions about their experience during the experiment, their decision-making process, and related factors. Responses were anonymized and did not affect participants' per-period payments.\footnote{Copies of the lab instructions, the control questions, and the post-experiment survey are available from the authors upon request.}