BallotBot: Can Chatbots Strengthen Direct Democracy?

Last registered on May 21, 2024


Trial Information

General Information

BallotBot: Can Chatbots Strengthen Direct Democracy?
Initial registration date
May 14, 2024


First published
May 21, 2024, 10:46 AM EDT




Primary Investigator

ETH Zurich

Other Primary Investigator(s)

PI Affiliation
ETH Zurich
PI Affiliation
University of Bologna

Additional Trial Information

In development
Start date
End date
Secondary IDs
Prior work
This trial does not extend or rely on any prior RCTs.
Are AI technologies complementary to direct democratic institutions? This study provides empirical evidence on this question in the context of Swiss Federal Referendums. The starting point is BallotBot, a newly built GPT-powered chatbot with access to the full-text source material for referendum measures in Switzerland. In a survey experiment to be run at the end of May, we randomize respondents to receive access to BallotBot (treatment) or to a government-provided voter guide (active control). Respondents then complete a set of survey tasks designed to answer the following questions: (1) Do chatbots reduce the cost of acquiring information on referendum issues? (2) Does chatbot access increase knowledge about referendum issues? (3) Does chatbot access increase (expected) referendum turnout? These results provide an early indication of how AI technologies will shape democratic institutions in the years to come.
External Link(s)

Registration Citation

Ash, Elliott, Sergio Galletta and Giacomo Opocher. 2024. "BallotBot: Can Chatbots Strengthen Direct Democracy?." AEA RCT Registry. May 21.
Experimental Details


We randomly assign a sample of Swiss voters to use either a chatbot or a voter guide to answer a set of closed-ended questions about the issues to be voted on 9 June. We measure the time they need to complete the task, the perceived cost, the share of correct answers, their confidence in each answer, and their willingness to ask further questions. This allows us to understand how these two information sources differ and which works better at turning political information into voter knowledge. Ultimately, we aim to shed light on the potentially beneficial role of AI in strengthening direct democracy.
Intervention Start Date
Intervention End Date

Primary Outcomes

Primary Outcomes (end points)
Share of correct answers, degree of confidence, objective cost of answering the questions, perceived cost of answering, willingness to ask further questions, and time spent on the resource when answering further questions.
Primary Outcomes (explanation)
The degree of confidence is measured on a 4-point scale ("How certain are you about your answer?" 1 = highly uncertain, 2 = uncertain, 3 = certain, 4 = highly certain). The objective cost is measured as the time needed to answer each question. The perceived cost is measured through a bidding game similar to the Becker–DeGroot–Marschak method. Willingness to ask further questions is measured as the probability of choosing a question the participant is interested in and looking up the answer at the end of the survey, with no incentive in place. Time spent on the resource is measured as the number of seconds the respondent spends outside the survey window.

Secondary Outcomes

Secondary Outcomes (end points)
We collect feedback on the functioning of the chatbot from the experimental group that received it, to explore how participants compare it with the voter guide.
Secondary Outcomes (explanation)

Experimental Design

Experimental Design
This is a survey experiment. After collecting participants' baseline characteristics (demographics, socioeconomic conditions, attitudes toward AI, and baseline knowledge of the election matters), we randomize respondents into two groups. The first group receives the PDF version of the official voter guide (a document the government distributes to all voters); the second group receives an internally developed chatbot trained on the voter guide's text to answer any query about election matters. Respondents answer five closed-ended questions on election matters using the resource they were randomized into. For each correct answer, they earn 0.2 CHF on top of their participation fee. They can then answer an additional question, bidding for their remuneration: a random number is drawn, and if it is higher than the bid, the respondent does not take the additional question; otherwise, she takes it and is paid her bid if her answer is correct. Respondents are then encouraged to ask the resource a question of their choice; this is not incentivized. Finally, we collect feedback on the functioning of the chatbot from the chatbot group.
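The bidding stage above follows a Becker–DeGroot–Marschak-style rule. A minimal sketch of the payment logic, assuming the random number is drawn uniformly on [0, 1] (the actual support and currency scaling are not specified in the registration):

```python
import random

def bdm_outcome(bid, rng=random.random):
    """BDM draw for the optional extra question.

    `bid` is the respondent's stated price for answering one more
    question. A random number is drawn (here: uniform on [0, 1],
    an illustrative assumption). If the draw exceeds the bid, the
    respondent skips the question; otherwise she answers it and is
    paid her bid if her answer is correct.

    Returns (takes_question, payment_if_correct, draw).
    """
    draw = rng()
    takes_question = draw <= bid
    payment_if_correct = bid if takes_question else 0.0
    return takes_question, payment_if_correct, draw
```

Under this rule, bidding one's true reservation price is optimal: overbidding risks answering at a price below one's true cost, while underbidding forgoes profitable draws.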
Experimental Design Details
Randomization Method
Qualtrics randomization mechanism.
Randomization Unit
Was the treatment clustered?

Experiment Characteristics

Sample size: planned number of clusters
Sample size: planned number of observations
1000 individuals
Sample size (or number of clusters) by treatment arms
Minimum detectable effect size for main outcomes (accounting for sample design and clustering)

Institutional Review Boards (IRBs)

IRB Name
IRB Approval Date
IRB Approval Number


Post Trial Information

Study Withdrawal



Is the intervention completed?
Data Collection Complete
Data Publication

Data Publication

Is public data available?

Program Files

Program Files
Reports, Papers & Other Materials

Relevant Paper(s)

Reports & Other Materials