One in a Million: Field Experiments on Perceived Closeness of the Election and Voter Turnout
Last registered on August 02, 2018


Trial Information
General Information
Initial registration date
August 01, 2018
Last updated
August 02, 2018 1:53 AM EDT
Primary Investigator
Other Primary Investigator(s)
PI Affiliation
Purdue University
PI Affiliation
UC Berkeley
PI Affiliation
Yale University
Additional Trial Information
Start date
End date
Secondary IDs
A common feature of many models of voter turnout is that increasing the perceived closeness of the election should increase voter turnout. However, cleanly testing this prediction is difficult and little is known about voter beliefs regarding the closeness of a given race. In a field experiment during the 2010 US gubernatorial elections, we elicit voter beliefs about the closeness of the election before and after showing different polls, which, depending on treatment, indicate a close race or a not close race. Subjects update their beliefs in response to new information, but systematically overestimate the probability of a very close election. However, the decision to vote is unaffected by beliefs about the closeness of the election. A follow-up field experiment, conducted during the 2014 gubernatorial elections but at much larger scale, also points to little relationship between poll information about closeness and voter turnout.
External Link(s)
Registration Citation
Gerber, Alan et al. 2018. "One in a Million: Field Experiments on Perceived Closeness of the Election and Voter Turnout." AEA RCT Registry. August 02. https://doi.org/10.1257/rct.3199-1.0.
Former Citation
Gerber, Alan et al. 2018. "One in a Million: Field Experiments on Perceived Closeness of the Election and Voter Turnout." AEA RCT Registry. August 02. http://www.socialscienceregistry.org/trials/3199/history/32522.
Experimental Details
Why people vote is a core question in political economy. In classic instrumental models of voting, natural assumptions lead to the prediction that individuals are more likely to vote when they believe the election to be close. We conduct two large-scale randomized controlled trials (RCTs) in the US that exogenously shift voters' beliefs about the closeness of the election by randomly showing different polls to voters.
Intervention Start Date
Intervention End Date
Primary Outcomes
Primary Outcomes (end points)
Voter turnout
Primary Outcomes (explanation)
Secondary Outcomes
Secondary Outcomes (end points)
Secondary Outcomes (explanation)
Experimental Design
Experimental Design
The first RCT was conducted during the 2010 US gubernatorial election cycle and included over 16,000 voters. Using computer surveys in 13 US states, we asked potential voters to predict the vote margin, as well as their beliefs about the chance that the governor's race would be very close (e.g., decided by fewer than 100 votes). Exploiting variation in real-world polls prior to the election, we divided subjects into groups. We informed the "Close" group of the results of a poll indicating the narrowest margin between the two candidates, whereas the "Not Close" group saw a poll indicating the widest gap between the candidates. In addition, a third ("Control") group received no poll information and was not surveyed. After the election, we used administrative data to determine whether people actually voted.
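The 2010 assignment described above can be sketched as a simple even split of voters across the three arms. This is an illustrative reconstruction, not the study's actual code; the function name and arm labels are hypothetical.

```python
import random

def assign_arms(voter_ids, seed=0):
    """Evenly split voters into three arms (close poll, not close
    poll, control). Sketch only: labels are illustrative, not the
    study's internal names."""
    rng = random.Random(seed)
    ids = list(voter_ids)
    rng.shuffle(ids)  # randomize order before cycling through arms
    arms = ["close", "not_close", "control"]
    return {vid: arms[i % 3] for i, vid in enumerate(ids)}
```

Cycling through the arm list after a shuffle guarantees arm sizes differ by at most one voter, matching the roughly even three-way split reported in the registry.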

We conducted a second, much larger RCT during the 2014 gubernatorial elections. We randomly mailed postcards to about 80,000 households (125,000 individuals), again providing information from either the closest or the least close poll. Including the control households that did not receive postcards, the sample comprises over 1.38 million voters. A cross-randomized treatment provided an expert prediction of whether the electorate size would be smaller or larger.
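The 2014 household-level assignment, with roughly 90% of households held as controls and the treated 10% split evenly across the 2x2 cells (poll closeness crossed with the electorate-size prediction), could be sketched as follows. This is a hypothetical illustration; the cell labels and proportions follow the registry's description, not the study's actual code.

```python
import random

def assign_household(rng):
    """Assign one household: roughly 90% control; the treated 10%
    are split evenly across the 2x2 treatment cells. Labels are
    illustrative placeholders."""
    if rng.random() < 0.9:
        return ("control", None)
    # Treated households: independent coin flips over the two
    # cross-randomized factors yield four equally likely cells.
    poll = rng.choice(["close_poll", "not_close_poll"])
    electorate = rng.choice(["small_electorate_likely", "small_electorate_not_likely"])
    return (poll, electorate)
```

Because the two factors are drawn independently, each of the four treated cells receives about a quarter of the treated households in expectation.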
Experimental Design Details
Randomization Method
Randomization done by a computer.
Randomization Unit
Individual for 2010 experiment. Household for 2014 experiment.
Was the treatment clustered?
Experiment Characteristics
Sample size: planned number of clusters
About 16,000 voters in 2010 RCT. About 875,000 households in 2014 RCT.
Sample size: planned number of observations
About 16,000 voters in 2010 RCT. About 1.38 million voters in 2014 RCT.
Sample size (or number of clusters) by treatment arms
In 2010, a roughly even split between 3 arms (close poll, not close poll, control). In 2014, about 90% control and 10% treated, with the treated group split roughly evenly between 4 arms in a 2x2 design (Close vs. Not Close Poll; Small Electorate Likely vs. Not).
Minimum detectable effect size for main outcomes (accounting for sample design and clustering)
IRB Name
IRB Approval Date
IRB Approval Number
Post Trial Information
Study Withdrawal
Is the intervention completed?
Is data collection complete?
Data Publication
Data Publication
Is public data available?
Program Files
Program Files
Reports, Papers & Other Materials
Relevant Paper(s)