Experimental Design
In summary, we designed our experiment to test how new information about government spending, which either improves or worsens reciprocal beliefs, affects both an individual's stated behaviors and their actual behavioral response in the form of filing taxes.
To test this, we recruit participants from across Ecuador using email invitations that direct them to an online survey experiment. In this experiment, we elicit each individual's current preferences and beliefs about government spending allocations and randomize an information treatment that reveals the actual distribution from an entire year of spending decisions. In effect, some participants' beliefs are likely to improve while others' may worsen. We use this exogenous variation to test for effects on stated behaviors through an outcome questionnaire that collects information on three primary outcomes of interest: 1) support for the government, 2) perceptions about taxes, and 3) affective political polarization. Hence, the first part of our experiment corresponds to an artefactual survey experiment.
We strategically time the experiment so that we can test how being misinformed affects participants' behavior when filing their actual self-employed taxes with the Ecuadorian IRS. The second part of our experimental design is thus a natural field experiment: participants make real decisions, in the form of filing taxes, in their own environment without any direct or indirect involvement by the research team.
We recruit self-employed taxpayers to participate on dates close to each individual's tax filing deadline, which we can identify because we observe each taxpayer's unique RUC. Throughout July 2021, we sent out emails with links to the online survey experiment. A copy of the recruitment email is displayed in an accompanying figure. The email says (in Spanish), ``Are you interested in participating in a research project and the opportunity to win gift cards up to $500? Our team of researchers from UDLA and ACU need your help by completing a short survey on political preferences. If you are interested, please click on the link to begin.'' Our email is worded to limit sample selection and to avoid revealing any of the research's objectives or outcomes. These emails are scheduled to be sent one day before an individual's specific tax filing deadline.
Following this step, we target the January 2022 tax filing deadline in the same manner. For those who completed the initial survey, we send out another round of email invitations two days before their scheduled tax filing deadline. In this second round, we remind individuals of their responses from the first survey and have them complete a second outcome questionnaire. We note that we use only the first survey's outcome questionnaire to estimate artefactual effects.
In each survey, participants are incentivized with entry into a lottery drawing for multiple gift cards worth up to $500. For both phases, we offer one gift card of $500, five gift cards of $100, and ten gift cards of $50. To be entered into the drawing, participants must complete the survey.
After clicking the participation link in the email invitation, potential participants are provided basic information about the survey and asked to consent to participate in the experiment. Upon consenting, all participants answer basic demographic questions, including race, education, and political views.
Following this step, we use a modified version of McNamara and Mosquera (2022) to elicit individual preferences and expectations towards government spending allocations. Given two different spending categories that a government can allocate its budget towards, we ask participants how much of a given $100 they would prefer to have allocated between the two, giving a measure of individual spending preferences, P_i. Participants are then asked how they believe the current government allocates across the two categories, giving a measure of individual spending beliefs, E_i. Hence, with data on the actual spending allocation R, we can back out and differentiate between individuals who have negatively inflated beliefs (for whom the truth is better news than they expected, so revealing R improves beliefs) and those who have positively inflated beliefs (for whom the truth is worse news, so revealing R worsens beliefs). Consider the following hypothetical.
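To make the hypothetical concrete, the sketch below classifies a participant's prior beliefs under one plausible rule, namely that beliefs are evaluated by their distance from the participant's preferred allocation: beliefs are negatively inflated when the believed allocation lies farther from the preference than the truth does (so revealing R is good news), and positively inflated when it lies closer (so revealing R is bad news). The function name and all numbers are illustrative, not the paper's formal definitions.

```python
def classify_beliefs(P, E, R):
    """Classify a participant's prior beliefs relative to the truth.

    P: preferred allocation to category A (out of $100)
    E: believed actual allocation to category A
    R: true allocation to category A

    Illustrative rule (an assumption, not the paper's definition):
    compare how far the belief and the truth each sit from the
    participant's preferred allocation.
    """
    believed_gap = abs(E - P)
    actual_gap = abs(R - P)
    if believed_gap > actual_gap:
        return "negatively inflated"   # truth is better news than expected
    elif believed_gap < actual_gap:
        return "positively inflated"   # truth is worse news than expected
    return "accurate"


# Hypothetical participant who prefers $70 to education when the
# government actually allocates R = $60:
print(classify_beliefs(P=70, E=40, R=60))  # believes only $40 -> negatively inflated
print(classify_beliefs(P=70, E=65, R=60))  # believes $65 -> positively inflated
```

Under this rule, revealing R to the first participant improves their view of government spending, while the same revelation worsens the second participant's view, which is exactly the exogenous variation the design exploits.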
In this survey, participants are asked to allocate funds between A) education programs and B) payments on government debt. These categories were selected because they are funded similarly yet draw partisan criticism. Once preferences and beliefs are elicited, participants are randomized into a control or treatment group. In the control group, a simple summary of their responses is provided. In the treatment group, individuals are provided a summary of their responses and are then shown the actual spending distribution, R. As highlighted in the example above, this can either improve or worsen an individual's beliefs. In light of this, we can estimate effects separately for these two groups by using their respective counterparts in the control group. That is, we compare treated participants holding Negatively Inflated Beliefs with untreated participants holding Negatively Inflated Beliefs to estimate the likely impacts of improving beliefs, and we compare treated participants holding Positively Inflated Beliefs with their untreated counterparts to estimate the likely impacts of worsening beliefs. Following treatment assignment, all participants are directed to complete an outcome questionnaire.
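These stratified comparisons amount to simple difference-in-means estimators, which can be sketched as follows (the notation here is ours, not necessarily the paper's):
\[
\hat{\tau}^{NIB} = \bar{Y}^{NIB}_{T=1} - \bar{Y}^{NIB}_{T=0},
\qquad
\hat{\tau}^{PIB} = \bar{Y}^{PIB}_{T=1} - \bar{Y}^{PIB}_{T=0},
\]
where \(\bar{Y}^{g}_{T=t}\) denotes the mean outcome among participants of belief type \(g\) (negatively or positively inflated) with treatment status \(t\), so \(\hat{\tau}^{NIB}\) estimates the effect of improving beliefs and \(\hat{\tau}^{PIB}\) the effect of worsening them.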
All participants are prompted with a questionnaire that contains outcome questions in three primary outcome categories of interest. For each category, we construct an index by weighting each variable by the inverse of the covariance among the variables within the category. To ensure each outcome has the same directional meaning as the other variables contained in an index, we reorient some outcomes by multiplying them by -1.
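A minimal sketch of this index construction is given below, in the spirit of inverse-covariance weighting: each outcome is standardized, some columns are sign-flipped so all point in the same direction, and weights are taken from the inverse of the covariance matrix of the standardized outcomes. The function name and exact normalization are our assumptions; the paper's construction may differ in detail.

```python
import numpy as np


def icw_index(Y, flip=None):
    """Inverse-covariance-weighted index for one outcome category.

    Y: (n_participants, k_outcomes) array of raw outcomes.
    flip: optional list of column indices to multiply by -1 so every
          outcome has the same directional meaning (as the text describes).

    Illustrative sketch; assumes outcomes are not perfectly collinear,
    so the covariance matrix is invertible.
    """
    Y = np.asarray(Y, dtype=float).copy()
    if flip:
        Y[:, flip] *= -1.0
    # Standardize each outcome to mean 0, standard deviation 1
    Z = (Y - Y.mean(axis=0)) / Y.std(axis=0, ddof=0)
    # Weights: row sums of the inverse covariance matrix of the
    # standardized outcomes, normalized to sum to one
    Sigma_inv = np.linalg.inv(np.cov(Z, rowvar=False))
    w = Sigma_inv.sum(axis=1)
    w = w / w.sum()
    return Z @ w
```

This weighting down-weights outcomes that are highly correlated with others in the category, so redundant questions do not dominate the index.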
Once participants complete the outcome questionnaire, the initial survey is finished. We do not interact with participants after this point, other than to facilitate payments, until their next filing period in January. At that later date, participants are emailed again, reminded of their earlier participation, and asked to complete another outcome questionnaire. After this questionnaire, our only further interaction with participants is to facilitate payments. Since tax data in Ecuador are publicly available, we can then track how the treatments impact real behaviors in a natural field experiment setting, without any mention to or interaction with participants.