On the mechanics of kleptocratic states: Administrators’ power, protectors, and taxpayers’ false confessions

Last registered on July 21, 2016


Trial Information

General Information

On the mechanics of kleptocratic states: Administrators’ power, protectors, and taxpayers’ false confessions
Initial registration date
July 21, 2016

First published
July 21, 2016, 4:34 PM EDT



Primary Investigator

UC Berkeley, Haas

Other Primary Investigator(s)

PI Affiliation
New York University, Political Science Department
PI Affiliation
University of Toronto, Munk School of Global Affairs
PI Affiliation
University of Pittsburgh, Political Science Department
PI Affiliation
Harvard Kennedy School of Government

Additional Trial Information

Ongoing
Start date
End date
Secondary IDs
Powerful state administrators can take advantage of their positions to extract resources,
especially when political accountability is broken. We conjecture that administrators' power depends
on their ability to inflict harm using the power of office, their ability to mobilize powerful
networks, and their privileged access to information. Measuring transfers to administrators
is challenging because they often involve secrecy, and surveys often rely on recall. To circumvent
this challenge, we developed a smart phone application and monitor 400 households in the
Democratic Republic of the Congo, who privately report, every day for five months, the universe
of payments they make. The DRC offers a well-suited environment because administrators systematically
use their power to extract payments from citizens at unusually high rates. We deploy
three randomized interventions aimed at shifting the balance of power between administrators
and households. First, since administrators systematically take advantage of an extremely
confusing tax code, we provide pro bono weekly tax consulting to a group of households.
Second, to affect the bargaining power that stems from unequal access to social networks, we
extend a link from a reputable civil society organization to randomly selected citizens; the organization
uses its political leverage to protect the selected citizens. Third, we organize a city-wide
campaign to expose administrators known to have committed abuses in a random sample of neighborhoods.
External Link(s)

Registration Citation

Henn, Soeren et al. 2016. "On the mechanics of kleptocratic states: Administrators’ power, protectors, and taxpayers’ false confessions." AEA RCT Registry. July 21. https://doi.org/10.1257/rct.1443-1.0
Former Citation
Henn, Soeren et al. 2016. "On the mechanics of kleptocratic states: Administrators’ power, protectors, and taxpayers’ false confessions." AEA RCT Registry. July 21. https://www.socialscienceregistry.org/trials/1443/history/9544
Sponsors & Partners

There is information in this trial unavailable to the public.
Experimental Details


Intervention Start Date
Intervention End Date

Primary Outcomes

Primary Outcomes (end points)
Our key outcome data come from a smart phone application we developed for this project and distributed to households and businesses for daily entry and weekly upload. Participants in treatment and control groups reported weekly on what they had paid in formal and informal taxes, whether they had negotiated to lower their tax payments, whether that negotiation was successful, and their attitudes toward paying taxes. Since we ensured that the smart phone data collection activity and the ODEP tax intervention activities were independent of one another, we can be confident that any reporting bias is orthogonal to treatment assignment. We also draw on household and business surveys for key variables used to check balance, analyze heterogeneous effects, and construct controls.

To analyze payments, we use the estimated payments of informal and formal taxes, allowing for informal (and formal) payments to non-state actors. In addition, we collected the following variables, which we will exploit in the analysis: whether a negotiation occurred; the starting amount, final amount, and the difference between them; satisfaction with the tax payment; and reasons for paying or not paying associated with bargaining. Below we provide a rationale for the categorization of taxes by their degree of formality.

Additionally, we use project implementation data on how the treatments were actually implemented: how often participants were called by the ODEP advisors (client dataset), tracking sheets detailing the nature of each phone call (including which taxes were discussed, abuses reported, etc.), and qualitative exit interviews conducted with recruited citizens at the end of the smart phone reporting period to check the quality of the ODEP consulting. In the analysis, we use the data collapsed at the week level for each respondent.

The smart phone system allows us to overcome the under-reporting that may arise in retrospective surveys. Average payments are higher in the smart phone system, likely because respondents do not need to recall their payments over long periods of time, while the proportions of formal and informal payments are similar.

A key challenge is how to measure formal and informal payments. Definitions of what constitutes formal and informal taxation have been highly contested within existing research. Following recent work, we define taxation as "all payments—whether cash or in kind, including labor time—that are made as a result of the exercise of political power, social sanction or armed force." Within this definition, identifying and defining formal taxes is straightforward: formal taxes refer to any compulsory tax or tax-like payment stipulated in the statutory legal framework. At the local government level this includes levies formally referred to as "taxes," as well as licensing fees, rates, and user fees for particular services. In practice, user fees are often particularly prominent as a means of financing service provision. User fees are "imposed on specific persons, activities, or properties that receive a service or benefit" in return. Common fees in developing countries like the DRC include those to access education and health services, obtain business licenses, or operate in markets. Fees are often viewed as distinct from taxes because, unlike with taxation, there is a direct and immediate relationship between fee payments and the goods and services received in return. Yet, given the prevalence of user fees and the fact that they constitute compulsory payments in exchange for government-provided goods and services, we measure them as well.

In this RCT, we use three approaches to measure formal and informal taxation.

We first obtain formality from households' and businesses' self-reports of whether each payment they make is a formal, state-law-backed payment or instead an informal payment (to facilitate the process, for instance). However, relying on households' self-assessment of formality is problematic on multiple grounds. To begin with, a motivation of this paper is precisely that households do not know what their legal liabilities are, so self-reported formality may contain biases. Furthermore, the treatments themselves may induce households to relabel taxes between formal and informal in their reporting, without changing the payments; this can induce non-classical measurement error correlated with the treatment. Also, we know that a large fraction of payments made by households are "formal" in the sense that they are payments households should make according to the law, but are nonetheless bribes. There is a sense of formality in the social convention of paying the statutory taxes to tax officials, even if it is common knowledge that these will be used for the private consumption of the official and his superior.

Second, we use the pre-treatment survey data to construct formality scores for each tax category, exploiting the variation across categories in the self-reported share of payments that are formal. To construct a measure of formality where self-declared formality is not endogenous to the treatments, we use these scores in the main smart-phone analysis to estimate, probabilistically, the share of payments that are formal. This captures changes in payments that are immune to relabeling and the associated non-classical measurement bias, since relabeling would only occur within categories.
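The score-based approach can be sketched as follows. The tax categories, payment amounts, and survey responses below are invented for illustration only; the point is simply that each payment's formal share is imputed from a pre-treatment, category-level score rather than from the respondent's (potentially endogenous) label:

```python
import pandas as pd

# Hypothetical pre-treatment survey: each payment record carries a tax
# category and a self-reported formality flag (1 = formal, 0 = informal).
survey = pd.DataFrame({
    "category": ["market_fee", "market_fee", "road_toll", "road_toll", "license"],
    "formal":   [1, 0, 0, 0, 1],
})

# Category-level formality score: share of survey payments reported formal.
scores = survey.groupby("category")["formal"].mean().rename("formality_score")

# Illustrative smart-phone payment reports (amounts are made up).
payments = pd.DataFrame({
    "category": ["market_fee", "road_toll", "license"],
    "amount":   [500.0, 200.0, 1000.0],
})

# Probabilistic split of each payment into formal and informal components.
# Because the score is fixed pre-treatment, relabeling by respondents
# cannot move amounts between the two components within a category.
payments = payments.join(scores, on="category")
payments["formal_amount"] = payments["amount"] * payments["formality_score"]
payments["informal_amount"] = payments["amount"] * (1 - payments["formality_score"])
```

Treatment effects on `formal_amount` and `informal_amount` then reflect changes in actual payments within categories, not changes in how respondents label them.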

Third, since self-reporting the formality of a payment, and its meaning, raises concerns of non-classical measurement error, we can focus on total payments, for which predictions are immune to endogenous relabeling by households. Because any payment to a tax official in the DRC has no guarantee of ending up in the state coffers, one approach is to consider payments to tax officials who conduct visits to be bribes, since formal taxes would instead be paid at the office.

Primary Outcomes (explanation)

Secondary Outcomes

Secondary Outcomes (end points)
Secondary Outcomes (explanation)

Experimental Design

Experimental Design
We randomly sampled 576 households and 384 businesses on 96 avenues in Kinshasa to participate in household and business surveys (sampling was implemented in August and September 2015). From this pool, the research team recruited households and businesses to participate in an additional smart phone data collection activity. (We further randomized how we framed the invitations to participate in the smart phone data collection activity for all those who were eligible; this is a separate experiment.) A respondent was considered eligible for recruitment into the smart phone data collection activity if they were literate enough to read or write a letter in French and if the enumerator assessed them as having been willing to participate in the survey. If a respondent met these conditions and the target for the avenue had not yet been reached, the enumerator invited the respondent to take part in the smart phone activity. The targets for the avenues were pre-determined and based on the first step of the random assignment, with an overall target of 200 households and 200 businesses. To ensure that the subsample of participants in the smart phone survey was random conditional on eligibility constraints, enumerators visited households on each avenue in a random order.

Enumerators then invited households who agreed to participate to attend training at the office of the research team in Kinshasa. A local research team provided, at these trainings, instructions on how to use the smart phones and how to enter and upload tax data on a weekly basis for up to 20 weeks. The research team recruited households on a rolling basis as enumerators implemented the survey (approximately eight weeks of training were held). In return for their regular reporting, participants received a small compensation, and they were allowed to keep the smart phones at the conclusion of the study.
The training emphasized that the smart phone data collection activity was being undertaken by the same research team that had conducted the household and business surveys. A few days after the end of the smart phone training, individuals were contacted by an ODEP advisor to learn about the ODEP tax activities and to indicate their willingness to participate. The ODEP advisors used the following script: "I am a representative from ODEP, an emerging organization that works to improve the fiscal system in the DRC and to help households better confront the complex fiscal administration of the DRC and the frequency of abuses by tax collectors. We are partly funded by DFID, the British development organization, and we sit at the table with the government in order to guarantee transparency of their decisions. We represent no political interest, except the interest of the people, and aim to improve the Congolese ability to operate in this predatory and confusing tax environment. You can contact us at x and our website is xxxx.xxx. We are in no way connected to the data collection training that you received or the data collection itself. We are contacting you because we have been informed you are concerned about your taxes, and we are going to make weekly calls to you in order to provide you with support on your taxes. We really hope that our support will help improve the fiscal problem in the DRC. Too many taxes are paid to private interests as a burden to households and we want to help you. Everyone would rather prefer that what you pay goes to public coffers so you can benefit from the services the state owes you, isn't that the case?" The ODEP tax advisor then proceeded to obtain consent and record the contact information of those who were willing to participate in the ODEP consulting activities.

Experiment 2: Anti-corruption campaign

After gathering initial data, we implemented a campaign in 50% of the neighborhoods of Kinshasa. The campaign aimed at increasing the cost of engaging in abusive expropriation for the lower-level tax officials in the targeted neighborhoods. The campaign started with meetings with community mayors and neighborhood leaders, where the organization explained that it would start exposing individual tax collectors suspected of abuse to the public and to their supervisors. In each neighborhood, it presented a detailed list of the abuses it had recorded. In addition, it contacted the key relevant supervisors to report the abuses that took place in the selected neighborhoods. For the first three months of the data collection, the campaign was not implemented; then, a random sample of neighborhoods was selected to receive the targeted campaign at the start of the fourth month. This allows us to use a within-neighborhood design and difference-in-differences estimation. We obtain 70 neighborhoods, 35 in treatment and 35 in control, with daily data during 4 months (the pre-campaign period is 3 months, the post-campaign period is 1 month). In addition, we use a target of 400 smartphone holders to examine the impact of the campaign.
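The within-neighborhood difference-in-differences comparison can be sketched as follows. The neighborhoods and payment amounts here are invented for illustration and are not the study's data:

```python
import pandas as pd

# Toy neighborhood-period means: treated neighborhoods receive the campaign
# at the start of month 4; the earlier months form the pre-period.
panel = pd.DataFrame({
    "neighborhood": ["A", "A", "B", "B"],
    "treated":      [1, 1, 0, 0],
    "post":         [0, 1, 0, 1],              # 1 = after campaign launch
    "payments":     [100.0, 80.0, 100.0, 95.0],
})

# Difference-in-differences: change in treated neighborhoods minus the
# contemporaneous change in control neighborhoods.
means = panel.groupby(["treated", "post"])["payments"].mean()
did = (means[(1, 1)] - means[(1, 0)]) - (means[(0, 1)] - means[(0, 0)])
print(did)  # -15.0
```

With daily data, the same contrast would typically be estimated by regressing payments on a treated-by-post interaction with neighborhood and day fixed effects, clustering standard errors at the neighborhood level.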

To preserve the contrast between treatment and control neighborhoods, the campaign committed, due to limited resources, to operating only in the treatment neighborhoods. The control neighborhoods received the guarantee that, no matter what happened there, ODEP would not use the information.
Experimental Design Details

The design followed these protocols. First, ODEP activities were kept separate from the smart phone data collection activities to minimize the potential for reporting or social desirability bias. Second, participation in the smart phone reporting system was voluntary and unconditional. Third, the introduction script was generic and made no mention of ODEP. A list of participants in the training activities was then passed to the research team, which implemented the randomization as described below.

Two trained ODEP advisors (one specializing in household taxes, the other in business taxes) implemented the treatments by calling participants on a weekly basis for five months. Each call followed a highly structured protocol that met the requirements of each treatment and minimized potential spillover in treatment content. Both treatments also emphasized that any data about payments provided by citizens would be kept strictly confidential, so that any reports of abuses could not be linked back to them. In partnership with ODEP, we implemented the interventions described in the interventions section, overlapped in a 2×2 factorial design.

While the target sample of the experiment was 200 households and 200 businesses across the four experimental conditions, our final sample is 310 individuals, reporting daily data for up to 5 months. Taking into account the likelihood of potential spillovers if we were to assign individuals within avenues, we first randomly assigned avenues to treatment and control groups. In other words, of 96 avenues within Kinshasa, we assigned 48 to serve as a pure control and the other 48 to have ODEP activities. Within each of the pure control avenues, we set a target of one household and one business for the smart phone reporting (on two avenues we recruited an additional respondent), yielding a goal of 50 households and 50 businesses in the pure control.

The random assignment to specific ODEP treatments was done at the individual household or business level after obtaining consent. Taxpayers were randomly assigned to one of the three treatment groups (tax consulting, protection, and tax consulting + protection), blocking on strata formed by household/business status, commune, and framing experiment assignment. The target number of households and businesses to recruit into the smart phone data collection on ODEP treatment avenues was 200 households and 200 businesses across 48 avenues; our final sample was 310. While we did not reach our recruitment goals, this does not create bias, because randomization occurred within the recruited households and businesses, although it hurts our statistical power. (In actuality, due to challenges in the field, we recruited half of the target number of households and businesses. Note that this is not a compliance issue, but rather an implementation failure arising from management failures among the field teams.)
Randomization Method
Randomization done in an office by a computer
Randomization Unit
Experiment 1 (protection and tax consulting): smartphone-holder level. However, we combined this with avenue-level clustering in the randomization, which allows us to examine spillovers. In particular, we first randomly assigned avenues to be pure control or to enter the experimental individual-level lottery. For avenues in this second lottery, we then randomly assigned individuals to the different treatment arms.
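The two-stage assignment can be illustrated with a minimal sketch. The avenue names, stratum, and seed below are hypothetical; the real assignment blocked on household/business status, commune, and framing assignment, which a full implementation would replicate per stratum:

```python
import random

random.seed(0)  # arbitrary seed, for reproducibility of this illustration

# Stage 1: split the 96 avenues into 48 pure-control and 48 experimental.
avenues = [f"avenue_{i}" for i in range(96)]
random.shuffle(avenues)
pure_control_avenues = set(avenues[:48])
experimental_avenues = set(avenues[48:])

# Stage 2: on experimental avenues, block-randomize individuals within a
# stratum across the cells of the 2x2 factorial design.
arms = ["control", "consulting", "protection", "consulting+protection"]

def assign_within_stratum(individuals):
    """Shuffle one stratum, then deal arms in turn so cells stay balanced."""
    shuffled = individuals[:]
    random.shuffle(shuffled)
    return {person: arms[i % len(arms)] for i, person in enumerate(shuffled)}

# Example stratum of eight hypothetical households: two per arm.
assignment = assign_within_stratum([f"hh_{i}" for i in range(8)])
```

Dealing arms in round-robin order after shuffling guarantees exact balance within each stratum, which simple independent draws would not.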

Experiment 2 (anti-corruption campaign): neighborhood-level
Was the treatment clustered?

Experiment Characteristics

Sample size: planned number of clusters
Experiment 1 (protection and tax consulting): 400 smartphone holders (panel of 120 days, daily data)

Experiment 2 (anti-corruption campaign): 140 neighborhoods (panel of 120 days, daily data)
Sample size: planned number of observations
Target: N = 400 smartphone holders observed for T = 120 days, giving N × T = 48,000 observations. Because daily reports are correlated within respondents, the target effective sample size is less than 48,000 but more than 400.
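The bounds on the effective sample size follow from the standard design-effect formula for equal-sized clusters, DEFF = 1 + (T − 1)ρ, where ρ is the within-respondent correlation (the true ρ is unknown here; the formula only pins down the two extremes):

```python
# Effective sample size under clustering: N respondents each observed for
# T days, with within-respondent correlation rho. The design effect
# DEFF = 1 + (T - 1) * rho inflates the variance relative to i.i.d. data.
def effective_sample_size(n_clusters, obs_per_cluster, rho):
    deff = 1 + (obs_per_cluster - 1) * rho
    return n_clusters * obs_per_cluster / deff

# rho = 0: every daily report is independent, ESS = 48,000.
# rho = 1: each respondent contributes one independent observation, ESS = 400.
print(effective_sample_size(400, 120, 0.0))  # 48000.0
print(effective_sample_size(400, 120, 1.0))  # 400.0
```

Any intermediate ρ yields an effective sample size strictly between these bounds, which is what the registered target states.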
Sample size (or number of clusters) by treatment arms
100 smartphone holders control
200 smartphone holders receive protection treatment
200 smartphone holders receive tax consulting treatment

(Factorial design, 100 smartphone holders per treatment combination)
Minimum detectable effect size for main outcomes (accounting for sample design and clustering)

Institutional Review Boards (IRBs)

IRB Name
University of Pittsburgh
IRB Approval Date
IRB Approval Number
Analysis Plan

There is information in this trial unavailable to the public.


Post Trial Information

Study Withdrawal

There is information in this trial unavailable to the public.


Is the intervention completed?
Data Collection Complete
Data Publication

Data Publication

Is public data available?

Program Files

Program Files
Reports, Papers & Other Materials

Relevant Paper(s)

Reports & Other Materials