Bribes vs. Taxes: Market Structure and Incentives
Last registered on April 11, 2018


Trial Information
General Information
Bribes vs. Taxes: Market Structure and Incentives
Initial registration date
April 09, 2018
Last updated
April 11, 2018 12:33 PM EDT
Primary Investigator
McGill University
Other Primary Investigator(s)
PI Affiliation
GSEM-University of Geneva, BGSE, UAB, BREAD, CEPR, IPA
PI Affiliation
The World Bank Group
PI Affiliation
The World Bank Group
Additional Trial Information
Start date
End date
Secondary IDs
PEDL ERG #3965
Firms in developing countries often avoid paying taxes by making informal payments to business tax inspectors. These bribes raise the cost of operating a business and, ultimately, the price charged to consumers. To decrease these costs, we designed a feedback incentive scheme for business tax inspectors that rewards them according to the anonymous evaluations submitted by inspected firms. The scheme increases the bargaining power of firms in their relationship with tax officials and decreases the bribe size. In our model, firm heterogeneity shapes the interaction between firms and inspectors and the equilibrium bribery outcome. Feedback incentives make firms with larger pass-through more attractive to inspectors. A tilted incentive scheme that attaches higher weights to the evaluations of smaller firms prevents targeting along this margin while decreasing the bribe size. We evaluate both schemes in a field experiment in the Kyrgyz Republic. Our study highlights the role of firm heterogeneity and market structure in shaping the relationship between firms and tax inspectors, while seeking clear evidence of pass-through of bribes to consumers.
External Link(s)
Registration Citation
Amodio, Francesco et al. 2018. "Bribes vs. Taxes: Market Structure and Incentives." AEA RCT Registry. April 11.
Sponsors & Partners

Experimental Details
We design and implement an intervention that uses monetary feedback incentives to tax inspectors to reduce bribe payments by increasing the bargaining power of firms in their relationship with tax officials. Our intervention design is based on the outcome of a simple model that describes the interaction between firms and tax inspectors, which highlights pass-through as a key factor in this interaction. We implement two incentive treatments. In both of them, inspectors are awarded a bonus payment on top of their base salary, and the bonus amount is a piece rate that increases with the anonymous evaluation (from 0 to 10) submitted by inspected firms. Specifically, we ask surveyed firms: "On a scale from 0 to 10 where 0 is a poor job and 10 is a good job, how would you rate the job of the tax inspectors during the last inspection?" The bonus amount increases proportionally with the improvement in average evaluation among firms inspected in the local office over the baseline average. The schemes in the two treatment arms differ in the weighting of the average evaluation. In the first treatment arm (Piece Rate), the average is unweighted. In the second treatment arm (Piece Rate Tilted) we attach different weights to the evaluation submitted by firms of different size.
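The bonus rule described above can be sketched in a few lines. This is a minimal illustration, not the implementation used in the field: the piece-rate value and the per-firm weights are hypothetical placeholders, and the registry does not specify how negative changes over baseline are handled (we assume a floor at zero).

```python
def office_bonus(evaluations, baseline_avg, piece_rate, weights=None):
    """Bonus for an office's inspectors: a piece rate applied to the
    improvement of the (weighted) average firm evaluation (0-10) over
    the office's baseline average.

    weights=None corresponds to the Piece Rate arm (unweighted average);
    in the Piece Rate Tilted arm, `weights` holds per-firm weights that
    are higher for smaller firms (values here are illustrative)."""
    if weights is None:
        weights = [1.0] * len(evaluations)
    avg = sum(w * e for w, e in zip(weights, evaluations)) / sum(weights)
    # Assumed: no bonus if the average does not improve over baseline.
    return piece_rate * max(0.0, avg - baseline_avg)
```

For example, with two inspected firms rating 8 and 6 against a baseline average of 6, the unweighted scheme pays one full piece rate, while tilting the weights toward the firm rating 8 pays more.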
Intervention Start Date
Intervention End Date
Primary Outcomes
Primary Outcomes (end points)
Inspection probability; elasticity of demand faced by inspected firms; price charged by inspected firms.
Primary Outcomes (explanation)
Inspection probability is constructed as a dummy equal to one if the firm was subject to an inspection in the reference period. The elasticity of demand is derived from answers to the question: "Suppose that the market price of such product or service increases by X%. By what percentage would the quantity sold drop?" with X equal to 5, 10, and 20. Price is the reported price of the firm's most sold item (product or service).
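The elasticity implied by each survey answer is simply the reported percentage drop in quantity divided by the hypothetical percentage price increase. A minimal sketch, with hypothetical survey answers (the averaging across the three X values is our assumption, since the registry does not state how the three answers are combined):

```python
def implied_elasticity(pct_price_increase, pct_quantity_drop):
    """Demand elasticity implied by one answer to:
    'if price rises by X%, by what percentage would quantity sold drop?'"""
    return pct_quantity_drop / pct_price_increase

# Hypothetical answers for X = 5, 10, and 20 percent.
answers = {5: 10.0, 10: 18.0, 20: 35.0}
elasticities = [implied_elasticity(x, y) for x, y in answers.items()]
avg_elasticity = sum(elasticities) / len(elasticities)
```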
Secondary Outcomes
Secondary Outcomes (end points)
Evaluation of inspector's job; reported awareness of gifts/informal payments to tax inspectors; average cost; average administrative cost; profits.
Secondary Outcomes (explanation)
Average cost and average administrative cost are derived as follows. We divide reported total revenues of the firm by the reported price of the most sold item in order to derive quantity sold. We then divide total and administrative cost by this quantity to obtain a measure of average cost and average administrative cost respectively.
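The derivation above is straightforward arithmetic; a sketch (with illustrative input values):

```python
def average_costs(total_revenue, price_most_sold, total_cost, admin_cost):
    """Proxy quantity sold by total revenue / price of the most sold item,
    then divide total cost and administrative cost by that quantity to
    obtain average cost and average administrative cost."""
    quantity = total_revenue / price_most_sold
    return total_cost / quantity, admin_cost / quantity
```

For instance, a firm reporting revenues of 1,000, a price of 10 for its most sold item, total costs of 500, and administrative costs of 100 implies a quantity of 100 units, an average cost of 5, and an average administrative cost of 1.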
Experimental Design
Experimental Design
We draw the baseline sample for our intervention from a list of businesses provided by the State Tax Service of the Kyrgyz Republic. This list was obtained by sampling from the entire population of ever-registered taxpayers, stratifying by tax category, turnover size, sector of activity, and local tax office. Given our focus on individual entrepreneurs, we built the list so that this category was overrepresented. For the experiment, we randomly assign the 50 local tax offices (356 inspectors) in the Kyrgyz Republic to one of the two treatment groups or to a control group. We stratify the randomization by baseline raid inspection rates per office, and we then verify ex post the balance of baseline observable firm-level characteristics. We build our follow-up sample starting from the baseline sample. Businesses in the baseline sample that we could not locate were replaced with others in the same local office catchment area that either operate in the same sector of activity or belong to the same turnover size category. We also sample a number of additional businesses per local tax office under the sole condition that they were inspected during the intervention period.
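The stratified assignment of the 50 offices to the three arms (10 control, 20 Piece Rate, 20 Piece Rate Tilted) can be sketched as below. The stratum size of 5 consecutive offices (sorted by baseline raid inspection rate) is our assumption for illustration; the registry does not specify the exact stratification procedure.

```python
import random

def assign_offices(office_rates, seed=0):
    """Stratified randomization: sort offices by baseline raid inspection
    rate, form strata of 5 consecutive offices, and within each stratum
    randomly allot 1 control, 2 Piece Rate ('PR'), and 2 Piece Rate
    Tilted ('PRT') slots, yielding 10/20/20 overall for 50 offices."""
    rng = random.Random(seed)
    offices = sorted(office_rates, key=office_rates.get)
    assignment = {}
    for i in range(0, len(offices), 5):
        arms = ["control", "PR", "PR", "PRT", "PRT"]
        rng.shuffle(arms)
        for office, arm in zip(offices[i:i + 5], arms):
            assignment[office] = arm
    return assignment
```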
Experimental Design Details
Randomization Method
Randomization done in office by a computer.
Randomization Unit
Local tax office. Each local office has, on average, 7 business tax inspectors, 47 surveyed firms in the baseline survey, and 60 surveyed firms in the follow-up survey.
Was the treatment clustered?
Experiment Characteristics
Sample size: planned number of clusters
Sample size: planned number of observations
2,500 firms in the baseline sample; 3,000 firms in the follow-up sample.
Sample size (or number of clusters) by treatment arms
10 control local tax offices, 20 assigned to Piece Rate feedback incentives, 20 assigned to Piece Rate Tilted feedback incentives.
Minimum detectable effect size for main outcomes (accounting for sample design and clustering)
IRB Name
IRB Approval Date
IRB Approval Number
Post Trial Information
Study Withdrawal
Is the intervention completed?
Intervention Completion Date
November 30, 2016, 12:00 AM +00:00
Is data collection complete?
Data Collection Completion Date
November 30, 2017, 12:00 AM +00:00
Final Sample Size: Number of Clusters (Unit of Randomization)
Was attrition correlated with treatment status?
Final Sample Size: Total Number of Observations
5,305 firms, of which 2,339 are in the baseline sample and 2,966 in the intervention sample.
Final Sample Size (or Number of Clusters) by Treatment Arms
10 control local tax offices, 20 assigned to Piece Rate feedback incentive scheme, 20 assigned to Piece Rate Tilted feedback incentive scheme.
Data Publication
Data Publication
Is public data available?
Program Files
Program Files
Reports and Papers
Preliminary Reports
Relevant Papers