Hierarchy and Information

Last registered on March 08, 2019

Pre-Trial

Trial Information

General Information

Title
Hierarchy and Information
RCT ID
AEARCTR-0003946
Initial registration date
March 07, 2019

Initial registration date is when the trial was registered. It corresponds to when the registration was submitted to the Registry to be reviewed for publication.

First published
March 08, 2019, 3:58 PM EST

First published corresponds to when the trial was first made public on the Registry after being reviewed.

Locations

Region

Primary Investigator

Affiliation
University College London

Other Primary Investigator(s)

PI Affiliation
World Bank Group

Additional Trial Information

Status
Completed
Start date
2016-06-13
End date
2016-09-16
Secondary IDs
Abstract
We develop a novel survey instrument, based on a survey of 1,831 public officials across 382 organizations, to construct measures of the individual-level information that bureaucrats have about their local conditions and operating environment. Specifically, we elicit the beliefs of bureaucrats across a range of key indicators upon which their work depends (demographic, socio-economic, and sector-specific indicators) and benchmark these beliefs against objective measures to calculate the scale of error for each bureaucrat. We tie descriptive evidence to the theoretical literature on information acquisition and document how information is associated with individual and organizational characteristics, focusing on measures of hierarchy.

We run an experiment in which we provide a random subsample of offices in the bureaucracy with evidence briefings prior to eliciting their individual-level information, to study how such briefings can alter the beliefs of policymakers. Evidence briefings have long been used within the public service and are increasingly a focus of research entities such as the Centre for Economic Policy Research, J-PAL/IPA, and the World Bank Group, yet there is surprisingly little evidence on whether they do in fact inform public officials. We provide an information package that exactly mimics circulars (internal communications) of the Ethiopian government to a random subset of the officials that we study. The information package contains all the information we test for in our survey. Lowering the marginal cost of information acquisition dramatically and symmetrically in this way should yield superior information overall (Aghion and Tirole, 1997).

The errors made by treated officials are 0.28 of a standard deviation lower than those of control officials, consistent with evidence from a range of settings that reducing the cost of becoming informed increases information acquisition. However, the treatment effect arises entirely from organizations with below-average management practices, reinforcing our descriptive findings that organizational incentives are key mediating factors in how new information is received and absorbed. This result is of significance to researchers who aim to update the beliefs of public officials.

Our experimental findings highlight that when the public sector is designed appropriately, the rationale for compensating interventions, which are often ad hoc and focused on short-term goals, is weakened.
External Link(s)

Registration Citation

Citation
Rogger, Daniel and Ravi Somani. 2019. "Hierarchy and Information." AEA RCT Registry. March 08. https://doi.org/10.1257/rct.3946-1.0
Former Citation
Rogger, Daniel and Ravi Somani. 2019. "Hierarchy and Information." AEA RCT Registry. March 08. https://www.socialscienceregistry.org/trials/3946/history/42833
Sponsors & Partners

There is information in this trial unavailable to the public.
Experimental Details

Interventions

Intervention(s)
The survey was designed such that all employee-level respondents (1,366 public officials across 382 organizations) were asked about their beliefs on the values of demographic and socio-economic indicators (the population, the percentage of rural inhabitants, and the unemployment rate) and of indicators relating to their sector of work (for example, education officials were asked about the number of primary enrolees, the pupil-teacher ratio, the pupil-class ratio, and the pupil-school ratio; health officials were asked about the antenatal care rate, the rate of contraceptive acceptance, the rate of births delivered by a skilled attendant, and the proportion of fully immunized infants; etc.).

District-level officials were asked about their own district; regional-level officials were asked about the (named) 25th, 50th and 75th percentile district in their region for the specified indicator; federal-level officials were first asked to choose the region that they knew best and subsequently were asked about the (named) 25th, 50th and 75th percentile district in that region for the specified indicator.

Prior to the survey, we send a letter to all organizations to inform them of logistical details and to ask them to prepare interview space. For a random subset of organizations, we also attach an 'information package', which provides the official administrative data corresponding to our policy variables of interest (the set of indicators that we subsequently ask about in the survey).

A random subset of Federal organizations received the official administrative data for all regions for all indicators, highlighting the median district for each indicator for each region. Similarly, a random subsample of district jurisdictions were sent the official administrative data for their whole region for all indicators, also highlighting the median district for each indicator. Since the information package to the district-level jurisdictions had to pass through the Regional offices (through official government channels for circulars), all Regional officials were treated.
Intervention Start Date
2016-06-13
Intervention End Date
2016-09-16

Primary Outcomes

Primary Outcomes (end points)
The primary outcome is the scale of error that public officials make across all indicators that they are asked about.
Primary Outcomes (explanation)
We take the subjective assessment by the bureaucrat (elicited in the survey) and subtract the objective measure of the indicator for the specific district, based on administrative or survey data, to construct an error.

First, we take absolute values of the errors so that deviations of different signs are comparable.

Second, our core transformation is to create z-scores of each of the variables by subtracting the mean and dividing by the standard deviation of all the errors related to a particular indicator. This scales each claim into a normalized measure of error for that indicator. The z-score can be interpreted as a unitless measure of how many standard deviations, relative to the full distribution of errors, an official's assessment lies from the benchmark data.

Third, we construct an overall measure of the scale of error that each bureaucrat makes by taking the mean across the z-score of the absolute error across all indicators over which the bureaucrat makes a claim.
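As an illustrative sketch, these three steps can be written in pandas, assuming a long-format data set with one row per official-indicator claim; the column names official_id, indicator, belief (the subjective assessment), and benchmark (the objective measure) are hypothetical:

```python
import pandas as pd

def core_error_score(df: pd.DataFrame) -> pd.Series:
    out = df.copy()
    # Step 1: signed error (belief minus benchmark), then the absolute value
    # so that deviations of different signs are comparable.
    out["abs_error"] = (out["belief"] - out["benchmark"]).abs()
    # Step 2: z-score within each indicator -- subtract the mean and divide
    # by the standard deviation of all errors related to that indicator.
    grp = out.groupby("indicator")["abs_error"]
    out["z_error"] = (out["abs_error"] - grp.transform("mean")) / grp.transform("std")
    # Step 3: one scale-of-error measure per bureaucrat -- the mean z-score
    # across all indicators over which the official makes a claim.
    return out.groupby("official_id")["z_error"].mean()
```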

Since the distributions of errors for a number of the indicators we study are right-skewed, such that we may underestimate the severity of an error that few officials would make, we test the robustness of our core measure to this criticism. We define a 'skewness-adjusted' z-score: we test the distribution of each raw indicator for skewness and replace skewed indicators with their log. We then take the z-score of these transformed variables in a similar way to the above measure.

To show that the analysis is robust to alternative transformations, we also take the ordinal rank of absolute errors within an indicator and divide it by the maximum rank, thereby creating a 'rank-proportion' score. This can be interpreted as the percentile of error made by an official relative to other officials in our data set.
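A sketch of the two robustness transformations, using the same hypothetical data layout as above; the skewness cutoff and the use of log(1 + x) to accommodate zero values are illustrative assumptions, as the registry does not state them:

```python
import numpy as np
import pandas as pd
from scipy.stats import skew

def robustness_scores(df: pd.DataFrame, skew_cutoff: float = 1.0) -> pd.DataFrame:
    out = df.copy()
    out["z_skew_adj"] = np.nan
    out["rank_prop"] = np.nan
    for _, g in out.groupby("indicator"):
        belief, bench = g["belief"], g["benchmark"]
        # Skewness-adjusted z-score: test the raw indicator for skewness and
        # replace skewed indicators with their log before building the error.
        if skew(bench.dropna()) > skew_cutoff:
            belief, bench = np.log1p(belief), np.log1p(bench)
        abs_err = (belief - bench).abs()
        out.loc[g.index, "z_skew_adj"] = (abs_err - abs_err.mean()) / abs_err.std()
        # Rank-proportion score: ordinal rank of the absolute error within the
        # indicator, divided by the maximum rank -- the official's percentile
        # of error relative to other officials in the data.
        raw_err = (g["belief"] - g["benchmark"]).abs()
        ranks = raw_err.rank(method="average")
        out.loc[g.index, "rank_prop"] = ranks / ranks.max()
    return out
```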

Secondary Outcomes

Secondary Outcomes (end points)
Secondary Outcomes (explanation)

Experimental Design

Experimental Design
Prior to the survey, we send an 'information package' to a random subset of organizations. We use official government channels to send the information package, mimicking government circulars (internal communications).

We randomly select a subset of Federal-level organizations to receive the treatment (3 organizations are randomly chosen to be treated out of the 5 organizations in the sample of Federal organizations). We randomly select a subset of districts to receive the treatment (41 districts are treated out of 66 in the sample of districts).

Since the internal communications of the government are such that communications to districts must pass through Regional organizations, all Regional organizations are treated by default.

Prior to the start of the survey, in May 2016, all organizations participating in the survey are sent a letter informing them of the upcoming survey. The letter provides Federal Government authorization for the field team, informs the organizations of logistical details, and asks them to prepare interview space. For the randomly chosen organizations at the Federal level, a further attachment was added that provided the official administrative data for all regions for all indicators that we ask about in the survey, highlighting the median district for each indicator for each region. For the randomly chosen district-level organizations, a further attachment was added that provided the official administrative data for their whole region for all indicators, also highlighting the median district for each indicator. The letter for district-level organizations had to pass through Regional organizations with instructions to pass it on to the specified districts; hence, all Regional organizations are treated by default.

Experimental Design Details
Randomization Method
The randomization was done in an office using a computer, based on random.org random-number generation: random numbers were assigned to Federal-level organizations and to district-level organizations to determine whether they would receive the information-package treatment.
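For illustration only, a minimal sketch of this assignment logic; the actual draws came from random.org, so the numpy generator, the seed, and the unit labels below are stand-ins:

```python
import numpy as np

# Assign a random number to each unit and treat the units with the highest
# draws: 3 of 5 Federal organizations and 41 of 66 districts.
rng = np.random.default_rng(seed=3946)  # seed chosen here for reproducibility

def assign(units: list, n_treated: int, rng) -> dict:
    draws = rng.random(len(units))
    treated = set(np.argsort(draws)[-n_treated:])
    return {u: (i in treated) for i, u in enumerate(units)}

federal_assignment = assign([f"federal_org_{i}" for i in range(1, 6)], 3, rng)
district_assignment = assign([f"district_{i}" for i in range(1, 67)], 41, rng)
```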
Randomization Unit
For Federal-level organizations, the unit of randomization is the organizations (the sector office within the Federal level). For district-level organizations, the unit of randomization is the district.
Was the treatment clustered?
Yes

Experiment Characteristics

Sample size: planned number of clusters
We sample 5 Federal organizations and 66 districts (in addition, 11 regions, with 5 sectoral offices within each region and within each district).

We cluster at the level of the district that we ask officials about: 66 districts.
Sample size: planned number of observations
246 Federal-level observations; 292 Regional-level observations; 828 district-level observations (1,366 in total).
Sample size (or number of clusters) by treatment arms
3 Federal organizations treated, 2 Federal organizations control; 11 regions control; 41 districts treated, 25 districts control.
Minimum detectable effect size for main outcomes (accounting for sample design and clustering)
For a level of significance of 0.05, a two-sided test, and a power of 0.7, the sample design suggests a minimum detectable effect size of 0.30 standard deviations, or 0.30 units (the main outcome variable is specified in standard deviations). This is 30% of the population average value.
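As a back-of-the-envelope check, the stated MDE is consistent with the standard two-sided formula MDE = (z_{1-alpha/2} + z_{1-beta}) x SE of the treatment-effect estimate. Since the registry does not report the intra-cluster correlation needed to derive the standard error from the design, the sketch below simply backs out the implied SE:

```python
from scipy.stats import norm

# Standard MDE formula: (z_{1-alpha/2} + z_{1-beta}) * SE(treatment effect).
alpha, power = 0.05, 0.70
multiplier = norm.ppf(1 - alpha / 2) + norm.ppf(power)  # ~1.96 + 0.52 = 2.48
mde = 0.30  # stated MDE, in standard deviations of the outcome
implied_se = mde / multiplier
print(f"multiplier = {multiplier:.2f}, implied SE = {implied_se:.3f} SD")
```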
IRB

Institutional Review Boards (IRBs)

IRB Name
Blavatnik School of Government’s Departmental Research Ethics Committee, University of Oxford
IRB Approval Date
2016-07-22
IRB Approval Number
SSD/CUREC1A/BSG C1A 16-010

Post-Trial

Post Trial Information

Study Withdrawal

There is information in this trial unavailable to the public.

Intervention

Is the intervention completed?
No
Data Collection Complete
Data Publication

Data Publication

Is public data available?
No

Program Files

Program Files
Reports, Papers & Other Materials

Relevant Paper(s)

Reports & Other Materials