The effect of competition on the demand for management training: Experimental evidence from middle and large Ethiopian firms

Last registered on April 28, 2022

Pre-Trial

Trial Information

General Information

Title
The effect of competition on the demand for management training: Experimental evidence from middle and large Ethiopian firms
RCT ID
AEARCTR-0009289
Initial registration date
April 21, 2022


First published
April 28, 2022, 5:51 PM EDT


Locations

Region

Primary Investigator

Affiliation
University of Warwick

Other Primary Investigator(s)

PI Affiliation
World Bank
PI Affiliation
Stanford
PI Affiliation
Stanford
PI Affiliation
Policy Studies Institute

Additional Trial Information

Status
Ongoing
Start date
2022-04-14
End date
2023-12-31
Secondary IDs
Prior work
This trial does not extend or rely on any prior RCTs.
Abstract
Firms in developing countries often rank low on management quality scales. What prevents firms from investing in management quality? A central hypothesis in the economics literature is that, in many contexts, product market competition is weak, and that weak competition dampens firms’ incentives to improve the quality of their management. In this project, we experimentally test whether the threat of competition changes firms’ decisions to invest in management training: we randomly provide a treatment designed to increase perceived competition in the product market and study its impact on the demand for management training among firms in urban Ethiopia.
External Link(s)

Registration Citation

Citation
Abebe, Girum et al. 2022. "The effect of competition on the demand for management training: Experimental evidence from middle and large Ethiopian firms." AEA RCT Registry. April 28. https://doi.org/10.1257/rct.9289-1.0
Experimental Details

Interventions

Intervention(s)
See experimental design.
Intervention Start Date
2022-04-14
Intervention End Date
2023-12-31

Primary Outcomes

Primary Outcomes (end points)
• Demand for training (binary indicator)
• Willingness to pay for training
Primary Outcomes (explanation)
• Demand for training (binary indicator): Whether the firm expresses interest in the training materials (and hence either states its willingness to pay right away or invites our team for a second visit to elicit willingness to pay)

• Willingness to pay for training: Maximum amount of money the firm is willing to pay to obtain the training materials. As we expect this variable to have a skewed distribution, we will use standard methods to deal with outliers.

We will explore four key dimensions of heterogeneity (a construction sketch in Python follows the list):
(i) a Lerner index of competition, as in Bloom and Van Reenen (2007), discretized at the sample median;
(ii) a score of management quality similar to that of Bloom and Van Reenen (2007), discretized at the sample median;
(iii) a variable capturing whether the firm tends to pursue a differentiation strategy vis-à-vis its competitors;
(iv) a perspective-taking scale, discretized at the sample median.
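
As an illustration, the sketch below shows one way the primary outcomes and the median-split heterogeneity dummies could be constructed. The column names, the 99th-percentile winsorization cap, and the exact coding of the demand indicator are assumptions for illustration only, not commitments of the pre-analysis plan.

```python
import pandas as pd

def prepare_primary_outcomes(df: pd.DataFrame) -> pd.DataFrame:
    """Illustrative construction of primary outcomes and heterogeneity splits.

    Column names (stated_wtp_now, invited_second_visit, wtp_training,
    lerner_index, mgmt_score, perspective_taking) are placeholders, and
    winsorizing at the 99th percentile is only one standard way to handle
    outliers in the skewed willingness-to-pay variable.
    """
    out = df.copy()

    # Demand for training: interested firms either state WTP right away or
    # invite the team back for a second visit.
    out["demand_training"] = (
        out["stated_wtp_now"] | out["invited_second_visit"]
    ).astype(int)

    # Willingness to pay, winsorized at the 99th percentile (assumed cap).
    cap = out["wtp_training"].quantile(0.99)
    out["wtp_training_w"] = out["wtp_training"].clip(upper=cap)

    # Median splits for the continuous heterogeneity dimensions.
    for var in ["lerner_index", "mgmt_score", "perspective_taking"]:
        out[f"high_{var}"] = (out[var] > out[var].median()).astype(int)

    return out
```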

Secondary Outcomes

Secondary Outcomes (end points)
We will study impacts on one main downstream outcome: knowledge. If we find impacts on knowledge, we will also test for impacts on adoption and productivity.

We will also study impacts on variables capturing the intensity of competition. These regressions will serve as a manipulation check.
Secondary Outcomes (explanation)
• Knowledge: We will measure knowledge with a battery of questions on the topics discussed during the training. We will present results both on the individual variables and on summary indices for the following thematic areas (see the index-construction sketch after this list):
  • Branding
  • Sales
  • Customer engagement
  • Product design
  • Quality control
  • Competition
• Adoption: We will create indices that capture business practices in each of the thematic areas listed above.
• Productivity: We will capture productivity by measuring the value of total sales, and sales per worker.
• Intensity of competition: Firms’ answers to questions capturing their perceptions of the intensity of competition, such as “Do you expect increased competition from your competitors in the coming months?”
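
The registration does not spell out how the thematic summary indices will be built; the sketch below shows one common approach, an equally weighted average of components standardized against the control group. The function and column names are illustrative assumptions, not the study's committed method.

```python
import pandas as pd

def summary_index(df: pd.DataFrame, components: list[str],
                  control_mask: pd.Series) -> pd.Series:
    """Equally weighted average of z-scored components.

    Each component is standardized using the control-group mean and standard
    deviation before averaging. This is only one common way to construct
    thematic indices; the registration does not commit to a specific method.
    """
    z = pd.DataFrame(index=df.index)
    for col in components:
        mu = df.loc[control_mask, col].mean()
        sd = df.loc[control_mask, col].std()
        z[col] = (df[col] - mu) / sd
    return z.mean(axis=1)

# Hypothetical usage: knowledge index for the branding questions.
# branding_index = summary_index(df, ["know_brand_1", "know_brand_2"],
#                                df["arm"] == "control")
```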

Experimental Design

Experimental Design
We have developed a marketing management training tool for firms in Ethiopia. Further, we have developed a placebo product (a video discussing an expedition along the Omo river) that will be offered to the firms in the same way as the training tool.

We assigned firms to strata chosen to minimize differences in sector, size, age and location. We then randomized: (i) whether we give the training materials for free or ask firms to pay for them; (ii) whether we give the placebo materials for free or ask firms to pay for them; (iii) among the firms that have to pay for the training materials, whether they are told that a close competitor has been given the materials; and (iv) among the firms that have to pay for the placebo, whether they are told that a close competitor has been given the placebo. This generates five experimental groups:
1. Hold-out group: Firms in this group are interviewed first and are given the training product and the placebo product for free.
2. Competition group: Firms in this group are asked to pay for the training product. Before eliciting willingness to pay for the training product, these firms will be (truthfully) told that we have given the training product to up to three close competitors. These firms are not offered the placebo product.
3. Control group: Firms in this group are asked to pay for the training product. They are not given information on whether we offered the training product to other firms. They are also not offered the placebo product.
4. Placebo competition group: Firms in this group will be asked to pay for both the training product and the placebo product. We will elicit willingness to pay for both products. Before willingness to pay for the placebo product is elicited, firms will be told that up to three close competitors have been given the placebo product.
5. Placebo control group: Firms in this group will be asked to pay for both the training product and the placebo product. We will elicit willingness to pay for both products. Firms will not be told anything about whether these products are offered to other firms.

The comparison of willingness to pay for the training product in groups (2) and (3) will test our core hypothesis: that perceived competition increases willingness to invest in management training. (Groups (4) and (5) will serve as an additional control group if we do not observe any significant difference in willingness to pay for the training product relative to group (3).) Any such effect could be driven by two potential mechanisms. First, a rational, “pure competition” mechanism, whereby firms change their willingness to pay because they fear competition from their peers. Second, a psychological “social consumption” mechanism, whereby firms increase their willingness to pay simply because they want to imitate their peers and own the same products.

The comparison of willingness to pay for the placebo product in groups (4) and (5) will enable us to quantify and isolate the social consumption effect, as the placebo product does not confer any competitive advantage and thus should not activate the pure competition mechanism. Under the assumption that the social consumption effect is the same across the two elicitation tasks, we will be able to identify the pure competition effect by subtracting this estimate of the social consumption effect from the estimated treatment effect on willingness to pay for the training product.
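
A minimal sketch of this decomposition is below, assuming an `arm` column labelling the experimental groups and columns `wtp_training` and `wtp_placebo` for the two elicited amounts. The simple differences in means shown here stand in for whatever regression specification (e.g., with strata fixed effects) is ultimately used.

```python
import pandas as pd

def competition_decomposition(df: pd.DataFrame) -> dict:
    """Illustrative decomposition of the treatment effect on WTP.

    Assumes arm labels 'competition', 'control', 'placebo_competition' and
    'placebo_control'; variable names and the difference-in-means
    specification are assumptions, not the registered estimating equation.
    """
    means = df.groupby("arm")[["wtp_training", "wtp_placebo"]].mean()

    # Total effect of the competition information on WTP for the training.
    total = (means.loc["competition", "wtp_training"]
             - means.loc["control", "wtp_training"])

    # Social-consumption effect, identified from the placebo product.
    social = (means.loc["placebo_competition", "wtp_placebo"]
              - means.loc["placebo_control", "wtp_placebo"])

    # Pure competition effect, under the assumption that the
    # social-consumption effect is the same across the two elicitation tasks.
    return {"total": total, "social": social, "pure_competition": total - social}
```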

The hold-out sample will be given both products for free, so we will not elicit willingness to pay for the training from this sample. Instead, we will use this sample to produce a number of “mental maps”, or directed acyclic graphs (DAGs), capturing firms’ mental models of competition. We will use descriptive information from these exercises to help us interpret the results of the experiment.
Experimental Design Details
Randomization Method
We rely on an existing dataset of about 1,474 medium-to-large firms in Addis Ababa and its periphery, the AUDRI panel survey from Stanford University. Firms will be allocated to one of the five experimental conditions using a matched randomization procedure. We first construct blocks of 22 firms that minimize the Mahalanobis distance over a set of variables measured before the experiment: five sectoral dummies, the number of payroll employees, the number of years the firm has been in operation, latitude and longitude. We then randomly allocate 3 firms in each block to each of the following conditions: hold-out, control, placebo competition and placebo control. We also randomly allocate 6 firms to the competition condition. Finally, the remaining 4 firms are allocated to a pure control group that we do not survey for the time being but can use as a comparison group in the future.
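
A minimal sketch of this matched randomization is below, assuming the pre-experiment covariates sit in a pandas DataFrame. The greedy nearest-neighbour grouping is only an illustrative stand-in for the actual block-construction algorithm, which the registration does not describe in detail; the seed and arm labels are likewise placeholders.

```python
import numpy as np
import pandas as pd
from scipy.spatial.distance import cdist

rng = np.random.default_rng(9289)  # illustrative seed

# Within-block allocation: 3 + 6 + 3 + 3 + 3 + 4 = 22 firms.
SLOTS = (["hold_out"] * 3 + ["competition"] * 6 + ["control"] * 3 +
         ["placebo_competition"] * 3 + ["placebo_control"] * 3 +
         ["pure_control"] * 4)

def assign_arms(df: pd.DataFrame, covariates: list[str]) -> pd.Series:
    """Greedy Mahalanobis blocking into groups of 22, then random assignment
    within each block. The greedy grouping approximates, but is not, the
    distance-minimizing procedure used in the study."""
    X = df[covariates].to_numpy(dtype=float)
    VI = np.linalg.pinv(np.cov(X, rowvar=False))
    positions = {idx: pos for pos, idx in enumerate(df.index)}

    arms = pd.Series(index=df.index, dtype=object)
    unassigned = list(df.index)
    while len(unassigned) >= len(SLOTS):
        seed = unassigned[0]
        rows = [positions[i] for i in unassigned]
        d = cdist(X[[positions[seed]]], X[rows], metric="mahalanobis", VI=VI)[0]
        block = [unassigned[j] for j in np.argsort(d)[:len(SLOTS)]]
        for firm, arm in zip(block, rng.permutation(SLOTS)):
            arms[firm] = arm
        unassigned = [i for i in unassigned if i not in block]
    # Any remainder of fewer than 22 firms is left unassigned in this sketch.
    return arms
```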
Randomization Unit
Firm
Was the treatment clustered?
No

Experiment Characteristics

Sample size: planned number of clusters
NA
Sample size: planned number of observations
918 firms
Sample size (or number of clusters) by treatment arms
306 firms in the competition group, 153 firms in each of the other groups.
Minimum detectable effect size for main outcomes (accounting for sample design and clustering)
For the impact of the competition treatment on willingness to pay for the training product, we have a minimum detectable effect of 0.22 standard deviations.
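
For reference, a standard unadjusted two-sample MDE formula is sketched below (5% two-sided test, 80% power). The registration does not state which arms, covariate adjustments or variance assumptions enter the 0.22 figure, and precision gains from the Mahalanobis-matched blocks (or any pooling of control arms) are not captured by this simple benchmark.

```python
import numpy as np
from scipy import stats

def mde_sd(n_treat: int, n_control: int, alpha: float = 0.05,
           power: float = 0.80) -> float:
    """Minimum detectable effect, in standard deviations, for a simple
    two-sample comparison of means without covariate adjustment."""
    z_alpha = stats.norm.ppf(1 - alpha / 2)
    z_power = stats.norm.ppf(power)
    return (z_alpha + z_power) * np.sqrt(1 / n_treat + 1 / n_control)

# Unadjusted benchmark for the competition (306) vs. control (153) comparison;
# design gains from the matched blocks would lower this toward the registered
# figure.
print(round(mde_sd(306, 153), 2))
```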
IRB

Institutional Review Boards (IRBs)

IRB Name
Stanford IRB
IRB Approval Date
2021-08-01
IRB Approval Number
34490

Post-Trial

Post Trial Information

Study Withdrawal


Intervention

Is the intervention completed?
No
Data Collection Complete
Data Publication

Is public data available?
No

Program Files
Reports, Papers & Other Materials

Relevant Paper(s)

Reports & Other Materials