
The Impact of Job-Specific Training on Short-Term Worker Performance: Evidence from a Field Experiment

Last registered on August 25, 2016

Pre-Trial

Trial Information

General Information

Title
The Impact of Job-Specific Training on Short-Term Worker Performance: Evidence from a Field Experiment
RCT ID
AEARCTR-0001515
Initial registration date
August 25, 2016

First published
August 25, 2016, 2:08 PM EDT

Locations

Primary Investigator

Affiliation
UC San Diego

Other Primary Investigator(s)

Additional Trial Information

Status
Completed
Start date
2015-01-01
End date
2016-03-01
Secondary IDs
Abstract
Remote and short-term work arrangements are increasingly common despite the limited incentives they provide for acquiring firm-specific knowledge. This study examines the importance and cost-effectiveness of firm-specific training for remote contract workers using a field experiment run among short-term, outside salespeople employed by an insurance firm in Northern Kenya. In particular, I test whether giving these workers the option to invest in firm-specific training through a mobile training application affects their performance, retention, and subsequent firm earnings, and how this varies with monetary and competition-based incentives to invest in the training. Preliminary findings demonstrate that access to firm-specific training significantly increases worker performance and retention, and that agents who do not invest in the training benefit indirectly from its availability through knowledge spillovers from agents who do. These results suggest that providing low-cost, firm-specific training to short-term workers has the potential to significantly increase firm performance, and that workers facing pay-for-performance incentives are frequently willing to invest in this training with little to no added incentive to do so. This study has implications for our understanding of short-term and remote worker management, and of the development of the contract labor market and the on-demand economy more generally.
External Link(s)

Registration Citation

Citation
Lyons, Elizabeth. 2016. "The Impact of Job-Specific Training on Short-Term Worker Performance: Evidence from a Field Experiment." AEA RCT Registry. August 25. https://doi.org/10.1257/rct.1515-1.0
Former Citation
Lyons, Elizabeth. 2016. "The Impact of Job-Specific Training on Short-Term Worker Performance: Evidence from a Field Experiment." AEA RCT Registry. August 25. https://www.socialscienceregistry.org/trials/1515/history/10366
Sponsors & Partners

There is information in this trial unavailable to the public.
Experimental Details

Interventions

Intervention(s)
Intervention Start Date
2015-08-01
Intervention End Date
2015-10-01

Primary Outcomes

Primary Outcomes (end points)
Total insurance premiums collected in the intervention period; total insurance premiums collected in the post-intervention period; change in insurance premiums collected between the previous period and the intervention period; total value of livestock insured in the intervention period; total value of livestock insured in the post-intervention period; change in total value of livestock insured between the previous period and the intervention period; short-term worker retention in the post-intervention period; investment in the training; performance on the training quizzes
Primary Outcomes (explanation)

Secondary Outcomes

Secondary Outcomes (end points)
Secondary Outcomes (explanation)

Experimental Design

Experimental Design
In order to address my research question, I ran a randomized controlled trial in collaboration with the International Livestock Research Institute (ILRI) and Takaful Insurance of Africa (TIA). ILRI is a not-for-profit research institute that conducts research on a range of issues related to livestock. Among its projects is an Index Based Livestock Insurance (IBLI) product developed and managed by a team of economists based in Nairobi.

TIA sells IBLI to Kenyan pastoralists using a sales-agent model in which agents are selected in part based on their literacy and their relationship to their local community. They are tasked with selling IBLI within their communities during two-month sales windows. Sales windows occur twice a year (from January to February, and from August to September), and there is substantial agent turnover between windows. In the sales window covered by the experiment, TIA was providing IBLI in six counties in Northern Kenya: Garissa, Isiolo, Mandera, Marsabit, Moyale, and Wajir. Each county was divided into divisions (ranging from 4 to 16 per county). Each division was assigned one lead agent and several sub-agents. Lead agents are tasked with finding and training sub-agents, in addition to selling the product. Lead agents attend a short training camp where they learn about IBLI and receive tips on how to sell it.

Treatments:

To test whether short-term workers are willing to invest in firm-specific knowledge, and how the provision of this knowledge affects firm performance, I randomly assigned sales agents to one of three treatment groups or a control group. Agents in all three treatment groups received access to a mobile training application that provides training modules, quizzes, and frequently asked questions on TIA and the insurance product they are selling. One treatment group was sent a version of the training application that offers agents the opportunity to be awarded phone credit for good performance on the module quizzes. Going forward, I refer to this as the incentive group. The second treatment group was sent a version of the training application that includes game features (specifically, a leaderboard and badges for participation and good performance on the module quizzes). Going forward, I refer to this as the competition group. The third treatment group did not receive any additional content on their training application. Going forward, I refer to this as the basic group. Agents in the control group did not receive access to the training application.

Agents in the incentive group were awarded KSH 250 in phone credit for each quiz on which they got at most one question wrong. Although agents can take each quiz as many times as they like, they can be awarded phone credit only once per quiz, and only for their first attempt at the quiz. Therefore, with one quiz per each of the seven training modules, these agents could earn up to KSH 1,750 from the application (about 17 USD at the time of the study). Agents in the competition group had access to a leaderboard on their applications that showed them how they ranked relative to all other agents (including those not in the competition group). Only agent numbers were listed on the leaderboard so as to protect agents' privacy. Agents in this group could also receive digital badges for completing each quiz and, separately, for performing well on each quiz (i.e., getting at most one question wrong).
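
As an illustration, the payout rule can be condensed into a short sketch; the function names and data layout below are hypothetical and are not taken from the study's systems.

    # Hypothetical sketch of the incentive group's phone-credit rule:
    # KSH 250 per quiz, awarded only for the first attempt and only if
    # at most one question was answered incorrectly.
    CREDIT_PER_QUIZ = 250  # KSH
    NUM_QUIZZES = 7        # one quiz per training module

    def quiz_credit(first_attempt_wrong_answers: int) -> int:
        """Credit for one quiz, given wrong answers on the first attempt."""
        return CREDIT_PER_QUIZ if first_attempt_wrong_answers <= 1 else 0

    def total_credit(first_attempt_results: list[int]) -> int:
        """Total credit across quizzes; at most KSH 1,750 by construction."""
        return sum(quiz_credit(w) for w in first_attempt_results[:NUM_QUIZZES])

    # Example: at most one wrong answer on 5 of 7 first attempts.
    print(total_credit([0, 1, 3, 0, 2, 1, 0]))  # -> 1250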

Training Provision:

Training was provided through an Android application developed by contracted computer programmers in a collaborative effort among myself, TIA, and IBLI. TIA began providing all agents with Android smartphones in the January-February 2015 sales season to allow agents to record all their sales on a sales transaction application developed for TIA IBLI sales agents. This roll-out confirmed the willingness and ability of agents to use smartphone applications.

The training application was developed to provide simple, direct, and concise statements about the insurance product and TIA. These statements are organized into seven training modules. At the conclusion of each training module, agents can complete a quiz on the content covered in that module. The application also includes a frequently asked questions section that agents can refer to while they are selling. This section includes 17 questions and answers meant to address points of confusion agents had when selling the product in previous sales windows.
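
To make the content structure concrete, here is a minimal sketch of how the application's material could be modeled; the type and field names are illustrative assumptions, not taken from the actual application.

    # Hypothetical content model for the training application:
    # seven modules, each ending in a quiz, plus a 17-entry FAQ section.
    from dataclasses import dataclass, field

    @dataclass
    class Module:
        title: str
        statements: list[str]       # simple, concise statements about IBLI and TIA
        quiz_questions: list[str]   # quiz shown at the end of the module

    @dataclass
    class TrainingContent:
        modules: list[Module] = field(default_factory=list)  # 7 in the study
        faq: dict[str, str] = field(default_factory=dict)    # 17 Q&A pairs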

Training applications were provided to agents in the treatment groups through SMS five days before the beginning of the sales window. SMS messages included download instructions, a link to the application, and, depending on which treatment group agents were in, a message about the application. In the basic treatment group, agents were told they could use the application to help them better understand IBLI and potentially improve their sales. In the incentive treatment group, agents were told the same thing but were also told they could earn phone credit by performing well on the quizzes in the training application. In the competition treatment group, agents were told what agents in the basic group were told and were also told that they could see how their performance on the training application quizzes compared with that of other agents, and that they could receive badges on the application for participating in the training and performing well on the quizzes. Over the next four days, each agent in the treatment groups was contacted by phone by a researcher at IBLI and/or by TIA county coordinators (who are permanent employees) to determine whether they had received the application and been able to download it. In some areas, because of poor network coverage, efforts to reach agents by phone continued after the sales window had begun.

Data Collection:
Data on agent performance was collected from TIA after the sales window had closed and sales had been reconciled. Data on agent quiz performance and training engagement was collected through the training application. These data are transmitted to the application server when agents have cellular network coverage. Unfortunately, some data was not transmitted before phones were collected and wiped by TIA and, as a result, application use data is imperfect. Data on agent retention and subsequent performance was collected from TIA following the sales window after the one in which the experiment was run (January-February 2016).

Data on past agent performance and agent ability, when available, was collected from TIA. In particular, sales data from the previous sales window was collected for agents who had previously sold IBLI with TIA. Lead agents who attended the face-to-face training were also given tests at the beginning and end of the training to determine how much they knew about IBLI and TIA coming in, and how much they learned from the training. These scores were collected from TIA. In addition, TIA has information from some lead agents who submitted resumes and completed surveys on education and previous sales experience.

I had planned a survey of all lead agents and sub-agents to collect information about their educational, familial, and work experience backgrounds. Unfortunately, due to unforeseen events both within the regions included in the study and within the organizations I was working with, these surveys were not implemented.

Randomization:

To avoid agents in the control group learning about the training application or gaining access to its content, and to avoid agents in one treatment group learning about other versions of the application and potentially becoming upset by the differences or becoming aware of a potential study, randomization occurred at the division level, as sketched below.
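
A minimal sketch of this clustered assignment follows. The registry records the randomization method as a coin flip at the division level; the mapping of two flips per division to the four arms, and the division labels, are illustrative assumptions.

    # Hypothetical sketch of division-level (clustered) randomization.
    # Two coin flips per division give one of four equally likely arms;
    # all agents in a division share their division's assignment.
    import random

    ARMS = {
        ("H", "H"): "control",
        ("H", "T"): "basic",
        ("T", "H"): "incentive",
        ("T", "T"): "competition",
    }

    def assign_arm(rng: random.Random) -> str:
        flips = (rng.choice("HT"), rng.choice("HT"))
        return ARMS[flips]

    rng = random.Random(2015)  # seed chosen only to make the sketch reproducible
    divisions = [f"division_{i}" for i in range(1, 55)]  # 54 weather divisions
    assignment = {d: assign_arm(rng) for d in divisions}
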
Experimental Design Details
Randomization Method
Coin flip
Randomization Unit
Weather division (sub-regions within counties where spillovers between agents are expected)
Was the treatment clustered?
Yes

Experiment Characteristics

Sample size: planned number of clusters
54 weather divisions
Sample size: planned number of observations
296 sales agents
Sample size (or number of clusters) by treatment arms
85 sales agents in the control group, 66 in basic training, 69 in training with monetary incentive, 76 in training with competition features
Minimum detectable effect size for main outcomes (accounting for sample design and clustering)
Based on the previous season's sales and the expected benefits from training: 48 agents in each of the control and treatment groups are needed to detect statistically significant differences in premiums collected between treatment and control at the 5% level with 80% power.
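
For reference, the implied minimum detectable effect in standard-deviation units can be recovered with a standard two-sample power calculation; the sketch below ignores the clustering adjustment, which the registry does not detail, so it is an approximation.

    # Two-sample power sketch (no clustering adjustment; the design
    # effect behind the original calculation is not reported here).
    # With n = 48 per arm, a 5% two-sided test, and 80% power, the
    # implied minimum detectable effect is roughly 0.58 standard deviations.
    from statsmodels.stats.power import TTestIndPower

    mde = TTestIndPower().solve_power(nobs1=48, alpha=0.05, power=0.8, ratio=1.0)
    print(f"MDE ~ {mde:.2f} SD")  # ~0.58
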
IRB

Institutional Review Boards (IRBs)

IRB Name
IRB Approval Date
IRB Approval Number

Post-Trial

Post Trial Information

Study Withdrawal

There is information in this trial unavailable to the public.

Intervention

Is the intervention completed?
No
Data Collection Complete
Data Publication

Data Publication

Is public data available?
No

Program Files

Program Files
Reports, Papers & Other Materials

Relevant Paper(s)

Reports & Other Materials