Experimental Design
To address my research question, I ran a randomized controlled trial in collaboration with the International Livestock Research Institute (ILRI) and Takaful Insurance of Africa (TIA). ILRI is a not-for-profit research institute that conducts research on a range of issues related to livestock. Among its projects is an Index Based Livestock Insurance product (IBLI) developed and managed by a team of economists based in Nairobi.
TIA sells IBLI to Kenyan pastoralists using a sales-agent model in which agents are selected in part based on their literacy and their relationship to their local community. They are tasked with selling IBLI within their communities during a two-month sales window. Sales windows occur twice a year (from January to February, and from August to September), and there is substantial agent turnover between windows. In the sales window covered by the experiment, TIA was providing IBLI in six counties in northern Kenya: Garissa, Isiolo, Mandera, Marsabit, Moyale, and Wajir. Each county was divided into divisions (ranging from 4 to 16 per county). Each division was assigned one lead agent and several sub-agents. Lead agents are tasked with finding and training sub-agents, in addition to selling the product. Lead agents attend a short training camp where they learn about IBLI and receive tips on how to sell it.
Treatments:
To test whether short-term workers are willing to invest in firm-specific knowledge, and how the provision of this knowledge affects firm performance, I randomly assigned sales agents into one of three treatment groups or a control group. Agents in all three treatment groups received access to a mobile training application that provides training modules, quizzes, and frequently asked questions on TIA and the insurance product they are selling. One treatment group was sent a version of the training application that offers agents the opportunity to be awarded phone credit for good performance on the module quizzes. Going forward, I refer to this as the incentive group. The second treatment group was sent a version of the training application that includes game features (specifically, a leader board and badges for participation and good performance on the module quizzes). Going forward, I refer to this as the competition group. The third treatment group did not receive any additional content on their training application. Going forward, I refer to this as the basic group. Agents in the control group did not receive access to the training application.
Agents in the incentive group were awarded KSH 250 in phone credit for each quiz on which they got at most one question wrong. Although agents can take each quiz as many times as they like, they can be awarded phone credit only once per quiz, and only for their first attempt at the quiz. Therefore, these agents could earn up to KSH 1,750 from the application (about 17 USD at the time of the study). Agents in the competition group had access to a leader board on their applications that showed them how they ranked relative to all other agents (including those not in the competition group). Only agent numbers were listed on the leader board so as to protect agents' privacy. Agents in this group could also receive digital badges for completing each quiz and, separately, for performing well on each quiz (i.e., getting at most one question wrong).
Training Provision:
Training was provided through an Android application developed by contracted computer programmers in a collaborative effort between myself, TIA, and IBLI. TIA began providing all agents with Android smartphones in the January-February 2015 sales window to allow agents to record all their sales on a sales transaction application developed for TIA IBLI sales agents. This roll-out confirmed the willingness and ability of agents to use smartphone applications.
The training application was developed to provide simple, direct, and concise statements about the insurance product and TIA. These statements are each organized under one of seven training modules. At the conclusion of each training module, agents can complete a quiz on the content covered in the corresponding module. The application also includes a frequently asked questions section that agents can refer to while they are selling. This section includes 17 questions and answers, which are meant to address points of confusion that agents in previous sales windows encountered when selling the product.
Training applications were provided to agents in the treatment groups through SMS five days before the beginning of the sales window. SMS messages included download instructions, a link to the application, and, depending on which treatment group agents were in, a message about the application. In the basic treatment group, agents were told they could use this application to help them better understand IBLI and potentially improve their sales. In the incentive treatment group, agents were told the same thing but were also told they could earn phone credit by performing well on the quizzes in the training application. In the competition treatment group, agents were told what agents in the basic group were told and were also told that they could see how their performance on the training application quizzes compared with that of other agents, and that they could receive badges on the application for participating in the training and performing well on the quizzes. Over the next four days, each agent in the treatment groups was contacted by phone by a researcher at IBLI and/or by TIA county coordinators (permanent employees) to determine whether they had received the application and been able to download it. In some cases, because of poor network coverage in some areas, efforts to reach agents by phone continued after the sales window had begun.
Data Collection:
Data on agent performance was collected from TIA after the sales window had closed and sales had been reconciled. Data on agent quiz performance and training engagement was collected through the training application. This data is transmitted to the application server when agents have cell phone network coverage. Unfortunately, some data was not transmitted before phones were collected and wiped by TIA and, as a result, application use data is imperfect. Data on agent retention and subsequent performance was collected from TIA following the sales window after the one in which the experiment was run (January-February 2016).
Data on past agent performance and, when available, agent ability was collected from TIA. In particular, sales data from the previous sales window was collected for agents who had previously sold IBLI with TIA. Lead agents who attended face-to-face training were also given tests at the beginning and end of training to determine how much they knew about IBLI and TIA coming into the training, and how much they learned from the training. These scores were collected from TIA. In addition, TIA has information from some lead agents who submitted resumes and completed surveys on education and previous sales experience.
I had planned a survey of all lead and sub-agents to collect information about their educational, familial, and work experience backgrounds. Unfortunately, due to unforeseen events both within the regions included in the study and within the organizations I was working with, these surveys were not implemented.
Randomization:
Randomization occurred at the division level. This was done to prevent agents in the control group from learning about the training application or gaining access to its content, and to prevent agents in one treatment group from learning about other versions of the application, which could have upset them or made them aware of a potential study.