
Monitoring A, While Hoping for A & B: Experimental Evidence from a Multidimensional Task

Last registered on December 14, 2016

Pre-Trial

Trial Information

General Information

Title
Monitoring A, While Hoping for A & B: Experimental Evidence from a Multidimensional Task
RCT ID
AEARCTR-0001848
Initial registration date
December 14, 2016

First published
December 14, 2016, 5:24 PM EST

Locations

Primary Investigator

Affiliation
UC San Diego

Other Primary Investigator(s)

PI Affiliation
Allen Institute for Artificial Intelligence
PI Affiliation
Cornell University
PI Affiliation
UC San Diego
PI Affiliation
International Livestock Research Institute

Additional Trial Information

Status
Completed
Start date
2015-03-13
End date
2015-08-10
Secondary IDs
Abstract
Monitoring workers in order to match rewards to performance is a central justification for the importance of organizations and of management. However, when both inputs and outputs are difficult to measure, this objective is hard to achieve. We consider whether signaling and demonstrating the monitor's productivity on some performance dimensions leads to an improvement in worker performance on all dimensions. Preliminary results from a field experiment run among remote workers performing a multidimensional task in rural Kenya demonstrate that increasing the visibility of monitoring on some dimensions improves performance on most of those dimensions, as well as performance on others. Our evidence is consistent with this monitoring acting as a signal of managerial productivity.
External Link(s)

Registration Citation

Citation
Chebelyon, Eddy et al. 2016. "Monitoring A, While Hoping for A & B: Experimental Evidence from a Multidimensional Task." AEA RCT Registry. December 14. https://doi.org/10.1257/rct.1848-1.0
Former Citation
Chebelyon, Eddy et al. 2016. "Monitoring A, While Hoping for A & B: Experimental Evidence from a Multidimensional Task." AEA RCT Registry. December 14. https://www.socialscienceregistry.org/trials/1848/history/12445
Experimental Details

Interventions

Intervention(s)
To test whether increasing the visibility and activity of monitoring on some task dimensions affected worker performance, we introduced two managerial treatments. Workers assigned the first treatment, hereafter the "managerial activity" treatment, received a call from their manager every five days. During the call, the manager told each worker how many submissions they had made the previous day and how many of those submissions they had classified as containing grass. The manager gave workers no evaluation-based feedback on the quality or quantity of the data received and, in particular, did not tell workers whether the photos were correctly classified as containing grass. Workers assigned the second treatment, hereafter the "monitoring" treatment, also received a call from their manager every five days. The beginning of the call was identical to the call in the managerial activity treatment. However, workers in this group were also told which submissions from the prior day had correctly and incorrectly classified the presence of grass in the photo. In addition, the manager told workers how many submissions from the prior day included poor-quality photos and reminded them that photos should be taken during the day, should not be blurry, and should capture a wide scene. The precise scripts the manager read to workers in the respective treatments are as follows:

Managerial Activity Treatment: "Our records show that yesterday you completed and submitted [xx] surveys and that in [yy] of those surveys you indicated that there was grass."

Monitoring Treatment: "Our records show that yesterday you submitted [xx] surveys and that in [yy] of those surveys you indicated that there was grass. When we examined the photos, we agreed with your grass categorization in [z1] cases but disagreed in [z2] cases. Do you remember why you might have said there was no grass when there was grass, or some grass when there was none in the photo? Our records also show that there were [z3] cases in which the photo was of very poor quality. Please remember that photos must be taken during the day, must not be blurry, and you must stand back from objects so that the photo captures a wide scene."

The manager was instructed not to give any additional feedback or comments on the workers' performance or submissions and to make notes of all questions and comments from the workers during these calls.
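
For concreteness, a minimal sketch (in Python, not the project's actual tooling) of how the monitoring script's bracketed placeholders could be filled from the prior day's records; the function name and arguments are illustrative:

    def monitoring_script(xx, yy, z1, z2, z3):
        # Fill the registered script template with the prior day's counts:
        # xx = submissions, yy = submissions classified as containing grass,
        # z1/z2 = agreed/disputed grass classifications, z3 = poor-quality photos.
        return (
            f"Our records show that yesterday you submitted {xx} surveys and that "
            f"in {yy} of those surveys you indicated that there was grass. When we "
            f"examined the photos, we agreed with your grass categorization in {z1} "
            f"cases but disagreed in {z2} cases. Our records also show that there "
            f"were {z3} cases in which the photo was of very poor quality."
        )

This omits the script's questions and reminders, which do not depend on the counts.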
Intervention Start Date
2015-04-24
Intervention End Date
2015-07-26

Primary Outcomes

Primary Outcomes (end points)
Quality and quantity of worker output. Specifically: number of survey submissions per day, accuracy of vegetation classification, quality of submitted photos, and whether each submission was completed within the permitted time window.
Primary Outcomes (explanation)

Secondary Outcomes

Secondary Outcomes (end points)
Secondary Outcomes (explanation)

Experimental Design

Experimental Design
To test whether active and visible monitoring of easy-to-measure dimensions of output changes worker performance, we ran a field experiment among remote workers in rural Kenya. In this section we describe the population of workers in our sample, our treatment groups, and the implementation of our treatments.

Study Setting and Population:

We ran our experiment on 113 workers hired to collect and transmit information on rangeland conditions in rural areas of Central Kenya over a 149-day period. Workers were located in five divisions: two in Samburu County, two in Isiolo County, and one in Laikipia County. The data collection was part of a collaborative effort between the International Livestock Research Institute in Nairobi and Cornell University in Ithaca, New York, to test the viability of information crowd-sourcing as a means of improving resource allocation among pastoralist communities (see https://www.udiscover.it/applications/pastoralism/ for more information on the purpose of the workers' tasks). Given the difficulty of finding labor willing to work in very remote regions and equipped with the knowledge required to classify local rangelands, workers were hired from the population of pastoralists active in the region.

To collect and transmit information on rangeland conditions, pastoralists were supplied with smartphones equipped with cameras and GPS. A crowd-sourcing mobile application was developed for the purpose of this job, and pastoralists submitted all of their data through the application. To complete a single survey, workers took a photo in the application and then selected whether the rangeland in the photo included any grass, trees, or bushes and, if so, whether each was green or brown in color. In addition, workers indicated the carrying capacity of the rangeland for cattle. Some of the pastoralists hired for this work are not literate or fluent in English, and some are not literate in any language. To ensure that literacy was not required to complete the task, workers completed each classification step by selecting images in the application that corresponded to their responses. Workers could be paid between $0.05 and $0.40 per submission, for up to ten photo-and-classification submissions per day, depending on where the photos were taken; higher rates were paid for photos from more remote locations. To discourage multiple photos of the same rangeland within a short time period, submissions had to be at least one hour apart, and to ensure rangelands would be visible in the photos, submissions had to be recorded between 7 am and 6 pm. Submissions that did not meet these requirements were not paid. Workers received three days of intensive training on the use of the smartphone, the application, and the task. They were employed on this job between March and August of 2015, and none of the workers were fired for any reason.
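
A minimal sketch, in Python, of the payment rules just described; the function and field names are our own, and the boundary handling (e.g., whether a submission at exactly 6 pm qualifies) is an assumption, since the registration does not specify it:

    from datetime import datetime, timedelta

    MAX_PAID_PER_DAY = 10          # at most ten paid submissions per day
    MIN_GAP = timedelta(hours=1)   # submissions had to be an hour apart

    def is_payable(submitted_at: datetime, paid_today: list) -> bool:
        # Submissions had to be recorded between 7 am and 6 pm.
        if not (7 <= submitted_at.hour < 18):
            return False
        # No more than ten submissions were paid per day.
        if len(paid_today) >= MAX_PAID_PER_DAY:
            return False
        # Consecutive submissions had to be at least one hour apart.
        if paid_today and submitted_at - max(paid_today) < MIN_GAP:
            return False
        return True

The per-submission rate ($0.05 to $0.40) varied with the remoteness of the photo's location, which would require GPS data not modeled here.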

There are several dimensions of data submission quality that are relatively easy to verify and several that are quite difficult. In particular, the location of the photo, the time it was taken, whether it had been previously submitted, and which classifications were made are automatically recorded with the data and are therefore easy to verify. Location and time are particularly important to verify because payment is conditional on these characteristics. In contrast, the accuracy of the classifications and the quality of the photo are difficult to verify because of the large quantity of data submitted. Workers may have an incentive to misclassify photos to reduce the time each submission takes (for instance, choosing the first option on each screen of the application is faster than choosing the correct option) or to submit quickly taken, poor-quality photos. In addition, if workers believe that aid to the region would be affected by the crowdsourcing effort, they may have an incentive to classify photos as indicating worse rangeland conditions than exist in reality.

Experimental Interventions:

The two managerial treatments (managerial activity and monitoring) and the precise call scripts read by the manager are described above under Intervention(s).

Study Implementation:

The managerial activity and monitoring treatments were each assigned to 34 workers in the study population; the remaining 45 workers formed a control group and did not receive any phone calls from the local manager. Treatments were randomly assigned within each division to ensure that each division had workers in all three groups. Each day, the manager called all treated workers in a single division, so one division was called per day; with five divisions, each treated worker thus received a call every five days. These calls began 43 days into the study period. To test whether the treatments continued to have effects after the calls stopped, and whether the persistence of those effects depends on the length of the treatment period, we phased the calls out gradually. Specifically, we dropped 25% of the treatment group from the call list at a time, with the first 25% dropped 52 days after the start of the treatments and each subsequent 25% dropped 15 days later. All calls stopped 15 days before the end of the study period.
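
For reference, a minimal sketch of the phase-out schedule implied by these dates; mapping the final quarter's drop to the intervention end date (2015-07-26) is our inference from the statement that all calls stopped 15 days before the end of the study:

    from datetime import date, timedelta

    treatment_start = date(2015, 4, 24)   # calls began 43 days into the study
    intervention_end = date(2015, 7, 26)  # 15 days before the study end

    # Drop 25% of the treated workers at a time: the first quarter 52 days
    # after the calls began, each later quarter 15 days after the previous.
    drop_dates = [treatment_start + timedelta(days=52 + 15 * k) for k in range(4)]
    drop_dates = [min(d, intervention_end) for d in drop_dates]

    for quarter, d in enumerate(drop_dates, start=1):
        print(f"quarter {quarter} dropped on {d}")
        # 2015-06-15, 2015-06-30, 2015-07-15, 2015-07-26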

At the beginning of the study period, workers were surveyed by their local manager. The questionnaire asked about their educational and work backgrounds, their demographics, and their normal phone use. Workers were told that their activities would be used to study the viability of crowdsourcing for improving information on rangeland conditions and related topics, but they did not know that we were studying questions related to worker management or that managerial interventions were being randomly assigned.
Experimental Design Details
Randomization Method
Randomization done in office by a computer
Randomization Unit
Worker (randomization stratified by division)
Was the treatment clustered?
No

Experiment Characteristics

Sample size: planned number of clusters
113 workers
Sample size: planned number of observations
113 workers
Sample size (or number of clusters) by treatment arms
45 workers control, 34 workers managerial activity treatment, 34 workers monitoring treatment
Minimum detectable effect size for main outcomes (accounting for sample design and clustering)
Number of submissions per day: minimum detectable effect of 0.27 submissions per day between treatment and control, based on the pre-intervention mean of 10.04 submissions and standard deviation of 3.23, at the 5% significance level with 80% power.
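
For reference, a minimal sketch of the standard two-sample minimum detectable effect formula; the 94-day intervention length and the treatment of worker-days as independent observations are our assumptions, and the registered figure of 0.27 presumably reflects an effective sample size between the two cases shown once within-worker correlation is accounted for:

    from math import sqrt
    from statistics import NormalDist

    def mde(sd, n_treat, n_control, alpha=0.05, power=0.80):
        # MDE = (z_{1-alpha/2} + z_{power}) * sd * sqrt(1/n1 + 1/n2)
        z = NormalDist().inv_cdf
        return (z(1 - alpha / 2) + z(power)) * sd * sqrt(1 / n_treat + 1 / n_control)

    SD = 3.23   # pre-intervention SD of daily submissions
    DAYS = 94   # approximate intervention length (2015-04-24 to 2015-07-26)

    print(mde(SD, 34, 45))                # one observation per worker: ~2.06
    print(mde(SD, 34 * DAYS, 45 * DAYS))  # independent worker-days: ~0.21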
IRB

Institutional Review Boards (IRBs)

IRB Name
IRB Approval Date
IRB Approval Number

Post-Trial

Post Trial Information

Study Withdrawal

There is information in this trial unavailable to the public.

Intervention

Is the intervention completed?
No
Data Collection Complete
Data Publication

Data Publication

Is public data available?
No

Program Files

Program Files
Reports, Papers & Other Materials

Relevant Paper(s)

Reports & Other Materials