A Field Experiment on Leveraging Intrinsic Motivation in Public Service Delivery

Last registered on December 06, 2014

Pre-Trial

Trial Information

General Information

Title
A Field Experiment on Leveraging Intrinsic Motivation in Public Service Delivery
RCT ID
AEARCTR-0000568
Initial registration date
December 06, 2014

First published
December 06, 2014, 10:13 PM EST

Last updated
December 06, 2014, 10:14 PM EST

Locations

Region

Primary Investigator

Affiliation
Harvard University

Other Primary Investigator(s)

Additional Trial Information

Status
Ongoing
Start date
2012-11-01
End date
2016-12-31
Secondary IDs
Abstract
Although extrinsic and intrinsic motivation likely jointly explain the effort of many agents engaged in public service delivery, incentives typically appeal only to the former. In the context of a maternal health program in rural Uttar Pradesh, India, we develop a novel, mobile phone-based "intrinsic incentive" technology designed to increase health workers' intrinsic returns to effort. In a randomized field experiment, we test one version designed to have a larger effect on intrinsic utility against another designed to be less powerful. A one-year pilot experiment (March 2013 to February 2014) reveals the following findings. First, demand for both technologies is high; the average user accesses her software application approximately once every day for the duration of the one-year experiment. Second, average treatment effects are nil. Third, the high-powered intrinsic incentive is most effective when it leverages pre-existing intrinsic motivation: it produces a 38.4% increase in performance among the top tercile of intrinsically motivated workers. Finally, this effect appears to be mediated purely by making effort more intrinsically rewarding, and not by other mechanisms such as providing implicit extrinsic incentives.

Based on these hypothesis-generating findings, we conduct a replication experiment as an out-of-sample test in an adjoining population of health workers.
External Link(s)

Registration Citation

Citation
Lee, Scott. 2014. "A Field Experiment on Leveraging Intrinsic Motivation in Public Service Delivery." AEA RCT Registry. December 06. https://doi.org/10.1257/rct.568-2.0
Former Citation
Lee, Scott. 2014. "A Field Experiment on Leveraging Intrinsic Motivation in Public Service Delivery." AEA RCT Registry. December 06. https://www.socialscienceregistry.org/trials/568/history/3211
Sponsors & Partners

There is information in this trial unavailable to the public.
Experimental Details

Interventions

Intervention(s)
We develop a novel mobile phone software application designed to act as an intrinsic incentive: an incentive that increases agents' intrinsic, and only intrinsic, returns to effort. We develop and test two intrinsic incentive interventions: one high-powered and one low-powered. Both are information services. The high-powered intrinsic incentive is a phone-based "self-tracking" tool that allows the health worker to review her performance graphically in real time. In contrast, the low-powered intrinsic incentive provides generic messages irrespective of the agent's effort. Concretely, it is a phone-based "advice and encouragement" tool that offers the health worker messages about her job responsibilities as well as inspirational quotes.
Intervention Start Date
2014-06-01
Intervention End Date
2015-05-31

Primary Outcomes

Primary Outcomes (end points)
(1) Take-up (usage) of experimental intervention
(2) Work performance
(3) Customer satisfaction
(4) Health outcomes
Primary Outcomes (explanation)
(1) Take-up (usage) of experimental intervention -- As measured through user sessions directly logged by application server

(2) Work performance -- Home visits (the main job responsibility of the health workers) will be tracked electronically through the health workers' phone-based electronic health record system. Both initial registrations and follow-up visits will be tracked on a client basis.

(3) Customer satisfaction -- We will survey clients to assess their subjective attitudes toward the health workers.

(4) Health outcomes -- We will survey clients and attempt to obtain facility-based health records to assess subjective health outcomes such as health knowledge, attitudes, and practices, as well as objective outcomes such as delivery outcome.

Secondary Outcomes

Secondary Outcomes (end points)
Secondary Outcomes (explanation)

Experimental Design

Experimental Design
Prior to the launch of the experimental interventions, a baseline survey will be conducted in which every health worker will be interviewed on a one-on-one basis. The baseline data will provide variables for stratified randomization.

We will randomly assign each health worker to one of the two intrinsic incentive treatments. The corresponding software will be installed on each health worker's phone, and the health workers within each treatment will be trained on the use of the software in batches. Trainings will be conducted at the implementing partner's project office in the administrative block's main town. All participants will be told that two different phone-based tools are being piloted.

Once the experiment is launched, care will be taken to preserve the economic intent of the intervention: to alter intrinsic returns to effort without altering real or perceived extrinsic returns to effort. In order to provide soft reminders of the software applications, a generic, automated SMS message will be sent to each health worker weekly. If participants report a malfunction of the phone or software, research staff will respond to the troubleshooting request in a demand-driven manner. Periodic (e.g., quarterly) checks may also be conducted to ensure that hardware and software are functioning correctly. Other than these interactions, there will be no regular monitoring or encouragement of use of the technologies. Midline and endline surveys may be conducted.
Experimental Design Details
Prior to the launch of the experimental interventions, a baseline survey will be conducted in which every health worker will be interviewed. The baseline data will provide variables for stratified randomization.

We will randomly assign each health worker to one of the two intrinsic incentive treatments. The corresponding software will be installed on each health worker's phone, and the health workers within each treatment will be trained on the use of the software in batches. Trainings will be conducted at the implementing partner's project office in the administrative block's main town. All participants will be told that two different phone-based tools are being piloted.

Once the experiment is launched, care will be taken to preserve the economic intent of the intervention: to alter intrinsic returns to effort without altering real or perceived extrinsic returns to effort. In order to provide soft reminders of the software applications, a generic, automated SMS message will be sent to each health worker weekly. If participants report a malfunction of the phone or software, research staff will respond to the troubleshooting request in a demand-driven manner. Periodic (e.g., quarterly) checks may also be conducted to ensure that hardware and software are functioning correctly. Other than these interactions, there will be no regular monitoring or encouragement of use of the technologies. A midline survey will be conducted.
Randomization Method
The minimum-maximum t-statistic re-randomization method (as described in Bruhn and McKenzie, 2009) will be employed. Using the pre-specified stratification variables described below, one thousand candidate draws of treatment assignments will be taken. For each draw, the maximum t-statistic across the balance tests for the stratification variables will be recorded. The draw with the smallest maximum t-statistic will be chosen as the final treatment assignment.
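The re-randomization procedure described above can be sketched as follows. This is a minimal illustration, not the trial's actual code: the function names and toy covariates are assumptions, and within-stratum blocking is omitted for brevity (each candidate draw here is a simple unstratified split).

```python
# Minimal sketch of minimum-maximum t-statistic re-randomization
# (Bruhn and McKenzie, 2009): take many candidate assignments, record
# the worst covariate imbalance of each, keep the most balanced draw.
import random
import statistics

def t_stat(a, b):
    """Two-sample (Welch-form) t-statistic, in absolute value."""
    ma, mb = statistics.mean(a), statistics.mean(b)
    va, vb = statistics.variance(a), statistics.variance(b)
    se = (va / len(a) + vb / len(b)) ** 0.5
    return abs(ma - mb) / se

def min_max_t_assignment(covariates, n_draws=1000, seed=0):
    """covariates: dict mapping variable name -> list of values,
    one per worker. Returns the 0/1 treatment assignment whose
    maximum |t| across all covariates is smallest over n_draws."""
    rng = random.Random(seed)
    n = len(next(iter(covariates.values())))
    half = n // 2
    best, best_max_t = None, float("inf")
    for _ in range(n_draws):
        treat = [1] * half + [0] * (n - half)
        rng.shuffle(treat)
        # Worst imbalance across the stratification variables.
        max_t = max(
            t_stat([x for x, t in zip(vals, treat) if t == 1],
                   [x for x, t in zip(vals, treat) if t == 0])
            for vals in covariates.values()
        )
        if max_t < best_max_t:
            best, best_max_t = treat, max_t
    return best, best_max_t
```

Because the minimum over candidate draws can only improve as more draws are taken, increasing `n_draws` weakly tightens the chosen assignment's worst-case imbalance.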

In the pilot experiment, randomization was stratified by four variables: Hindi literacy, baseline work performance, village population, and subcenter. Literacy was measured directly during the baseline survey through an assessment in which each ASHA (Accredited Social Health Activist, the cadre of health worker studied here) was asked to read a Hindi sentence. Baseline performance was operationalized as the number of client visits reported by each ASHA during her first 90 days of using CommCare. Because ASHAs received CommCare training in batches, the dates corresponding to the 90-day interval vary by ASHA, and because some were trained as late as October 2012, the interval was limited to 90 days to avoid censoring the performance data of these late trainees. ASHAs are assigned by the government to cover specific villages, and the village population variable is based on data from the 2001 Indian national census, the latest publicly available census at the time. Finally, subcenters are the first-level public health facilities in the local health system. They are staffed by auxiliary nurse midwives (ANMs), and each subcenter takes as its catchment area a designated, contiguous cluster of surrounding villages. ASHAs are linked to subcenters by virtue of the villages they work in, and they are loosely supervised by the ANM at the subcenter.

In the replication experiment, randomization will be stratified by six variables: Hindi literacy (as described above), intrinsic motivation, extrinsic motivation, prosocial motivation, and the number of follow-up visits in the previous 4 months and in the previous 12 months. The three motivation variables will be based on psychometric scales administered during the baseline survey; each variable is the unweighted average of the Likert-scale responses to the items constituting its scale. The decision to pre-specify these variables for subgroup analysis is based on findings from the pilot experiment that suggest interaction effects between the treatments and psychometric traits of the health workers. Finally, since one of the main outcome measures is home visits carried out by the health worker, we will stratify on baseline/lagged performance over the previous 4 and 12 months (to capture both short- and medium-term performance).
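The pre-specified scoring above is simple to make concrete: each motivation variable is the unweighted mean of its Likert items, and the tercile split used for subgroup analysis (as in the pilot's top-tercile finding) can be computed by rank. The function names and data below are illustrative assumptions, not the study's code.

```python
def scale_score(item_responses):
    """Unweighted average of the Likert responses (e.g., 1-5)
    to the items constituting one psychometric scale."""
    return sum(item_responses) / len(item_responses)

def tercile_split(scores):
    """Assign each worker a tercile (0 = bottom, 2 = top) by rank
    of her scale score, for pre-specified subgroup analysis."""
    order = sorted(range(len(scores)), key=lambda i: scores[i])
    cut = len(scores) / 3  # workers per tercile (may be fractional)
    terciles = [0] * len(scores)
    for rank, i in enumerate(order):
        terciles[i] = min(2, int(rank // cut))
    return terciles
```

For example, with nine workers the split places exactly three in each tercile; with sample sizes not divisible by three, the rank-based cut distributes the remainder across adjacent terciles.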
Randomization Unit
The individual health worker will be the unit of randomization.

Analysis of the pilot experiment demonstrated no evidence of informational or behavioral spillovers at the health worker level. As such, the replication experiment will randomize treatments at the health worker level.
Was the treatment clustered?
No

Experiment Characteristics

Sample size: planned number of clusters
N/A
Sample size: planned number of observations
Pilot experiment: 110 health workers; Replication experiment: 145 health workers
Sample size (or number of clusters) by treatment arms
Pilot experiment: 55 health workers high-powered incentive, 55 health workers low-powered incentive
Replication experiment: 73 health workers high-powered incentive, 72 health workers low-powered incentive (or vice versa)
Minimum detectable effect size for main outcomes (accounting for sample design and clustering)
IRB

Institutional Review Boards (IRBs)

IRB Name
Harvard University
IRB Approval Date
2012-07-06
IRB Approval Number
22559
IRB Name
Maulana Azad Medical College
IRB Approval Date
2014-10-12
IRB Approval Number
F.1/IEC/MAMC/(32)/4/2012/230

Post-Trial

Post Trial Information

Study Withdrawal

There is information in this trial unavailable to the public.

Intervention

Is the intervention completed?
No
Data Collection Complete
Data Publication

Data Publication

Is public data available?
No

Program Files

Program Files
Reports, Papers & Other Materials

Relevant Paper(s)

Reports & Other Materials