Experimental Design Details
The experimental design consists of the following steps:
1. We recruit a team of 68 workers, each of whom conducts approximately 60 interviews.
- The unit of production is a household interview.
- Production requires the completion of a series of tasks (22 survey modules) by each worker.
2. We use field officer training sessions to collect baseline data on workers, including:
- Measures of general, firm-specific, and task-specific human capital
- Information on workers’ cognitive, non-cognitive, and communication skills
- Information on prior experience of working with co-workers (network connections)
3. We randomize the assignment of workers to 6 teams of 10-11 members, stratifying by their gender and prior experience of working on the (underlying) research study.
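Step 3 can be sketched in code. This is an illustrative implementation only (the study's actual randomization code is not shown in the source); worker attributes and function names are hypothetical, and strata are formed from the two stratification variables named above.

```python
# Illustrative sketch of step 3: stratified random assignment of workers to
# survey teams. Dimensions follow the design (68 workers, 6 teams); the
# data structure and helper names are hypothetical.
import random
from collections import defaultdict

def assign_teams(workers, n_teams, seed=0):
    """Assign workers to teams, stratifying by (gender, prior experience).

    `workers` is a list of dicts with 'id', 'gender', 'experienced' keys.
    Within each stratum, workers are shuffled and dealt round-robin across
    teams, spreading each stratum (and total team size) as evenly as possible.
    """
    rng = random.Random(seed)
    strata = defaultdict(list)
    for w in workers:
        strata[(w["gender"], w["experienced"])].append(w)

    teams = defaultdict(list)
    t = 0  # rotating team pointer carries over across strata to balance sizes
    for key in sorted(strata):
        group = strata[key]
        rng.shuffle(group)
        for w in group:
            teams[t % n_teams].append(w["id"])
            t += 1
    return dict(teams)

# Example with the design's dimensions: 68 workers into 6 teams.
workers = [{"id": i, "gender": i % 2, "experienced": i % 3 == 0}
           for i in range(68)]
teams = assign_teams(workers, n_teams=6)
```

Dealing round-robin within sorted strata ensures team sizes differ by at most one worker while keeping the gender/experience mix comparable across teams.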
4. On each day of the household survey, t=1 to T, we randomize the allocation of survey respondents (i.e., input quality) to workers, within each community/village. This will be carried out as follows:
- On day t-1, a team of mobilizers will visit prospective survey respondents to seek appointments for day t. The mobilizers will classify households into clusters of 2 to 3 households, based on their geographical proximity.
- We will randomize the allocation of these household clusters to the workers assigned to a village (one cluster per worker).
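The within-village allocation in step 4 amounts to a random one-to-one matching of day-t clusters to workers. A minimal sketch, with hypothetical identifiers:

```python
# Illustrative sketch of step 4: within a village, randomly permute the
# day-t household clusters over the workers assigned there, one cluster
# per worker. Identifiers are hypothetical.
import random

def allocate_clusters(cluster_ids, worker_ids, seed=0):
    """Randomly match each worker to one household cluster."""
    assert len(cluster_ids) >= len(worker_ids)
    rng = random.Random(seed)
    shuffled = cluster_ids[:]
    rng.shuffle(shuffled)
    return dict(zip(worker_ids, shuffled))

alloc = allocate_clusters(["c1", "c2", "c3", "c4"], ["w1", "w2", "w3", "w4"])
```

Because the permutation is uniform, each worker faces the same distribution of cluster (and hence respondent) characteristics in expectation, which is what makes the performance measures in the Rationale section comparable across workers.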
5. Finally, we randomly allocate worker-days (N×T) to the inspection treatment (spot checks) using the following procedure:
- Worker-Level: We construct matched quadruplets (‘blocks’) of workers within each survey team to minimize the Mahalanobis distance of covariates that predict worker performance on the survey. These include a cognitive skills index, a communication skills index, and training quiz scores on groups of survey modules (groups QT1, QT2 and C described under ‘Intervention’).
- Worker-Day Level: We randomly assign worker-days to treatment (D_it=1), stratifying by survey day, the worker’s team, and matched block. This procedure ensures that exactly one worker per team-block (and hence three workers per 10- to 11-member team) is assigned to the inspection treatment each day.
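The two-level procedure in step 5 can be sketched as follows. This is an illustrative implementation under stated assumptions: the blocking here uses a simple greedy nearest-neighbour rule on the Mahalanobis distance (the study may use a different matching routine), and all data and helper names are hypothetical.

```python
# Illustrative sketch of step 5: (a) form matched quadruplets within a team
# by greedy nearest-neighbour matching on the Mahalanobis distance over
# performance-predicting covariates; (b) each survey day, assign one worker
# per block to the spot-check treatment (D_it = 1).
import numpy as np

def mahalanobis_matrix(X):
    """Pairwise Mahalanobis distances between rows of covariate matrix X."""
    VI = np.linalg.inv(np.cov(X, rowvar=False))
    diff = X[:, None, :] - X[None, :, :]
    return np.sqrt(np.einsum("ijk,kl,ijl->ij", diff, VI, diff))

def greedy_quadruplets(X):
    """Greedy blocking: repeatedly take an unmatched worker and their three
    nearest unmatched neighbours. A simple stand-in for an optimal
    blocking algorithm."""
    D = mahalanobis_matrix(X)
    unmatched = set(range(len(X)))
    blocks = []
    while len(unmatched) >= 4:
        i = min(unmatched)
        rest = sorted(unmatched - {i}, key=lambda j: D[i, j])
        block = [i] + rest[:3]
        blocks.append(block)
        unmatched -= set(block)
    return blocks

def assign_spot_checks(blocks, n_days, seed=0):
    """For each day and block, draw one worker for inspection (D_it = 1)."""
    rng = np.random.default_rng(seed)
    D_it = {}
    for t in range(n_days):
        for block in blocks:
            treated = rng.choice(block)
            for i in block:
                D_it[(i, t)] = int(i == treated)
    return D_it

# Example: a 12-member team (3 quadruplets) over 5 survey days, with three
# standardized covariates (e.g., cognitive index, communication index, quiz score).
rng = np.random.default_rng(1)
X = rng.normal(size=(12, 3))
blocks = greedy_quadruplets(X)
D = assign_spot_checks(blocks, n_days=5)
```

By construction, exactly one worker per block is treated each day, so within-block comparisons of treated vs. untreated worker-days are balanced on the blocking covariates.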
Rationale:
The random allocation of interview clusters to workers (described in Step 4) helps us to construct worker-specific measures of performance that are independent of the characteristics of respondents interviewed.
The randomization of daily field inspections in Step 5 generates experimental variation in the intensity of supervision that a worker is exposed to over the course of the survey. This will allow us to isolate the impact of supervision on worker performance by comparing workers who have received more vs. less intense supervision at any given point in time. Naturally, field supervisors retain discretion over whom to monitor and assist throughout the survey; the experiment simply alters the probability that a worker is monitored and receives advice/tips from their supervisor on a given day.
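One illustrative estimating equation for this comparison (not taken from the source; all symbols other than D_it are hypothetical) would regress performance on cumulative supervision exposure:

```latex
% Illustrative specification: performance y_{it} of worker i on day t as a
% function of cumulative inspection exposure, with worker covariates X_i
% and survey-day effects \delta_t. Symbols other than D_{it} are hypothetical.
y_{it} = \alpha + \beta \sum_{s=1}^{t} D_{is} + \gamma' X_{i} + \delta_t + \varepsilon_{it}
```

Here β captures how performance varies with the number of inspection days received up to day t, which is randomly assigned by the procedure in Step 5.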
Distinguishing Knowledge Flows from Other Hypotheses:
Supervisors will be provided with checklists and performance reports to aid the provision of structured feedback on specific survey modules during their daily field inspection rounds. The division of the questionnaire into segments for supervisor-led (QT1) or supervisor- and technology-supported (QT2) learning allows us to isolate the impact of supervision on worker performance through knowledge flows alone vs. competing hypotheses, e.g., monitoring or managerial traits. That is, we compare a worker’s average performance on production tasks where knowledge sharing from supervisors to workers is encouraged (QT1 or QT2) vs. production tasks where workers must learn or seek out information of their own volition (QC). Moreover, any differential treatment effects of supervision on survey questions in the QT2 group (vs. the QT1 group) will allow us to estimate the interaction between worker learning aided by supervisors and digital monitoring technologies. Please refer to the ‘Intervention’ section for further information on the checklists (QT1, QT2) and performance reports (QT2).
Spillovers:
To further study whether the effects of supervision are magnified by peer effects (through a ‘social multiplier’), we will examine how a field officer’s performance varies with the average performance of his/her peers. To overcome concerns regarding the endogeneity of peer performance, we will use instrumental variable regression analysis. Specifically, we will instrument for average peer performance using the average supervision received by a field officer’s peers, after controlling for the supervisory inputs received by him/her. Since our randomization design ensures that supervisory inputs to different field officers are randomly assigned, the proposed instrumental variable should satisfy the necessary exclusion restrictions. Finally, by studying the differential effects of peers on questions singled out for supervisor-led coaching (QT1 or QT2), our design can isolate the effects of peers operating through the hypothesized mechanism of social learning, or knowledge flows across co-workers.
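The proposed instrumental-variable strategy amounts to two-stage least squares (2SLS) with own supervision as an included control. The sketch below implements 2SLS directly on simulated data; all variable names are hypothetical, and the simulation only verifies the estimator's mechanics, not the study's identification argument.

```python
# Illustrative 2SLS for the peer-effects design: regress own performance y
# on average peer performance ybar_peer, instrumenting ybar_peer with
# average peer supervision zbar, controlling for own supervision d.
# Data are simulated; in the study these would be worker-day variables.
import numpy as np

def tsls(y, X_endog, X_exog, Z):
    """2SLS: instrument the columns of X_endog with Z, keeping X_exog as
    included exogenous controls. Returns second-stage coefficients ordered
    as [endogenous, exogenous, constant]."""
    n = len(y)
    const = np.ones((n, 1))
    W = np.column_stack([Z, X_exog, const])        # full instrument set
    X = np.column_stack([X_endog, X_exog, const])  # structural regressors
    Xhat = W @ np.linalg.lstsq(W, X, rcond=None)[0]  # first-stage fitted values
    return np.linalg.lstsq(Xhat, y, rcond=None)[0]   # second stage

# Simulated example: peer supervision shifts peer performance (relevance),
# so zbar is a valid instrument by construction here.
rng = np.random.default_rng(0)
n = 2000
d = rng.integers(0, 2, n).astype(float)          # own supervision (random)
zbar = rng.uniform(0, 1, n)                      # avg. supervision of peers
ybar_peer = 0.8 * zbar + rng.normal(0, 0.1, n)   # avg. peer performance
y = 0.5 * ybar_peer + 0.3 * d + rng.normal(0, 0.1, n)
beta = tsls(y, ybar_peer[:, None], d[:, None], zbar[:, None])
```

With this data-generating process the second stage should recover the peer-effect coefficient (0.5) and the own-supervision coefficient (0.3) up to sampling error; in practice one would use a dedicated IV routine with appropriate standard errors.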