Ramping Up Learning-by-Doing: Evidence from a Field (Work) Experiment

Last registered on October 31, 2022

Pre-Trial

Trial Information

General Information

Title
Ramping Up Learning-by-Doing: Evidence from a Field (Work) Experiment
RCT ID
AEARCTR-0010106
Initial registration date
October 20, 2022

First published
October 31, 2022, 10:38 AM EDT

Locations

Primary Investigator

Affiliation
Northwestern University (Kellogg)

Other Primary Investigator(s)

Additional Trial Information

Status
Ongoing
Start date
2022-09-26
End date
2023-11-30
Secondary IDs
Prior work
This trial does not extend or rely on any prior RCTs.
Abstract
This project examines the process of worker learning-by-doing within a data collection firm in Uganda. We partner with the organization to test if investments in frontline and digital supervision enhance worker performance and learning by aggregating and transmitting knowledge within the firm. We propose an experimental design to identify the causal effects of knowledge flows from supervisors on individual worker performance. In the experiment, we randomize the intensity of supervision and on-the-job training across workers and production tasks. We further examine if the effects of one-on-one training from supervisors are magnified by the transmission of knowledge across co-workers.
External Link(s)

Registration Citation

Citation
Sen, Ritwika. 2022. "Ramping Up Learning-by-Doing: Evidence from a Field (Work) Experiment." AEA RCT Registry. October 31. https://doi.org/10.1257/rct.10106-1.0
Sponsors & Partners

There is information in this trial unavailable to the public.
Experimental Details

Interventions

Intervention(s)
Intervention Start Date
2022-10-12
Intervention End Date
2022-11-25

Primary Outcomes

Primary Outcomes (end points)
The primary outcomes for this study will be measured under three key domains:
0. Take Up:
- Measures of supervisor presence during household surveys conducted by the worker: we use interview-level supervisor reports, worker reports, and GPS coordinates.
- Knowledge Flows: we use interview-level field notes from supervisors on (i) their assessments of worker performance and (ii) whether they provided workers with any advice/tips upon completion of the interview. Workers also report the receipt of advice, the nature of the advice received, and the mode of communication (if any) with their supervisors regarding each interview.

1. Production Efficiency:
- Time spent on a survey module by each worker. This is measured using time stamps (in milliseconds) in the electronic survey platform.
- Minutes over/under a benchmark range of time to complete each survey module.

2. Output Quality
- Kullback-Leibler Divergence, or Relative Entropy, i.e., the distance between the distribution of responses collected by worker x vs. all other workers. This measure is calculated at the survey-question level (see the sketch after this list).
- The use of the survey codes ‘Don’t Know’, ‘Refused’, and ‘Other, Specify’ in each survey module.
- Error rates from backchecks of a random sample of respondents.
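
As a concrete illustration, the following is a minimal sketch of how the benchmark-deviation and relative-entropy measures above could be computed from interview-level data. The column names (start_ms, end_ms, module) and the benchmark mapping are hypothetical, not taken from the study's actual data pipeline.

```python
import numpy as np
import pandas as pd

def minutes_over_benchmark(df: pd.DataFrame, benchmarks: dict) -> np.ndarray:
    """Signed minutes over/under a module's benchmark range.

    `df` has one row per (worker, interview, module) with platform time
    stamps in milliseconds; `benchmarks` maps module -> (lo, hi) in minutes.
    Durations inside the benchmark range are coded as zero deviation.
    """
    duration = (df["end_ms"] - df["start_ms"]) / 60_000.0  # ms -> minutes
    lo = df["module"].map(lambda m: benchmarks[m][0])
    hi = df["module"].map(lambda m: benchmarks[m][1])
    return np.where(duration > hi, duration - hi,
                    np.where(duration < lo, duration - lo, 0.0))

def kl_divergence(worker_counts: np.ndarray, others_counts: np.ndarray,
                  eps: float = 1e-9) -> float:
    """Relative entropy D(P_worker || P_others) for one survey question,
    comparing a worker's response distribution against the pooled
    distribution from all other workers, with additive smoothing to
    avoid zero cells."""
    p = (worker_counts + eps) / (worker_counts + eps).sum()
    q = (others_counts + eps) / (others_counts + eps).sum()
    return float(np.sum(p * np.log(p / q)))
```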
Primary Outcomes (explanation)

Secondary Outcomes

Secondary Outcomes (end points)
Secondary Outcomes (explanation)

Experimental Design

Experimental Design
This project examines the performance and on-the-job learning processes of 68 workers who are recruited to conduct household surveys in the Central Region of Uganda for a (distinct) research project. We partner with the survey implementation company to implement the following experimental design as part of their data collection activities: (a) supervisors conduct random field inspections (spot checks) of workers during household interviews; and (b) supervisors are encouraged to share tips, tricks, or advice to help coach worker performance on a list of (pre-specified) priority survey modules. The randomization in part (a) will allow us to isolate the impact of supervision on worker performance by comparing workers who receive more vs. less intense supervision at any given point in time. The design in part (b) will further allow us to pin down the effects of supervision on worker performance and learning through coaching alone vs. other competing hypotheses, e.g., monitoring, Hawthorne effects, managerial traits. That is, we will compare a worker’s average performance on survey modules where the aggregation and transmission of knowledge from supervisors is encouraged by design vs. survey modules where workers must learn by doing or seek out information from their co-workers (as they normally would). We will also shed light on the transmission of information within the firm by testing if the effects of (randomly allocated) knowledge flows from supervisors to individual workers are magnified through a ‘social multiplier’, i.e., learning from peers.
Experimental Design Details
The experimental design consists of the following steps:
1. We recruit a team of 68 workers to conduct approximately 60 interviews each.
- The unit of production is a household interview.
- Production requires the completion of a series of tasks (22 survey modules) by each worker.

2. We use field officer training sessions to collect baseline data on workers, including:
- Measures of general, firm-specific, and task-specific human capital
- Information on workers’ cognitive, non-cognitive and communication skills
- Information on prior experience of working with co-workers (network connections)

3. We randomize the assignment of workers to 6 teams of 10-11 members, stratifying by their gender and prior experience of working on the (underlying) research study.

4. On each day of the household survey, t=1 to T, we randomize the allocation of survey respondents (i.e., input quality) to workers within each community/village. This will be carried out as follows (see the sketch after this list):
- On day t-1, a team of mobilizers will visit prospective survey respondents to seek appointments for day t. The mobilizers will classify households into clusters of 2 to 3 households, based on their geographical proximity.
- We will randomize the allocation of these household clusters to the workers assigned to a village (1 cluster per worker).
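
A minimal sketch of the within-village draw, assuming pre-formed clusters and hypothetical worker/household identifiers (in the field this draw is done with folded pieces of paper in an urn; see ‘Randomization Method’ below):

```python
import random

def assign_clusters(workers, clusters, seed=None):
    """Randomly allocate household clusters to the workers covering a
    village, one cluster per worker."""
    rng = random.Random(seed)
    shuffled = list(clusters)  # copy so the input list is untouched
    rng.shuffle(shuffled)
    return dict(zip(workers, shuffled))  # worker -> cluster of 2-3 households

# Example: three workers draw from three pre-formed clusters.
assignment = assign_clusters(
    workers=["W01", "W02", "W03"],
    clusters=[("hh1", "hh2"), ("hh3", "hh4", "hh5"), ("hh6", "hh7")],
    seed=20221012,
)
```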

5. Finally, we randomly allocate worker-days (NxT) to the inspection treatment (spot checks) using the following two-step procedure (sketched after this list):
- Worker-Level: We construct matched quadruplets (‘blocks’) of workers within each survey team to minimize the Mahalanobis distance of covariates that predict worker performance on the survey. These include a cognitive skills index, a communication skills index, and training quiz scores on groups of survey modules (groups QT1, QT2, and QC described under ‘Intervention’).
- Worker-Day Level: We randomly assign worker-days to treatment (D_it = 1), stratifying by the survey day, the worker’s team, and the matched block. This procedure ensures that one worker per team-block (and thus three workers per 10 to 11-member team) is assigned to the inspection treatment every day.
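
A minimal sketch of this two-step procedure, assuming a worker-by-covariate matrix X for one team. The greedy grouping is illustrative only and need not minimize the total within-block distance; all names are hypothetical.

```python
import numpy as np

def mahalanobis_blocks(X: np.ndarray, block_size: int = 4) -> list:
    """Greedily form matched quadruplets ('blocks'): seed each block with
    an unassigned worker and add their nearest neighbours under the
    Mahalanobis metric on the covariates in X (rows = workers)."""
    VI = np.linalg.pinv(np.cov(X, rowvar=False))  # inverse covariance
    diff = X[:, None, :] - X[None, :, :]
    D2 = np.einsum("ijk,kl,ijl->ij", diff, VI, diff)
    D = np.sqrt(np.maximum(D2, 0.0))  # guard against rounding below zero
    unassigned, blocks = set(range(len(X))), []
    while unassigned:
        seed = min(unassigned)
        members = sorted(unassigned, key=lambda j: D[seed, j])[:block_size]
        blocks.append(members)
        unassigned -= set(members)
    return blocks

def daily_inspections(blocks: list, rng: np.random.Generator) -> set:
    """Draw one worker per matched block for today's spot checks (D_it = 1)."""
    return {int(rng.choice(block)) for block in blocks}

# Example: 11 workers in one team, 3 baseline covariates -> blocks of
# 4, 4, and 3, so three workers per team are inspected each day.
X = np.random.default_rng(0).normal(size=(11, 3))
blocks = mahalanobis_blocks(X)
treated_today = daily_inspections(blocks, np.random.default_rng(2022))
```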

Rationale:
The random allocation of interview clusters to workers (described in Step 4) helps us to construct worker-specific measures of performance that are independent of the characteristics of respondents interviewed.

The randomization of daily field inspections in Step 5 helps to generate experimental variation in the intensity of supervision that a worker is exposed to throughout the survey. This will allow us to isolate the impact of supervision on the performance of workers by comparing those who have received more vs. less intense supervision at any given point in time. Naturally, field supervisors retain their discretion on whom to monitor and assist throughout the survey. The experiment simply alters the probability that a worker is monitored and receives advice/tips from their supervisor on a given day.

Distinguishing Knowledge Flows from Other Hypotheses:
Supervisors will be provided with checklists and performance reports to aid the provision of structured feedback on specific survey modules during their daily field inspection rounds. The division of the questionnaire into different segments for supervisor-led (QT1) or supervisor and technology-supported (QT2) learning allows us to isolate the impact of supervision on worker performance through knowledge flows alone vs. other competing hypotheses, e.g., monitoring, managerial traits. That is, we compare a worker’s average performance on production tasks where knowledge sharing from supervisors to workers is encouraged (QT1 or QT2) vs. production tasks where workers must learn or seek out information of their own volition (QC); an estimation sketch follows below. Moreover, any differential treatment effects of supervision on survey questions in the QT2 group (vs. the QT1 group) will allow us to estimate the interaction between worker learning aided by supervisors and digital monitoring technologies. Please refer to the ‘Intervention’ section for further information on the checklists (QT1, QT2) and performance reports (QT2).
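
One way to implement this comparison is a fixed-effects regression with treatment-by-module-group interactions. The sketch below is illustrative, not the registered specification; the data frame `df` and the column names (perf, inspected, grp, worker, day) are hypothetical.

```python
import statsmodels.formula.api as smf

# Long-format panel: one row per worker-day-module observation.
# `perf` is a performance measure, `inspected` the worker-day spot-check
# indicator D_it, and `grp` one of {"QC", "QT1", "QT2"}.
fit = smf.ols(
    "perf ~ inspected * C(grp, Treatment('QC')) + C(worker) + C(day)",
    data=df,
).fit(cov_type="cluster", cov_kwds={"groups": df["worker"]})

# The QT1 interaction term captures the extra effect of supervision on
# coached modules relative to the control modules (QC); the QT2
# interaction adds any digital-monitoring complementarity.
print(fit.summary())
```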

Spillovers:
To further study whether the effects of supervision are magnified by peer effects (through a ‘social multiplier’), we will examine how a field officer’s performance varies with the average performance of his/her peers. To overcome concerns regarding the endogeneity of peer performance, we will use instrumental variables regression. Specifically, we will instrument for average peer performance using the average supervision received by a field officer’s peers, after controlling for the supervisory inputs received by the officer him/herself (see the sketch below). Since our randomization design ensures that supervisory inputs to different field officers are randomly assigned, the proposed instrument will satisfy the necessary exclusion restrictions. Finally, by studying the differential effects of peers on questions singled out for supervisor-led coaching (QT1 or QT2), our design can isolate the effects of peers operating through the hypothesized mechanism of social learning, i.e., knowledge flows across co-workers.
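
A minimal two-stage least squares sketch of this IV strategy, with a hypothetical data frame `df` and column names (perf, peer_perf, own_sup, peer_sup). Chaining two OLS stages by hand does not produce corrected standard errors, so a packaged IV estimator would be used in practice.

```python
import statsmodels.formula.api as smf

# Worker-day panel: `peer_perf` is the leave-one-out mean performance of a
# worker's teammates, `peer_sup` their mean (randomly assigned) supervision
# -- the instrument -- and `own_sup` the worker's own supervision.
first = smf.ols("peer_perf ~ peer_sup + own_sup", data=df).fit()
df["peer_perf_hat"] = first.fittedvalues

# Second stage: the coefficient on peer_perf_hat is the 2SLS estimate of
# the social multiplier (note: these standard errors are not 2SLS-corrected).
second = smf.ols("perf ~ peer_perf_hat + own_sup", data=df).fit()
```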
Randomization Method
The randomization of workers to daily supervisor checks is done in the office by a computer, using the software Stata.

The random assignment of interviews to workers (i.e., survey enumerators) is carried out in-person: workers assigned to a village/community draw from an urn with folded pieces of paper.
Randomization Unit
The unit of randomization is the worker-day.
Was the treatment clustered?
No

Experiment Characteristics

Sample size: planned number of clusters
N/A
Sample size: planned number of observations
(1) Outcome Measures = 74,800: 68 workers * 1,100 measures of performance per worker (50 surveys * 22 modules per survey). The variation in worker performance across interviews in (1) will be useful for the estimation of worker learning curves and the analysis of causal mechanisms (e.g., the effects of monitoring). For part of the analysis, however, these observations will be aggregated to a sample size of approximately 2,040, i.e., 68 workers * 30 measures of performance per worker (10 bins of experience/production time * 3 groups of ‘production tasks’ described in the experimental design).
(2) Baseline Measures of Worker/Manager Characteristics = 105.
Sample size (or number of clusters) by treatment arms
The research design does not have a clearly delineated set of treatment arms, as we randomly vary: (1) the exposure to supervision across workers and days; (2) exposure to feedback from supervisors across three groups of production tasks. In other words, everyone is eligible to receive the treatment over the course of the trial but will differ in the intensity of treatment that they receive.
Minimum detectable effect size for main outcomes (accounting for sample design and clustering)
IRB

Institutional Review Boards (IRBs)

IRB Name
Human Subjects Committee for Innovations for Poverty Action IRB-USA
IRB Approval Date
2022-09-22
IRB Approval Number
IPA IRB Protocol #: 15889 (Record Type: Amendment)

Post-Trial

Post Trial Information

Study Withdrawal

There is information in this trial unavailable to the public.

Intervention

Is the intervention completed?
No
Data Collection Complete
Data Publication

Data Publication

Is public data available?
No

Program Files

Program Files
Reports, Papers & Other Materials

Relevant Paper(s)

Reports & Other Materials