Ramping Up Learning-by-Doing: Evidence from a Field (Work) Experiment

Last registered on October 31, 2022

Trial Information

General Information

Title
Ramping Up Learning-by-Doing: Evidence from a Field (Work) Experiment
RCT ID
AEARCTR-0010106
Initial registration date
October 20, 2022

First published
October 31, 2022, 10:38 AM EDT

Locations

Some information in this trial is not available to the public.

Primary Investigator

Affiliation
Northwestern University (Kellogg)

Other Primary Investigator(s)

Additional Trial Information

Status
Ongoing
Start date
2022-09-26
End date
2023-11-30
Secondary IDs
Prior work
This trial does not extend or rely on any prior RCTs.
Abstract
This project examines the process of worker learning-by-doing within a data collection firm in Uganda. We partner with the organization to test whether investments in frontline and digital supervision enhance worker performance and learning by aggregating and transmitting knowledge within the firm. We propose an experimental design to identify the causal effects of knowledge flows from supervisors on individual worker performance. In the experiment, we randomize the intensity of supervision and on-the-job training across workers and production tasks. We further examine whether the effects of one-on-one training from supervisors are magnified by the transmission of knowledge across co-workers.
External Link(s)

Registration Citation

Citation
Sen, Ritwika. 2022. "Ramping Up Learning-by-Doing: Evidence from a Field (Work) Experiment." AEA RCT Registry. October 31. https://doi.org/10.1257/rct.10106-1.0
Sponsors & Partners

Some information in this trial is not available to the public.
Experimental Details

Interventions

Intervention(s)
Intervention Start Date
2022-10-12
Intervention End Date
2022-11-25

Primary Outcomes

Primary Outcomes (end points)
The primary outcomes for this study will be measured in three key domains:
0. Take-Up:
- Measures of supervisor presence during household surveys conducted by the worker: we use interview-level supervisor reports, worker reports, and GPS coordinates.
- Knowledge Flows: we use interview-level field notes from supervisors on i) their assessments of worker performance; and ii) whether they provided workers with any advice/tips upon completion of the interview. Workers also report the receipt of advice, the nature of the advice received, and the mode of communication (if any) with their supervisors regarding each interview.

1. Production Efficiency:
- Time spent on a survey module by each worker. This is measured using time stamps (in milliseconds) in the electronic survey platform.
- Minutes over/under a benchmark range of time to complete each survey module (see the illustrative sketch after this list).

2. Output Quality
- Kullback-Leibler Divergence or Relative Entropy, i.e., the distance between the distribution of responses collected by worker x vs. all other workers. This measure is calculated at the survey-question level (see the illustrative sketch after this list).
- The use of the survey codes ‘Don’t Know’, ‘Refused’, and ‘Other, Specify’ in each survey module.
- Error rates from backchecks of a random sample of respondents.
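
To make the efficiency and relative-entropy outcomes above concrete, here is a minimal sketch in Python. The table layout and the column names (worker_id, question_id, response), as well as the additive smoothing used when a response category is unobserved for one group, are assumptions for illustration and do not come from the registration.

import numpy as np
import pandas as pd

def module_duration_minutes(start_ms: int, end_ms: int) -> float:
    """Convert the platform's millisecond timestamps into minutes on a module."""
    return (end_ms - start_ms) / 60_000

def kl_divergence(worker_counts: pd.Series, other_counts: pd.Series,
                  alpha: float = 0.5) -> float:
    """Relative entropy D(P_x || P_-x) between one worker's response
    distribution and the pooled distribution of all other workers.
    `alpha` adds Laplace smoothing so the measure stays finite when a
    response category is unobserved for one group (an assumption)."""
    categories = worker_counts.index.union(other_counts.index)
    p = worker_counts.reindex(categories, fill_value=0) + alpha
    q = other_counts.reindex(categories, fill_value=0) + alpha
    p, q = p / p.sum(), q / q.sum()
    return float(np.sum(p * np.log(p / q)))

def worker_question_kl(df: pd.DataFrame, worker, question) -> float:
    """KL divergence for one worker on one survey question. Expects one
    row per recorded answer, with hypothetical columns worker_id,
    question_id, and response."""
    sub = df[df["question_id"] == question]
    mine = sub.loc[sub["worker_id"] == worker, "response"].value_counts()
    rest = sub.loc[sub["worker_id"] != worker, "response"].value_counts()
    return kl_divergence(mine, rest)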
Primary Outcomes (explanation)

Secondary Outcomes

Secondary Outcomes (end points)
Secondary Outcomes (explanation)

Experimental Design

Experimental Design
This project examines the performance and on-the-job learning processes of 68 workers recruited to conduct household surveys in the Central Region of Uganda for a (distinct) research project. We partner with the survey implementation company to embed the following experimental design in their data collection activities: (a) supervisors conduct random field inspections (spot checks) of workers during household interviews; and (b) supervisors are encouraged to share tips, tricks, or advice to coach worker performance on a list of (pre-specified) priority survey modules.

The randomization in part (a) will allow us to isolate the impact of supervision on worker performance by comparing workers who receive more vs. less intense supervision at any given point in time. The design in part (b) will further allow us to pin down the effects of supervision on worker performance and learning through coaching alone vs. other competing hypotheses (e.g., monitoring, Hawthorne effects, managerial traits). That is, we will compare a worker’s average performance on survey modules where the aggregation and transmission of knowledge from supervisors is encouraged by design vs. survey modules where workers must learn by doing or seek out information from their co-workers (as they normally would).

We will also shed light on the transmission of information within a firm by testing whether the effects of (randomly allocated) knowledge flows from supervisors to individual workers are magnified through a ‘social multiplier’, i.e., learning from peers.
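
The registration does not state an estimating equation; purely to fix ideas, one illustrative specification consistent with the comparisons above (all notation here is an assumption) is:

% Illustrative specification only; not part of the registration.
% y_{imt}: performance of worker i on module m of interview t.
% Supervised_{it}: 1 if worker i was randomly assigned a supervisor spot
%   check on the day of interview t (worker-day randomization).
% Priority_m: 1 if module m is on the pre-specified coaching list; its
%   main effect is absorbed by the module fixed effect \delta_m.
\[
y_{imt} = \beta_1\, \mathrm{Supervised}_{it}
        + \beta_2\, \mathrm{Supervised}_{it} \times \mathrm{Priority}_m
        + \gamma_i + \delta_m + f(\mathrm{Exp}_{imt}) + \varepsilon_{imt}
\]

Under this reading, \beta_1 captures effects of supervision common to all modules (monitoring, Hawthorne-type responses), while \beta_2 isolates the incremental coaching channel on priority modules; \gamma_i and \delta_m are worker and module fixed effects, and f(\mathrm{Exp}_{imt}) is a flexible function of accumulated experience (the learning curve).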
Experimental Design Details
Not available
Randomization Method
The randomization of workers to daily supervisor checks is done in the office by computer, using Stata.

The random assignment of interviews to workers (i.e., survey enumerators) is carried out in person: workers assigned to a village/community draw lots from an urn containing folded pieces of paper.
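
As a minimal sketch of the worker-day draw: the registration says the office randomization is done in Stata, so the Python analogue below, with hypothetical names and a hypothetical number of daily checks, is for illustration only.

import random

def draw_daily_spot_checks(worker_ids, n_checks, seed):
    """Illustrative worker-day randomization: each day, sample without
    replacement the workers whose interviews a supervisor will spot-check.
    The function name, n_checks, and the seeding scheme are assumptions."""
    rng = random.Random(seed)  # one seed per day, for reproducibility
    return sorted(rng.sample(sorted(worker_ids), n_checks))

# Example with the trial's 68 workers and an assumed 10 checks per day
workers = [f"W{j:02d}" for j in range(1, 69)]
print(draw_daily_spot_checks(workers, n_checks=10, seed=20221012))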
Randomization Unit
The unit of randomization is the worker-day.
Was the treatment clustered?
No

Experiment Characteristics

Sample size: planned number of clusters
N/A
Sample size: planned number of observations
(1) Outcome Measures = 74,800. This consists of 68 workers * 1,100 measures of performance per worker (50 surveys * 22 modules per survey). The variation in worker performance across interviews will be useful for estimating worker learning curves and analyzing causal mechanisms (e.g., the effects of monitoring). For part of the analysis, however, these observations will be aggregated to a sample of approximately 2,040, i.e., 68 workers * 30 measures of performance per worker (10 bins of experience/production time * 3 groups of ‘production tasks’ described in the experimental design).
(2) Baseline Measures of Worker/Manager Characteristics = 105.
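
A quick check of the arithmetic behind these counts, in Python (all numbers are taken from the registration):

workers = 68
surveys_per_worker = 50
modules_per_survey = 22
assert workers * surveys_per_worker * modules_per_survey == 74_800

experience_bins = 10
task_groups = 3
assert workers * experience_bins * task_groups == 2_040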
Sample size (or number of clusters) by treatment arms
The research design does not have a clearly delineated set of treatment arms, as we randomly vary: (1) exposure to supervision across workers and days; and (2) exposure to feedback from supervisors across three groups of production tasks. In other words, all workers are eligible to receive the treatment over the course of the trial but will differ in the intensity of treatment they receive.
Minimum detectable effect size for main outcomes (accounting for sample design and clustering)
IRB

Institutional Review Boards (IRBs)

IRB Name
Human Subjects Committee for Innovations for Poverty Action IRB-USA
IRB Approval Date
2022-09-22
IRB Approval Number
IPA IRB Protocol #15889 (Record Type: Amendment)