Impact of additional e-Training on performance of research interviewers

Last registered on April 13, 2021

Pre-Trial

Trial Information

General Information

Title
Impact of additional e-Training on performance of research interviewers
RCT ID
AEARCTR-0007446
Initial registration date
April 12, 2021

First published
April 13, 2021, 11:32 AM EDT


Locations

Region

Primary Investigator

Affiliation
Innovations for Poverty Action

Other Primary Investigator(s)

PI Affiliation
Innovations for Poverty Action
PI Affiliation
Innovations for Poverty Action

Additional Trial Information

Status
Ongoing
Start date
2021-04-01
End date
2021-12-31
Secondary IDs
Abstract
Confidence in evidence from experimental studies hinges on the quality of primary data collected through in-person interviews. Interviewing for rigorous research requires both knowledge and certain soft skills. Interviewers, or enumerators, who come from varying backgrounds, are traditionally trained via classroom instruction, in which a trainer delivers content and reinforces understanding with role-plays and mock interviews. e-Learning modes are touted as better media for individualized learning of complex ideas; they are also cost-effective and scalable, and they have become indispensable in the post-pandemic world. We propose to study the impact of additional, learner-oriented e-training on the performance of research interviewers, on their knowledge and skills, and on their retention of this learning in the medium term. We will offer intensive e-training to randomly selected interviewers, in addition to their project-specific training, and follow their knowledge and performance through a combination of objective and self-reported measures before, during, and six months after the e-training. This experiment will contribute to the discussion on mitigating measurement error in burgeoning experimental research.
External Link(s)

Registration Citation

Citation
Chaparala, Sohini, Mohammad Ashraful Haque and Sneha Subramanian. 2021. "Impact of additional e-Training on performance of research interviewers." AEA RCT Registry. April 13. https://doi.org/10.1257/rct.7446-1.0
Sponsors & Partners

There is information in this trial unavailable to the public.

Experimental Details

Interventions

Intervention(s)
Treated enumerators will remotely work through a set of training videos in Bangla covering modules on scientific and ethical surveying, digital literacy, communication, and teamwork. The videos cover the training content in depth using multimedia and include embedded quizzes to scaffold learning. Enumerators will engage with this material at their own pace within a 6-hour window and will be compensated for the time they spend on the training.
Intervention Start Date
2021-05-23
Intervention End Date
2021-05-30

Primary Outcomes

Primary Outcomes (end points)
Average daily productivity on interviews;
Average data error rates;
Job-related knowledge;
Soft skills measured by the Big Five personality inventory
Primary Outcomes (explanation)
The 'average daily productivity' outcome, i.e., the number of surveys completed per day, will be adjusted for the project's target number of surveys per day. Error rates will likewise be recoded according to each project's definition of errors.
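
A plausible reading of this adjustment, assuming a simple ratio normalization (the exact formula is not stated in the registration):

    adjusted productivity = (surveys completed per day) / (target surveys per day)

Under this reading, a value of 1 means an interviewer exactly meets the project's daily target.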

Secondary Outcomes

Secondary Outcomes (end points)
Interviewer's perceived competency;
Interviewer's aspirations, expectations, and job satisfaction
Secondary Outcomes (explanation)

Experimental Design

Experimental Design
We propose a simple experimental design in which 300 interviewers will be randomly assigned in equal numbers, stratified by gender, education, and experience, to two groups: (a) Control arm, which will receive regular, synchronous training on projects; and (b) Treatment arm, which will receive regular, synchronous training on projects plus additional e-training with embedded assessments. We will follow all interviewers through pre-intervention and post-intervention web surveys, while also collecting administrative data on their performance during the study.
Experimental Design Details
Randomization Method
The stratified random assignment will be executed in Stata.
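
A minimal sketch of how this assignment might be coded in Stata (the registry's stated software); the dataset and variable names (interviewers.dta, gender, educ, exper) are assumptions for illustration, not taken from the registration:

    * Illustrative stratified randomization: 300 interviewers, 150 per arm
    set seed 20210413                        // fixed seed for reproducibility
    use interviewers, clear                  // hypothetical file: one row per interviewer
    egen stratum = group(gender educ exper)  // strata: gender x education x experience
    gen double u = runiform()                // random draw within each stratum
    sort stratum u
    by stratum: gen byte treatment = (_n <= _N/2)  // first half of each stratum treated
    label define arm 0 "Control" 1 "Treatment"
    label values treatment arm
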
Randomization Unit
The unit of randomization is the individual interviewer.
Was the treatment clustered?
No

Experiment Characteristics

Sample size: planned number of clusters
No clusters are planned
Sample size: planned number of observations
300
Sample size (or number of clusters) by treatment arms
150 treatment, 150 control
Minimum detectable effect size for main outcomes (accounting for sample design and clustering)
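
This field is left blank in the registration. Purely as an illustration, assuming a standardized outcome (sd = 1), two-sided alpha = 0.05, and 80% power (none of which are stated in the registration), a conventional two-sample calculation in Stata would be:

    * Illustrative only: detectable difference for n = 300 (150 per arm)
    power twomeans 0, n(300) power(0.8) sd(1)

Under these hypothetical assumptions, the minimum detectable effect is roughly 0.32 standard deviations; the registered MDE itself remains unreported.
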
IRB

Institutional Review Boards (IRBs)

IRB Name
IRB Approval Date
IRB Approval Number

Post-Trial

Post Trial Information

Study Withdrawal

There is information in this trial unavailable to the public.

Intervention

Is the intervention completed?
No
Data Collection Complete
Data Publication

Data Publication

Is public data available?
No

Program Files

Program Files
Reports, Papers & Other Materials

Relevant Paper(s)

Reports & Other Materials