Who Gets the Job Referral? Evidence from a Social Networks Experiment
Last registered on August 03, 2016

Pre-Trial

Trial Information
General Information
Title
Who Gets the Job Referral? Evidence from a Social Networks Experiment
RCT ID
AEARCTR-0001127
Initial registration date
August 03, 2016
Last updated
August 03, 2016 5:51 AM EDT
Location(s)
Region
Primary Investigator
Affiliation
Northwestern University
Other Primary Investigator(s)
PI Affiliation
University of California, Berkeley
Additional Trial Information
Status
Completed
Start date
2009-02-01
End date
2009-12-31
Secondary IDs
Abstract
We use recruitment into a laboratory experiment in Kolkata, India to analyze how job networks select individuals for employment opportunities. We present evidence that individuals face a tradeoff between choosing the most qualified individual for the job and the individual who is ideal from the perspective of their social network. The experiment allows randomly selected subjects to refer members of their social networks to subsequent rounds of the experiment and varies the incentive schemes offered to these participants. We find that when faced with performance pay, individuals are more likely to refer co-workers and less likely to refer family members. High ability participants who are offered performance pay recruit referrals who perform significantly better on a cognitive ability task and also prove to be more reliable as evidenced by their choices in the trust game and performance on an effort task.
External Link(s)
Registration Citation
Citation
Beaman, Lori and Jeremy Magruder. 2016. "Who Gets the Job Referral? Evidence from a Social Networks Experiment." AEA RCT Registry. August 03. https://www.socialscienceregistry.org/trials/1127/history/9905
Sponsors & Partners

Some documents in this trial are not available to the public.
Experimental Details
Interventions
Intervention(s)
This study examined the job referral process in a laboratory setting. A temporary laboratory was set up in Kolkata, India, and a random sample of households was invited to participate through door-to-door solicitation. Sampled households were offered a fixed wage if they sent an adult male household member to the nearby study site. Upon arrival at the study site, individuals were asked to complete a survey that included questions on demographics, labor force participation, social networks, and two measures of cognitive ability: the digit span test and Raven's matrices. Individuals were also asked to complete one of two (randomly chosen) tasks: one emphasized cognitive ability while the other emphasized pure effort. At the end of the experiment, individuals were paid Rs. 135 (US$3) for their participation. They were also invited to return with a male friend or family member between 18 and 60 years of age and were offered payment for making the referral. While everyone was asked to refer a friend who would be highly skilled at the job, payment was randomized along two dimensions: the amount of fixed pay and whether the pay could depend on the referral's performance. There were five treatment groups in total:

1) Low-stakes performance pay (with fixed component of Rs. 60 and performance component of Rs. 0-20)
2) High-stakes performance pay (with fixed component of Rs. 60 and performance component of Rs. 0-50)
3) Very low fixed pay (with fixed component of Rs. 60 and no performance component)
4) Low fixed pay (with fixed component of Rs. 80 and no performance component)
5) High fixed pay (with fixed component of Rs. 110 and no performance component)
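
The registry gives the payment ranges above but not the rule mapping a referral's performance to the performance component. The sketch below is a minimal illustration in Python: the linear mapping from puzzles solved to the performance component, the arm labels, and the function name op_referral_payment are all assumptions for illustration, not details taken from the study.

```python
# Hypothetical sketch of the OP payment schedule under the five treatment arms.
# Only the fixed amounts and performance-pay ranges come from the registry;
# the mapping from referral performance to the performance component is assumed
# (linear in the number of puzzles solved, 0-4) purely for illustration.

TREATMENTS = {
    "low_stakes_perf":  {"fixed": 60,  "perf_max": 20},
    "high_stakes_perf": {"fixed": 60,  "perf_max": 50},
    "very_low_fixed":   {"fixed": 60,  "perf_max": 0},
    "low_fixed":        {"fixed": 80,  "perf_max": 0},
    "high_fixed":       {"fixed": 110, "perf_max": 0},
}

def op_referral_payment(arm: str, puzzles_solved: int, total_puzzles: int = 4) -> int:
    """Return the OP's payment (in Rs.) for bringing a referral.

    The fixed component is always paid; the performance component (if any)
    is assumed to scale linearly with the referral's puzzles solved.
    """
    t = TREATMENTS[arm]
    perf = round(t["perf_max"] * puzzles_solved / total_puzzles)
    return t["fixed"] + perf

# Example: an OP in the high-stakes arm whose referral solves 3 of 4 puzzles
# would receive Rs. 60 + Rs. 38 = Rs. 98 under this illustrative mapping.
print(op_referral_payment("high_stakes_perf", 3))  # 98
```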
Intervention Start Date
2009-07-01
Intervention End Date
2009-12-31
Primary Outcomes
Primary Outcomes (end points)
- Effect of different types of financial incentives (fixed fees vs. performance pay) on the choice to make a referral
- Referrals' performance (measured through performance on cognitive task).
Primary Outcomes (explanation)
The performance measure takes into account three aspects of performance: the time spent on each puzzle, whether the participant ultimately solved the puzzle, and the number of incorrect attempts. Participants were asked to complete four puzzles; the performance score is the average over all four puzzles and is standardized using the mean and standard deviation of the entire sample of original participants (OPs).
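
As a rough illustration of this construction, the sketch below averages four per-puzzle scores and standardizes against the OP sample. The within-puzzle scoring rule (how time, completion, and incorrect attempts are weighted) is not specified in the registry, so the puzzle_score function and its weights are assumptions for illustration only.

```python
import numpy as np

def puzzle_score(time_spent: float, solved: bool, wrong_attempts: int,
                 time_limit: float = 300.0) -> float:
    """Hypothetical raw score for a single puzzle (higher is better).

    The weighting of time, completion, and incorrect attempts is assumed;
    only the three ingredients themselves come from the registry text.
    """
    return (1.0 if solved else 0.0) - time_spent / time_limit - 0.25 * wrong_attempts

def standardized_scores(all_puzzle_scores: np.ndarray) -> np.ndarray:
    """Average each OP's four puzzle scores, then standardize using the
    mean and standard deviation of the full OP sample."""
    person_means = all_puzzle_scores.mean(axis=1)   # average over the 4 puzzles
    return (person_means - person_means.mean()) / person_means.std()

# all_puzzle_scores: one row per OP, one column per puzzle (shape: n_ops x 4)
```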
Secondary Outcomes
Secondary Outcomes (end points)
Secondary Outcomes (explanation)
Experimental Design
Experimental Design
An initial pool of subjects is asked to refer members of their social networks to participate in the experiment in subsequent rounds. We go door to door to recruit initial participants. These households are offered a fixed wage if they send an adult male household member to the study site, which is located nearby. Participants are assigned an appointment time, requested to be available for two hours of work, and provided with a single coupon to ensure that only one male per household attends. Upon arrival at the study site, individuals complete a survey that includes questions on demographics, labor force participation, social networks, and two measures of cognitive ability: the digit span test and Raven's matrices. This initial group (original participants, or OPs) faces an experimental treatment randomized along several dimensions. OPs are asked to complete one of two (randomly chosen) tasks: one emphasizes cognitive ability while the other emphasizes pure effort. The majority of our sample (including all high-stakes treatment groups) was assigned to the cognitive task, which we focus on in the published paper.

In the cognitive task, participants are asked to arrange a group of colored swatches according to a set of logical rules. A supervisor explains the rules to each participant, who is given a maximum time limit to complete each puzzle. When the participant believes he has solved a puzzle, he signals the supervisor, who either lets him continue to the next puzzle if the solution is correct or points out the error and tells him to try again, allowing up to three incorrect attempts per puzzle.

At the end of the experiment, individuals are paid Rs. 135 for their participation. They are also offered payment to return with a male friend or family member (a referral) between the ages of 18 and 60. All OPs are specifically asked to return with a referral "who would be good at the task you just completed." A second randomization determines the amount of payment the OP will receive when he returns with a referral. Payment varies along two dimensions: the amount of pay and whether pay may depend on the referral's performance. Participants are assured that their payment will be at least a minimal threshold and are given the specific terms of the payment arrangement. OPs are informed of the offered payment immediately before leaving the laboratory. All participants are asked to make an appointment to return with a referral within a designated three-day window.
Experimental Design Details
Randomization Method
Real-time randomization was done at the individual level. After each original participant completed the survey, a staff member drew a token from a bag that indicated the treatment assignment. Tokens were replaced and the bag was remixed between draws to guarantee that each person faced an independent and identically distributed (i.i.d.) draw over the treatments.
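
A minimal simulation of this procedure is sketched below. The composition of the token bag (here, one token per arm, i.e., equal assignment probabilities) is an assumption; the key feature is that drawing with replacement and remixing makes every participant's assignment an independent draw from the same distribution.

```python
import random

# Sketch of the real-time token draw. Arm labels and bag composition are
# assumptions for illustration; replacement and remixing keep draws i.i.d.

ARMS = ["low_stakes_perf", "high_stakes_perf", "very_low_fixed", "low_fixed", "high_fixed"]

def assign_treatment(bag: list = ARMS) -> str:
    """Draw one token (arm label) from the bag; replacement keeps draws i.i.d."""
    return random.choice(bag)

assignments = [assign_treatment() for _ in range(562)]  # one draw per OP
```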
Randomization Unit
Individual
Was the treatment clustered?
No
Experiment Characteristics
Sample size: planned number of clusters
562 original participants
Sample size: planned number of observations
562 original participants
Sample size (or number of clusters) by treatment arms
116 original participants with low-stakes performance pay treatment, 136 original participants with high-stakes performance pay treatment, 71 original participants with very low fixed pay treatment, 117 original participants with low fixed pay treatment, 122 original participants with high fixed pay treatment
Minimum detectable effect size for main outcomes (accounting for sample design and clustering)
IRB
INSTITUTIONAL REVIEW BOARDS (IRBs)
IRB Name
Northwestern University IRB
IRB Approval Date
2008-11-13
IRB Approval Number
STU00005781
Post-Trial
Post Trial Information
Study Withdrawal
Intervention
Is the intervention completed?
Yes
Intervention Completion Date
December 31, 2009, 12:00 AM +00:00
Is data collection complete?
Yes
Data Collection Completion Date
December 31, 2009, 12:00 AM +00:00
Final Sample Size: Number of Clusters (Unit of Randomization)
562 original participants
Was attrition correlated with treatment status?
No
Final Sample Size: Total Number of Observations
562 original participants
Final Sample Size (or Number of Clusters) by Treatment Arms
Data Publication
Data Publication
Is public data available?
Yes
Program Files
Program Files
Yes
Reports and Papers
Preliminary Reports
Relevant Papers
Abstract
We use recruitment into a laboratory experiment in Kolkata, India to analyze how job networks select individuals for employment opportunities. We present evidence that individuals face a tradeoff between choosing the most qualified individual for the job and the individual who is ideal from the perspective of their social network. The experiment allows randomly selected subjects to refer members of their social networks to subsequent rounds of the experiment and varies the incentive schemes offered to these participants. We find that when faced with performance pay, individuals are more likely to refer co-workers and less likely to refer family members. High ability participants who are offered performance pay recruit referrals who perform significantly better on a cognitive ability task and also prove to be more reliable as evidenced by their choices in the trust game and performance on an effort task.
Citation
Beaman, Lori, and Jeremy Magruder. 2012. "Who Gets the Job Referral? Evidence From a Social Networks Experiment." American Economic Review 102(7): 3574-3593.