Field
Trial Title
|
Before
Does Greater Flexibility of Online Labor Markets Encourage Female Participation? Evidence from Upwork
|
After
Does Greater Flexibility of Online Labor Markets Encourage Female Participation? Evidence from an Online Freelance Market
|
Field
Trial Status
|
Before
in_development
|
After
completed
|
Field
Abstract
|
Before
The online labor market affords greater flexibility that may favor women. And yet, female labor force participation in the online labor market remains limited. We conduct an online experiment on the freelance hiring platform Upwork to study the impact of greater flexibility in choosing work hours within a day on female participation. We post identical job advertisements that differ randomly only in job flexibility and the fee offered. We compare the responses to job postings with different levels of wages and flexibility to understand whether and how much women value flexibility. From the information collected in the pilot, we find that the share of women applicants indeed increases when jobs pay more or are more flexible.
|
After
The online labor market affords greater flexibility that may favor women. And yet, female labor force participation in the online labor market remains limited. We conduct an online experiment on a major freelance hiring platform to study the impact of greater flexibility in choosing work hours within a day on female participation. We post identical job advertisements that differ randomly only in job flexibility and the fee offered. We compare the responses to job postings with different levels of wages and flexibility to understand whether and how much women value flexibility. From the information collected in the pilot, we find that the share of women applicants indeed increases when jobs pay more or are more flexible.
|
Field
Trial End Date
|
Before
December 12, 2021
|
After
December 16, 2021
|
Field
Last Published
|
Before
November 18, 2021 12:10 PM
|
After
December 26, 2021 06:40 AM
|
Field
Study Withdrawn
|
Before
|
After
No
|
Field
Intervention Completion Date
|
Before
|
After
December 16, 2021
|
Field
Data Collection Complete
|
Before
|
After
Yes
|
Field
Final Sample Size: Number of Clusters (Unit of Randomization)
|
Before
|
After
80
|
Field
Was attrition correlated with treatment status?
|
Before
|
After
No
|
Field
Final Sample Size: Total Number of Observations
|
Before
|
After
320 jobs (and the applicants to these jobs).
|
Field
Final Sample Size (or Number of Clusters) by Treatment Arms
|
Before
|
After
80 low-wage, low-flexibility jobs; 80 high-wage, high-flexibility jobs; 80 high-wage, low-flexibility jobs; and 80 low-wage, high-flexibility jobs.
|
Field
Is there a restricted access data set available on request?
|
Before
|
After
No
|
Field
Restricted Data Contact
|
Before
|
After
rak
|
Field
Program Files
|
Before
|
After
No
|
Field
Data Collection Completion Date
|
Before
|
After
December 16, 2021
|
Field
Is data available for public use?
|
Before
|
After
No
|
Field
Intervention End Date
|
Before
December 12, 2021
|
After
December 15, 2021
|
Field
Primary Outcomes (End Points)
|
Before
Number of female applicants, number of total applicants, share of female applicants
|
After
Number of female applicants in each job, number of male applicants in each job, number of total applicants in each job, share of female applicants out of total applicants in each job.
|
Field
Primary Outcomes (Explanation)
|
Before
|
After
These outcome variables will not be constructed; they will be measured directly.
|
Field
Experimental Design (Public)
|
Before
We collect data from applicant profiles in response to job postings on the online labor marketplace, Upwork. To select the sample for the study, we follow a three-step sampling protocol. First, we choose seven subcategories of job specialization from twelve broad job categories advertised on Upwork. These categories are Admin Support, Data Science & Analytics, Design & Creative, IT & Networking, Translation, Web, Mobile & Software Development, and Writing. Other subcategories, like Sales & Marketing, are not included because of logistical constraints. Second, we look for job postings in each of these subcategories that meet two criteria -- (i) they are commonly posted on Upwork, and (ii) they are within the range of the research budget. We identify 80 tasks that meet these criteria and prepare our own job postings in a format and language similar to those posted on the platform. This is important because we want the jobs posted as part of the experiment to resemble other job posts regularly seen on the website, so that the experimental job posts do not stand out. Third, for each task, we create four job postings that differ only in the flexibility to choose work hours within a day and in wages. In high-flexibility jobs, freelancers can choose any two-hour window on a specified date. In low-flexibility jobs, freelancers have to complete the work during a specified two-hour window during the day. The high-wage jobs pay USD 40 for the two hours and the low-wage jobs pay USD 30 for the two hours. Therefore, the four treatment arms of the experiment are:
1. Low wage, low flexibility
2. High wage, low flexibility
3. Low wage, high flexibility
4. High wage, high flexibility
In total, our sample will consist of 320 job postings (80 tasks times 4 postings/task). The experiment will be conducted over the span of four weeks. We will randomly pick one job posting from each of the 80 tasks to be posted in the first week of the experiment. For each of these 80 job postings, we will randomly assign a day of the week on which to publish the job. The same process will be followed in each subsequent week for four weeks (using sampling without replacement). Job postings will be active for twenty-four hours, and applicant information will be collected at the end of the twenty-four hours. Once a posting is closed, we will randomly hire one of the applicants to complete the assigned job. This candidate will receive the promised payment.
|
After
We collect data from applicant profiles in response to job postings on a major online freelance platform. To select the sample for the study, we follow a three-step sampling protocol. First, we choose seven subcategories of job specialization from twelve broad job categories advertised on the platform. These categories are Admin Support, Data Science & Analytics, Design & Creative, IT & Networking, Translation, Web, Mobile & Software Development, and Writing. Other subcategories, like Sales & Marketing, are not included because of logistical constraints. Second, we look for job postings in each of these subcategories that meet two criteria -- (i) they are commonly posted on the platform, and (ii) they are within the range of the research budget. We identify 80 tasks that meet these criteria and prepare our own job postings in a format and language similar to those posted on the platform. This is important because we want the jobs posted as part of the experiment to resemble other job posts regularly seen on the website, so that the experimental job posts do not stand out. Third, for each task, we create four job postings that differ only in the flexibility to choose work hours within a day and in wages. In high-flexibility jobs, freelancers can choose any two-hour window on a specified date. In low-flexibility jobs, freelancers have to complete the work during a specified two-hour window during the day. The high-wage jobs pay USD 40 for the two hours and the low-wage jobs pay USD 30 for the two hours. Therefore, the four treatment arms of the experiment are:
1. Low wage, low flexibility
2. High wage, low flexibility
3. Low wage, high flexibility
4. High wage, high flexibility
In total, our sample will consist of 320 job postings (80 tasks times 4 postings/task). The experiment will be conducted over the span of four weeks. We will randomly pick one job posting from each of the 80 tasks to be posted in the first week of the experiment. For each of these 80 job postings, we will randomly assign a day of the week on which to publish the job. The same process will be followed in each subsequent week for four weeks (using sampling without replacement). Job postings will be active for twenty-four hours, and applicant information will be collected at the end of the twenty-four hours. Once a posting is closed, we will randomly hire one of the applicants to complete the assigned job. This candidate will receive the promised payment.
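The randomization scheme described above can be sketched in code. This is an illustrative reconstruction, not the study's actual implementation; the function and arm names are made up for the example.

```python
import random

def build_schedule(n_tasks=80, n_weeks=4, seed=0):
    """Sketch of the posting schedule: for each task, its four arm
    variants are posted once each over four weeks, drawn without
    replacement, and each posting gets a random day of its week."""
    rng = random.Random(seed)
    arms = ["low_wage_low_flex", "high_wage_low_flex",
            "low_wage_high_flex", "high_wage_high_flex"]
    schedule = []  # list of (week, day_of_week, task_id, arm)
    for task in range(n_tasks):
        # Sampling without replacement: a random ordering of the 4 arms,
        # one arm posted per week.
        order = rng.sample(arms, k=len(arms))
        for week, arm in enumerate(order, start=1):
            day = rng.randrange(7)  # randomly assigned day of that week
            schedule.append((week, day, task, arm))
    return schedule

sched = build_schedule()
```

Under this scheme, every week carries exactly 80 postings (one per task), and each task is eventually observed under all four wage-flexibility combinations.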
|
Field
Secondary Outcomes (End Points)
|
Before
The difference between the bid to the job and the wage offered by the job
|
After
Intensive margin - Examples include the difference between the bid to the job and the wage offered by the job, underbid, overbid, length of the cover letter, quality of the cover letter, whether a sample of work is attached, and order of application.
Composition - Examples include the education of the applicants, country of the applicants, experience of the applicants, amount earned prior to this application, and match quality.
Other outcomes - Examples include preferred time slots (during the day, early morning, end of the day, fully flexible) and heterogeneity by weekday versus weekend.
|
Field
Secondary Outcomes (Explanation)
|
Before
|
After
Underbid - Whether the applicant made a bid lower than the price we proposed.
Overbid - Whether the applicant made a bid higher than the price we proposed.
Length of the cover letter - Number of words in the cover letter.
Quality of the cover letter - For example, the match of job-relevant words.
Sample of work attached - Whether a prior sample of work is attached.
Order of application - For example, percentile rank in the order of application.
Education - For example, has a bachelor's degree, has a master's degree, has a PhD, years of higher education.
Country of applicants - For example, from South Asia, from East Asia, from Africa, from Asia or Africa, from non-high-income countries.
Match quality - For example, the match between the applicant's tagged skills and the skills we require, or the platform's mention of "Best Match".
|