How does the Organization of Crowdsourcing Affect Labor Supply, Productivity and Morals

Last registered on June 25, 2021

Pre-Trial

Trial Information

General Information

Title
How does the Organization of Crowdsourcing Affect Labor Supply, Productivity and Morals
RCT ID
AEARCTR-0006242
Initial registration date
June 23, 2021

First published
June 25, 2021, 1:40 PM EDT

Locations

Region

Primary Investigator

Affiliation
University of Bremen

Other Primary Investigator(s)

PI Affiliation
University of Bremen

Additional Trial Information

Status
In development
Start date
2021-06-28
End date
2022-01-01
Secondary IDs
DFG HO 5296/3-1
Abstract
The organization of crowdsourcing comes in various forms. The most common organizational forms are crowd labor markets, crowd collaborative communities, and crowd contests. In this paper, we experimentally study labor supply, productivity, and morals under these three organizational forms of crowdsourcing.
External Link(s)

Registration Citation

Citation
Hornuf, Lars and Kristina Klein. 2021. "How does the Organization of Crowdsourcing Affect Labor Supply, Productivity and Morals." AEA RCT Registry. June 25. https://doi.org/10.1257/rct.6242-1.0
Sponsors & Partners

There is information in this trial unavailable to the public.
Experimental Details

Interventions

Intervention(s)
There are two interventions:

(1) Organizational form of crowdsourcing

(a) crowd labor market
(b) crowd collaborative community
(c) crowd contest

(2) Degree of overpay

(a) low
(b) high
Intervention Start Date
2021-06-28
Intervention End Date
2021-07-06

Primary Outcomes

Primary Outcomes (end points)
Labor supply
Labor productivity
Reaction to overpay
Primary Outcomes (explanation)
Labor supply is a positive decision by a participant to continue working after the focal crowdsourcing round. On the individual level, this is a dummy for each participant.

Labor productivity is the average time a participant needs to successfully solve a grid during a particular crowdsourcing round. On the individual level, this variable measures the time in seconds needed to successfully complete a grid.

Reaction to overpay is a positive decision by a participant to return the overpaid remuneration to the experimenter in a franked return envelope. On the individual level, this is a dummy for each participant.

Secondary Outcomes

Secondary Outcomes (end points)
We investigate the following second-order effect: the extent of overpay (low = EUR 1.00, high = EUR 5.00) and how it affects participants' reaction to overpay.
Secondary Outcomes (explanation)

Experimental Design

Experimental Design
We have designed a randomized controlled trial in which participants will be randomly assigned to different treatment groups (crowd labor markets, crowd collaborative communities, and crowd contests) and extents of overpay. We use two controlled manipulations: the organizational form of crowdsourcing and the extent of overpay. The experiment is followed by a questionnaire to conduct balancing checks.
Experimental Design Details
For the experimental sample, all participants will be recruited from the BreLAB pool at the University of Bremen. The study is aimed exclusively at German-speaking participants. In each round, participants will be shown a maximum of 15 so-called grids (groups of nine images) from which they must identify the robots. To do this, participants click on the respective picture numbers. When participants have fully classified a grid of images, they continue with the next grid. If they have correctly classified all nine images in a grid, participants will earn EUR 0.13 for each correctly classified grid in the first round. Participants have 2 minutes for each round. If participants have correctly classified all 15 grids within 2 minutes in the first round, they will earn EUR 1.95. Participants will not earn money for wrongly classified grids. Each round follows the same structure. After the end of the first round, participants can choose to enter the next round and classify images again. The remuneration is then reduced by 11% in each new round, so that in the second round a maximum of only EUR 1.80 can be earned in 2 minutes for 15 correctly classified grids. In round 10, it would be only EUR 0.75 per 15 correctly classified grids. Participants can stop classifying images after each round. Participants can participate in a maximum of 10 rounds. Participants receive an additional EUR 5.00 fixed payment for showing up on time.
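The pay schedule above can be sketched as follows. This is a minimal sketch under one assumption not stated explicitly in the registration: the 11% reduction is applied to the per-grid rate (EUR 0.13 in round 1), compounded on the unrounded value, and rounded to the nearest cent each round, which reproduces the EUR 1.95, EUR 1.80, and EUR 0.75 figures. The function name and constants are illustrative, not from the registered protocol.

```python
# Hypothetical sketch of the per-round maximum earnings schedule.
# Assumption (not stated in the registration): the per-grid rate is cut
# by 11% per round, compounded on the unrounded value, and rounded to
# the nearest cent before multiplying by the number of grids.
GRIDS_PER_ROUND = 15
BASE_RATE = 0.13   # EUR per correctly classified grid in round 1
DECAY = 0.89       # 11% reduction per round

def max_earnings(round_no: int) -> float:
    """Maximum earnings (EUR) if all 15 grids are classified correctly."""
    per_grid = round(BASE_RATE * DECAY ** (round_no - 1), 2)
    return round(per_grid * GRIDS_PER_ROUND, 2)

schedule = {r: max_earnings(r) for r in range(1, 11)}
# schedule[1] == 1.95, schedule[2] == 1.80, schedule[10] == 0.75
```

Under this rounding convention the three amounts quoted in the design (EUR 1.95, EUR 1.80, and EUR 0.75 in round 10) are mutually consistent.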

In the crowd labor markets intervention, participants work alone and are not allowed to talk to the other participants. In the crowd collaborative community intervention, participants can communicate with each other via a chat function. In the crowd contests intervention, only the best-performing participant receives compensation after the focal round, while the other participants receive no compensation. The crowd labor markets, crowd collaborative community, and crowd contests interventions will each be selected with probability 1/3. If only one participant signs up for a session, the session must be a crowd labor markets session, because neither communication nor competition can take place.

We investigate the effect of the three crowdsourcing organization interventions on (1) participants’ propensity to participate in the classification task, (2) their productivity during each round, and (3) their willingness to return the overpay. We also investigate the second-order effect of the extent of overpay (EUR 1.00, EUR 5.00) and how it affects participants’ willingness to return the overpay. The experiment is followed by a questionnaire to conduct balancing checks.
Randomization Method
We will randomly allocate sessions (and thus participants) with probability 1/3 each to the crowd labor market, crowd collaborative community, and crowd contest treatments. If only one participant signs up for an experimental session, the session must be a crowd labor market session. Treatment status is assigned at the group level.
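The assignment rule can be sketched as follows; the function name and treatment labels are illustrative, not part of the registered protocol.

```python
import random

TREATMENTS = ("crowd labor market", "crowd collaborative community", "crowd contest")

def assign_session(n_participants: int, rng: random.Random) -> str:
    """Assign an experimental session to a treatment.

    Sessions are drawn uniformly from the three treatments (probability
    1/3 each). A single-participant session is forced into the crowd
    labor market treatment, since neither communication nor competition
    is possible with one participant.
    """
    if n_participants == 1:
        return "crowd labor market"
    return rng.choice(TREATMENTS)
```

Seeding the generator (e.g. `random.Random(42)`) makes the session-level assignment reproducible for the registered analysis.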
Randomization Unit
The unit of randomization is the experimental session.
Was the treatment clustered?
No

Experiment Characteristics

Sample size: planned number of clusters
210 individual participants from the BreLAB pool
Sample size: planned number of observations
210 individual participants from the BreLAB pool
Sample size (or number of clusters) by treatment arms
70 crowd labor markets treatment, 70 crowd contest treatment, 70 crowd collaborative community treatment
Minimum detectable effect size for main outcomes (accounting for sample design and clustering)
Supporting Documents and Materials

There is information in this trial unavailable to the public.
IRB

Institutional Review Boards (IRBs)

IRB Name
Gesellschaft für experimentelle Wirtschaftsforschung e.V. (GfeW)
IRB Approval Date
2021-06-21
IRB Approval Number
U6Lrtnw7
Analysis Plan

There is information in this trial unavailable to the public.

Post-Trial

Post Trial Information

Study Withdrawal

There is information in this trial unavailable to the public.

Intervention

Is the intervention completed?
No
Data Collection Complete
Data Publication

Data Publication

Is public data available?
No

Program Files

Program Files
Reports, Papers & Other Materials

Relevant Paper(s)

Reports & Other Materials