Lock-in Effects in Online Labor Markets

Last registered on May 19, 2021

Pre-Trial

Trial Information

General Information

Title
Lock-in Effects in Online Labor Markets
RCT ID
AEARCTR-0006136
Initial registration date
February 01, 2021
Last updated
May 19, 2021, 11:12 AM EDT

Locations

Region

Primary Investigator

Affiliation
University of Bremen

Other Primary Investigator(s)

PI Affiliation
University of Bremen
PI Affiliation
UCLouvain

Additional Trial Information

Status
In development
Start date
2021-02-08
End date
2021-09-30
Secondary IDs
DFG HO 5296/3-1
Abstract
Lock-in effects arise when individuals depend on a particular product or service and cannot switch to another without incurring significant switching costs. In this paper, we theoretically and experimentally study the switching behavior of digital workers that results when the respective labor market platform exploits locked-in workers. We investigate the extent and frequency of this exploitation as second-order effects. Lock-in effects frequently result from the absence of reputation portability, thereby generating high switching costs for platform users. We therefore analyze whether reputation portability increases switching behavior in an online labor market context. Our experimental design explains switching based on (1) monetary motives and (2) other-regarding behavior, i.e., we examine whether digital workers base their decisions on the economic outcome of their earnings, or whether they react to what they believe to be the intentions of the platform.
External Link(s)

Registration Citation

Citation
Ciotti, Fabrizio, Lars Hornuf and Eliza Stenzhorn. 2021. "Lock-in Effects in Online Labor Markets." AEA RCT Registry. May 19. https://doi.org/10.1257/rct.6136-1.3
Sponsors & Partners

There are documents in this trial unavailable to the public.
Experimental Details

Interventions

Intervention(s)
There are three interventions:
(1) the strength of lock-in (reputation portability vs. no reputation portability)
(2) the exploitation of lock-in (platform fee vs. no platform fee)
(3) the equality of exploitation of lock-in (platforms that simultaneously charge identical fees vs. only the focal platform charges a fee).
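The three binary interventions above imply eight treatment cells. As a minimal illustration (the factor names are our own shorthand, not the registry's), the cells can be enumerated as:

```python
from itertools import product

# Illustrative enumeration of the 2 x 2 x 2 treatment cells implied by
# the three interventions; labels are hypothetical shorthand.
FACTORS = {
    "portability": ["no portability", "portability"],
    "fee": ["no fee", "fee"],
    "fee_equality": ["identical fees", "focal fee only"],
}

cells = list(product(*FACTORS.values()))
```

Each participant is assigned to exactly one of these eight cells.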
Intervention Start Date
2021-02-08
Intervention End Date
2021-09-30

Primary Outcomes

Primary Outcomes (end points)
We distinguish switching based on monetary motives and other-regarding behavior. Our first outcome of interest is whether a worker, due to monetary motives, switches platforms if the focal platform charges a fee.

Our second outcome of interest is whether a worker, due to other-regarding behavior, switches platforms if the focal platform charges a fee.

Our third outcome of interest is whether reputation portability increases workers’ switching behavior.
Primary Outcomes (explanation)
If a worker switches platforms after a fee increase and the task is compensated at a higher rate on the other platform, we define this as a combination of monetary motives and other-regarding behavior. Pure monetary motives are then calculated as this combined measure minus switching due to pure other-regarding behavior.

If a worker switches platforms after a fee increase and the task is compensated equally or less on the other platform, we define this as switching due to other-regarding behavior.

Reputation portability is considered to increase switching between platforms if workers switch more often when reputation portability is enforced.
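The motive classification above can be coded up in a few lines. This is a hypothetical sketch of the decision rule, not the authors' analysis code; the function names are our own:

```python
# Classify a worker's switch after a fee increase, following the rule:
# switching toward a better-paying platform mixes monetary and
# other-regarding motives; switching toward an equally or worse-paying
# platform is attributed to other-regarding behavior alone.

def classify_switch(switched_after_fee, other_platform_pays_more):
    """Return the motive label for a single worker-round, or None."""
    if not switched_after_fee:
        return None
    if other_platform_pays_more:
        return "monetary + other-regarding"
    return "other-regarding"

def pure_monetary_share(combined_share, other_regarding_share):
    """Pure monetary motives = combined measure minus pure other-regarding."""
    return combined_share - other_regarding_share
```

For example, a switch to a platform that pays the same would be coded as "other-regarding", while the pure monetary share is recovered by subtraction at the aggregate level.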

Secondary Outcomes

Secondary Outcomes (end points)
We investigate the following second-order effects: the extent of lock-in exploitation (fee of USD 0.00, USD 0.01, or USD 0.05), the frequency of lock-in exploitation (no, one, or two fee increases), risk attitude, and prior experience as a worker in crowdsourcing markets.
Secondary Outcomes (explanation)

Experimental Design

Experimental Design
We have designed a randomized controlled trial in which all subjects will be randomly assigned to treatment and control groups. The study relies on a 2 × 2 × 2 design. We use a combination of controlled manipulations in a field setting and observe the switching behavior of workers in online labor markets. The experiment is followed by a questionnaire to identify mediating factors and to control for confounding variables and individual heterogeneity.
Experimental Design Details
For the experimental sample, all subjects will be recruited from the crowdsourcing platform Amazon Mechanical Turk. The study is aimed exclusively at workers who are over 18 years old and citizens (or legal residents) of the United States. The experiment consists of a minimum of seven and a maximum of ten rounds, each following the same structure. Starting with round seven, and in order to prevent end-round effects, a random mechanism decides with a probability of 1/3 whether the study ends after the round just completed.

After the workers are assigned to a treatment condition, participants are asked to work in a new online labor market consisting of two labor platforms: Platform% and Platform#. In each round, participants first choose the platform on which they want to work. They are then asked to complete a task on the respective platform, which consists of counting the zeros in a series of zeros and ones. Participants' performance is rated after every task completion and displayed as an average rating.

In each round, the minimum amount offered for completing a task is USD 0.10. Workers are always paid for the job, regardless of whether they were successful. Participants' performance, however, determines their rating, which in turn affects their earnings in the next round. Depending on their rating (ranging from one to five) on the respective platform, participants can earn more for the task: USD 0.15 once they have reached a rating greater than 3.50, and USD 0.20 with a rating greater than 4.50.

After task completion, participants receive information on (1) the average rating they have achieved, (2) whether the platform charges (or increases) a fee from the next round onwards, (3) the earnings for completing the next task, and (4) their total earnings over all rounds. This information is provided for Platform% and Platform# respectively.
After observing this information, participants again choose the platform on which they want to work during the next round and then start the next counting task. Starting with round four, a random mechanism decides in each round with a probability of 1/4 whether the platform charges a fee. Fees are not charged in earlier rounds because we want participants to establish a rating first, thereby creating the possibility of lock-in. To address second-order effects by studying the extent and frequency of a platform fee, another random mechanism decides with a probability of 1/2 whether the fee amounts to USD 0.01 or USD 0.05. Once a fee is introduced, it will not be reduced in the following rounds. However, again with a probability of 1/4, the fee can be increased from USD 0.01 to USD 0.05. The experiment is followed by a questionnaire to control for confounding variables and individual heterogeneity.
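The pay schedule and the random fee and stopping mechanics described above can be sketched as follows. This is a minimal simulation under our own reading of the rules (function names and structure are hypothetical, not the authors' implementation):

```python
import random

BASE_PAY = 0.10  # minimum per-task pay in USD

def task_pay(rating):
    """Per-task pay as a function of the worker's average rating (1 to 5)."""
    if rating > 4.50:
        return 0.20
    if rating > 3.50:
        return 0.15
    return BASE_PAY

def simulate_fees(rng, max_rounds=10):
    """Simulate one platform's fee path over a session.

    From round 4 on, a fee is introduced with probability 1/4
    (USD 0.01 or USD 0.05 with equal probability); an existing
    USD 0.01 fee is raised to USD 0.05 with probability 1/4 and
    a fee is never reduced. From round 7 on, the session ends
    with probability 1/3 after the round just completed.
    """
    fee = 0.00
    fees = []  # fee in effect in each completed round
    for rnd in range(1, max_rounds + 1):
        if rnd >= 4:
            if fee == 0.00 and rng.random() < 0.25:
                fee = rng.choice([0.01, 0.05])
            elif fee == 0.01 and rng.random() < 0.25:
                fee = 0.05
        fees.append(fee)
        if rnd >= 7 and rng.random() < 1 / 3:
            break
    return fees
```

Any simulated session therefore lasts between seven and ten rounds, and the fee path is non-decreasing, matching the design.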
Randomization Method
A designated function of the survey software Unipark will randomly assign participants to the treatments. Before starting our analysis, we will check whether the Unipark randomization resulted in a balanced sample. If socio-economic and other subject characteristics are not balanced between the control and treatment conditions, we will apply appropriate matching techniques.
Randomization Unit
Randomization will be done on an individual level.
Was the treatment clustered?
No

Experiment Characteristics

Sample size: planned number of clusters
Since the experiment is not clustered, the number of clusters is the same as the sample size (see below).
Sample size: planned number of observations
2,112 individual workers from the crowdsourcing platform Amazon Mechanical Turk.
Sample size (or number of clusters) by treatment arms
- 132 individuals in the no portability, no platform fee, and identical platforms treatment conditions
- 132 individuals in the no portability, no platform fee, and differentiating platforms treatment conditions
- 396 individuals in the no portability, platform fee, and identical platforms treatment conditions
- 396 individuals in the no portability, platform fee, and differentiating platforms treatment conditions
- 132 individuals in the portability, no platform fee, and identical platforms treatment conditions
- 132 individuals in the portability, no platform fee, and differentiating platforms treatment conditions
- 396 individuals in the portability, platform fee, and identical platforms treatment conditions
- 396 individuals in the portability, platform fee, and differentiating platforms treatment conditions
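The arm sizes above can be verified with a line of arithmetic: the four platform-fee arms are sampled at three times the rate of the four no-fee arms, and the eight cells sum to the planned 2,112 observations.

```python
# Check that the eight planned arm sizes add up to the planned total
# of 2,112 individual workers (numbers taken from the arm list above).
no_fee_arms = [132] * 4  # four no-platform-fee cells
fee_arms = [396] * 4     # four platform-fee cells, oversampled 3:1
total = sum(no_fee_arms + fee_arms)
```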
Minimum detectable effect size for main outcomes (accounting for sample design and clustering)
Our targeted sample size is based on an a priori power calculation using logistic regression as the statistical test at the 0.05 significance level with a power of 0.80.
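The registry does not state the assumed effect size, so the switching rates below are purely illustrative. For a binary treatment indicator, power for a logistic regression is approximately that of a two-proportion z-test, for which the standard normal-approximation sample-size formula is:

```python
import math

Z_ALPHA = 1.959964  # standard normal quantile for two-sided alpha = 0.05
Z_BETA = 0.841621   # standard normal quantile for power = 0.80

def n_per_group(p1, p2):
    """Approximate n per group to detect switching rates p1 vs. p2.

    Unpooled-variance normal approximation for a two-proportion test;
    p1 and p2 are hypothetical rates, not figures from the registry.
    """
    num = (Z_ALPHA + Z_BETA) ** 2 * (p1 * (1 - p1) + p2 * (1 - p2))
    return math.ceil(num / (p1 - p2) ** 2)
```

Under hypothetical switching rates of 30% vs. 40%, for instance, this formula yields 354 workers per group; the actual pre-registered calculation may rest on different assumptions.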
Supporting Documents and Materials

There are documents in this trial unavailable to the public.
IRB

Institutional Review Boards (IRBs)

IRB Name
Ethikkommission Universität Bremen
IRB Approval Date
2020-07-16
IRB Approval Number
2020-16

Post-Trial

Post Trial Information

Study Withdrawal

There are documents in this trial unavailable to the public.

Intervention

Is the intervention completed?
No
Data Collection Complete
Data Publication

Data Publication

Is public data available?
No

Program Files

Program Files
Reports, Papers & Other Materials

Relevant Paper(s)

Reports & Other Materials