Managing Algorithm Development among Third Party Contractors

Last registered on August 23, 2023

Pre-Trial

Trial Information

General Information

Title
Managing Algorithm Development among Third Party Contractors
RCT ID
AEARCTR-0010371
Initial registration date
November 07, 2022

First published
November 08, 2022, 3:49 PM EST

Last updated
August 23, 2023, 2:32 PM EDT

Locations

Primary Investigator

Affiliation
UC San Diego

Other Primary Investigator(s)

PI Affiliation
University College London

Additional Trial Information

Status
Ongoing
Start date
2022-12-12
End date
2023-12-31
Secondary IDs
Prior work
This trial does not extend or rely on any prior RCTs.
Abstract
The use of autonomous ML-based pricing algorithms has grown in many markets in recent years. Recent research shows that such pricing algorithms can increase prices by softening competition and potentially engaging in collusive behavior. Additional research shows that the design of algorithms affects the extent to which they produce supra-competitive prices. These findings create substantial uncertainty for policymakers and for firm managers. It is unclear to what extent software designers actively consider competition or other “downstream” effects of widespread adoption of their algorithms when designing them. It is also unclear to what extent firms using algorithms can influence software developer decisions and designs. While outsourcing algorithm development can help firms access a larger pool of high-skilled workers and allow more firms to compete in markets where algorithms are being used, this outsourcing may make it harder for firms to influence the ethics of the algorithms being developed. Thus, our paper seeks to study 1) how third-party programmers develop pricing algorithms, 2) whether third-party programmer training affects how they develop pricing algorithms, and 3) whether employers can influence how third-party programmers develop pricing algorithms by increasing programmer awareness of revenue maximization and of the consequences of price collusion.
External Link(s)

Registration Citation

Citation
Ershov, Daniel and Elizabeth Lyons. 2023. "Managing Algorithm Development among Third Party Contractors." AEA RCT Registry. August 23. https://doi.org/10.1257/rct.10371-1.4
Experimental Details

Interventions

Intervention(s)
Intervention Start Date
2022-12-12
Intervention End Date
2023-12-31

Primary Outcomes

Primary Outcomes (end points)
1. The quality of the algorithm, in terms of how well it predicts profit/revenue-maximizing prices.
2. Whether the algorithm generates collusive recommendations, either when interacting with a Bertrand-Nash pricing competitor or when interacting with a copy of itself (an illustrative interaction check is sketched below).
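A minimal sketch of how such an interaction check might be run, assuming a hypothetical interface in which a submitted algorithm is a function mapping the price history to a price. The benchmark prices P_NASH and P_MONOPOLY are placeholder values for illustration, not parameters of our actual DGP:

P_NASH = 5.0       # assumed static Bertrand-Nash equilibrium price
P_MONOPOLY = 8.0   # assumed joint-profit-maximizing price

def bertrand_nash_competitor(history):
    """Benchmark rival that always prices at the static Nash equilibrium."""
    return P_NASH

def run_interaction(algo_a, algo_b, periods=500):
    """Let two pricing rules interact; return their average late-game prices."""
    history = []
    for _ in range(periods):
        # Both rules move simultaneously, seeing only past periods.
        history.append((algo_a(history), algo_b(history)))
    tail = history[periods // 2:]  # discard burn-in periods
    avg_a = sum(p for p, _ in tail) / len(tail)
    avg_b = sum(p for _, p in tail) / len(tail)
    return avg_a, avg_b

def collusion_index(avg_price):
    """Normalized index: 0 = Nash pricing, 1 = monopoly pricing."""
    return (avg_price - P_NASH) / (P_MONOPOLY - P_NASH)

# Example: the benchmark competitor against itself yields an index of 0.
avg_a, _ = run_interaction(bertrand_nash_competitor, bertrand_nash_competitor)
print(collusion_index(avg_a))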
Primary Outcomes (explanation)
We will also explore heterogeneity in treatment effects by:
1. Educational background/skills – in particular, whether workers have some knowledge of/experience with economics or financial markets
2. Extent of collusion in home market – based on geographic location of worker

Secondary Outcomes

Secondary Outcomes (end points)
1. The length of time workers take to submit their output.
2. Whether workers ask questions during the task (as well as the content of the questions).
Secondary Outcomes (explanation)

Experimental Design

Experimental Design
We plan to test our research questions using a field experiment implemented on a global online labor market. We will advertise a job requiring workers to develop a pricing algorithm for a re-pricing effort in a particular market. We will provide hired programmers with data generated from a data generating process (DGP) known to the researchers, so that we can assess how the algorithms were developed. To generate their algorithms, the programmers will receive a year’s worth of daily simulated data for a firm (price, quantity, and its marginal cost/cost of goods sold), as well as data from a simulated competitor (price only) and some data on market conditions (factors affecting cost and demand). The DGP is designed to appear realistic (fluctuations during the year, increases in demand over weekends). In the DGP, simulated firms use Bertrand-Nash pricing but with some errors, so that programmers can improve on the existing pricing.
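For concreteness, here is a minimal sketch of a DGP with the stated features (a year of daily data, cost fluctuations during the year, weekend demand increases, and noisy Bertrand-Nash pricing). All functional forms and parameter values below are illustrative assumptions, not the registered DGP:

import numpy as np

rng = np.random.default_rng(0)
T = 365
weekend = np.array([d % 7 in (5, 6) for d in range(T)])

# Cost and demand shifters: seasonal cost cycle, higher weekend demand.
cost = 2.0 + 0.3 * np.sin(2 * np.pi * np.arange(T) / 365) + rng.normal(0, 0.05, T)
a = 10.0 + 2.0 * weekend + rng.normal(0, 0.5, T)   # demand intercept

# Linear demand q_i = a - b*p_i + c*p_j implies a symmetric Bertrand-Nash
# price p* = (a + b*cost) / (2b - c); firms price at p* plus an error term,
# so a well-designed algorithm can improve on the simulated firm's pricing.
b, c = 2.0, 0.8
p_star = (a + b * cost) / (2 * b - c)
p_firm = p_star + rng.normal(0, 0.15, T)   # firm's (noisy) price
p_comp = p_star + rng.normal(0, 0.15, T)   # competitor's (noisy) price
q_firm = np.maximum(a - b * p_firm + c * p_comp, 0)   # firm's daily quantity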

We will hire beginner-level programmers, which is consistent with the simple design of many commercial algorithms, including commercial pricing algorithms (Neumann et al., 2019). For example, the programmers who designed the algorithm of a2i, a leading gasoline pricing software provider, initially ported an earlier simple algorithm from a university assignment used to study children’s play on playgrounds (Derakhshan et al., 2016). Our empirical approach is also consistent with previous similar experimental interventions given as homework assignments in a programming bootcamp (Cowgill et al., 2020).

To recruit programmers, we will post a job specifying a fixed-price contract with a maximum budget of $50. We will hire all applicants who bid at most $50 on the job and who demonstrate some understanding of machine learning. Programmers will be given 7 days to complete the task; given the relative simplicity of the job, this should be ample time even for workers who hold other jobs at the same time. We will continue posting jobs until we reach our target sample size.

To identify the effects of knowledge about revenue maximization and about the consequences of algorithmic collusion, we will run an RCT among our hired programmers. Specifically, we will hire 300 programmers and randomize whether their job instructions highlight economic details of the target market, highlight the potential for pricing algorithms to be collusive, or contain nothing beyond the specific job instructions.

Beyond what the control group receives, those in the economic treatment and the collusion treatment will respectively receive the following text:

Economic Treatment: “When designing the algorithm, do not forget that consumers are generally less likely to buy more expensive products. If prices are higher, you make higher revenues from each purchase, but fewer people are going to buy. If prices are lower, you will make less revenue per customer, but attract more customers. That is, total sales and prices should always be negatively related. Please attempt to incorporate this restriction into the algorithm.”

Collusion Treatment: “Any pricing algorithms companies implement can be subject to inspection and regulation by antitrust authorities, who are becoming increasingly concerned with algorithmic coordination. Examples of coordination include algorithms learning to ‘team up’ with competitors and coordinate on maintaining high prices by punishing, or threatening to punish, competitor price reductions by reducing own price even further. Please take this into consideration when developing your algorithm.”

Otherwise, the instructions programmers receive will be identical across groups.

As hiring will be done sequentially (based on the order in which workers apply to the job postings), we will randomize group assignments in advance of hiring. Specifically, we will randomly assign the numbers 1 through 300 equally across the three groups; the number matching the order in which a worker is hired determines the experimental group that worker is assigned to.
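The registered randomization was done in Stata (code available upon request); the following is a minimal Python sketch of the same pre-randomization logic, with an illustrative seed that is not the one used by the authors:

import random

random.seed(20221212)   # illustrative seed only
arms = ["control", "economic", "collusion"] * 100   # 100 hiring slots per arm
random.shuffle(arms)
assignment = {slot: arm for slot, arm in enumerate(arms, start=1)}
# The k-th worker hired is assigned to the experimental group assignment[k].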
Experimental Design Details
Randomization Method
We randomized the order of hiring using Stata. Randomization code is available upon request.
Randomization Unit
Individual worker
Was the treatment clustered?
No

Experiment Characteristics

Sample size: planned number of clusters
300 workers
Sample size: planned number of observations
300 workers
Sample size (or number of clusters) by treatment arms
100 workers per treatment arm
Minimum detectable effect size for main outcomes (accounting for sample design and clustering)
IRB

Institutional Review Boards (IRBs)

IRB Name
UC San Diego Office of IRB Administration
IRB Approval Date
2022-03-04
IRB Approval Number
802030

Post-Trial

Post Trial Information

Study Withdrawal

There is information in this trial unavailable to the public.

Intervention

Is the intervention completed?
No
Data Collection Complete
Data Publication

Data Publication

Is public data available?
No

Program Files

Program Files
Reports, Papers & Other Materials

Relevant Paper(s)

Reports & Other Materials