Experimental Design
We plan to test our research questions using a field experiment implemented on a global online labor market. We will advertise a job requiring workers to develop a pricing algorithm for a re-pricing effort in a particular market. We will provide hired programmers with data generated from a data generating process (DGP) known to the researchers, so that we can assess how the submitted algorithms were developed. To generate their algorithms, the programmers will receive a year’s worth of daily simulated data for a firm (price, quantity, and the firm’s marginal cost/cost of goods sold), data from a simulated competitor (price only), and some data on market conditions (factors affecting cost and demand). The DGP is designed to appear realistic (e.g., fluctuations during the year, increases in demand over weekends). In the DGP, the simulated firms use Bertrand-Nash pricing but with some errors, so that programmers can improve on the existing pricing.
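For concreteness, the following is a minimal sketch of the kind of DGP described above. The functional forms, parameter values, seed, and noise structure are illustrative assumptions for exposition, not the study’s actual DGP:

```python
import numpy as np
import pandas as pd

rng = np.random.default_rng(0)  # hypothetical seed, for reproducibility

# One year of daily observations
days = pd.date_range("2023-01-01", periods=365, freq="D")
is_weekend = (days.dayofweek >= 5).astype(float)

# Market conditions: shifters affecting cost and demand (illustrative forms)
cost_shifter = 10 + np.cumsum(rng.normal(0, 0.05, 365))          # slow drift in input costs
demand_shifter = 100 + 15 * is_weekend + rng.normal(0, 5, 365)   # weekend demand increase

marginal_cost = cost_shifter + rng.normal(0, 0.2, 365)

# Bertrand-Nash-style pricing with idiosyncratic errors, so the simulated
# firms price imperfectly and a better algorithm can improve on them.
own_price = 1.2 * marginal_cost + rng.normal(0, 0.5, 365)
competitor_price = 1.2 * marginal_cost + rng.normal(0, 0.5, 365)

# Downward-sloping demand: quantity falls in own price, rises in competitor price
quantity = np.clip(
    demand_shifter - 3.0 * own_price + 1.5 * competitor_price + rng.normal(0, 3, 365),
    0, None,
)

# Data provided to programmers: own price/quantity/cost, competitor price,
# and market-condition variables
data = pd.DataFrame({
    "date": days,
    "price": own_price.round(2),
    "quantity": quantity.round(0),
    "marginal_cost": marginal_cost.round(2),
    "competitor_price": competitor_price.round(2),
    "demand_shifter": demand_shifter.round(2),
    "cost_shifter": cost_shifter.round(2),
})
```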
We will hire beginner-level programmers, which is consistent with the simple design of many commercial algorithms (Neumann et al., 2019), including commercial pricing algorithms. For example, the programmers who designed the algorithm of a2i, a leading gasoline pricing software provider, initially ported a simple earlier algorithm from a university assignment used to study children’s play on playgrounds (Derakhshan et al., 2016). Our empirical approach is also consistent with similar prior experimental interventions given as a homework assignment in a programming bootcamp (Cowgill et al., 2020).
To recruit programmers, we will post a job specifying a fixed-price contract with a maximum budget of $50. We will hire all applicants who bid at most $50 on the job and who demonstrate some understanding of machine learning. Programmers will be given 7 days to complete the task; given the relative simplicity of the job, this should be ample time, even for workers holding other jobs concurrently. We will continue posting jobs until we reach our target sample size.
To identify the effects of knowledge about revenue maximization and about the consequences of algorithmic collusion, we will run an RCT among our hired programmers. Specifically, we will hire 300 programmers and randomize whether their job instructions highlight economic details of the target market, the potential for pricing algorithms to be collusive, or nothing beyond the baseline job instructions.
Beyond what the control group receives, those in the economic treatment and the collusion treatment will respectively receive the following text:
Economic Treatment: “When designing the algorithm, do not forget that consumers are generally less likely to buy more expensive products. If prices are higher, you make higher revenues from each purchase, but fewer people are going to buy. If prices are lower, you will make less revenue per customer, but attract more customers. That is, total sales and prices should always be negatively related. Please attempt to incorporate this restriction into the algorithm.”
Collusion Treatment: “Any pricing algorithms companies implement can be subject to inspection and regulation by antitrust authorities, who are becoming increasingly concerned with algorithmic coordination. Examples of coordination include algorithms learning to ‘team up’ with competitors and coordinate on maintaining high prices by punishing, or threatening to punish, competitor price reductions by reducing own price even further. Please take this into consideration when developing your algorithm.”
Otherwise, the instructions programmers receive will be identical across groups.
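The restriction conveyed by the economic treatment can be stated formally (a standard textbook formalization added here for exposition, not text shown to workers). With downward-sloping demand $Q(p)$, per-period revenue is

$$R(p) = p\,Q(p), \qquad \frac{dR}{dp} = Q(p) + p\,Q'(p), \qquad Q'(p) < 0,$$

so a higher price raises revenue per unit sold while lowering the number of units sold, which is exactly the trade-off the treatment text describes.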
As hiring will be done sequentially (based on the order in which workers apply to the job postings), we will randomize workers’ group assignments in advance of hiring. Specifically, we will randomly allocate the numbers 1 through 300 equally across the three groups; the number matching a worker’s position in the hiring order determines the experimental group to which that worker is assigned.
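As a minimal sketch of this pre-randomization (the seed and group labels below are hypothetical, chosen only for illustration):

```python
import numpy as np

# Fix a seed so the full assignment list can be generated once, before hiring begins.
rng = np.random.default_rng(42)  # hypothetical seed

# 100 slots per experimental group, shuffled in advance of hiring.
groups = ["control", "economic", "collusion"]
assignment = np.repeat(groups, 100)
rng.shuffle(assignment)

# The k-th hired worker receives assignment[k - 1], e.g. the 17th worker hired:
print(assignment[16])
```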