Competition Between Human and Artificial Intelligence in Digital Markets: An Experimental Analysis

Last registered on November 30, 2022

Pre-Trial

Trial Information

General Information

Title
Competition Between Human and Artificial Intelligence in Digital Markets: An Experimental Analysis
RCT ID
AEARCTR-0010439
Initial registration date
November 23, 2022


First published
November 30, 2022, 2:27 PM EST


Locations

Region

Primary Investigator

Affiliation
University of Regensburg

Other Primary Investigator(s)

PI Affiliation
University of Regensburg

Additional Trial Information

Status
In development
Start date
2022-11-23
End date
2023-05-31
Secondary IDs
Prior work
This trial does not extend or rely on any prior RCTs.
Abstract
In digital markets, business decisions are increasingly made by artificial intelligence (AI). In e-commerce especially, a growing share of retailers uses AI-driven algorithmic pricing, while the remaining vendors rely on manual price setting. Policymakers have raised concerns that anti-competitive tacit collusion between humans and AI could allow firms to soften competition. We therefore empirically investigate the outcomes that arise when humans and AI interact repeatedly in digital markets. In an economic laboratory experiment conducted in near real time, we compare the degree of tacit collusion in duopoly markets across settings with different decision makers and with different degrees of algorithmic decision support for human decision makers. In a between-subjects treatment design we systematically vary (i) the decision makers in a market between humans only, algorithms only, and mixed settings where humans and algorithms compete; and (ii) whether human participants receive decision support from an AI-driven pricing algorithm. Altogether, our study sheds light on competition in digital markets, where AI plays an increasingly important role, and thus bears timely policy and managerial implications.
External Link(s)

Registration Citation

Citation
Schauer, Andreas and Daniel Schnurr. 2022. "Competition Between Human and Artificial Intelligence in Digital Markets: An Experimental Analysis." AEA RCT Registry. November 30. https://doi.org/10.1257/rct.10439-1.0
Sponsors & Partners

There is information in this trial unavailable to the public.
Experimental Details

Interventions

Intervention(s)
We run five between-subjects treatments that can be derived from our two main treatment dimensions. Along the first treatment dimension, we vary the decision makers that represent firms in a price competition duopoly market. Along the second treatment dimension, we vary whether human participants receive decision support from an AI-driven pricing algorithm.
Intervention Start Date
2022-11-24
Intervention End Date
2023-03-31

Primary Outcomes

Primary Outcomes (end points)
The average degree of tacit collusion in a market.
Primary Outcomes (explanation)
The degree of tacit collusion can be measured as the relative deviation of the average price of all firms in the market from the fully competitive price level (Nash Equilibrium) towards the joint profit maximizing price level (Collusive Equilibrium).
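
This measure is commonly formalized in the algorithmic-pricing literature as a normalized collusion index. The formula below is our reading of the description above, not a formula stated in the registration:

```latex
\Delta \;=\; \frac{\bar{p} - p^{N}}{p^{M} - p^{N}},
```

where $\bar{p}$ is the average price of all firms in the market, $p^{N}$ is the Nash equilibrium price, and $p^{M}$ is the joint profit maximizing price, so that $\Delta = 0$ corresponds to fully competitive pricing and $\Delta = 1$ to full collusion.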

Secondary Outcomes

Secondary Outcomes (end points)
Average market price and profits of firms in a market.
Secondary Outcomes (explanation)
Average market prices and firms’ profits indicate how successfully firms operate economically in a market.

Experimental Design

Experimental Design
Experiments are run in an experimental laboratory at the School of Business, Economics and Information Systems at the University of Passau. Treatments are randomized at the session level. Participants will be recruited from the student subject pool of the University of Passau. Each subject participates in only one treatment (between-subject design). In all treatments, subjects are fully informed about the timeline of the experiment.
Experimental Design Details
Participants are recruited from the University of Passau’s subject pool PAULA via the ORSEE platform. As the experiment is conducted in German, participants must be proficient in German. The experiment is computerized with the Java-based experimental software Brownie. Subjects receive monetary compensation for participating in the experiment; the amount depends on the decisions made during the experiment. Based on pilot sessions, we expect subjects to earn about 15.70 Euros on average. Subjects receive an additional participation fee of 3 EUR for completing a follow-up questionnaire about the experiment. Each session is expected to last approximately 75 minutes.

The aim of this laboratory experiment is to investigate competitive settings in which humans compete with an algorithm in a duopoly price competition market. The experiment builds on the competition model of Singh and Vives (1984), which describes a market with two symmetric firms. Each firm produces and sells a single good; marginal costs are assumed to be zero for all goods. Each firm sets the price of its good as its strategic variable. The prices of both firms jointly determine the quantity sold by each firm, and each firm’s profit is its price multiplied by the quantity sold.
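
Under the standard linear differentiated-substitutes specification of the Singh and Vives model, demand and profits take the form below. The parameter values used in the experiment are not stated in the registration, so the symbols are generic:

```latex
q_i \;=\; a - b\,p_i + d\,p_j, \qquad \pi_i \;=\; p_i\,q_i, \qquad i \neq j \in \{1,2\},
```

with $a, b > 0$ and $0 < d < b$ so that the goods are imperfect substitutes, and marginal costs normalized to zero as stated above.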

Depending on the treatment, human decision makers assume the roles of both firms in the market (treatment HH), of one firm (treatment HA), or of no firm (treatment AA). The roles of any remaining firms are assumed by an AI-based pricing algorithm. The algorithm follows a reinforcement learning approach (Q-learning): it was pre-trained in computer simulations via self-play, pursues a profit-maximizing objective, and continues to learn during the experiment from the prices and quantities set in the market.
In the two treatments with algorithmic decision support (HA-DSS and HH-DSS), human participants receive price-setting recommendations from another AI-based algorithm, which implements the same approach as the autonomous pricing algorithms in the experiment. The decision support algorithm thus also pursues a profit-maximizing objective and continues to learn during the experiment from the prices and quantities set.
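
To illustrate the general mechanics of Q-learning in a pricing context, the sketch below trains a tabular Q-learner on a discrete price grid against a randomly pricing rival. It is purely illustrative: the registered algorithm's price grid, demand parameters, rival model, and hyperparameters are not public, so every value here is an assumption.

```python
import random

PRICES = [0.0, 0.5, 1.0, 1.5, 2.0]  # hypothetical discrete price grid


def demand(p_own, p_rival, a=2.0, b=1.0, d=0.5):
    """Hypothetical linear demand for differentiated substitutes."""
    return max(a - b * p_own + d * p_rival, 0.0)


def train(episodes=5000, alpha=0.1, gamma=0.95, eps=0.1, seed=0):
    """Tabular Q-learning: state = rival's last price index, action = own price index."""
    rng = random.Random(seed)
    n = len(PRICES)
    Q = [[0.0] * n for _ in range(n)]
    state = rng.randrange(n)
    for _ in range(episodes):
        # Epsilon-greedy action selection over the price grid.
        if rng.random() < eps:
            action = rng.randrange(n)
        else:
            action = max(range(n), key=lambda a_: Q[state][a_])
        # Rival plays a random price in this toy environment.
        p_rival = PRICES[rng.randrange(n)]
        profit = PRICES[action] * demand(PRICES[action], p_rival)
        next_state = PRICES.index(p_rival)
        # Standard Q-learning update with the realized profit as reward.
        Q[state][action] += alpha * (
            profit + gamma * max(Q[next_state]) - Q[state][action]
        )
        state = next_state
    return Q
```

In the actual experiment the rival is another learner or a human, so the environment is non-stationary; this is precisely why the registered algorithm continues to learn during the session rather than playing a fixed pre-trained policy.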

Our primary goal is to investigate differences in tacit collusion between the considered treatments. Our main outcome variables are the average degree of tacit collusion in a market, the average market price and firms’ profits.

The experimental procedure is as follows: At the beginning of the experiment, participants receive the instructions and have to answer control questions. After successfully answering the control questions, participants can familiarize themselves with the computerized market interface in a practice round. After the practice round, the actual competition phase begins, which lasts exactly 30 minutes. During this time, participants interact and set their prices in near real time. At the end of the competition phase, subjects complete an ex-post questionnaire about their behavior and experience in the experiment, their perceptions of their competitor, and general characteristics and demographics. Finally, subjects receive their payoff, which consists of their firm’s profit earned in the competition phase and a fixed fee for completing the questionnaire.
Randomization Method
Randomization by computer in office
Randomization Unit
Experimental sessions
Was the treatment clustered?
Yes

Experiment Characteristics

Sample size: planned number of clusters
In all treatments the independent observation is at the market level, as competitors’ actions depend on each other and are correlated over periods. We schedule data collection aiming at 56 independent observations per treatment.

In the human-human competition treatments an independent observation requires participation of two human subjects, whereas in the treatments with a mixed market setting an independent observation only requires participation of one human subject. Market outcomes for algorithm-algorithm competition treatments are derived from simulation outcomes and require no participation of human subjects.
Sample size: planned number of observations
We schedule data collection aiming at 56 observations per treatment (with 14 human participants per session). Thus, we aim for a total of 336 individual participants across the four treatments that involve human participants.
Sample size (or number of clusters) by treatment arms
In each of the two treatments with a mixed market setting (HA and HA-DSS) we plan for 56 participants, totaling 112 subjects. In each of the two treatments with a purely human market setting (HH and HH-DSS) we plan for 112 participants, totaling 224 subjects.
Minimum detectable effect size for main outcomes (accounting for sample design and clustering)
For our primary outcome we aim to detect an effect size of at least d = 0.5 with a power level of 80% based on a pairwise test of treatment groups.
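
As a rough sanity check on this target, power for a two-sample comparison of treatment means can be approximated with a normal approximation. This is our own illustration, not the registered power analysis: the registration does not specify the test, sidedness, or significance level, so the defaults below are assumptions.

```python
from math import erf, sqrt


def normal_cdf(x):
    """Standard normal CDF via the error function."""
    return 0.5 * (1.0 + erf(x / sqrt(2.0)))


def approx_power(d, n_per_group, z_crit=1.96):
    """Approximate power of a two-sample mean comparison at effect size d
    (Cohen's d) with n_per_group observations per arm, using a normal
    approximation. z_crit = 1.96 corresponds to a two-sided test at
    alpha = 0.05; use z_crit = 1.645 for a one-sided test."""
    return normal_cdf(d * sqrt(n_per_group / 2.0) - z_crit)
```

With 56 independent markets per arm and d = 0.5, this approximation yields roughly 75% power for a two-sided test and roughly 84% for a one-sided test, so the stated 80% target is plausible depending on the exact test specification.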
Supporting Documents and Materials

There is information in this trial unavailable to the public.
IRB

Institutional Review Boards (IRBs)

IRB Name
German Association for Experimental Economic Research e.V.
IRB Approval Date
2022-11-22
IRB Approval Number
ZxT6aGbW

Post-Trial

Post Trial Information

Study Withdrawal

There is information in this trial unavailable to the public.

Intervention

Is the intervention completed?
No
Data Collection Complete
Data Publication

Data Publication

Is public data available?
No

Program Files

Program Files
Reports, Papers & Other Materials

Relevant Paper(s)

Reports & Other Materials