Bidder Power in a Multi-Unit Auction

Last registered on November 01, 2023

Pre-Trial

Trial Information

General Information

Title
Bidder Power in a Multi-Unit Auction
RCT ID
AEARCTR-0012363
Initial registration date
October 29, 2023

Initial registration date is when the trial was registered.

It corresponds to when the registration was submitted to the Registry to be reviewed for publication.

First published
November 01, 2023, 4:26 PM EDT

First published corresponds to when the trial was first made public on the Registry after being reviewed.

Locations

Region

Primary Investigator

Affiliation
Corvinus University of Budapest

Other Primary Investigator(s)

Additional Trial Information

Status
In development
Start date
2023-10-31
End date
2024-07-31
Secondary IDs
Prior work
This trial does not extend or rely on any prior RCTs.
Abstract
In a laboratory experiment, we examine bidder behavior in a multi-unit single-price auction. Two units of a good are for sale. Each bidder has a randomly determined private valuation. We contrast the behavior of one human bidder in an auction with that of two human bidders. Each human bidder demands two units. The presence of computer-simulated bidders with demand for one unit each ensures that total demand is equal across treatments.
External Link(s)

Registration Citation

Citation
Orland, Andreas. 2023. "Bidder Power in a Multi-Unit Auction." AEA RCT Registry. November 01. https://doi.org/10.1257/rct.12363-1.0
Experimental Details

Interventions

Intervention(s)
Intervention Start Date
2023-10-31
Intervention End Date
2023-11-30

Primary Outcomes

Primary Outcomes (end points)
Subjects' decisions and market outcomes:
- two bids per round and subject (40 per subject; 40 per cluster in treatment B1, 80 per cluster in treatment B2),
- one market price per round (20 per cluster).
Primary Outcomes (explanation)
In the following, we list our hypotheses and the related statistical tests.

With Hypothesis 1, we examine whether two large human bidders are able to lower the price relative to the situation with only one large human bidder.

Hypothesis 1: The price of the good in B2 is lower than the price in B1.

Test: We calculate the average price across all rounds for each individual (B1) and each pair of individuals (B2). Then we use the Wilcoxon-Mann-Whitney test to compare the average prices between the two treatments.
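The treatment comparison described above can be sketched with SciPy's implementation of the Wilcoxon-Mann-Whitney test. The price arrays below are randomly generated placeholders, not actual results; the one-sided alternative encodes Hypothesis 1 (prices in B2 are lower than in B1):

```python
import numpy as np
from scipy.stats import mannwhitneyu

# Hypothetical average prices per independent observation:
# one value per individual in B1, one per fixed pair in B2.
rng = np.random.default_rng(0)
avg_price_b1 = rng.uniform(40, 60, size=40)  # 40 individuals in B1
avg_price_b2 = rng.uniform(30, 55, size=40)  # 40 pairs in B2

# One-sided test of Hypothesis 1: prices in B2 are lower than in B1.
stat, p_value = mannwhitneyu(avg_price_b2, avg_price_b1, alternative="less")
print(f"U = {stat:.1f}, p = {p_value:.4f}")
```

The same call, applied to the per-cluster averages of the higher and lower bids (or their percentage and absolute deviations from valuation), would serve Hypotheses 2a and 2b with a two-sided alternative.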

With Hypothesis 2, we examine whether the presence of an additional large human bidder in B2 reinforces or weakens play according to the predicted strategy (i.e., bidding truthfully on the first, higher bid and bidding below valuation on the second, lower bid) compared to B1.

Hypothesis 2a: Truthful bidding is not different between the two treatments.

Test: We calculate the average of the higher of the two bids for each individual (B1) and each pair of individuals (B2) across all rounds. We then use the Wilcoxon-Mann-Whitney test to compare these numbers between the two treatments. To account for the different valuations of each bidder in each round, we also use (i) the percentage deviation of each bid from the bidder's valuation, and (ii) the absolute deviation of each bid from the bidder's valuation.

Hypothesis 2b: Strategic bidding is not different between the two treatments.

Test: We calculate the average of the lower of the two bids for each individual (B1) and each pair of individuals (B2) across all rounds. We then use the Wilcoxon-Mann-Whitney test to compare these numbers between the two treatments. To account for the different valuations of each bidder in each round, we also use (i) the percentage deviation of each bid from the bidder's valuation, and (ii) the absolute deviation of each bid from the bidder's valuation.

We will also use regressions on individual-level data as an additional test for Hypotheses 1, 2a, and 2b.

Secondary Outcomes

Secondary Outcomes (end points)
Questionnaire items: subjects' gender, student of management or economics (1 = yes, 0 = no), student of game theory (1 = yes, 0 = no), high school math grade, and individual decision times (in seconds).
Secondary Outcomes (explanation)
The planned exploratory analyses and questions are:
- How do revenue and efficiency differ between treatments?
- Do subjects learn over time (across auctions)? Do subjects in B1 learn to bid truthfully on their first, higher unit and to underbid on their second, lower unit? Do subjects in B2 learn to coordinate (and underbid on their second unit), or do they compete more intensely as time passes?
- Do bids and outcomes from past periods explain current-period behavior?
- Do questionnaire characteristics explain observed behavior?

Experimental Design

Experimental Design
We conduct a lab experiment with two treatments.

In treatment B1, one subject with demand for two units of a good participates in an auction. Two units of the good are auctioned. Three computer-simulated bidders (each with demand for one unit of the good) bid their valuation.

In treatment B2, two subjects with demand for two units of a good participate in an auction. Two units of the good are auctioned. One computer-simulated bidder (with demand for one unit of the good) bids its valuation.

In each auction, the two highest bids each receive a unit of the good. The price is set by the highest rejected bid. Winning bidders pay this price and earn their private valuation minus the price; all other bidders receive nothing.

Each individual (B1) or each pair of individuals (B2) goes through 20 rounds (auctions). In B2, the matching of pairs is fixed throughout the experiment.
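The allocation and pricing rule described above can be sketched as follows. The bidder labels and bid values are illustrative only:

```python
def run_auction(bids, units=2):
    """Allocate `units` units to the highest bids; the price is the
    highest rejected bid (uniform price paid by all winners).

    `bids` is a list of (bidder_id, bid) tuples; a bidder with demand
    for two units simply submits two tuples.
    """
    ranked = sorted(bids, key=lambda b: b[1], reverse=True)
    winners = ranked[:units]       # the highest `units` bids win
    price = ranked[units][1]       # highest rejected bid sets the price
    return winners, price

# Treatment B1: one human bidder (H) submits two bids; three computer
# bidders (C1-C3) each bid their single-unit valuation.
bids = [("H", 72), ("H", 55), ("C1", 60), ("C2", 48), ("C3", 30)]
winners, price = run_auction(bids)
# Sorted bids: 72, 60, 55, 48, 30 -> H (72) and C1 (60) win a unit each;
# the highest rejected bid is 55, so both winners pay a price of 55.
print(winners, price)
```

Note that a winner's payoff is its valuation minus the uniform price, which is what gives the large bidder an incentive to underbid on its second unit.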
Experimental Design Details
Randomization Method
A random subset of potential subjects in a database will receive an invitation to one or more sessions. The randomization is computer-based. Potential subjects do not know the content of the experiment, nor the treatment for which they sign up, at the time they register. Each subject takes part in only one experimental session.
Randomization Unit
Randomization is by experimental session. All participants of a session participate either in treatment B1 or treatment B2.
Was the treatment clustered?
No

Experiment Characteristics

Sample size: planned number of clusters
40 independent observations per treatment: 40 individual subjects in treatment B1 and 40 fixed pairs of subjects in treatment B2.
Sample size: planned number of observations
120 subjects.
Sample size (or number of clusters) by treatment arms
40 subjects in B1, 80 subjects in B2.
Minimum detectable effect size for main outcomes (accounting for sample design and clustering)
IRB

Institutional Review Boards (IRBs)

IRB Name
IRB Approval Date
IRB Approval Number

Post-Trial

Post Trial Information

Study Withdrawal


Intervention

Is the intervention completed?
No
Data Collection Complete
Data Publication

Data Publication

Is public data available?
No

Program Files

Program Files
Reports, Papers & Other Materials

Relevant Paper(s)

Reports & Other Materials