Bidding against the machine: Preference elicitation in the presence of bots

Last registered on January 28, 2026

Pre-Trial

Trial Information

General Information

Title
Bidding against the machine: Preference elicitation in the presence of bots
RCT ID
AEARCTR-0017750
Initial registration date
January 22, 2026

Initial registration date is when the trial was registered.

It corresponds to when the registration was submitted to the Registry to be reviewed for publication.

First published
January 28, 2026, 7:09 AM EST

First published corresponds to when the trial was first made public on the Registry after being reviewed.

Locations

Region

Primary Investigator

Affiliation
Texas A&M University

Other Primary Investigator(s)

PI Affiliation
Texas A&M University
PI Affiliation
Texas A&M University
PI Affiliation
Agricultural University of Athens

Additional Trial Information

Status
In development
Start date
2026-02-02
End date
2026-02-09
Secondary IDs
Prior work
This trial does not extend or rely on any prior RCTs.
Abstract
Experimental economists increasingly rely on online experiments to elicit willingness-to-pay (WTP) values. However, administering and coordinating the strategic interactions with simultaneous participation that many auction mechanisms require remains challenging in online environments. Using automated agents (bots) offers a potential solution, but their influence on human bidding behavior is not well understood. Under expected utility theory and the independent private values framework, the Second-Price Auction (SPA) (Vickrey, 1961) is theoretically incentive compatible, meaning that bidding one’s true valuation is a weakly dominant strategy regardless of other participants’ bids. Hence, under expected utility theory, group size and composition should not affect bids. However, the previous literature documents deviations from optimal bidding even in settings where the value of the item is fixed. This study employs the SPA mechanism to examine how the presence and proportion of computer-based bidders (bots) affect human bidding behavior and valuation outcomes. To address this, we implement a between-subjects experimental design in which human participants are randomly assigned to auction groups of four agents that vary exogenously in the number of human and bot participants, while holding auction rules, information, and payoff structures constant. The experiment implements both induced-value auction tasks, which allow direct measurement of deviations from optimal bidding, and homegrown valuation tasks eliciting willingness-to-pay (WTP) for consumer goods. Based on the incentive compatibility of the second-price auction, the null hypothesis is H0: bots will not affect human bidding behavior. If the null hypothesis is rejected, then we expect higher dispersion of bids as more bots are introduced into the bidding group.
External Link(s)

Registration Citation

Citation
Drichoutis, Andreas et al. 2026. "Bidding against the machine: Preference elicitation in the presence of bots." AEA RCT Registry. January 28. https://doi.org/10.1257/rct.17750-1.0
Sponsors & Partners

There is information in this trial unavailable to the public.
Experimental Details

Interventions

Intervention(s)
Participants complete a series of Second-Price Auction (SPA) tasks administered online. In each task, participants submit a sealed bid for a single unit of a good. The highest bidder wins the auction and pays a price equal to the second-highest bid. All participants receive detailed instructions explaining the auction rules and payoff structure prior to participation. The key experimental intervention is the exogenous variation in group composition, specifically the number of automated bidders (bots) participating alongside human subjects in the auction. All groups are composed of four agents. We exogenously vary the number of bots in a group (0, 1, 2, or 3) according to treatment assignment. Bots submit bids according to a pre-programmed bidding rule described in the Experimental Design section below. Auction rules, information, and payoff parameters are held constant across treatments. The experiment consists of two parts: a homegrown valuation setting and an induced-value setting. The homegrown setting comprises four rounds in which participants bid for real consumer goods to elicit willingness-to-pay (WTP). The induced-value setting comprises two rounds in which participants bid for tokens with known monetary redemption values, allowing direct measurement of deviations from truthful bidding.
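For illustration only, a minimal Python sketch of how one sealed-bid second-price auction round with four agents could be resolved. The function names and tie handling are hypothetical assumptions rather than the registered software; the bot rule (independent uniform draws on [$0, $8]) is taken from the Experimental Design section below.

```python
import random

def bot_bid(low=0.0, high=8.0):
    """Hypothetical bot rule: an independent draw from U($0, $8)."""
    return random.uniform(low, high)

def resolve_spa(human_bids, n_bots):
    """Resolve one sealed-bid SPA: top off the group with bots, then the
    highest bidder wins and pays the second-highest bid."""
    bids = list(human_bids) + [bot_bid() for _ in range(n_bots)]
    assert len(bids) == 4, "all auction groups consist of four agents"
    ranked = sorted(range(len(bids)), key=lambda i: bids[i], reverse=True)
    winner, price = ranked[0], bids[ranked[1]]
    return winner, price

# Example: a T2 group (2 humans, 2 bots); indices 0 and 1 are the humans.
winner, price = resolve_spa([3.50, 6.20], n_bots=2)
```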
Intervention (Hidden)
Intervention Start Date
2026-02-02
Intervention End Date
2026-02-09

Primary Outcomes

Primary Outcomes (end points)
(a) Individual willingness-to-pay values in the homegrown valuation setting.
(b) Deviation from truthful bidding in the induced-value setting: (i) absolute differences |Bid − V|; (ii) relative absolute differences |Bid − V| / V.
Primary Outcomes (explanation)
In the homegrown setting, WTP is elicited for consumer goods, allowing examination of whether exposure to bots affects valuation means, dispersion, or consistency across rounds. More details are provided in the Analysis Plan. In the induced-value setting, the primary outcome is deviation from optimal bidding, measured as both relative deviations and absolute deviations from the induced value. These outcomes directly capture departures from the weakly dominant bidding strategy in the SPA. We use elicited bids to test reduced-form, distributional, and regression-based hypotheses concerning deviations from the weakly dominant strategy of truthful bidding in the SPA.
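As a worked illustration of the two deviation measures (a sketch; the function and variable names are ours, not from the protocol):

```python
def deviations(bid, v):
    """Deviation from truthful bidding for an induced value v:
    absolute |bid - v| and relative |bid - v| / v."""
    abs_dev = abs(bid - v)
    return abs_dev, abs_dev / v

# Example: a bid of $4.60 on the $4 token.
abs_dev, rel_dev = deviations(4.60, 4.0)   # -> (0.60, 0.15)
```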

Secondary Outcomes

Secondary Outcomes (end points)
(a) Bid dispersion, measured by the variance and interquartile range of bids within treatments.
(b) Frequencies of overbidding and underbidding relative to induced values.
(c) Within-subject bid variability across repeated rounds.
Secondary Outcomes (explanation)
Secondary outcomes characterize the stability, direction, and dispersion of bidding behavior and the auction outcomes implied by submitted bids. Specifically, we examine bid dispersion within treatments, frequencies of overbidding and underbidding relative to induced values, and within-subject bid variability across repeated rounds, which are reported descriptively as mechanical consequences of bidding behavior. Please see the Analysis Plan in Section 6.
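A minimal sketch of how these descriptive secondary outcomes could be computed for one treatment cell (assumed helper names; this is not the registered analysis code):

```python
import statistics

def secondary_outcomes(bids, induced_value):
    """Bid dispersion (variance, IQR) and over-/underbidding frequencies
    for one treatment's induced-value bids."""
    q1, _, q3 = statistics.quantiles(bids, n=4)
    n = len(bids)
    return {
        "variance": statistics.variance(bids),
        "iqr": q3 - q1,
        "share_overbid": sum(b > induced_value for b in bids) / n,
        "share_underbid": sum(b < induced_value for b in bids) / n,
    }

print(secondary_outcomes([3.0, 4.0, 4.5, 5.5, 4.0], induced_value=4.0))
```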

Experimental Design

Experimental Design
The experiment follows a between-subjects design with three treatments and a control condition. Each participant is assigned to one condition that determines the human-to-bot composition of their auction group. Group composition remains fixed for the entire experiment. All auction groups consist of four bidders. Our treatments vary the number of bots present in the group: (i) T0: 4 humans and 0 bots; (ii) T1: 3 humans and 1 bot; (iii) T2: 2 humans and 2 bots; (iv) T3: 1 human and 3 bots. Participants are informed that some auction participants are automated agents (bots), as well as of the number of humans and bots in their group. Automated bidders submit bids drawn independently from a uniform distribution over the interval [$0, $8]. The distribution is common knowledge to the participants. The experiment is divided into two parts. Part 1 is the homegrown valuation environment and has four tasks that will be completed in a randomized order. In task 1, subjects are asked to submit bids to buy a 24 fl oz bottle of peanut oil. In task 2, subjects are asked to submit bids to buy a 24 fl oz bottle of soybean oil. Tasks 3 and 4 use the same products, but this time subjects are provided with health information about the products.
Part 2 is the induced-value environment and has two tasks that will be completed in a randomized order. In each task, subjects are asked to submit bids to buy a token that is worth a fixed monetary value. In the first task, the induced value is $4, and in the second task, the induced value is $5. At the end of the experiment, 10% of participants are randomly selected for payoff realization. For selected participants, one task from each part is randomly drawn to determine their bonus compensation.
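For concreteness, a sketch of the payoff-realization step described above (hypothetical function and field names; the registered implementation may differ):

```python
import random

def payoff_realization(participant_ids, share=0.10,
                       part1_tasks=4, part2_tasks=2):
    """Select 10% of participants at random; for each selected participant,
    draw one task from Part 1 and one from Part 2 for bonus payment."""
    k = round(share * len(participant_ids))
    selected = random.sample(participant_ids, k)
    return {
        pid: {"part1_task": random.randrange(1, part1_tasks + 1),
              "part2_task": random.randrange(1, part2_tasks + 1)}
        for pid in selected
    }
```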
Experimental Design Details
Randomization Method
Computer
Randomization Unit
Treatments are randomized at the individual level.
Was the treatment clustered?
No

Experiment Characteristics

Sample size: planned number of clusters
Not clustered
Sample size: planned number of observations
6,480 observations (1,080 individuals × 6 rounds)
Sample size (or number of clusters) by treatment arms
(i) T0 (4 humans and 0 bots): 270 humans (∼68 groups), (ii) T1 (3 humans and 1 bot): 270 humans (90 groups), (iii) T2 (2 humans and 2 bots): 270 humans (135 groups), (iv) T3 (1 human and 3 bots): 270 humans (270 groups)
Minimum detectable effect size for main outcomes (accounting for sample design and clustering)
The statistical analysis consists of pairwise mean comparisons between treatment conditions using a repeated-measures framework. Following Cohen’s effect-size approach, the target detectable difference is expressed as Cohen’s d, the standardized mean difference d = (μ1 − μ2)/σ, which is unit-free and comparable across outcomes (Cohen, 1988). We set a two-sided significance level of α = 0.05 and target power 1 − β = 0.80.

Because each participant completes repeated auction rounds within each block, the required per-treatment sample size is adjusted for M repeated observations per subject and within-person correlation ρ (design-effect factor). The per-group sample size for comparing two means is

n = [2(z1−α/2 + z1−β)² / d²] · [1 + (M − 1)ρ] / M,

as in standard repeated-measures power calculations (Diggle et al., 2002; Kupper and Hafner, 1989; Lui and Wu, 2005). Intuitively, for fixed M, higher ρ reduces the incremental information contributed by additional repeated measures and therefore increases the required number of participants, whereas larger d reduces the required sample size. We compute sensitivity for M = 4 (homegrown rounds) and ρ ∈ {0.25, 0.50, 0.75} under Cohen’s conventional benchmarks d ∈ {0.20, 0.50, 0.80}.

Based on conservative assumptions, we target 245 participants per treatment arm and oversample by 10% to account for attrition and unusable responses, yielding a recruitment target of 270 participants per treatment arm (total N = 1,080). Primary analyses will use inference procedures that account for within-participant dependence arising from repeated rounds, with standard errors clustered at the individual level, while the sample size calculation above addresses the repeated-measures structure at the participant level.
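Assuming the design-effect factor is [1 + (M − 1)ρ]/M as written above, the calculation can be reproduced with a short script; at d = 0.20, M = 4, ρ = 0.50 it returns 246 per arm, matching the stated target of roughly 245 up to rounding.

```python
from math import ceil
from statistics import NormalDist

def n_per_arm(d, M, rho, alpha=0.05, power=0.80):
    """Per-arm n for a two-sided two-mean comparison with M repeated
    measures per subject and within-person correlation rho:
    n = 2(z_{1-a/2} + z_{1-b})^2 / d^2 * (1 + (M - 1) rho) / M."""
    z = NormalDist().inv_cdf
    base = 2 * (z(1 - alpha / 2) + z(power)) ** 2 / d ** 2
    return ceil(base * (1 + (M - 1) * rho) / M)

# Sensitivity grid from the text: M = 4 homegrown rounds.
for d in (0.20, 0.50, 0.80):
    for rho in (0.25, 0.50, 0.75):
        print(f"d={d:.2f}, rho={rho:.2f}: n={n_per_arm(d, M=4, rho=rho)}")
```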
IRB

Institutional Review Boards (IRBs)

IRB Name
Texas A&M Institutional Review Board
IRB Approval Date
2026-01-21
IRB Approval Number
STUDY2025-1416
Analysis Plan

There is information in this trial unavailable to the public.

Post-Trial

Post Trial Information

Study Withdrawal

There is information in this trial unavailable to the public.

Intervention

Is the intervention completed?
No
Data Collection Complete
Data Publication

Data Publication

Is public data available?
No

Program Files

Program Files
Reports, Papers & Other Materials

Relevant Paper(s)

Reports & Other Materials