Objective Search Costs and Search Cost Estimation

Last registered on November 18, 2023

Pre-Trial

Trial Information

General Information

Title
Objective Search Costs and Search Cost Estimation
RCT ID
AEARCTR-0012497
Initial registration date
November 13, 2023

Initial registration date is when the trial was registered.

It corresponds to when the registration was submitted to the Registry to be reviewed for publication.

First published
November 18, 2023, 6:15 PM EST

First published corresponds to when the trial was first made public on the Registry after being reviewed.

Locations

There is information in this trial unavailable to the public.

Primary Investigator

Affiliation
University of Innsbruck

Other Primary Investigator(s)

PI Affiliation
University of Innsbruck
PI Affiliation
Frankfurt School of Finance and Management
PI Affiliation
KU Leuven

Additional Trial Information

Status
In development
Start date
2023-11-15
End date
2024-11-30
Secondary IDs
Prior work
This trial does not extend or rely on any prior RCTs.
Abstract
Search behavior in markets depends on how consumers perceive the gains from search and their knowledge about the distribution of potential outcomes. To examine how search costs can be estimated accurately, we conduct an online search experiment in which we (a) measure objective search costs by offering piece rates for search in addition to price savings and (b) examine search behavior under varying price scales, price distributions, and searchers’ knowledge about price distributions. We estimate search costs using an empirical model that allows for biased perceptions of the gains from search. The results from the experiment allow us to study which aspects of the search environment influence search cost estimates and inform future empirical work on search markets.
External Link(s)

Registration Citation

Citation
Düll, Adrian et al. 2023. "Objective Search Costs and Search Cost Estimation." AEA RCT Registry. November 18. https://doi.org/10.1257/rct.12497-1.0
Experimental Details

Interventions

Intervention(s)
Baseline Treatment

We recruit subjects on Amazon Mechanical Turk (AMT) to an online experiment which consists of two parts. The first part is a survey in which we elicit demographic information (age, gender, education) as well as measures of cognitive ability, trust, risk preferences, and labor supply on AMT. After the survey, we describe the second part, which consists of an experimental search task.

In the search task, subjects have to buy a fictitious product. They can search for the lowest price for this product in up to 100 shops. The price at each shop (in USD) is drawn from a uniform distribution on the interval [a,b]. This distribution is shown to subjects in the instructions to the search task and on the screen where they conduct their price search. Their payoff from the search task if they purchase the fictitious product at price p equals b – p. Upon entering the search screen they can also push a button to indicate that they do not want to search at all. In this case, their payoff from the search task is zero. Before subjects can enter the search task, they have to respond to a comprehension check. After entering the search task, subjects have three days for searching, i.e., they can take breaks and return to the search task by clicking on the link to the experiment.

To search an online shop, subjects have to record and manually enter a 16-digit code. The code varies between shops and subjects. The copy-and-paste option is disabled so that the task requires some effort. After discovering the price at a shop subjects see an overview page with all prices found so far. They can then purchase the fictitious product at any previously sampled shop or continue their search (the maximal number of searches is 100).
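The mechanics of the search task can be sketched as follows (a hypothetical illustration; the interval endpoints a and b and the number of searches are placeholders, since the experiment's actual parameters are not stated in the registration):

```python
import random

def simulate_search(a, b, n_searches, rng=None):
    """Sketch of the baseline search task: each searched shop draws a
    price uniformly from [a, b]; the subject buys at the lowest price
    found and earns b - p (or 0 after pressing the no-search button)."""
    rng = rng or random.Random()
    if n_searches == 0:
        return 0.0  # subject indicated they do not want to search at all
    prices = [rng.uniform(a, b) for _ in range(min(n_searches, 100))]
    return b - min(prices)

# With placeholder values a = 5, b = 15, the payoff lies in [0, 10].
payoff = simulate_search(5.0, 15.0, n_searches=10, rng=random.Random(0))
assert 0.0 <= payoff <= 10.0
```

Searching more shops weakly raises the expected payoff, while each additional search carries the effort cost of typing a fresh 16-digit code; this trade-off is what the search cost estimation exploits.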

Treatment Variation

We implement the following treatments.

(1) Piece Rate Treatments: These treatments are identical to the baseline treatment, except that the price interval is given by [za,zb] for some value z < 1 and subjects earn a piece rate g > 0 for each searched shop in addition to the realized price savings. We consider three piece rate treatments with varying values of the piece rate g.

(2) Scale Treatments: These treatments are identical to the baseline treatment, except that the price interval is given by [za,zb]. In one scale treatment we have z = 1 (i.e., this treatment is identical to the baseline treatment), in another scale treatment we have z > 1.

(3) Distribution Treatments: These two treatments are identical to the baseline treatment, except that the distribution over prices is skewed to the right (or to the left). That is, the support of the price distribution is still [a,b], but there is more probability mass on low (or high) prices.

(4) No-Information Treatments: These treatments are identical to the baseline treatment and the two distribution treatments, except that subjects obtain no information about the price distribution, neither in the instructions nor on the screen where they conduct their search.
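The payoff consequence of the piece-rate variation can be summarized in a short sketch (hypothetical; the actual values of the piece rate g, the scale factor z, and the interval endpoints are not stated in the registration):

```python
def search_payoff(b, p, n_searches, piece_rate=0.0):
    """Payoff from the search task: the realized price saving b - p,
    plus (in the piece-rate treatments) g per searched shop."""
    return (b - p) + piece_rate * n_searches

# Baseline and scale/distribution treatments: only the saving counts.
assert search_payoff(b=15.0, p=12.0, n_searches=4) == 3.0
# Piece-rate treatment with a hypothetical g = 0.5 per search.
assert search_payoff(b=15.0, p=12.0, n_searches=4, piece_rate=0.5) == 5.0
```

Because the piece rate rewards each search by a known, experimenter-controlled amount, it changes the net cost of an additional search in a way the researcher observes directly, which is what makes the measured search costs objective.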

In all treatments, subjects earn 1 USD for the completion of the survey in the first part of the experiment (in addition to the earnings in the second part).

We will exclude subjects from the final sample who neither conduct at least one search nor indicate, by pushing the corresponding button, that they do not want to search at all. Moreover, we exclude subjects who do not purchase the fictitious product at the lowest discovered price.
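The two pre-registered exclusion rules above can be expressed as a small filter (a sketch; the field names are hypothetical, not taken from the study's data):

```python
def keep_subject(n_searches, pressed_no_search_button, bought_at_lowest):
    """Return True if a subject stays in the final sample under the
    two exclusion rules described above."""
    # Rule 1: exclude zero searches without pressing the no-search button.
    if n_searches == 0 and not pressed_no_search_button:
        return False
    # Rule 2: exclude purchases above the lowest discovered price.
    if n_searches > 0 and not bought_at_lowest:
        return False
    return True

# A subject who explicitly opted out of searching is kept.
assert keep_subject(0, pressed_no_search_button=True, bought_at_lowest=False)
# A subject who searched but bought at a higher price is dropped.
assert not keep_subject(3, pressed_no_search_button=False, bought_at_lowest=False)
```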
Intervention Start Date
2023-11-15
Intervention End Date
2024-11-30

Primary Outcomes

Primary Outcomes (end points)
(1) Observed prices, realized price savings

(2) Time required for each search, total search time

(3) Survey measures on demographics, cognitive ability, risk preferences, and labor supply on the online platform
Primary Outcomes (explanation)

Secondary Outcomes

Secondary Outcomes (end points)
Secondary Outcomes (explanation)

Experimental Design

Experimental Design
For all details, see the description of the intervention.
Experimental Design Details
Not available
Randomization Method
Randomization by a computer
Randomization Unit
We randomize at the individual level.
Was the treatment clustered?
No

Experiment Characteristics

Sample size: planned number of clusters
n/a
Sample size: planned number of observations
Planned number of observations: Our goal is to recruit at least 2,000 subjects in total. The task will remain on AMT until 2,200 subjects have completed the study.
Sample size (or number of clusters) by treatment arms
At least 200 subjects per treatment.
Minimum detectable effect size for main outcomes (accounting for sample design and clustering)
n/a
IRB

Institutional Review Boards (IRBs)

IRB Name
Board for Ethical Questions in Science of the University of Innsbruck
IRB Approval Date
2023-11-13
IRB Approval Number
Certificate 97/2023
Analysis Plan

There is information in this trial unavailable to the public.