Spatial Search: Experimental Evidence

Last registered on June 24, 2024


Trial Information

General Information

Spatial Search: Experimental Evidence
Initial registration date
June 07, 2024

The initial registration date is when the registration was submitted to the Registry to be reviewed for publication.

First published
June 24, 2024, 12:23 PM EDT

First published corresponds to when the trial was first made public on the Registry after being reviewed.



Primary Investigator

Northwestern University

Other Primary Investigator(s)

PI Affiliation
Northwestern University
PI Affiliation
University of Southern California

Additional Trial Information

In development
Start date
End date
Secondary IDs
Prior work
This trial does not extend or rely on any prior RCTs.
In various contexts, such as shopping, experimentation, and problem-solving, agents engage in dynamic trial-and-error search processes. They decide both where to search and when to stop before choosing from a set of available options. We investigate how behavior changes with the complexity of the search problem. To this end, we design a controlled lab experiment where participants uncover positions with unknown prizes, paying a search cost for each revealed position. We vary the complexity by restricting the possible prizes at different positions and telling participants about these restrictions.
External Link(s)

Registration Citation

Malladi, Suraj, Alejandro Martinez Marquina and Ilya Morozov. 2024. "Spatial Search: Experimental Evidence." AEA RCT Registry. June 24.
Experimental Details


Experimental Conditions:
1. [“Unrestricted”]. Prizes are not restricted.
2. [“High Variability”]. Prizes in adjacent positions cannot differ by more than 10 cents.
3. [“Low Variability”]. Prizes in adjacent positions cannot differ by more than 5 cents.
4. [“Quasi-Concave”]. Prizes in adjacent positions cannot differ by more than 10 cents AND the function that maps positions to prizes is quasi-concave (i.e., there is a unique maximum, and prizes decrease the farther one gets from this maximum).
5. [“Known Maximum”]. Prizes in adjacent positions cannot differ by more than 10 cents AND the participant knows that at least one position contains a prize of $1.
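As an illustration, prize sequences satisfying these restrictions could be generated as below. This is only a sketch: prizes are coded in cents (0–100), and the generation process actually used in the experiment is not described in the registration.

```python
import random

def prizes_bounded(n=100, max_step=10):
    """Conditions 2 and 3: a bounded random walk on [0, 100] cents where
    adjacent prizes differ by at most max_step (10 for "High Variability",
    5 for "Low Variability"). One plausible construction, not necessarily
    the experiment's actual generator."""
    p = [random.randint(0, 100)]
    for _ in range(n - 1):
        lo, hi = max(0, p[-1] - max_step), min(100, p[-1] + max_step)
        p.append(random.randint(lo, hi))
    return p

def prizes_quasi_concave(n=100, max_step=10):
    """Condition 4: a unique peak, with prizes decreasing (floored at 0)
    the farther one moves from it, respecting the max_step bound."""
    peak = random.randrange(n)
    p = [0] * n
    p[peak] = random.randint(50, 100)
    for i in range(peak - 1, -1, -1):   # walk down to the left of the peak
        p[i] = max(0, p[i + 1] - random.randint(1, max_step))
    for i in range(peak + 1, n):        # walk down to the right of the peak
        p[i] = max(0, p[i - 1] - random.randint(1, max_step))
    return p
```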
Intervention Start Date
Intervention End Date

Primary Outcomes

Primary Outcomes (end points)
Hypotheses and outcome variables:

As we vary complexity, we look for evidence that consumers change their search behavior in response to the information provided about the variability of prizes. The following hypotheses are based on the stylized predictions of rational models of spatial search in Malladi (2022) and Banchio and Malladi (2024):
H1: Participants respond to the similarity of prizes in nearby positions.
- Test H1A: Compare average net payoffs (maximum discovered prize minus total search costs) in conditions 1 and 2. Better performance in condition 2 would indicate that consumers make use of spatial correlation to select better search locations.
- Test H1B: Compare average net payoffs in conditions 3, 4, and 5 against those in condition 2. We expect participants to earn lower net payoffs in condition 2 due to its higher search complexity.
- Test H1C: Test whether time spent on search per round increases with complexity. Condition 1 is the most complex and should be the slowest, condition 2 should lead to faster search, and conditions 3, 4, and 5 should lead to the fastest search due to the lowest complexity.
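The net-payoff outcome behind Tests H1A and H1B can be computed per task as follows. This is a sketch: the per-reveal search cost used as a default is a placeholder, since the registration does not state its value.

```python
def net_payoff(revealed_prizes, search_cost=0.05):
    """Net payoff for one task: maximum discovered prize minus total
    search costs. search_cost is a hypothetical placeholder value."""
    return max(revealed_prizes) - search_cost * len(revealed_prizes)

def mean_net_payoff(tasks, search_cost=0.05):
    """Average net payoff across the tasks of one condition, the quantity
    compared across conditions in Tests H1A and H1B."""
    payoffs = [net_payoff(t, search_cost) for t in tasks]
    return sum(payoffs) / len(payoffs)
```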

H2: Uncovering a low prize leads the participant to make a large step away from this position, whereas uncovering a high prize leads them to make smaller steps, staying near this position.
- Test H2: In condition 2, regress step size on (a) the first discovered prize, (b) the latest discovered prize, (c) the difference between the latest discovered prize and the maximum prize uncovered by that point, or (d) the difference between the last and second-to-last discovered prizes. We expect a negative coefficient. Repeat the same analysis in condition 3, where we expect a coefficient that is larger in magnitude (stronger spatial correlation leads to larger steps).
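Each of these regressions is a univariate OLS fit, whose slope is cov(x, y)/var(x). A minimal sketch, with hypothetical data standing in for prizes and step sizes:

```python
def ols_slope(x, y):
    """Univariate OLS slope of y on x: cov(x, y) / var(x)."""
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    cov = sum((a - mx) * (b - my) for a, b in zip(x, y))
    var = sum((a - mx) ** 2 for a in x)
    return cov / var

# Hypothetical example: latest discovered prizes (x) against the size of
# the next step (y). H2 predicts a negative slope: higher prizes, smaller steps.
latest_prizes = [0.1, 0.5, 0.9]
step_sizes = [30, 15, 5]
```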

H3: Knowing there is a sweet spot leads participants to engage in “funneling”, taking progressively smaller steps and gradually zooming in on the maximum. Funneling is also more likely to arise in the low variability condition than in the high variability condition.
- Test H3: We will test whether participants in conditions 3 and 4 engage in funneling search more frequently than those in condition 2. Funnelers take steps in the same direction after good news (the uncovered prize beats the previous prize) and change direction after bad news.
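One way to code whether a search path is consistent with funneling, under the rule just described (a sketch; the analysis plan may operationalize funneling differently):

```python
def funneling_consistent(positions, prizes):
    """Return True if every step from the third reveal onward follows the
    funneling rule: keep direction after good news (latest prize beat the
    previous one), reverse direction after bad news. `positions` and
    `prizes` are parallel lists over the reveals of one task."""
    for i in range(2, len(positions)):
        prev_dir = positions[i - 1] - positions[i - 2]
        cur_dir = positions[i] - positions[i - 1]
        good_news = prizes[i - 1] > prizes[i - 2]
        if good_news and cur_dir * prev_dir <= 0:
            return False  # good news, but the searcher reversed direction
        if not good_news and cur_dir * prev_dir >= 0:
            return False  # bad news, but the searcher kept going
    return True
```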

H4: Uncovering low prizes discourages participants from further searching in high-variability and low-variability conditions but not in unrestricted and known-maximum conditions.
- Test H4A: We will compute the share of search tasks with recall (returning to a previously uncovered position) by condition. Seeing recall would suggest the presence of discouragement effects.
- Test H4B: We will estimate the stopping regions nonparametrically as a function of (a) last uncovered prize, and (b) maximum prize the participant had before uncovering the last prize. We will use classification trees for estimation but may try other semi- and nonparametric estimation methods. We expect discouragement effects in conditions 2 and 3 but not in conditions 1 and 5.
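A minimal stand-in for the tree-based estimation in Test H4B is a one-split "stump" on the last uncovered prize. The rule "stop iff last prize ≥ t" below is a hypothetical simplification for illustration; the actual analysis will use full classification trees over both covariates.

```python
def best_stopping_threshold(last_prizes, stopped):
    """Find the threshold t on the last uncovered prize that minimizes
    misclassification of stop/continue decisions under the hypothetical
    rule 'stop iff last prize >= t'. Returns (threshold, errors)."""
    best_t, best_err = None, float("inf")
    for t in sorted(set(last_prizes)):
        err = sum((p >= t) != s for p, s in zip(last_prizes, stopped))
        if err < best_err:
            best_t, best_err = t, err
    return best_t, best_err
```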
Primary Outcomes (explanation)

Secondary Outcomes

Secondary Outcomes (end points)
Secondary Outcomes (explanation)

Experimental Design

Experimental Design
We built an online app where participants can perform search tasks. In each task, a participant is presented with 100 positions, each containing an unknown prize between $0 and $1. The participant must choose which and how many positions to uncover. To reveal a prize in a given position, the participant must pay a search cost. Prizes are uncovered sequentially, which allows the participant to learn from past discoveries.
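A minimal simulation of one such task might look like this. It is a sketch: the per-reveal cost is an assumed placeholder (the registration does not state its value), and `peek_middle` is a made-up example strategy.

```python
def run_task(prizes, strategy, cost=0.05):
    """Simulate one search task. `prizes` maps each position to a prize in
    [0, 1]; `strategy` sees the history of (position, prize) reveals and
    returns the next position to uncover, or None to stop. Returns the net
    payoff: best discovered prize minus total search costs."""
    history = []
    while True:
        pos = strategy(history)
        if pos is None:
            break
        history.append((pos, prizes[pos]))
    best = max((prize for _, prize in history), default=0.0)
    return best - cost * len(history)

# Hypothetical example strategy: uncover position 50 once, then stop.
def peek_middle(history):
    return 50 if not history else None
```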
Experimental Design Details
Randomization Method
Randomization done at the back-end of our online platform hosted on
Randomization Unit
Was the treatment clustered?

Experiment Characteristics

Sample size: planned number of clusters
2,500 participants.
Sample size: planned number of observations
2,500 participants.
Sample size (or number of clusters) by treatment arms
We will aim to recruit 500 participants per condition, which amounts to 2,500 participants in total.
Minimum detectable effect size for main outcomes (accounting for sample design and clustering)

Institutional Review Boards (IRBs)

IRB Name
Northwestern University IRB
IRB Approval Date
IRB Approval Number
Analysis Plan

Analysis Plan Documents


MD5: c4d5b37de1756e16d86ce75629db46b1

SHA1: 9a773bb1374d6c0f7bb13fe62d5f7abb9461e6d1

Uploaded At: June 07, 2024


Post Trial Information

Study Withdrawal

There is information in this trial unavailable to the public.


Is the intervention completed?
Data Collection Complete
Data Publication

Data Publication

Is public data available?

Program Files

Program Files
Reports, Papers & Other Materials

Relevant Paper(s)

Reports & Other Materials