Automating Automaticity: Online Lab Experiments

Last registered on July 18, 2025

Pre-Trial

Trial Information

General Information

Title
Automating Automaticity: Online Lab Experiments
RCT ID
AEARCTR-0015526
Initial registration date
May 02, 2025

First published
May 06, 2025, 5:07 AM EDT

Last updated
July 18, 2025, 3:57 PM EDT

Locations

Region

Primary Investigator

Affiliation
Cornell University

Other Primary Investigator(s)

PI Affiliation
University of Chicago
PI Affiliation
MIT
PI Affiliation
UC-Berkeley

Additional Trial Information

Status
In development
Start date
2025-07-14
End date
2025-12-31
Secondary IDs
Prior work
This trial does not extend or rely on any prior RCTs.
Abstract
Consumer choices are increasingly mediated by algorithms, which use data on those past choices to infer consumer preferences and then curate future choice sets. Behavioral economics suggests one reason these algorithms may fail: choices can systematically deviate from preferences. For example, research shows that prejudice can arise not just from preferences and beliefs, but also from the context in which people choose. When people behave automatically, biases creep in; snap decisions are typically more prejudiced than slow, deliberate ones, and can lead to behaviors that users themselves do not consciously want or intend. As a result, algorithms trained on automatic behaviors can misunderstand the prejudice of users: the more automatic the behavior, the greater the error. This RCT is an online lab experiment where we test this idea.
External Link(s)

Registration Citation

Citation
Agan, Amanda et al. 2025. "Automating Automaticity: Online Lab Experiments." AEA RCT Registry. July 18. https://doi.org/10.1257/rct.15526-1.2
Experimental Details

Interventions

Intervention(s)
Hidden until after experiment takes place to preserve integrity of experiment.
Intervention (Hidden)
We will recruit white male subjects on the Prolific platform. Subjects view a series of movie recommendations, each attributed to a named recommender (e.g., "Amanda A. recommends..."), and choose a subset of those movies to potentially watch. The movies and recommendations are shown three at a time on the screen, much as results from search algorithms or social media feeds might appear, and the user can click "load more" to see more. We randomly assign names to the recommendations, as in a traditional audit study.
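
For concreteness, a minimal sketch of the movie-level name assignment in Python; the name pool and its perceived race/gender labels are illustrative placeholders, not the study's actual audit materials:

    import random

    # Hypothetical name pool; the names and perceived race/gender labels are
    # illustrative, not the study's actual audit-study materials.
    NAME_POOL = [
        {"name": "Amanda A.", "race": "white", "gender": "female"},
        {"name": "Greg B.", "race": "white", "gender": "male"},
        {"name": "Lakisha C.", "race": "black", "gender": "female"},
        {"name": "Jamal D.", "race": "black", "gender": "male"},
    ]

    def assign_names(movies, rng):
        """Attach an independently drawn recommender name to each movie,
        as in a traditional audit study (randomization at the movie level)."""
        return [{"movie": m, "recommender": rng.choice(NAME_POOL)} for m in movies]

    # One respondent's feed, shown three at a time with a "load more" button.
    feed = assign_names([f"Movie {i}" for i in range(1, 43)], random.Random(42))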


The main randomized intervention is the context in which respondents make decisions. Respondents are randomized into one of two conditions:
Rushed condition: subjects are told they have limited time to make a decision, and a clearly visible countdown clock counts down in milliseconds.
Non-rushed condition: subjects are told they have plenty of time, and the countdown clock counts down in minutes.

Because we do not know how fast people need to go to induce a feeling of stress or automaticity, we plan to vary how much time is actually available in the rushed condition: 5 minutes in one experiment and 1 minute in the other.
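
A similar sketch of the respondent-level assignment, assuming a simple 50/50 draw and the time budgets above; the non-rushed time budget and all identifiers are our assumptions, not the study's code:

    import random

    RUSHED_SECONDS = {"experiment_5min": 300, "experiment_1min": 60}
    NON_RUSHED_SECONDS = 30 * 60  # assumed; the registration does not state it

    def assign_condition(respondent_id, experiment, rng):
        """Randomize one respondent to rushed vs. non-rushed (individual level)."""
        rushed = rng.random() < 0.5
        return {
            "respondent_id": respondent_id,
            "condition": "rushed" if rushed else "non_rushed",
            "time_limit_seconds": RUSHED_SECONDS[experiment] if rushed else NON_RUSHED_SECONDS,
            # Rushed subjects see a countdown in milliseconds; non-rushed in minutes.
            "clock_display": "milliseconds" if rushed else "minutes",
        }

    rng = random.Random(2025)
    assignments = [assign_condition(i, "experiment_5min", rng) for i in range(1600)]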
Intervention Start Date
2025-07-14
Intervention End Date
2025-07-18

Primary Outcomes

Primary Outcomes (end points)
Whether a particular movie was chosen/clicked; the proportion of movies seen that were chosen (everyone chooses 5, so this embeds variation in how many movies were seen); and how these outcomes vary by the race and/or gender of the recommender.
Primary Outcomes (explanation)


We will examine how the probability that a movie is clicked varies with the race and/or gender of the recommender, and how that depends on the respondent's own race and/or gender.


In-group will be defined as the recommendation coming from someone whose name's perceived race or gender matches the respondent's.
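
One natural way to operationalize the primary analysis is a linear probability model with an in-group × rushed interaction; this is our reading of the design, not necessarily the authors' estimating equation:

    \text{Clicked}_{im} = \beta_0 + \beta_1\,\text{InGroup}_{im} + \beta_2\,\text{Rushed}_{i} + \beta_3\,(\text{InGroup}_{im} \times \text{Rushed}_{i}) + \varepsilon_{im}

where $i$ indexes respondents, $m$ indexes movies seen, and $\beta_3$ captures how the in-group gap in click rates changes under time pressure, presumably with standard errors clustered at the respondent level.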

Secondary Outcomes

Secondary Outcomes (end points)
How the proportion of movies seen that come from in-group recommenders varies by treatment.
Secondary Outcomes (explanation)

Experimental Design

Experimental Design
Hidden until after experiment to preserve integrity
Experimental Design Details
Respondents will be randomly assigned to either the rushed or non-rushed condition described above. Respondents will be drawn from white males on Prolific.
Randomization Method
Randomization done by computer
Randomization Unit
Rushed vs. non-rushed is randomized at the individual respondent level.
The perceived race and gender of the recommender is randomized at the movie level.
Was the treatment clustered?
No

Experiment Characteristics

Sample size: planned number of clusters
Experiment 1: 1,600 respondents split between control (non-rushed) and a 5-minute rushed condition; each chooses 5 movies.
Experiment 2: 1,600 respondents split between control (non-rushed) and a 1-minute rushed condition; each chooses 5 movies.
Sample size: planned number of observations
8,000 observations per experiment (1,600 respondents × 5 movies clicked), or ~42,000 if we consider all movies seen (roughly 26 movies seen per respondent), in the 5-minute rushed experiment vs. control; the same in the 1-minute rushed experiment vs. control.
Sample size (or number of clusters) by treatment arms
Equally split

~800 respondents in rushed, ~800 in non-rushed
Minimum detectable effect size for main outcomes (accounting for sample design and clustering)
The main outcome is the interaction term (race and/or gender of recommender × rushed) for the clicked-choice outcome, and slots above the mean ranking for the algorithmic-order outcome. The minimum detectable effect size for the interaction is 2.1 percentage points.
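
As a rough cross-check of the stated MDE, a back-of-the-envelope power calculation in Python, assuming a two-sided test at α = 0.05 with 80% power; the baseline click rate is our assumption, and ignoring within-respondent clustering will understate the MDE:

    import math
    from statistics import NormalDist

    # Assumptions (ours, not from the registration):
    alpha, power = 0.05, 0.80
    p = 0.12          # assumed baseline probability that a seen movie is clicked
    n_obs = 42_000    # movies seen in one experiment, per the registration

    # For a 2x2 design with equal cell sizes, Var(interaction coefficient) is
    # roughly 16 * p(1-p) / N; clustering within respondent inflates this,
    # pushing the MDE toward the registered 2.1 percentage points.
    z = NormalDist().inv_cdf(1 - alpha / 2) + NormalDist().inv_cdf(power)
    se = math.sqrt(16 * p * (1 - p) / n_obs)
    print(f"Approximate MDE for the interaction: {z * se:.4f}")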
IRB

Institutional Review Boards (IRBs)

IRB Name
University of Chicago Social and Behavioral Sciences IRB
IRB Approval Date
2021-03-22
IRB Approval Number
21-0412

Post-Trial

Post Trial Information

Study Withdrawal

There is information in this trial unavailable to the public.

Intervention

Is the intervention completed?
No
Data Collection Complete
Data Publication

Data Publication

Is public data available?
No

Program Files

Program Files
Reports, Papers & Other Materials

Relevant Paper(s)

Reports & Other Materials