Recommender Systems and Consumer Choice - Experimental Evidence

Last registered on September 26, 2024

Pre-Trial

Trial Information

General Information

Title
Recommender Systems and Consumer Choice - Experimental Evidence
RCT ID
AEARCTR-0014017
Initial registration date
July 12, 2024

First published
September 26, 2024, 12:22 PM EDT

Locations

There is information in this trial unavailable to the public. Use the button below to request access.


Primary Investigator

Affiliation
CREST - ENSAE - Télécom Paris

Other Primary Investigator(s)

Additional Trial Information

Status
In development
Start date
2024-07-13
End date
2025-06-01
Secondary IDs
Prior work
This trial does not extend or rely on any prior RCTs.
Abstract
Regulators are increasingly concerned about the power of large online platforms to bias consumer recommendations. In light of these concerns, I study the option of giving consumers control over the recommender algorithm as a potential avenue for regulation. In an online experiment, I let subjects make choices from lists, akin to an e-commerce shopping experience. The list is generated either by an algorithm that optimizes consumer utility, or by an algorithm whose objective includes both consumer utility and platform profit. My aim is to understand how subjects make choices in this abstract setting, whether recommender algorithms influence their choices, and how subjects choose between recommender algorithms.
External Link(s)

Registration Citation

Citation
Schleef, Felix. 2024. "Recommender Systems and Consumer Choice - Experimental Evidence." AEA RCT Registry. September 26. https://doi.org/10.1257/rct.14017-1.0
Experimental Details

Interventions

Intervention(s)
Each subject completes a risk elicitation task. Following this task, each subject makes choices from ordered lists of three-outcome lotteries for 30 rounds. The lists of lotteries are created by two different algorithms, and subjects are informed that this is the case. After 20 rounds of the choice task, subjects are asked to disclose their willingness to pay to increase the probability that one or the other algorithm creates the lists of lotteries for the remaining 10 choice rounds.
Intervention Start Date
2024-07-15
Intervention End Date
2024-08-05

Primary Outcomes

Primary Outcomes (end points)
- Expected value of chosen lotteries
- Expected utility of chosen lotteries
- List rank of chosen lotteries
- Willingness to pay for the expected utility algorithm
Primary Outcomes (explanation)
- Expected utility will be calculated under the assumption of constant relative risk aversion (CRRA)
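As a concrete illustration of this outcome, expected CRRA utility can be computed as below. This is a minimal sketch: the function names and the example risk parameter r = 0.5 are illustrative, not part of the registration.

```python
import math

def crra_utility(x: float, r: float) -> float:
    """CRRA utility u(x) = x**(1 - r) / (1 - r); natural log at r = 1."""
    if r == 1.0:
        return math.log(x)
    return x ** (1.0 - r) / (1.0 - r)

def expected_utility(outcomes, probs, r):
    """Expected CRRA utility of a discrete lottery."""
    return sum(p * crra_utility(x, r) for x, p in zip(outcomes, probs))

# Example: a lottery paying $10, $5, $0 with probabilities 0.3, 0.5, 0.2
eu = expected_utility((10.0, 5.0, 0.0), (0.3, 0.5, 0.2), r=0.5)
```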

Secondary Outcomes

Secondary Outcomes (end points)
- Risk aversion
- Search costs
Secondary Outcomes (explanation)
- Risk aversion will be calculated using the subject's choice in the first part of the experiment
- Search costs will be estimated with a logit model using the rank and expected utility of all chosen and non-chosen lotteries that the subject sees during the choice tasks
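One possible reading of the search-cost estimation is a conditional logit in which each lottery's deterministic index is its expected utility minus a cost per list position. The sketch below uses a coarse grid search for the maximum-likelihood cost parameter; the function names, the linear-in-rank specification, and the grid are illustrative assumptions, not the registered specification.

```python
import math

def choice_loglik(c, choice_sets):
    """Conditional-logit log-likelihood where option i has index eu_i - c * rank_i.
    choice_sets: list of (chosen_index, [(eu, rank), ...]) per choice round."""
    ll = 0.0
    for chosen, options in choice_sets:
        scores = [eu - c * rank for eu, rank in options]
        m = max(scores)  # log-sum-exp shift for numerical stability
        log_denom = m + math.log(sum(math.exp(s - m) for s in scores))
        ll += scores[chosen] - log_denom
    return ll

def estimate_search_cost(choice_sets, grid=None):
    """Maximum-likelihood search-cost parameter over a coarse grid."""
    grid = grid if grid is not None else [i / 100.0 for i in range(201)]
    return max(grid, key=lambda c: choice_loglik(c, choice_sets))
```

In practice a proper optimizer (or a packaged conditional-logit estimator) would replace the grid search; the grid keeps the sketch dependency-free.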

Experimental Design

Experimental Design
The experiment features three kinds of tasks. The first task is a risk preference elicitation task in the style of Johnson et al. (2021), where subjects have the prospect of receiving either the outcome of a lottery that pays $10 with probability 50% and $0 with probability 50%, or a sure but unknown monetary amount X that has been drawn uniformly at random between $0 and $10. Subjects are asked to declare a threshold such that they would prefer receiving X if it lies above the threshold, and the lottery if X lies below it.
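Under CRRA with r < 1, the declared threshold is the certainty equivalent of the 50/50 lottery over $10 and $0, which gives a closed form for the risk parameter: CE = 10 * 0.5**(1/(1-r)), hence r = 1 - ln(0.5) / ln(CE/10). The function below is an illustrative sketch, not the registered procedure.

```python
import math

def crra_from_threshold(threshold: float, high: float = 10.0) -> float:
    """Back out the CRRA coefficient r from the stated threshold (certainty
    equivalent) of a 50/50 lottery over $high and $0, assuming
    u(x) = x**(1 - r) / (1 - r) with r < 1.
    From CE = high * 0.5**(1 / (1 - r)):  r = 1 - ln(0.5) / ln(CE / high).
    Requires 0 < threshold < high."""
    return 1.0 - math.log(0.5) / math.log(threshold / high)
```

A threshold of $5 (the lottery's expected value) recovers risk neutrality (r = 0); thresholds below $5 imply risk aversion (r > 0).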

The second task is a choice task, where subjects are asked to choose from lists of three-outcome lotteries that have been ordered and selected by one of two algorithms. These algorithms take into account the subject's risk aversion, which is calculated from the threshold chosen in the first task under the assumption of constant relative risk aversion. The outcomes of all lotteries are $10, $5, and $0, and the respective probabilities of $10 and of $0 are each drawn not to exceed 50%.
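One way the lottery draws and the expected-utility ordering could look in code, under two assumptions that are mine rather than the registration's: the $10 and $0 probabilities are each capped at 50%, and one algorithm ranks purely by expected CRRA utility (r < 1).

```python
import random

def draw_lottery(rng: random.Random):
    """Draw probabilities (p10, p5, p0) over the outcomes $10/$5/$0 such that
    neither p10 nor p0 exceeds 0.5; p5 takes the remainder (always >= 0)."""
    p10 = rng.uniform(0.0, 0.5)
    p0 = rng.uniform(0.0, 0.5)
    return (p10, 1.0 - p10 - p0, p0)

def rank_by_expected_utility(lotteries, r):
    """Order lotteries by expected CRRA utility (r < 1), best first.
    u($0) = 0 under this normalization, so the $0 term drops out."""
    def eu(lot):
        p10, p5, p0 = lot
        return p10 * 10 ** (1 - r) / (1 - r) + p5 * 5 ** (1 - r) / (1 - r)
    return sorted(lotteries, key=eu, reverse=True)
```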

Finally, I elicit subjects' willingness to pay for one algorithm over the other. For this step, I give subjects an endowment of $1. Using a slider, subjects can change the probability that they will face either algorithm in the next set of 10 choice rounds. Choosing equal probability between the two algorithms (50%-50%) is costless. Shifting the split by one percentage point, i.e. to 51%-49%, costs $0.02. The cost increases linearly, so setting the probability of facing either algorithm with certainty for the last 10 choice rounds costs the full $1 endowment.
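The cost schedule described above is linear in the distance from the free 50-50 split; a one-function sketch (the function name is illustrative):

```python
def slider_cost(prob_pct: int, per_point: float = 0.02) -> float:
    """Cost in dollars of setting the preferred algorithm's probability to
    prob_pct percent: the 50-50 split is free, each one-point shift costs
    $0.02, and a certain outcome (0 or 100) exhausts the $1 endowment."""
    return per_point * abs(prob_pct - 50)
```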
Experimental Design Details
Not available
Randomization Method
Randomization done on the server by a computer
Randomization Unit
Individual
Was the treatment clustered?
No

Experiment Characteristics

Sample size: planned number of clusters
no clustered randomization
Sample size: planned number of observations
300 individuals
Sample size (or number of clusters) by treatment arms
300 individuals (within subject treatment design)
Minimum detectable effect size for main outcomes (accounting for sample design and clustering)
IRB

Institutional Review Boards (IRBs)

IRB Name
Institut Louis Bachelier Institutional Review Board IRB00013336
IRB Approval Date
2024-02-28
IRB Approval Number
ILB-2024-001
Analysis Plan

There is information in this trial unavailable to the public. Use the button below to request access.
