Strategic response to price discrimination: survey-based profiling versus prediction from seemingly unrelated choices. Causal effect of knowledge about scope of price discrimination on demand for privacy.

Last registered on May 16, 2022

Pre-Trial

Trial Information

General Information

Title
Strategic response to price discrimination: survey-based profiling versus prediction from seemingly unrelated choices. Causal effect of knowledge about scope of price discrimination on demand for privacy.
RCT ID
AEARCTR-0009440
Initial registration date
May 13, 2022

First published
May 16, 2022, 5:19 PM EDT


Locations

Region

Primary Investigator

Affiliation
University of Lausanne

Other Primary Investigator(s)

PI Affiliation
China Center for Behavioral Economics and Finance, Southwestern University of Finance and Economics, Chengdu
PI Affiliation
University of Gothenburg

Additional Trial Information

Status
In development
Start date
2022-05-13
End date
2022-05-31
Secondary IDs
Prior work
This trial does not extend or rely on any prior RCTs.
Abstract
This project investigates whether participants can strategically misrepresent information used for price discrimination. We create an artificial market in which experimental participants can buy a lottery. Before observing the price of the lottery, participants fill out a survey. We develop a statistical model that predicts willingness to pay (WTP) for the lottery from the survey answers of a training sample. Based on this model, each participant is shown an individualized price. Before filling out the survey, participants are informed that their answers may be used to determine the price. They can therefore answer strategically, and right before observing the price they are offered an option to hide their answers from the algorithm. We compare participants' ability to strategically bias their answers in their favor across treatments that vary the survey: either a survey like those used by insurance companies for risk profiling, or a survey of movie ratings. We also compare the demand for “hiding information.” In our intervention, we show participants the range of prices that could be realized depending on their answers to the survey, and hypothesize that this will increase the demand for “hiding information.”
External Link(s)

Registration Citation

Citation
Bo, Inacio, Li Chen and Rustamdjan Hakimov. 2022. "Strategic response to price discrimination: survey-based profiling versus prediction from seemingly unrelated choices. Causal effect of knowledge about scope of price discrimination on demand for privacy." AEA RCT Registry. May 16. https://doi.org/10.1257/rct.9440-1.0
Experimental Details

Interventions

Intervention(s)
Intervention Start Date
2022-05-13
Intervention End Date
2022-05-31

Primary Outcomes

Primary Outcomes (end points)
1. Survey responses. We will only be able to compare distributions between treatments and the training data.
2. Decision to hide the survey responses.
3. Buying decisions.
4. Price paid.
5. Counterfactual price under the opposite decision of whether to hide the survey answers.
Primary Outcomes (explanation)

Secondary Outcomes

Secondary Outcomes (end points)
Secondary Outcomes (explanation)

Experimental Design

Experimental Design (Public)
Training sample data
First, we develop a statistical model (or algorithm) for price discrimination in an artificial market. The good for sale is a lottery with a 50% probability of winning £5. For the training sample, we will collect 700 responses to two surveys, presented in random order:
1) A survey similar to those used by insurance companies (Standard survey); the questions aim to identify respondents' risk preferences.
2) A survey in which participants rate movie genres (Movies survey).
After that, participants enter the final stage where we elicit the willingness to pay (WTP) for the lottery using multiple price lists. The certain payout varies from £0.2 to £4 in £0.2 increments.
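The multiple-price-list elicitation above can be read as a switch point: each row offers a choice between the lottery and a certain payout, and the first row in which a participant prefers the certain amount brackets their WTP. A minimal Python sketch (the function name and the midpoint convention are our illustration, not part of the registered design):

```python
def wtp_from_mpl(choices):
    """Infer WTP for the lottery from a multiple price list.

    choices: list of 20 booleans, one per row, True if the participant
             chose the lottery over that row's certain payout.
    Rows offer certain payouts of £0.2, £0.4, ..., £4.0.
    Returns the midpoint of the interval where the participant first
    switches to the certain amount (assuming a single switch point).
    """
    payouts = [round(0.2 * i, 1) for i in range(1, 21)]  # £0.2 .. £4.0
    for chose_lottery, payout in zip(choices, payouts):
        if not chose_lottery:  # first row preferring the certain amount
            return round(payout - 0.1, 2)  # midpoint of the bracket
    return payouts[-1]  # lottery preferred throughout: WTP >= £4.0

# Lottery preferred for certain payouts up to £2.0, then a switch:
print(wtp_from_mpl([True] * 10 + [False] * 10))  # midpoint WTP of 2.1
```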
Based on the 700 responses from the training sample, we develop two statistical models: the first predicts WTP from the Standard survey, the second from the Movies survey. We hypothesize that both the Standard and the Movies survey will have predictive power for WTP for the lottery. While the models may suggest a wider range of prices, we truncate the suggested prices to a minimum of £1 and a maximum of £3. £1 is chosen as a lower-bound price, which can be interpreted as the “marginal cost” of the product for the firm. This also allows us to see whether subjects buy the lottery when the price is higher than their predicted WTP, which would also point to a strategic response to the survey. The Standard survey is designed to contain questions related to risk aversion, while movie ratings are potentially related to risk preferences through extraversion, which correlates negatively with risk aversion.
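The truncation step amounts to clipping the model's suggested price into the £1–£3 band; a one-function sketch (names are illustrative, and the prediction model itself is a placeholder):

```python
PRICE_FLOOR = 1.0  # interpreted as the firm's marginal cost
PRICE_CAP = 3.0

def individualized_price(predicted_wtp):
    """Clip the model's WTP-based price into the allowed [£1, £3] band."""
    return min(max(predicted_wtp, PRICE_FLOOR), PRICE_CAP)
```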

Baseline treatments
There will be two baseline treatments: Baseline Standard and Baseline Movies. In the main experiment, participants go through only one of the surveys: Baseline Standard refers to the treatment with the Standard survey, and Baseline Movies to the treatment with the Movies survey.
Before the survey, participants are warned that a statistical model will use the information from the survey to determine the price of the lottery in a later round of the experiment.
After completing the survey, but before observing the price, participants can decide whether to hide their survey responses from the seller (imitating a private-browsing option) for a fee of £0.1. If a participant hides the survey responses, the price is the one that maximizes revenue given the reservation value of £1 (i.e., the cost to the firm) and the distribution of WTP in the training sample. We refer to this as the anonymous price. Participants are told that they will learn both the individualized and the anonymous price at the end of the experiment, to rule out curiosity motives.
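The anonymous price can be computed directly from the training distribution of WTP. A sketch under one reading of the text, namely maximizing the expected margin over the £1 cost on a grid of candidate prices, assuming a participant buys whenever WTP is at least the price (both the objective and the purchase rule are our assumptions, and the function name is illustrative):

```python
def anonymous_price(training_wtps, cost=1.0, step=0.2):
    """Pick the single posted price that maximizes expected profit
    (price - cost) * share of buyers, over the training WTP sample.
    Candidate prices run from the cost up to the maximum WTP in steps.
    """
    n = len(training_wtps)
    best_price, best_profit = cost, 0.0
    price = cost
    while price <= max(training_wtps) + 1e-9:
        # Share of the training sample that would buy at this price.
        share = sum(1 for w in training_wtps if w >= price - 1e-9) / n
        profit = (price - cost) * share
        if profit > best_profit:
            best_price, best_profit = price, profit
        price = round(price + step, 10)  # avoid float drift on the grid
    return best_price
```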
Next, participants decide whether to buy the lottery at the given price, which is either determined by the algorithm from the survey responses (if the participant did not hide them) or is the anonymous price. If they buy, the lottery is played out, and the payoff for the last round is £5 - p if they win or -p if they lose. If the payoff is negative, it is deducted from the participant's reward for filling out the survey.
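The last-round payoff rule, as stated, in a small sketch (whether the £0.1 hiding fee is charged within this round is our assumption, and the names are illustrative):

```python
def round_payoff(bought, won, price, hiding_fee=0.0):
    """Last-round payoff: £5 - p if the lottery is bought and won,
    -p if bought and lost, 0 if not bought; minus the £0.1 fee if
    the participant paid to hide their survey responses."""
    payoff = 0.0
    if bought:
        payoff = (5.0 - price) if won else -price
    return round(payoff - hiding_fee, 2)
```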
In the last task of the experiment, we elicit participants’ beliefs about the lowest and the highest price that the algorithm produced among 300 other participants, given their survey responses. We pay £0.1 if they are within £0.2 of the lowest price, and £0.1 if they are within £0.2 of the highest price.

Scope information treatments
There will be two scope information treatments: Scope Standard and Scope Movies.
The experiment runs exactly like the Baseline treatments, except that before the survey, in addition to the warning that the survey information will be used to determine the price, participants are informed of the lowest and the highest price that the algorithm can select, given their survey responses.
Randomization Method
Randomization is done by Prolific.
Randomization Unit
participant
Was the treatment clustered?
No

Experiment Characteristics

Sample size: planned number of clusters
no clustering
Sample size: planned number of observations
Training sample: 700 responses. Main experiment: 300 responses per treatment across four treatments, 1,200 in total.
Sample size (or number of clusters) by treatment arms
Training sample: 700 responses.
300 responses per treatment arm (four arms), 1,200 in total.
Minimum detectable effect size for main outcomes (accounting for sample design and clustering)
IRB

Institutional Review Boards (IRBs)

IRB Name
Ethics Committee of HEC Lausanne
IRB Approval Date
2022-05-11
IRB Approval Number
PREDICT
Analysis Plan

There is information in this trial unavailable to the public.

Post-Trial

Post Trial Information

Study Withdrawal

There is information in this trial unavailable to the public.

Intervention

Is the intervention completed?
No
Data Collection Complete
Data Publication

Data Publication

Is public data available?
No

Program Files

Program Files
Reports, Papers & Other Materials

Relevant Paper(s)

Reports & Other Materials