Decentralized Feature Selection - The Impacts of User-Control over Feature Selection on the Acceptance and Performance of Recommender Systems

Last registered on March 01, 2023

Pre-Trial

Trial Information

General Information

Title
Decentralized Feature Selection - The Impacts of User-Control over Feature Selection on the Acceptance and Performance of Recommender Systems
RCT ID
AEARCTR-0010667
Initial registration date
December 19, 2022

First published
January 03, 2023, 4:36 PM EST

Last updated
March 01, 2023, 7:22 AM EST

Locations

Region

Primary Investigator

Affiliation
Goethe Universität Frankfurt

Other Primary Investigator(s)

PI Affiliation
Universität Würzburg
PI Affiliation
PI Affiliation

Additional Trial Information

Status
Completed
Start date
2022-12-19
End date
2022-12-21
Secondary IDs
Prior work
This trial does not extend or rely on any prior RCTs.
Abstract
Contemporary artificial intelligence (AI) systems have the potential to benefit both organizations and consumers. A prerequisite for AI systems to realize their potential is consumers' acceptance and, eventually, usage of these systems. However, the deployment of AI systems is associated with various challenges, too often leading to resistance to and aversion against the algorithms. For instance, consumers oftentimes question the reliability and accuracy of AI systems and, thus, feel misunderstood by them. The literature shows that organizations may take action to reduce algorithm aversion. One possible means presented by prior studies is to allow users to modify AI models and their outcomes. In our work, we investigate how organizations may counter algorithm aversion by involving users in the prediction-making process. More precisely, we propose an approach in which organizations allow users to select which information they pass to the AI system, a process we call decentralized feature selection. In an experimental setting, we investigate the impact of this decentralized feature selection on consumers' attitudes towards AI systems and on the systems' actual predictive performance.
External Link(s)

Registration Citation

Citation
Bauer, Kevin et al. 2023. "Decentralized Feature Selection - The Impacts of User-Control over Feature Selection on the Acceptance and Performance of Recommender Systems." AEA RCT Registry. March 01. https://doi.org/10.1257/rct.10667-2.0
Experimental Details

Interventions

Intervention(s)
We developed an experimental setting in which participants may
control which variables are submitted to an AI system whose predictions potentially influence the
participants’ payoff. In this experimental setting, participants may choose one binary lottery of a set of 200 binary lotteries which differ in terms of their risk level as well
as objective goodness of the potential payoffs. Participants may leverage a pre-trained AI system,
predicting the participants’ risk attitudes, for preselecting a lottery subset which is aligned with the
individual participants’ risk attitudes. Participants in the treatment group further have the opportunity
to select which attributes the AI system may process; in the following, we refer to this procedure as
decentralized feature selection.
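
To make the treatment manipulation concrete, the following is a minimal sketch of how participant-controlled feature selection could be applied at prediction time. It assumes a pre-trained scikit-learn-style regressor; the feature names and the mean-imputation of deselected attributes are illustrative assumptions, not details taken from this registration.

```python
# Hypothetical sketch of decentralized feature selection at prediction time.
# Feature names and the imputation of deselected attributes are assumptions.
import numpy as np

FEATURES = ["age", "gender", "income", "education", "self_assessed_risk"]  # hypothetical

def predict_risk_attitude(model, responses, selected, train_means):
    """Predict a participant's risk attitude using only the attributes the
    participant agreed to share; deselected attributes are replaced with
    their training-set means so the model input stays complete."""
    row = [responses[f] if f in selected else train_means[f] for f in FEATURES]
    return float(model.predict(np.array(row, dtype=float).reshape(1, -1))[0])

# Example: a treatment participant shares only age and income.
# prediction = predict_risk_attitude(model, responses, {"age", "income"}, train_means)
```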
Intervention Start Date
2022-12-19
Intervention End Date
2022-12-21

Primary Outcomes

Primary Outcomes (end points)
(1) Willingness to pay for usage of the AI system.
(2) Performance of the ML model incorporating decentralized feature selection relative to the baseline ML model, in terms of established performance metrics (RMSE, accuracy, ROC AUC, etc.); a comparison sketch follows below.
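
As a rough illustration of the planned model comparison (not the authors' actual evaluation code), the sketch below computes the named metrics for the baseline model and the model restricted to participant-selected features; variable names and the binarization threshold are assumptions.

```python
# Hedged sketch of the model comparison on RMSE, accuracy, and ROC AUC.
# y_true holds continuous risk-attitude scores; the 0.5 threshold used to
# binarize them for the classification metrics is an assumption.
import numpy as np
from sklearn.metrics import mean_squared_error, accuracy_score, roc_auc_score

def compare_models(y_true, pred_baseline, pred_decentralized, threshold=0.5):
    y_true = np.asarray(y_true, dtype=float)
    results = {}
    for name, pred in [("baseline", pred_baseline), ("decentralized", pred_decentralized)]:
        pred = np.asarray(pred, dtype=float)
        results[name] = {
            "rmse": float(np.sqrt(mean_squared_error(y_true, pred))),
            "accuracy": float(accuracy_score(y_true >= threshold, pred >= threshold)),
            "roc_auc": float(roc_auc_score(y_true >= threshold, pred)),
        }
    return results
```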
Primary Outcomes (explanation)

Secondary Outcomes

Secondary Outcomes (end points)
Secondary Outcomes (explanation)

Experimental Design

Experimental Design
The basic procedure of the experiment is as follows:

In stage 1, before informing participants about the subsequent stages, we elicit attributes about the participants via a questionnaire. These attributes serve as input features for the AI-based risk attitude prediction.

In stage 2, after the explanation of the experiment, we present participants with the full list of the 200 possible binary lotteries with censored payoffs. The lotteries differ in their risk level as well as the goodness of the potential payoffs (see subsection 'Details on stage 1' for more information). After the presentation of the full lottery list, we introduce the AI system to the participants. The AI system is trained to predict participants' risk attitudes based on the information provided in the initial questionnaire. Using this prediction, the AI system filters the five most suitable lotteries from the full list of 200 lotteries for each individual participant.

In stage 3, we perform the main treatment manipulation: we allow treatment participants to perform decentralized feature selection. Participants in the baseline group do not get this opportunity; in their case, the AI system processes all information elicited in the initial questionnaire. Following that, we ask all participants to state their willingness to pay (WTP) for leveraging the AI system, using the Becker-DeGroot-Marschak (BDM) method.

In stage 4, participants make their lottery decision. Participants whose WTP was high enough to receive AI support may inspect both the AI-based lottery preselection and the full list of 200 lotteries. Participants whose WTP did not reach the critical BDM threshold do not receive the AI-based preselection.

In stage 5, we elicit participants' perception of the overall AI system, their perception of the AI system's prediction accuracy, and their revealed risk attitude.
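
For clarity, here is a minimal sketch of the BDM allocation step described in stages 3 and 4, under the assumption of a uniformly drawn random price; the price range is not specified in this registration and is purely illustrative.

```python
# Hypothetical sketch of the Becker-DeGroot-Marschak (BDM) mechanism:
# a random price is drawn, and the participant receives the AI-based
# preselection (and pays the drawn price) only if their stated WTP
# meets or exceeds that price. The price range is an assumption.
import random

def bdm_allocation(wtp, max_price=1.0):
    price = random.uniform(0.0, max_price)
    gets_ai_support = wtp >= price
    payment = price if gets_ai_support else 0.0
    return gets_ai_support, payment

# Example: gets_support, payment = bdm_allocation(wtp=0.70)
```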
Experimental Design Details
Randomization Method
Randomization done automatically by the computer.
Randomization Unit
Individual participants are randomized into the treatment and baseline groups.
Was the treatment clustered?
No

Experiment Characteristics

Sample size: planned number of clusters
We do not plan clustering. Participants are recruited via an online platform.
Sample size: planned number of observations
300
Sample size (or number of clusters) by treatment arms
150 participants per group (treatment vs. baseline).
Minimum detectable effect size for main outcomes (accounting for sample design and clustering)
Supporting Documents and Materials

Documents

Document Name
Full pre-registration as pdf
Document Type
other
Document Description
File
Full pre-registration as pdf

MD5: 8f132e6e83c273ed722a0c19c61bcdf6

SHA1: 4c07bf096be36626265fd02e28c644262fdc41d6

Uploaded At: March 01, 2023

IRB

Institutional Review Boards (IRBs)

IRB Name
Gemeinsame Ethikkommission Wirtschaftswissenschaften der Goethe Universität Frankfurt und der Johannes Gutenberg-Universität Mainz
IRB Approval Date
2022-12-19
IRB Approval Number
N/A

Post-Trial

Post Trial Information

Study Withdrawal

There is information in this trial unavailable to the public.

Intervention

Is the intervention completed?
No
Data Collection Complete
Data Publication

Data Publication

Is public data available?
No

Program Files

Program Files
Reports, Papers & Other Materials

Relevant Paper(s)

Reports & Other Materials