Decentralized Feature Selection - The Impacts of User-Control over Feature Selection on the Acceptance and Performance of Recommender Systems

Last registered on April 17, 2023

Pre-Trial

Trial Information

General Information

Title
Decentralized Feature Selection - The Impacts of User-Control over Feature Selection on the Acceptance and Performance of Recommender Systems
RCT ID
AEARCTR-0010667
Initial registration date
December 19, 2022

First published
January 03, 2023, 4:36 PM EST

Last updated
April 17, 2023, 8:50 AM EDT

Locations

Region

Primary Investigator

Affiliation
Goethe Universität Frankfurt

Other Primary Investigator(s)

PI Affiliation
Universität Würzburg
PI Affiliation
Universität Würzburg
PI Affiliation
Universität Mannheim

Additional Trial Information

Status
In development
Start date
2023-04-24
End date
2023-05-05
Secondary IDs
Prior work
This trial does not extend or rely on any prior RCTs.
Abstract
Contemporary artificial intelligence (AI) systems have the potential to benefit both organizations and consumers. A prerequisite for AI systems to realize this potential is that consumers accept and, eventually, use these systems. However, the deployment of AI systems is associated with various challenges, too often leading to resistance to and aversion against the algorithms. For instance, consumers often question the reliability and accuracy of AI systems and thus feel misunderstood by them. The literature shows that organizations may take action to reduce algorithm aversion. One possible means presented by prior studies is to allow users to modify AI models and their outcomes. In our work, we investigate how organizations may counter algorithm aversion by involving users in the prediction-making process. More precisely, we propose an approach in which organizations allow users to select which information they pass to the AI system - a process we call decentralized feature selection. In an experimental setting, we investigate the impact of this decentralized feature selection on consumers’ attitudes towards AI systems and on their actual predictive performance.
External Link(s)

Registration Citation

Citation
Zacharias, Jan et al. 2023. "Decentralized Feature Selection - The Impacts of User-Control over Feature Selection on the Acceptance and Performance of Recommender Systems." AEA RCT Registry. April 17. https://doi.org/10.1257/rct.10667-3.0
Experimental Details

Interventions

Intervention(s)
We developed an experimental setting in which participants can control which variables are submitted to an AI system whose predictions potentially influence the participants’ payoff. In this setting, participants choose one binary lottery from a set of 200 binary lotteries that differ in their risk level as well as in the objective goodness of their potential payoffs. Participants may leverage a pre-trained AI system, which predicts their risk attitude, to preselect a subset of lotteries aligned with their individual risk attitude. Participants in the treatment group additionally have the opportunity to select which attributes the AI system may process; in the following, we refer to this procedure as decentralized feature selection.
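
For illustration only, the following Python sketch shows one possible implementation of decentralized feature selection: the participant releases a subset of questionnaire attributes, withheld attributes are masked before the pre-trained risk-attitude model predicts, and the five closest-matching lotteries are recommended. The feature names, the NaN-masking strategy, and the distance-based top-5 rule are assumptions for this sketch, not the registered implementation.

# Illustrative sketch; attribute names and masking strategy are assumptions.
import numpy as np

FEATURES = ["age", "gender", "income", "occupation", "hobbies"]  # hypothetical questionnaire attributes

def predict_risk_attitude(model, answers, allowed):
    """Decentralized feature selection: pass only the attributes the participant
    released; withheld attributes are masked as NaN (assumes numerically encoded
    answers and a model that tolerates missing values, e.g. gradient boosting)."""
    x = np.array([[answers[f] if f in allowed else np.nan for f in FEATURES]], dtype=float)
    return float(model.predict(x)[0])

def recommend_lotteries(lotteries, predicted_risk, k=5):
    """Recommend the k lotteries whose risk level is closest to the predicted risk attitude."""
    return sorted(lotteries, key=lambda lot: abs(lot["risk"] - predicted_risk))[:k]

In the baseline condition, the same pipeline would simply receive all elicited attributes (allowed = set(FEATURES)).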
Intervention Start Date
2023-04-24
Intervention End Date
2023-05-05

Primary Outcomes

Primary Outcomes (end points)
(1) Willingness to pay for usage of the AI system.
(2) Performance of the ML model incorporating decentralized feature selection relative to the baseline ML model in terms of established performance metrics (RMSE, accuracy, ROC AUC, etc.).
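
For illustration of outcome (2), the sketch below computes the listed metrics for both models. It assumes a binary ground-truth label with probabilistic predictions; the variable names and the 0.5 thresholding rule for accuracy are assumptions, not part of the registration.

# Illustrative comparison of the two ML models on RMSE, accuracy, and ROC AUC.
import numpy as np
from sklearn.metrics import accuracy_score, roc_auc_score

def compare_models(y_true, pred_baseline, pred_dfs, threshold=0.5):
    """Compare the baseline model against the model with decentralized feature selection (dfs)."""
    def rmse(y, p):
        return float(np.sqrt(np.mean((np.asarray(y, dtype=float) - np.asarray(p, dtype=float)) ** 2)))
    report = {}
    for name, pred in (("baseline", pred_baseline), ("decentralized_fs", pred_dfs)):
        pred = np.asarray(pred, dtype=float)
        report[name] = {
            "rmse": rmse(y_true, pred),
            "accuracy": accuracy_score(y_true, (pred >= threshold).astype(int)),
            "roc_auc": roc_auc_score(y_true, pred),
        }
    return report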
Primary Outcomes (explanation)

Secondary Outcomes

Secondary Outcomes (end points)
(1) Transparency
(2) Power
(3) Privacy intrusion
(4) Curiosity in ML prediction
(5) Perceived accuracy of the AI system
Secondary Outcomes (explanation)

Experimental Design

Experimental Design
The basic procedure of the experiment is the following: In stage 1, before informing participants about the subsequent stages, we elicit attributes about participants via a questionnaire. These attributes serve as input features for the AI-based risk attitude prediction. In stage 2, after the explanation of the experiment, we present participants with the full list of the 200 possible binary lotteries with censored payoffs. The lotteries differ in their risk level as well as in the goodness of the potential payoffs (see subsection ‘Details on stage 1’ in the attached file for more information). After the presentation of the full lottery list, we introduce the AI system to the participants. The AI system is trained to predict participants’ risk attitudes based on the information provided in the initial questionnaire. Using this prediction, the AI system recommends the five most suitable lotteries from the full list of 200 lotteries to each individual participant. In stage 3, we perform the treatment manipulation. In a within-subjects design, we present participants with two different scenarios: one scenario in which participants may perform decentralized feature selection to influence the AI system (treatment condition), and another scenario without decentralized feature selection, i.e., the AI system uses all participant attributes elicited in the initial questionnaire (baseline condition). In each scenario, we ask participants to state their willingness to pay (WTP) for leveraging the respective AI system. In stage 4, participants make their lottery decision. Participants whose WTP was high enough to receive AI support may inspect both the AI-based lottery recommendations and the full list of 200 lotteries. Participants whose WTP did not reach the critical Becker-DeGroot-Marschak (BDM) threshold do not receive the AI-based recommendations. In stage 5, we elicit additional secondary measures as well as participants’ revealed risk attitudes.
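
For illustration, the sketch below shows a BDM allocation rule of the kind described in stages 3 and 4: AI support is granted only if the stated WTP meets a randomly drawn threshold. The price range and the uniform draw are assumptions for this sketch, not the registered parameters.

# Illustrative BDM step; price range and draw distribution are assumptions.
import random

def bdm_allocation(wtp, max_price=2.0, rng=random):
    """Draw a random threshold price; the participant receives AI support iff
    WTP >= threshold and then pays the drawn price (not the stated WTP);
    otherwise no AI recommendations and no payment."""
    threshold = rng.uniform(0.0, max_price)
    gets_ai_support = wtp >= threshold
    price_paid = threshold if gets_ai_support else 0.0
    return gets_ai_support, price_paid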
Experimental Design Details
The complete analysis plan is attached.
Randomization Method
Randomization is done automatically by the computer.
Randomization Unit
Individual participants are randomized between the treatment and baseline groups.
Was the treatment clustered?
No
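
Purely as an illustration of computer-based randomization (assuming simple individual-level assignment into two equally sized arms, consistent with the sample-size fields below; the actual procedure may differ):

# Illustrative assignment sketch; the registered randomization may differ.
import random

def assign_arms(participant_ids, rng=random):
    """Randomly split participants into equally sized treatment and baseline
    groups (e.g., 200 vs. 200 for the planned 400 observations)."""
    ids = list(participant_ids)
    rng.shuffle(ids)
    half = len(ids) // 2
    return {"treatment": ids[:half], "baseline": ids[half:]}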

Experiment Characteristics

Sample size: planned number of clusters
We do not plan clustering. Participants are recruited via an online platform.
Sample size: planned number of observations
400
Sample size (or number of clusters) by treatment arms
200 participants per group (treatment vs. baseline).
Minimum detectable effect size for main outcomes (accounting for sample design and clustering)
IRB

Institutional Review Boards (IRBs)

IRB Name
Gemeinsame Ethikkommission Wirtschaftswissenschaften der Goethe Universität Frankfurt und der Johannes Gutenberg-Universität Mainz
IRB Approval Date
2022-12-19
IRB Approval Number
N/A
Analysis Plan

There is information in this trial unavailable to the public. Access can be requested via the Registry.

Post-Trial

Post Trial Information

Study Withdrawal

There is information in this trial unavailable to the public. Access can be requested via the Registry.

Intervention

Is the intervention completed?
No
Data Collection Complete
Data Publication

Data Publication

Is public data available?
No

Program Files

Program Files
Reports, Papers & Other Materials

Relevant Paper(s)

Reports & Other Materials