Abstract
Experiment 1:
This experiment will study how the anthropomorphism of Artificial Intelligence (AI), specifically its gendered design, affects human users' trust in, and delegation of decision-making to, an AI assistant.
This experiment implements an authority-delegation game following Fehr et al. (2013), in which subjects are matched with an assistant that can help them search for information for a card-picking decision.
In this game, there are 35 cards of four types: 1 Green Card, 1 Blue Card, 1 Red Card, and 32 Blank Cards. The returns from each type of card are as follows:
(1) Blank Card: 0 token
(2) Green Card: 10 tokens
(3) Blue Card: 24 tokens
(4) Red Card: 40 tokens
The 35 cards will be presented as a five-by-seven matrix. Initially, all cards are hidden except the Green Card, whose position (position 18) is always visible. Subjects then need to conduct an information search; a successful search uncovers the positions of all colored cards so that the most profitable card can be picked.
There are two ways to conduct the information search: (1) self-search; (2) delegating the search to an assistant.
(1) Self-search: subjects choose a search intensity, which is the probability that the search succeeds (i.e., that all cards' positions are uncovered). This choice carries a token cost given by cost = 25*(search intensity)^2 (illustrated in the sketch below). If the search succeeds, subjects can freely pick any of the 35 cards with their colors revealed.
(2) Delegating the search to an assistant: the assistant conducts a search with a fixed, pre-set search intensity (explained below). This search is cost-free, but if the search succeeds, the assistant automatically picks the Blue Card.
This game repeats for 20 rounds divided into two ten-round blocks, presented in random order. The assistant's search intensity is 60% in one block and 80% in the other. Subjects learn the assistant's search intensity for a block only upon entering that block.
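To make the trade-off concrete, the following Python sketch works through the round-level payoff arithmetic; it is purely illustrative and not the experiment software. The card values, the quadratic cost, and the assistant's automatic Blue Card pick come from the design above, while the payoff after a failed search (only the always-visible Green Card, 10 tokens, remains available) is an assumption we label in the comments.

# Illustrative sketch of the round-level payoff arithmetic (not the experiment software).
# Card values and the quadratic cost follow the design above; the fallback payoff after
# a failed search (the always-visible Green Card, 10 tokens) is an assumption.

CARD_VALUES = {"blank": 0, "green": 10, "blue": 24, "red": 40}
FALLBACK = CARD_VALUES["green"]  # assumed payoff when a search fails

def self_search_cost(intensity: float) -> float:
    """Token cost of self-search: 25 * (search intensity)^2."""
    return 25 * intensity ** 2

def expected_self_search(intensity: float) -> float:
    """Expected tokens from self-search: the subject picks the Red Card (40 tokens)
    whenever the search succeeds and pays the quadratic cost either way."""
    return intensity * CARD_VALUES["red"] + (1 - intensity) * FALLBACK - self_search_cost(intensity)

def expected_delegation(intensity: float) -> float:
    """Expected tokens from delegating: cost-free, but the assistant automatically
    picks the Blue Card (24 tokens) when its search succeeds."""
    return intensity * CARD_VALUES["blue"] + (1 - intensity) * FALLBACK

if __name__ == "__main__":
    for p in (0.6, 0.8):  # the two assistant search intensities
        print(f"p = {p:.0%}: self-search cost = {self_search_cost(p):.0f} tokens, "
              f"E[self-search] = {expected_self_search(p):.1f}, "
              f"E[delegation] = {expected_delegation(p):.1f}")

Under these assumptions, delegation yields roughly 18.4 expected tokens at 60% intensity and 21.2 at 80%, while self-search at those same intensities yields about 19.0 and 18.0 net of the quadratic cost (9 and 16 tokens, respectively); these figures are back-of-the-envelope only.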
After the advice-following game, subjects will complete the following tasks: a lottery-choice decision designed by Eckel & Grossman (2000); a brief implicit association test on gender bias; and a questionnaire surveying subjects' perception of the assistant's gender, their daily use of virtual assistants, and their trust in others.
This experiment contains 6 treatments in a between-subjects design (summarized in the sketch after the list below).
Treatment 1: the assistant is a pre-programmed virtual assistant.
Treatment 2: the assistant is a pre-programmed virtual assistant, with the name "Mary".
Treatment 3: the assistant is a pre-programmed virtual assistant, with the name "James".
Treatment 4: the assistant is a human, with search data pre-collected from a real human subject in the Human Behavior Lab. The assistant's gender is not revealed.
Treatment 5: the assistant is a female, with search data pre-collected from a real female subject in the Human Behavior Lab. This subject will pick a fictitious name to present herself.
Treatment 6: the assistant is a male, with search data pre-collected from a real male subject in the Human Behavior Lab. This subject will pick a fictitious name to present himself.
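Read together, these six treatments cross the assistant type (pre-programmed virtual vs. human) with the gender cue conveyed by the assistant's name (none, female, male). A compact summary, offered only as a reading aid rather than as experiment code:

# Treatment structure of Experiment 1 (reading aid only).
EXP1_TREATMENTS = {
    1: {"assistant": "virtual", "gender_cue": "none"},
    2: {"assistant": "virtual", "gender_cue": "female", "name": "Mary"},
    3: {"assistant": "virtual", "gender_cue": "male", "name": "James"},
    4: {"assistant": "human", "gender_cue": "none"},
    5: {"assistant": "human", "gender_cue": "female"},  # self-chosen fictitious name
    6: {"assistant": "human", "gender_cue": "male"},    # self-chosen fictitious name
}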
In addition, we will conduct a lab session in the Human Behavior Lab to collect the search data for the human assistant. Subjects will be invited to the lab to conduct 20 rounds of information search, divided into two blocks of ten rounds. In each block, they can choose a search intensity of either 60% or 80% and conduct the ten search trials at this chosen intensity. Their search results will be recorded. They will also report their gender and pick a fictitious name from a given list to represent themselves, and they will be informed that their data might be used for future research.
Experiment 2:
This study is a follow-up to Experiment 1. Subjects will again participate in an advice-following game, but the procedure differs slightly: it is designed to answer the question of whether interactions with gendered virtual assistants (VAs) risk reinforcing stereotypes and biases in subsequent interactions with humans.
The game will repeat for 20 rounds divided into two blocks of ten rounds. In Block 1, subjects interact with a virtual assistant whose characteristics vary by treatment condition (explained below). In Block 2, subjects choose one of four human assistants (two with male names and two with female names), all of whom search at the high intensity of 80%, and then play the game with their chosen human assistant.
This experiment contains 6 treatments in a between-subjects design (summarized in the sketch after the list below):
Treatment 1: The virtual assistant in Block 1 does not have a name, with a search intensity of 60%.
Treatment 2: The virtual assistant in Block 1 is named Jennifer, with a search intensity of 60%.
Treatment 3: The virtual assistant in Block 1 is named Charles, with a search intensity of 60%.
Treatment 4: The virtual assistant in Block 1 does not have a name, with a search intensity of 80%.
Treatment 5: The virtual assistant in Block 1 is named Jennifer, with a search intensity of 80%.
Treatment 6: The virtual assistant in Block 1 is named Charles, with a search intensity of 80%.
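These six treatments thus form a 3 (assistant name: none, Jennifer, Charles) x 2 (Block 1 search intensity: 60%, 80%) factorial design. A compact summary, again as a reading aid rather than experiment code:

# Treatment structure of Experiment 2 (reading aid only).
EXP2_TREATMENTS = {
    1: {"name": None, "block1_intensity": 0.60},
    2: {"name": "Jennifer", "block1_intensity": 0.60},
    3: {"name": "Charles", "block1_intensity": 0.60},
    4: {"name": None, "block1_intensity": 0.80},
    5: {"name": "Jennifer", "block1_intensity": 0.80},
    6: {"name": "Charles", "block1_intensity": 0.80},
}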
After the advice-following game, subjects will complete the following tasks: a lottery-choice decision designed by Eckel & Grossman (2000); a brief implicit association test on gender bias; and a questionnaire surveying subjects' perception of the assistant's gender, their daily use of virtual assistants, and their trust in others.
In addition, we will conduct a lab session in the Human Behavior Lab to collect data for the human assistants. The game procedure is similar to the lab session in Experiment 1, except that "Jennifer" and "Charles" are excluded from the fictitious name list available to subjects.