
Fields Changed

Registration

Field Before After
Abstract This experiment will study how the anthropomorphism of Artificial Intelligence (AI) (i.e., gendered design) affects human users' trust in, and delegation of decision-making to, an AI assistant. This experiment implements an authority-delegation game following Fehr et al. (2013), in which subjects are matched with an assistant that can help them search for information for a card-picking decision. In this game, there are 35 cards of four types: 1 Green Card, 1 Blue Card, 1 Red Card, and 32 Blank Cards. The returns from each type of card are: (1) Blank Card: 0 tokens (2) Green Card: 10 tokens (3) Blue Card: 24 tokens (4) Red Card: 40 tokens. Subjects will be presented with the 35 cards as a five-by-seven matrix, with one Green Card, one Blue Card, one Red Card, and 32 Blank Cards. Initially, all cards are hidden and only the Green Card's position is always visible, at position 18. Subjects need to conduct an information search. A successful search will uncover the positions of all colored cards, allowing the subject to pick the most profitable card. There are two ways to search for information: (1) self-search; (2) delegating the search to an assistant. (1) Self-search: subjects choose a search intensity to conduct a self-search. This search intensity is the probability that the search succeeds (i.e., all cards' positions are uncovered), and carries a cost in tokens following this function: cost = 25*(search intensity)^2. If the search succeeds, subjects will be able to freely pick a card from all 35 cards with revealed colors. (2) Delegating the search to an assistant: the assistant will conduct a search with a fixed search intensity (depending on the treatment, explained later). This search is cost-free, but once the search succeeds, the assistant will automatically pick the Blue Card. This game will repeat for 20 rounds divided into two blocks (of ten rounds each), occurring in random order. 
The search intensity of the assistant is 60% in one block and 80% in the other. Subjects will be informed of a block's search intensity only when they enter that block. After the advice-following game, subjects will complete the following tasks: a lottery-choice decision designed by Eckel & Grossman (2000); a brief implicit association test on gender bias; and a questionnaire surveying subjects' perception of the assistant's gender, their daily use of virtual assistants, and their trust in others. This experiment contains 6 treatments in a between-subjects design. Treatment 1: the assistant is a pre-programmed virtual assistant. Treatment 2: the assistant is a pre-programmed virtual assistant, with the name "Mary". Treatment 3: the assistant is a pre-programmed virtual assistant, with the name "James". Treatment 4: the assistant is a human, with search data pre-collected from a real human subject in the Human Behavior Lab. The gender is not revealed. Treatment 5: the assistant is a female, with search data pre-collected from a real female subject in the Human Behavior Lab. This subject will pick a fictitious name to present herself. Treatment 6: the assistant is a male, with search data pre-collected from a real male subject in the Human Behavior Lab. This subject will pick a fictitious name to present himself. In addition, we will conduct a lab session in the Human Behavior Lab to collect the data for the human assistant. Subjects will be invited to the lab to conduct 20 rounds of information search, divided into two blocks of ten rounds. In each block, they can choose a search intensity of either 60% or 80% and conduct 10 trials of search with this chosen intensity. Their search results will be recorded. They will also report their gender and pick a fictitious name from a given list of names to represent themselves. They will be informed that their data might be used for future research. 
Experiment 1: This experiment will study how the anthropomorphism of Artificial Intelligence (AI) (i.e., gendered design) affects human users' trust in, and delegation of decision-making to, an AI assistant. This experiment implements an authority-delegation game following Fehr et al. (2013), in which subjects are matched with an assistant that can help them search for information for a card-picking decision. In this game, there are 35 cards of four types: 1 Green Card, 1 Blue Card, 1 Red Card, and 32 Blank Cards. The returns from each type of card are: (1) Blank Card: 0 tokens (2) Green Card: 10 tokens (3) Blue Card: 24 tokens (4) Red Card: 40 tokens. Subjects will be presented with the 35 cards as a five-by-seven matrix, with one Green Card, one Blue Card, one Red Card, and 32 Blank Cards. Initially, all cards are hidden and only the Green Card's position is always visible, at position 18. Subjects need to conduct an information search. A successful search will uncover the positions of all colored cards, allowing the subject to pick the most profitable card. There are two ways to search for information: (1) self-search; (2) delegating the search to an assistant. (1) Self-search: subjects choose a search intensity to conduct a self-search. This search intensity is the probability that the search succeeds (i.e., all cards' positions are uncovered), and carries a cost in tokens following this function: cost = 25*(search intensity)^2. If the search succeeds, subjects will be able to freely pick a card from all 35 cards with revealed colors. (2) Delegating the search to an assistant: the assistant will conduct a search with a fixed search intensity (depending on the treatment, explained later). This search is cost-free, but once the search succeeds, the assistant will automatically pick the Blue Card. This game will repeat for 20 rounds divided into two blocks (of ten rounds each), occurring in random order. 
The search intensity of the assistant is 60% in one block and 80% in the other. Subjects will be informed of a block's search intensity only when they enter that block. After the advice-following game, subjects will complete the following tasks: a lottery-choice decision designed by Eckel & Grossman (2000); a brief implicit association test on gender bias; and a questionnaire surveying subjects' perception of the assistant's gender, their daily use of virtual assistants, and their trust in others. This experiment contains 6 treatments in a between-subjects design. Treatment 1: the assistant is a pre-programmed virtual assistant. Treatment 2: the assistant is a pre-programmed virtual assistant, with the name "Mary". Treatment 3: the assistant is a pre-programmed virtual assistant, with the name "James". Treatment 4: the assistant is a human, with search data pre-collected from a real human subject in the Human Behavior Lab. The gender is not revealed. Treatment 5: the assistant is a female, with search data pre-collected from a real female subject in the Human Behavior Lab. This subject will pick a fictitious name to present herself. Treatment 6: the assistant is a male, with search data pre-collected from a real male subject in the Human Behavior Lab. This subject will pick a fictitious name to present himself. In addition, we will conduct a lab session in the Human Behavior Lab to collect the data for the human assistant. Subjects will be invited to the lab to conduct 20 rounds of information search, divided into two blocks of ten rounds. In each block, they can choose a search intensity of either 60% or 80% and conduct 10 trials of search with this chosen intensity. Their search results will be recorded. They will also report their gender and pick a fictitious name from a given list of names to represent themselves. They will be informed that their data might be used for future research. Experiment 2: This study is a follow-up to Experiment 1. 
Subjects will still participate in an advice-following game, but the procedure is slightly different: it is designed to answer the question of whether interactions with gendered VAs pose concerns for reinforcing stereotypes and biases in subsequent interactions with humans. The game will repeat for 20 rounds divided into two blocks of ten rounds. In Block 1, subjects will interact with a virtual assistant whose characteristics vary by treatment condition (explained later). In Block 2, subjects will choose one of four human assistants (two male names and two female names), whose search intensity is always high (80%), and then play the games with their chosen human assistant. This experiment contains 6 treatments in a between-subjects design: Treatment 1: The virtual assistant in Block 1 does not have a name, with a search intensity of 60%. Treatment 2: The virtual assistant in Block 1 is named Jennifer, with a search intensity of 60%. Treatment 3: The virtual assistant in Block 1 is named Charles, with a search intensity of 60%. Treatment 4: The virtual assistant in Block 1 does not have a name, with a search intensity of 80%. Treatment 5: The virtual assistant in Block 1 is named Jennifer, with a search intensity of 80%. Treatment 6: The virtual assistant in Block 1 is named Charles, with a search intensity of 80%. After the advice-following game, subjects will complete the following tasks: a lottery-choice decision designed by Eckel & Grossman (2000); a brief implicit association test on gender bias; and a questionnaire surveying subjects' perception of the assistant's gender, their daily use of virtual assistants, and their trust in others. In addition, we will conduct a lab session in the Human Behavior Lab to collect data for the human assistant. The game procedure is similar to the lab session in Experiment 1, except that we exclude "Jennifer" and "Charles" from the fictitious name list for subjects.
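The delegation trade-off described in the abstract can be worked through numerically. The sketch below is an illustration, not part of the registered design: it assumes that a failed search leaves the subject with the always-visible Green Card (10 tokens), that a successful self-search yields the Red Card (40 tokens), and that a successful delegated search yields the Blue Card (24 tokens), combined with the registered cost function cost = 25*(search intensity)^2.

```python
import numpy as np

def self_search_value(s):
    """Expected tokens from self-search at intensity s (probability of success).
    Assumed payoff rule: success -> pick Red (40), failure -> fall back to the
    visible Green (10), minus the registered cost 25 * s**2."""
    return s * 40 + (1 - s) * 10 - 25 * s**2

def delegation_value(q):
    """Expected tokens from delegating to an assistant with intensity q.
    Assumed payoff rule: success -> assistant picks Blue (24), failure -> Green (10)."""
    return q * 24 + (1 - q) * 10

# Find the optimal self-search intensity on a fine grid
grid = np.linspace(0, 1, 1001)
s_star = grid[np.argmax(self_search_value(grid))]

print(f"optimal self-search intensity: {s_star:.2f}")
print(f"self-search value at optimum:  {self_search_value(s_star):.1f}")
print(f"delegation value, 60% block:   {delegation_value(0.60):.1f}")
print(f"delegation value, 80% block:   {delegation_value(0.80):.1f}")
```

Under these assumed payoff rules, the optimal self-search intensity is 0.6 (worth 19 tokens in expectation), so delegation beats optimal self-search only in the 80% block (21.2 tokens) and not in the 60% block (18.4 tokens), which is why varying the assistant's intensity across blocks creates a meaningful delegation decision.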
Last Published May 03, 2023 04:39 PM June 30, 2023 12:13 AM
Intervention (Public) This experiment contains an authority-delegation game, in which subjects will be paired with an assistant to search for information for a card-picking decision. In this game, there are 35 cards of four types: Green Card, Blue Card, Red Card, and Blank Card. The returns from each type of card are: (1) Blank Card: 0 tokens (2) Green Card: 10 tokens (3) Blue Card: 24 tokens (4) Red Card: 40 tokens. Subjects will be presented with the 35 cards as a five-by-seven matrix, with one Green Card, one Blue Card, one Red Card, and 32 Blank Cards. Initially, all cards are hidden and only the Green Card's position is always visible, at position 18. Subjects need to conduct an information search. A successful search will uncover the positions of all colored cards, allowing the subject to pick the most profitable card. There are two ways to search for information: (1) self-search; (2) delegating the search to an assistant. (1) Self-search: subjects choose a search intensity to conduct a self-search. This search intensity is the probability that the search succeeds (i.e., all cards' positions are uncovered), and carries a cost in tokens following this function: cost = 25*(search intensity)^2. If the search succeeds, subjects will be able to freely pick a card from all 35 cards with revealed colors. (2) Delegating the search to an assistant: the assistant will conduct a search with a fixed search intensity (depending on the treatment, explained later). This search is cost-free, but once the search succeeds, the assistant will automatically pick the Blue Card. This game will repeat for 20 rounds divided into two blocks of ten rounds, occurring in random order. The search intensity of the assistant is 60% in one block and 80% in the other. Subjects will be informed of a block's search intensity only when they enter that block. 
After the advice-following game, subjects will complete the following tasks: a lottery-choice decision designed by Eckel & Grossman (2000); a brief implicit association test on gender bias; and a questionnaire surveying subjects' perception of the assistant's gender, their daily use of virtual assistants, and their trust in others. This experiment contains 6 treatments in a between-subjects design. Treatment 1: the assistant is a pre-programmed virtual assistant. Treatment 2: the assistant is a pre-programmed virtual assistant, with the name "Mary". Treatment 3: the assistant is a pre-programmed virtual assistant, with the name "James". Treatment 4: the assistant is a human, with search data pre-collected from a real human subject in the Human Behavior Lab. The gender is not revealed. Treatment 5: the assistant is a female, with search data pre-collected from a real female subject in the Human Behavior Lab. This subject will pick a fictitious name to present herself. Treatment 6: the assistant is a male, with search data pre-collected from a real male subject in the Human Behavior Lab. This subject will pick a fictitious name to present himself. In addition, we will conduct a lab session in the Human Behavior Lab to collect the data for the human assistant. Subjects will be invited to the lab to conduct 20 rounds of information search, divided into two blocks of ten rounds. In each block, they can choose a search intensity of either 60% or 80% and conduct 10 rounds of search with this chosen intensity. Their search results will be recorded. They will also report their gender and pick a fictitious name from a given list of names to represent themselves. They will be informed that their data might be used for future research. Experiment 1: This experiment contains an authority-delegation game, in which subjects will be paired with an assistant to search for information for a card-picking decision. 
In this game, there are 35 cards of four types: Green Card, Blue Card, Red Card, and Blank Card. The returns from each type of card are: (1) Blank Card: 0 tokens (2) Green Card: 10 tokens (3) Blue Card: 24 tokens (4) Red Card: 40 tokens. Subjects will be presented with the 35 cards as a five-by-seven matrix, with one Green Card, one Blue Card, one Red Card, and 32 Blank Cards. Initially, all cards are hidden and only the Green Card's position is always visible, at position 18. Subjects need to conduct an information search. A successful search will uncover the positions of all colored cards, allowing the subject to pick the most profitable card. There are two ways to search for information: (1) self-search; (2) delegating the search to an assistant. (1) Self-search: subjects choose a search intensity to conduct a self-search. This search intensity is the probability that the search succeeds (i.e., all cards' positions are uncovered), and carries a cost in tokens following this function: cost = 25*(search intensity)^2. If the search succeeds, subjects will be able to freely pick a card from all 35 cards with revealed colors. (2) Delegating the search to an assistant: the assistant will conduct a search with a fixed search intensity (depending on the treatment, explained later). This search is cost-free, but once the search succeeds, the assistant will automatically pick the Blue Card. This game will repeat for 20 rounds divided into two blocks of ten rounds, occurring in random order. The search intensity of the assistant is 60% in one block and 80% in the other. Subjects will be informed of a block's search intensity only when they enter that block. 
After the advice-following game, subjects will complete the following tasks: a lottery-choice decision designed by Eckel & Grossman (2000); a brief implicit association test on gender bias; and a questionnaire surveying subjects' perception of the assistant's gender, their daily use of virtual assistants, and their trust in others. This experiment contains 6 treatments in a between-subjects design. Treatment 1: the assistant is a pre-programmed virtual assistant. Treatment 2: the assistant is a pre-programmed virtual assistant, with the name "Jennifer". Treatment 3: the assistant is a pre-programmed virtual assistant, with the name "Charles". Treatment 4: the assistant is a human, with search data pre-collected from a real human subject in the Human Behavior Lab. The gender is not revealed. Treatment 5: the assistant is a female, with search data pre-collected from a real female subject in the Human Behavior Lab. This subject will pick a fictitious name to present herself. Treatment 6: the assistant is a male, with search data pre-collected from a real male subject in the Human Behavior Lab. This subject will pick a fictitious name to present himself. In addition, we will conduct a lab session in the Human Behavior Lab to collect the data for the human assistant. Subjects will be invited to the lab to conduct 20 rounds of information search, divided into two blocks of ten rounds. In each block, they can choose a search intensity of either 60% or 80% and conduct 10 rounds of search with this chosen intensity. Their search results will be recorded. They will also report their gender and pick a fictitious name from a given list of names to represent themselves. They will be informed that their data might be used for future research. Experiment 2: This study is a follow-up to Experiment 1. 
Subjects will still participate in an advice-following game, but the procedure is slightly different: it is designed to evaluate whether interactions with gendered VAs pose concerns for reinforcing stereotypes and biases in subsequent interactions with humans. The game will repeat for 20 rounds divided into two blocks of ten rounds. In Block 1, subjects will interact with a virtual assistant whose characteristics vary by treatment condition (explained later). In Block 2, subjects will choose one of four human assistants (two male names and two female names), whose search intensity is always 80%, and then play the games with their chosen human assistant. This experiment contains 6 treatments in a between-subjects design: Treatment 1: The virtual assistant in Block 1 does not have a name, with a search intensity of 60%. Treatment 2: The virtual assistant in Block 1 is named Jennifer, with a search intensity of 60%. Treatment 3: The virtual assistant in Block 1 is named Charles, with a search intensity of 60%. Treatment 4: The virtual assistant in Block 1 does not have a name, with a search intensity of 80%. Treatment 5: The virtual assistant in Block 1 is named Jennifer, with a search intensity of 80%. Treatment 6: The virtual assistant in Block 1 is named Charles, with a search intensity of 80%. After the advice-following game, subjects will complete the following tasks: a lottery-choice decision designed by Eckel & Grossman (2000); a brief implicit association test on gender bias; and a questionnaire surveying subjects' perception of the assistant's gender, their daily use of virtual assistants, and their trust in others. In addition, we will conduct a lab session in the Human Behavior Lab to collect data for the human assistant. The game procedure is similar to the lab session in Experiment 1, except that we exclude "Jennifer" and "Charles" from the fictitious name list for subjects.
Primary Outcomes (End Points) Choices between self-search and delegating to an assistant. Experiment 1: A dummy variable indicating whether the subject chooses to delegate to the assistant or to self-search. Experiment 2: (1) A dummy variable indicating whether the subject chooses a human assistant with a male name or a human assistant with a female name; (2) A dummy variable indicating whether the subject chooses to delegate to the assistant or to self-search.
Primary Outcomes (Explanation) We are interested in understanding how humanizing the AI algorithm may change the propensity to use it as a search assistant. Experiment 1: We are interested in understanding how humanizing the AI algorithm may change the propensity to use it as a search assistant. Experiment 2: We are interested in how the experience with the virtual assistant spills over into subjects' choices of human assistants. We are also interested in the impact of the assistant's characteristics on subjects' delegation decisions.
Experimental Design (Public) This experiment contains an authority-delegation game, in which subjects will be paired with an assistant to search for information for a card-picking decision. In this game, there are 35 cards of four types: Green Card, Blue Card, Red Card, and Blank Card. The returns from each type of card are: (1) Blank Card: 0 tokens (2) Green Card: 10 tokens (3) Blue Card: 24 tokens (4) Red Card: 40 tokens. Subjects will be presented with the 35 cards as a five-by-seven matrix, with one Green Card, one Blue Card, one Red Card, and 32 Blank Cards. Initially, all cards are hidden and only the Green Card's position is always visible, at position 18. Subjects need to conduct an information search. A successful search will uncover the positions of all colored cards, allowing the subject to pick the most profitable card. There are two ways to search for information: (1) self-search; (2) delegating the search to an assistant. (1) Self-search: subjects choose a search intensity to conduct a self-search. This search intensity is the probability that the search succeeds (i.e., all cards' positions are uncovered), and carries a cost in tokens following this function: cost = 25*(search intensity)^2. If the search succeeds, subjects will be able to freely pick a card from all 35 cards with revealed colors. (2) Delegating the search to an assistant: the assistant will conduct a search with a fixed search intensity (depending on the treatment, explained later). This search is cost-free, but once the search succeeds, the assistant will automatically pick the Blue Card. This game will repeat for 20 rounds divided into two blocks of ten rounds, occurring in random order. The search intensity of the assistant is 60% in one block and 80% in the other. Subjects will be informed of a block's search intensity only when they enter that block. 
After the advice-following game, subjects will complete the following tasks: a lottery-choice decision designed by Eckel & Grossman (2000); a brief implicit association test on gender bias; and a questionnaire surveying subjects' perception of the assistant's gender, their daily use of virtual assistants, and their trust in others. This experiment contains 6 treatments in a between-subjects design. Treatment 1: the assistant is a pre-programmed virtual assistant. Treatment 2: the assistant is a pre-programmed virtual assistant, with the name "Mary". Treatment 3: the assistant is a pre-programmed virtual assistant, with the name "James". Treatment 4: the assistant is a human, with search data pre-collected from a real human subject in the Human Behavior Lab. The gender is not revealed. Treatment 5: the assistant is a female, with search data pre-collected from a real female subject in the Human Behavior Lab. This subject will pick a fictitious name to present herself. Treatment 6: the assistant is a male, with search data pre-collected from a real male subject in the Human Behavior Lab. This subject will pick a fictitious name to present himself. In addition, we will conduct a lab session in the Human Behavior Lab to collect the data for the human assistant. Subjects will be invited to the lab to conduct 20 rounds of information search, divided into two blocks. In each block, they can choose a search intensity of either 60% or 80% and conduct 10 rounds of search with this chosen intensity. Their search results will be recorded. They will also report their gender and pick a fictitious name from a given list of names to represent themselves. They will be informed that their data might be used for future research. Experiment 1: This experiment contains an authority-delegation game, in which subjects will be paired with an assistant to search for information for a card-picking decision. 
In this game, there are 35 cards of four types: Green Card, Blue Card, Red Card, and Blank Card. The returns from each type of card are: (1) Blank Card: 0 tokens (2) Green Card: 10 tokens (3) Blue Card: 24 tokens (4) Red Card: 40 tokens. Subjects will be presented with the 35 cards as a five-by-seven matrix, with one Green Card, one Blue Card, one Red Card, and 32 Blank Cards. Initially, all cards are hidden and only the Green Card's position is always visible, at position 18. Subjects need to conduct an information search. A successful search will uncover the positions of all colored cards, allowing the subject to pick the most profitable card. There are two ways to search for information: (1) self-search; (2) delegating the search to an assistant. (1) Self-search: subjects choose a search intensity to conduct a self-search. This search intensity is the probability that the search succeeds (i.e., all cards' positions are uncovered), and carries a cost in tokens following this function: cost = 25*(search intensity)^2. If the search succeeds, subjects will be able to freely pick a card from all 35 cards with revealed colors. (2) Delegating the search to an assistant: the assistant will conduct a search with a fixed search intensity (depending on the treatment, explained later). This search is cost-free, but once the search succeeds, the assistant will automatically pick the Blue Card. This game will repeat for 20 rounds divided into two blocks of ten rounds, occurring in random order. The search intensity of the assistant is 60% in one block and 80% in the other. Subjects will be informed of a block's search intensity only when they enter that block. 
After the advice-following game, subjects will complete the following tasks: a lottery-choice decision designed by Eckel & Grossman (2000); a brief implicit association test on gender bias; and a questionnaire surveying subjects' perception of the assistant's gender, their daily use of virtual assistants, and their trust in others. This experiment contains 6 treatments in a between-subjects design. Treatment 1: the assistant is a pre-programmed virtual assistant. Treatment 2: the assistant is a pre-programmed virtual assistant, with the name "Jennifer". Treatment 3: the assistant is a pre-programmed virtual assistant, with the name "Charles". Treatment 4: the assistant is a human, with search data pre-collected from a real human subject in the Human Behavior Lab. The gender is not revealed. Treatment 5: the assistant is a female, with search data pre-collected from a real female subject in the Human Behavior Lab. This subject will pick a fictitious name to present herself. Treatment 6: the assistant is a male, with search data pre-collected from a real male subject in the Human Behavior Lab. This subject will pick a fictitious name to present himself. In addition, we will conduct a lab session in the Human Behavior Lab to collect the data for the human assistant. Subjects will be invited to the lab to conduct 20 rounds of information search, divided into two blocks. In each block, they can choose a search intensity of either 60% or 80% and conduct 10 rounds of search with this chosen intensity. Their search results will be recorded. They will also report their gender and pick a fictitious name from a given list of names to represent themselves. They will be informed that their data might be used for future research. Experiment 2: This study is a follow-up to Experiment 1. 
Subjects will still participate in an advice-following game, but the procedure is slightly different, to evaluate whether interactions with gendered VAs pose concerns for reinforcing stereotypes and biases in subsequent interactions with humans. The game will repeat for 20 rounds divided into two blocks of ten rounds. In Block 1, subjects will interact with a virtual assistant whose characteristics vary by treatment condition (explained later). In Block 2, subjects will choose one of four human assistants (two male names and two female names), whose search intensity is always 80%, and then play the games with their chosen human assistant. This experiment contains 6 treatments in a between-subjects design: Treatment 1: The virtual assistant in Block 1 does not have a name, with a search intensity of 60%. Treatment 2: The virtual assistant in Block 1 is named Jennifer, with a search intensity of 60%. Treatment 3: The virtual assistant in Block 1 is named Charles, with a search intensity of 60%. Treatment 4: The virtual assistant in Block 1 does not have a name, with a search intensity of 80%. Treatment 5: The virtual assistant in Block 1 is named Jennifer, with a search intensity of 80%. Treatment 6: The virtual assistant in Block 1 is named Charles, with a search intensity of 80%. After the advice-following game, subjects will complete the following tasks: a lottery-choice decision designed by Eckel & Grossman (2000); a brief implicit association test on gender bias; and a questionnaire surveying subjects' perception of the assistant's gender, their daily use of virtual assistants, and their trust in others. In addition, we will conduct a lab session in the Human Behavior Lab to collect data for the human assistant. The game procedure is similar to the lab session in Experiment 1, except that we exclude "Jennifer" and "Charles" from the fictitious name list for subjects.
Randomization Method For the three treatments of AI assistants and the three treatments of human assistants, subjects will be randomized to one of the three treatments by the randomization program in oTree. This study will be administered on Forthright Access, and the random assignment of subjects to either the treatments of AI assistants or the treatments of human assistants will be handled by the Forthright Access platform. Experiment 1: For the three treatments of AI assistants and the three treatments of human assistants, subjects will be randomized to one of the three treatments by the randomization program in oTree. This study will be administered on Forthright Access, and the random assignment of subjects to either the treatments of AI assistants or the treatments of human assistants will be handled by the Forthright Access platform. Experiment 2: Subjects will be randomized into one of six treatments by the randomization program in oTree and by the Forthright Access platform.
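Randomization of this kind ultimately reduces to drawing a treatment label per participant while keeping the arms balanced. The stand-alone sketch below illustrates the style of balanced assignment typically implemented in an oTree session-creation hook; the function name and treatment labels are illustrative, not from the registration.

```python
import random

def assign_treatments(n_subjects, treatments, seed=None):
    """Balanced random assignment: each treatment appears equally often
    (up to a remainder when n_subjects is not divisible), in shuffled order."""
    rng = random.Random(seed)
    # Repeat the treatment list enough times to cover all subjects, then trim.
    reps = -(-n_subjects // len(treatments))  # ceiling division
    pool = treatments * reps
    rng.shuffle(pool)
    return pool[:n_subjects]

# Example: 474 subjects across six hypothetical treatment labels
labels = [f"T{i}" for i in range(1, 7)]
assignment = assign_treatments(474, labels, seed=42)
```

Since 474 is divisible by 6, this yields exactly 79 subjects per treatment, matching the planned minimum per-arm sample size.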
Planned Number of Clusters Treatments will not be clustered. Experiment 1: Treatments will not be clustered. Experiment 2: Treatments will not be clustered.
Planned Number of Observations We plan to collect at least 79 subjects per treatment, for a total of 79*6 = 474 subjects. Experiment 1: We plan to collect at least 79 subjects per treatment, for a total of 79*6 = 474 subjects. Experiment 2: We plan to collect at least 100 subjects for the treatments with a female-named virtual assistant and the treatments with a gender-neutral virtual assistant, and at least 108 subjects for the treatments with a male-named virtual assistant. This is because in Experiment 1 we collected at least 100 subjects per treatment, and to keep both experiments consistent, we target at least 100 subjects per treatment. In addition, because subjects in Experiment 1 delegated less to male-gendered virtual assistants, we will over-recruit to at least 108 subjects for the two male-gendered treatments in Experiment 2 to ensure an equal number of delegations between the male-gendered treatments and the other treatments.
Sample size (or number of clusters) by treatment arms At least 79 subjects per treatment, with each subject making 20 decisions (10 trials with an assistant search intensity of 60% and another 10 with an assistant search intensity of 80%). Experiment 1: At least 79 subjects per treatment, with each subject making 20 decisions (10 trials with an assistant search intensity of 60% and another 10 with an assistant search intensity of 80%). Experiment 2: We plan to collect at least 100 subjects for the treatments with a female-named virtual assistant and the treatments with a gender-neutral virtual assistant, and at least 108 subjects for the treatments with a male-named virtual assistant. This is because in Experiment 1 we collected at least 100 subjects per treatment, and to keep both experiments consistent, we target at least 100 subjects per treatment. In addition, because subjects in Experiment 1 delegated less to male-gendered virtual assistants, we will over-recruit to at least 108 subjects for the two male-gendered treatments in Experiment 2 to ensure an equal number of delegations between the male-gendered treatments and the other treatments.
Power calculation: Minimum Detectable Effect Size for Main Outcomes We will need at least 79 subjects per treatment, based on a power of 0.9 and a significance level of 0.05, to detect a statistically significant difference between the high-intensity and low-intensity conditions. Power calculations are based on anticipated proportions for the control and treatment arms taken from Fehr, Herz, and Wilkening (2013, The Lure of Authority: Motivation and Incentive Effects of Power), i.e., 13.9% and 35.5% (as reported in Figure 2). Experiment 1: We will need at least 79 subjects per treatment, based on a power of 0.9 and a significance level of 0.05, to detect a statistically significant difference between the high-intensity and low-intensity conditions. Power calculations are based on anticipated proportions for the control and treatment arms taken from Fehr, Herz, and Wilkening (2013, The Lure of Authority: Motivation and Incentive Effects of Power), i.e., 13.9% and 35.5% (as reported in Figure 2). Experiment 2: We plan to collect at least 100 subjects for the treatments with a female-named virtual assistant and the treatments with a gender-neutral virtual assistant, and at least 108 subjects for the treatments with a male-named virtual assistant. This is because in Experiment 1 we collected at least 100 subjects per treatment, and to keep both experiments consistent, we target at least 100 subjects per treatment. In addition, because subjects in Experiment 1 delegated less to male-gendered virtual assistants, we will over-recruit to at least 108 subjects for the two male-gendered treatments in Experiment 2 to ensure an equal number of delegations between the male-gendered treatments and the other treatments.
Secondary Outcomes (End Points) Conditional on self-search, the choice of search intensity. Experiment 1: Conditional on self-search, the choice of search intensity. Experiment 2: Conditional on self-search, the choice of search intensity.