
Anthropomorphism of Artificial Intelligence (AI)

Last registered on May 03, 2023

Pre-Trial

Trial Information

General Information

Title
Anthropomorphism of Artificial Intelligence (AI)
RCT ID
AEARCTR-0010921
Initial registration date
May 03, 2023

Initial registration date is when the trial was registered.

It corresponds to when the registration was submitted to the Registry to be reviewed for publication.

First published
May 03, 2023, 4:39 PM EDT

First published corresponds to when the trial was first made public on the Registry after being reviewed.

Locations

There is information in this trial that is unavailable to the public. Access may be requested through the Registry.


Primary Investigator

Affiliation
Texas A&M University

Other Primary Investigator(s)

PI Affiliation
Agricultural University of Athens
PI Affiliation
Texas A&M University

Additional Trial Information

Status
In development
Start date
2023-05-07
End date
2024-05-31
Secondary IDs
Prior work
This trial does not extend or rely on any prior RCTs.
Abstract
This experiment will study how the anthropomorphism of Artificial Intelligence (AI) (i.e., its gendered design) affects human users' trust in an AI assistant and their delegation of decision-making to it.

This experiment implements an authority-delegation game following Fehr et al. (2013), in which subjects are matched with an assistant that can help them search for information for a card-picking decision.

In this game, there are 35 cards of four types: 1 Green Card, 1 Blue Card, 1 Red Card, and 32 Blank Cards. The returns from each type of card are as follows:
(1) Blank Card: 0 tokens
(2) Green Card: 10 tokens
(3) Blue Card: 24 tokens
(4) Red Card: 40 tokens
The 35 cards will be presented as a five-by-seven matrix. Initially, all cards are hidden except the Green Card, whose position (18) is always visible. Subjects need to conduct an information search: a successful search uncovers the positions of all colored cards, allowing the subject to pick the most profitable card.

There are two ways to search for information: (1) self-search; (2) delegating the search to an assistant.
(1) Self-search: subjects choose a search intensity, which is the probability that the search succeeds (i.e., that all cards' positions are uncovered). Self-search carries a token cost given by cost = 25*(search intensity)^2. If the search succeeds, subjects can freely pick a card from all 35 cards with revealed colors.
(2) Delegating the search to an assistant: the assistant conducts a search with a fixed search intensity (depending on the treatment, explained later). This search is cost-free, but if the search succeeds, the assistant automatically picks the Blue Card. (A sketch comparing the expected payoffs of the two options follows this list.)
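
The following minimal Python sketch (not part of the registered protocol) compares the expected per-round payoffs of the two options under the parameters above. It assumes that when a search fails the subject falls back on the always-visible Green Card (10 tokens); the registry implies this fallback but does not state it explicitly.

# Illustrative sketch: expected per-round payoffs of self-search vs. delegation.
# ASSUMPTION: a failed search leaves only the visible Green Card (10 tokens).
RED, BLUE, GREEN = 40, 24, 10

def self_search_value(q):
    """Expected tokens from self-search at intensity q, net of the cost 25*q**2."""
    return q * RED + (1 - q) * GREEN - 25 * q ** 2

def delegation_value(s):
    """Expected tokens from delegating to an assistant with fixed intensity s."""
    return s * BLUE + (1 - s) * GREEN

best_q = max((q / 100 for q in range(101)), key=self_search_value)
print(f"best self-search intensity ~ {best_q:.2f}, value {self_search_value(best_q):.1f}")
for s in (0.6, 0.8):
    print(f"delegation at {s:.0%}: value {delegation_value(s):.1f}")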

This game will repeat for 20 rounds divided into two blocks of ten rounds each, with the two blocks occurring in random order. The assistant's search intensity is 60% in one block and 80% in the other. Subjects will be informed of the assistant's search intensity in a block only upon entering that block.

After the authority-delegation game, subjects will complete the following tasks: a lottery-choice decision designed by Eckel & Grossman (2000); a brief implicit association test on gender bias; and a questionnaire surveying subjects' perception of the assistant's gender, their daily use of virtual assistants, and their trust in others.

This experiment contains six treatments in a between-subjects design.
Treatment 1: the assistant is a pre-programmed virtual assistant.
Treatment 2: the assistant is a pre-programmed virtual assistant, with the name "Mary".
Treatment 3: the assistant is a pre-programmed virtual assistant, with the name "James".
Treatment 4: the assistant is a human, with search data pre-collected from a real human subject in the Human Behavior Lab. The gender is not revealed.
Treatment 5: the assistant is a female, with search data pre-collected from a real female subject in the Human Behavior Lab. This subject will pick a fictitious name to present herself.
Treatment 6: the assistant is a male, with search data pre-collected from a real male subject in the Human Behavior Lab. This subject will pick a fictitious name to present himself.

In addition, we will conduct a lab session in the Human Behavior Lab to collect the data for the human assistant. Subjects will be invited to the lab to conduct 20 rounds of information search, divided into two blocks of ten rounds. In each block, they can choose a search intensity of either 60% or 80% and conduct the ten rounds of search with that intensity. Their search results will be recorded. They will also report their gender and pick a fictitious name from a given list of names to represent themselves, and they will be informed that their data might be used for future research.
External Link(s)

Registration Citation

Citation
Drichoutis, Andreas, Marco Palma and Nanyin Yang. 2023. "Anthropomorphism of Artificial Intelligence (AI)." AEA RCT Registry. May 03. https://doi.org/10.1257/rct.10921-1.0
Experimental Details

Interventions

Intervention(s)
This experiment contains an authority-delegation game, in which subjects will be paired with an assistant to search for information for a card-picking decision.

In this game, there are 35 cards of four types: Green Card, Blue Card, Red Card, and Blank Card. The returns from each type of card are as follows:
(1) Blank Card: 0 tokens
(2) Green Card: 10 tokens
(3) Blue Card: 24 tokens
(4) Red Card: 40 tokens
Subjects will be presented with 35 cards as a five-by-seven matrix, with one Green Card, one Blue Card, one Red Card, and 32 Blank Cards. Initially, all cards are hidden except the Green Card, whose position (18) is always visible. Subjects need to conduct an information search: a successful search uncovers the positions of all colored cards, allowing the subject to pick the most profitable card.

There are two ways to search for information: (1) self-search; (2) delegating the search to an assistant.
(1) Self-search: subjects choose a search intensity, which is the probability that the search succeeds (i.e., that all cards' positions are uncovered). Self-search carries a token cost given by cost = 25*(search intensity)^2. If the search succeeds, subjects can freely pick a card from all 35 cards with revealed colors.
(2) Delegating the search to an assistant: the assistant conducts a search with a fixed search intensity (depending on the treatment, explained later). This search is cost-free, but if the search succeeds, the assistant automatically picks the Blue Card.

This game will repeat for 20 rounds divided into two blocks of ten rounds each, with the two blocks occurring in random order. The assistant's search intensity is 60% in one block and 80% in the other. Subjects will be informed of the assistant's search intensity in a block only upon entering that block. (A sketch of this round structure follows.)
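
As an illustration of the round structure only (not the experimental software), the Python sketch below draws a random block order and simulates the outcome of a delegated search in each of the 20 rounds; it does not model subjects' choices between self-search and delegation.

# Illustrative sketch: random block order and per-round delegated-search outcomes.
import random

def simulate_session(rng=random):
    blocks = [0.6, 0.8]
    rng.shuffle(blocks)                      # the two blocks occur in random order
    rounds = []
    for intensity in blocks:
        for _ in range(10):                  # ten rounds per block
            success = rng.random() < intensity
            rounds.append({"assistant_intensity": intensity,
                           "delegated_search_success": success})
    return rounds

rounds = simulate_session()
print(sum(r["delegated_search_success"] for r in rounds), "of 20 delegated searches succeed")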

After the authority-delegation game, subjects will complete the following tasks: a lottery-choice decision designed by Eckel & Grossman (2000); a brief implicit association test on gender bias; and a questionnaire surveying subjects' perception of the assistant's gender, their daily use of virtual assistants, and their trust in others.

This experiment contains six treatments in a between-subjects design.
Treatment 1: the assistant is a pre-programmed virtual assistant.
Treatment 2: the assistant is a pre-programmed virtual assistant, with the name "Mary".
Treatment 3: the assistant is a pre-programmed virtual assistant, with the name "James".
Treatment 4: the assistant is a human, with search data pre-collected from a real human subject in the Human Behavior Lab. The gender is not revealed.
Treatment 5: the assistant is a female, with search data pre-collected from a real female subject in the Human Behavior Lab. This subject will pick a fictitious name to present herself.
Treatment 6: the assistant is a male, with search data pre-collected from a real male subject in the Human Behavior Lab. This subject will pick a fictitious name to present himself.

In addition, we will conduct a lab session in the Human Behavior Lab to collect the data for the human assistant. Subjects will be invited to the lab to conduct 20 rounds of information search, divided into two blocks of ten rounds. In each block, they can choose a search intensity of either 60% or 80% and conduct the ten rounds of search with that intensity. Their search results will be recorded. They will also report their gender and pick a fictitious name from a given list of names to represent themselves, and they will be informed that their data might be used for future research.
Intervention Start Date
2023-05-07
Intervention End Date
2024-05-31

Primary Outcomes

Primary Outcomes (end points)
Choices between self-search and delegating to an assistant.
Primary Outcomes (explanation)
We are interested in understanding how humanizing the AI algorithm may change the propensity to use it as a search assistant.

Secondary Outcomes

Secondary Outcomes (end points)
Conditional on self-search, the choice of search intensity.
Secondary Outcomes (explanation)

Experimental Design

Experimental Design
This experiment contains an authority-delegation game, in which subjects will be paired with an assistant to search for information for a card-picking decision.

In this game, there are 35 cards of four types: Green Card, Blue Card, Red Card, and Blank Card. The returns from each type of card are as follows:
(1) Blank Card: 0 tokens
(2) Green Card: 10 tokens
(3) Blue Card: 24 tokens
(4) Red Card: 40 tokens
Subjects will be presented with 35 cards as a five-by-seven matrix, with one Green Card, one Blue Card, one Red Card, and 32 Blank Cards. Initially, all cards are hidden except the Green Card, whose position (18) is always visible. Subjects need to conduct an information search: a successful search uncovers the positions of all colored cards, allowing the subject to pick the most profitable card.
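
The Python sketch below (not taken from the experimental code) illustrates this card layout: a five-by-seven matrix of 35 positions, all hidden at the start except the Green Card at position 18. The placement of the Blue and Red Cards here is a placeholder assumption; the registry does not state how they are positioned.

# Illustrative sketch: the 35-card grid with only the Green Card revealed at start.
import random

RETURNS = {"green": 10, "blue": 24, "red": 40, "blank": 0}

def build_grid(rng=random):
    cards = ["blank"] * 35                      # positions 1-35, stored 0-indexed
    cards[17] = "green"                         # Green Card fixed at position 18
    blue_pos, red_pos = rng.sample([i for i in range(35) if i != 17], 2)
    cards[blue_pos], cards[red_pos] = "blue", "red"   # placeholder placement
    visible = {18: "green"}                     # only the Green Card starts visible
    return cards, visible

cards, visible = build_grid()
print("visible at start:", visible)
print("best available return:", max(RETURNS[c] for c in cards))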

There are two ways to search for information: (1) self-search; (2) delegating the search to an assistant.
(1) Self-search: subjects choose a search intensity, which is the probability that the search succeeds (i.e., that all cards' positions are uncovered). Self-search carries a token cost given by cost = 25*(search intensity)^2. If the search succeeds, subjects can freely pick a card from all 35 cards with revealed colors.
(2) Delegating the search to an assistant: the assistant conducts a search with a fixed search intensity (depending on the treatment, explained later). This search is cost-free, but if the search succeeds, the assistant automatically picks the Blue Card.

This game will repeat for 20 rounds divided into two blocks of ten rounds each, with the two blocks occurring in random order. The assistant's search intensity is 60% in one block and 80% in the other. Subjects will be informed of the assistant's search intensity in a block only upon entering that block.

After the authority-delegation game, subjects will complete the following tasks: a lottery-choice decision designed by Eckel & Grossman (2000); a brief implicit association test on gender bias; and a questionnaire surveying subjects' perception of the assistant's gender, their daily use of virtual assistants, and their trust in others.

This experiment contains six treatments in a between-subjects design.
Treatment 1: the assistant is a pre-programmed virtual assistant.
Treatment 2: the assistant is a pre-programmed virtual assistant, with the name "Mary".
Treatment 3: the assistant is a pre-programmed virtual assistant, with the name "James".
Treatment 4: the assistant is a human, with search data pre-collected from a real human subject in the Human Behavior Lab. The gender is not revealed.
Treatment 5: the assistant is a female, with search data pre-collected from a real female subject in the Human Behavior Lab. This subject will pick a fictitious name to present herself.
Treatment 6: the assistant is a male, with search data pre-collected from a real male subject in the Human Behavior Lab. This subject will pick a fictitious name to present himself.

In addition, we will conduct a lab session in the Human Behavior Lab to collect the data for the human assistant. Subjects will be invited to the lab to conduct 20 rounds of information search, divided into two blocks of ten rounds. In each block, they can choose a search intensity of either 60% or 80% and conduct the ten rounds of search with that intensity. Their search results will be recorded. They will also report their gender and pick a fictitious name from a given list of names to represent themselves, and they will be informed that their data might be used for future research.
Experimental Design Details
Not available
Randomization Method
For the three treatments with AI assistants and the three treatments with human assistants, subjects will be randomized to one of the three treatments by the randomization program in oTree.
The study will be administered on Forthright Access, and the random assignment of subjects to either the AI-assistant treatments or the human-assistant treatments will be handled by the Forthright Access platform.
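
As a hedged illustration only (the registry does not describe the actual randomization program), the oTree 5 sketch below shows one way balanced random assignment to three treatments could be implemented in an app's __init__.py; the treatment labels and the participant field name are assumptions, and PARTICIPANT_FIELDS = ['treatment'] would need to be declared in settings.py.

# Illustrative sketch: balanced random assignment to three hypothetical treatments.
import itertools
import random

def creating_session(subsession):
    treatments = ["no_name", "mary", "james"]   # hypothetical treatment labels
    random.shuffle(treatments)                  # randomize the starting order
    assignment = itertools.cycle(treatments)    # keeps arms roughly balanced
    for player in subsession.get_players():
        player.participant.treatment = next(assignment)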
Randomization Unit
Individual.
Was the treatment clustered?
No

Experiment Characteristics

Sample size: planned number of clusters
Treatments will not be clustered.
Sample size: planned number of observations
We plan to collect data from at least 79 subjects per treatment, for a total of 79*6 = 474 subjects.
Sample size (or number of clusters) by treatment arms
At least 79 subjects per treatment, with each subject making 20 rounds of decisions (10 rounds with an assistant search intensity of 60% and another 10 with an assistant search intensity of 80%).
Minimum detectable effect size for main outcomes (accounting for sample design and clustering)
We will need at least 79 subjects per treatment, based on a power of 0.9 and a significance level of 0.05, to detect a statistically significant difference between the high-intensity and low-intensity conditions. The power calculation is based on the anticipated proportions in the control and treatment arms taken from Fehr, Herz, and Wilkening (2013, "The Lure of Authority: Motivation and Incentive Effects of Power"), i.e., 13.9% and 35.5% (as reported in their Figure 2).
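
As an illustrative cross-check (not the authors' actual power analysis), the Python sketch below applies a standard normal-approximation formula for comparing two independent proportions of 13.9% and 35.5% at a two-sided significance level of 0.05 and power of 0.9. Different formulas and corrections (e.g., arcsine transformation or continuity correction) give somewhat different answers, so the result is in the neighborhood of, but need not equal, the registered 79 subjects per arm.

# Illustrative sketch: per-arm sample size for a two-sample test of proportions.
from math import ceil, sqrt
from statistics import NormalDist

def n_per_arm(p1, p2, alpha=0.05, power=0.9):
    z = NormalDist()
    z_a = z.inv_cdf(1 - alpha / 2)              # critical value, two-sided test
    z_b = z.inv_cdf(power)                      # quantile for the target power
    p_bar = (p1 + p2) / 2
    numerator = (z_a * sqrt(2 * p_bar * (1 - p_bar))
                 + z_b * sqrt(p1 * (1 - p1) + p2 * (1 - p2))) ** 2
    return ceil(numerator / (p1 - p2) ** 2)

print(n_per_arm(0.139, 0.355))                  # roughly 80 per arm with this formula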
IRB

Institutional Review Boards (IRBs)

IRB Name
Texas A&M University
IRB Approval Date
2023-03-08
IRB Approval Number
IRB2023-0083M/153086