Voicebot and Deception

Last registered on November 08, 2022

Pre-Trial

Trial Information

General Information

Title
Voicebot and Deception
RCT ID
AEARCTR-0010368
Initial registration date
November 07, 2022

Initial registration date is when the trial was registered; it corresponds to when the registration was submitted to the Registry to be reviewed for publication.

First published
November 08, 2022, 3:55 PM EST

First published corresponds to when the trial was first made public on the Registry after being reviewed.

Locations

Primary Investigator

Affiliation
Claremont Graduate University

Other Primary Investigator(s)

Additional Trial Information

Status
Completed
Start date
2022-04-01
End date
2022-08-01
Secondary IDs
Prior work
This trial does not extend or rely on any prior RCTs.
Abstract
With technological advancements, interacting with a voicebot has become increasingly common. Attitudes that promote or prevent dishonesty in human-human interactions may not carry over to human-machine interactions, and there is little evidence on how interacting with a voicebot affects dishonest behavior. We test whether individuals are more or less likely to lie to a voicebot than to a human using an online experiment in which subjects perform a coin-toss task and report the coin-toss outcomes. We vary the reporting channel: reporting via text, reporting to a voicebot, reporting to a sophisticated voicebot (AIbot), or reporting to a human. For all voice-interaction treatments, we design a homogeneous online platform that mimics the real-world setting of voicebot interactions and varies only whether the voice is robotic or a pre-recorded human voice. We also vary the gender of the voice in each treatment, i.e., feminine versus masculine. We compare reporting to a machine (text, voicebot, or AIbot) with reporting to a human with a pre-recorded voice, and reporting via text with reporting via voice. We also analyze the effect of voice gender and the effect of the voicebot's sophistication.
External Link(s)

Registration Citation

Citation
Zhang, Shanshan. 2022. "Voicebot and Deception." AEA RCT Registry. November 08. https://doi.org/10.1257/rct.10368-1.0
Experimental Details

Interventions

Intervention(s)
We run an online experiment via Prolific. Subjects perform a coin-toss task and report their outcomes, and are randomly assigned to one of four treatments. In the text treatment, subjects report their coin-toss outcomes to a chatbot by answering a questionnaire. In the voicebot treatment, subjects report the outcomes to a voicebot. In the AIbot treatment, subjects report the outcomes to a sophisticated voicebot. In the human treatment, subjects report the outcomes to an experimenter. At the end of the experiment, we also ask questions that measure subjects' attitudes toward AI and chatbots, trust, and demographics.
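
For illustration, below is a minimal sketch of the individual-level random assignment. It assumes the three voice-based channels are each crossed with voice gender while the text channel is not; this arm structure is an assumption (the registration states only that voice gender varies within treatments, and the actual randomization was implemented in Qualtrics). Under this assumed structure there are seven arms, which would be consistent with 350 planned observations at 50 per arm.

    import random

    # Reporting channels; the voice-based channels are crossed with
    # voice gender (assumed structure -- the text channel has no voice).
    voice_channels = ["voicebot", "AIbot", "human"]
    voice_genders = ["feminine", "masculine"]
    arms = ["text"] + [f"{c} ({g} voice)" for c in voice_channels
                       for g in voice_genders]

    # Individual-level assignment, uniform across the seven arms,
    # mirroring what a survey-platform randomizer would do.
    assignment = random.choice(arms)
    print(len(arms), assignment)  # 7, e.g. 'voicebot (feminine voice)'
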
Intervention Start Date
2022-04-01
Intervention End Date
2022-08-01

Primary Outcomes

Primary Outcomes (end points)
The number of reported heads
Primary Outcomes (explanation)

Secondary Outcomes

Secondary Outcomes (end points)
The acoustic features of the recorded voice
Secondary Outcomes (explanation)
Pitch, intonation, jitter, shimmer, and harmonics-to-noise ratio (HNR)
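
These features are commonly extracted with Praat; a minimal sketch using the praat-parselmouth Python bindings, with Praat's conventional default parameters, is below. The file name is a placeholder, and the study's actual extraction pipeline is not specified in this registration.

    import parselmouth
    from parselmouth.praat import call

    # Load one recorded response (placeholder file name).
    snd = parselmouth.Sound("subject_response.wav")

    # Pitch: mean fundamental frequency (F0) in Hz; the F0 standard
    # deviation serves as a simple intonation proxy.
    pitch = snd.to_pitch()
    mean_f0 = call(pitch, "Get mean", 0, 0, "Hertz")
    sd_f0 = call(pitch, "Get standard deviation", 0, 0, "Hertz")

    # Jitter and shimmer from glottal pulses (75-500 Hz search range).
    pulses = call(snd, "To PointProcess (periodic, cc)", 75, 500)
    jitter = call(pulses, "Get jitter (local)", 0, 0, 0.0001, 0.02, 1.3)
    shimmer = call([snd, pulses], "Get shimmer (local)",
                   0, 0, 0.0001, 0.02, 1.3, 1.6)

    # Harmonics-to-noise ratio (HNR) in dB.
    harmonicity = snd.to_harmonicity()
    hnr = call(harmonicity, "Get mean", 0, 0)

    print(mean_f0, sd_f0, jitter, shimmer, hnr)
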

Experimental Design

Experimental Design
We run an online experiment via Prolific, an online recruitment platform. We design an online voice-chat platform for this experiment and use Qualtrics together with this voice-interaction platform to collect answers. The study consists of three parts. In the first part, participants will be given instructions on the study. In the second part, participants will toss a coin 7 times, record the results of the coin tosses on paper, and report the results to a human voice or a voicebot. Participants will be told that they will get 30 cents for each head they toss. The link to the voicebot or the human voice will be provided in Qualtrics. During the interaction with the voicebot or the human voice, the participants' voices will be recorded. After this, participants will be redirected to Qualtrics to answer the questions in the third part and will get an additional $1 for doing so. The questions include measures of participants' perceptions of the chatbot, measures of their personality, and demographic information.
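
To make the analysis of the primary outcome concrete: under honest reporting, each participant's head count is Binomial(7, 0.5), so over-reporting can be detected by comparing reported heads to that benchmark. The sketch below is illustrative only, with hypothetical data and arm names; it is not the registered analysis plan.

    import numpy as np
    from scipy.stats import binomtest, mannwhitneyu

    rng = np.random.default_rng(0)

    # Hypothetical reported head counts out of 7 tosses for two arms
    # (placeholder data; 50 participants per arm as planned).
    reported_human = rng.binomial(7, 0.50, size=50)     # human-voice arm
    reported_voicebot = rng.binomial(7, 0.58, size=50)  # voicebot arm

    # One-sided test of aggregate over-reporting in one arm against
    # the honest benchmark of p = 0.5 per toss.
    res = binomtest(int(reported_voicebot.sum()), n=50 * 7, p=0.5,
                    alternative="greater")
    print(f"over-reporting p-value: {res.pvalue:.3f}")

    # Compare distributions of reported heads between two arms.
    u = mannwhitneyu(reported_voicebot, reported_human,
                     alternative="greater")
    print(f"voicebot vs. human p-value: {u.pvalue:.3f}")
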
Experimental Design Details
Randomization Method
Randomization was done in Qualtrics.
Randomization Unit
individual
Was the treatment clustered?
No

Experiment Characteristics

Sample size: planned number of clusters
350
Sample size: planned number of observations
350
Sample size (or number of clusters) by treatment arms
50 per treatment arm
Minimum detectable effect size for main outcomes (accounting for sample design and clustering)
IRB

Institutional Review Boards (IRBs)

IRB Name
Claremont Graduate University Institutional Review Board
IRB Approval Date
2022-04-29
IRB Approval Number
4182
Analysis Plan

Analysis Plan Documents

Pre-analysis plan

MD5: c770e75768fca8a9d7ac09f01c435e56

SHA1: 9087b0ea14ac3d191be255f21880b4ace95cc0d4

Uploaded At: November 07, 2022

Post-Trial

Post Trial Information

Study Withdrawal

Some information in this trial is unavailable to the public.

Intervention

Is the intervention completed?
No
Data Collection Complete
Yes
Data Collection Completion Date
July 01, 2022, 12:00 +00:00
Final Sample Size: Number of Clusters (Unit of Randomization)
409
Was attrition correlated with treatment status?
Final Sample Size: Total Number of Observations
Final Sample Size (or Number of Clusters) by Treatment Arms
Data Publication

Data Publication

Is public data available?
No

Program Files

Program Files
Reports, Papers & Other Materials

Relevant Paper(s)

Reports & Other Materials