Voicebot and Deception

Last registered on November 08, 2022


Trial Information

General Information

Voicebot and Deception
Initial registration date
November 07, 2022

Initial registration date is when the trial was registered.

It corresponds to when the registration was submitted to the Registry to be reviewed for publication.

First published
November 08, 2022, 3:55 PM EST

First published corresponds to when the trial was first made public on the Registry after being reviewed.


Primary Investigator

Claremont Graduate University

Other Primary Investigator(s)

Additional Trial Information

Start date
End date
Secondary IDs
Prior work
This trial does not extend or rely on any prior RCTs.
With technological advancements, interacting with a voicebot has become increasingly common. Attitudes that promote or prevent dishonesty in human-human interactions may not carry over into human-machine interactions, and there is little evidence on how interacting with a voicebot affects dishonest behavior. We test whether individuals are more or less likely to lie to a voicebot than to a human using an online experiment in which subjects perform a coin-toss task and report the coin-toss outcomes. We vary the reporting channel: reporting via text, reporting to a voicebot, reporting to a sophisticated voicebot (AIbot), and reporting to a human. For all the voice-interaction treatments, we design a homogeneous online platform that mimics the real-world setting of voicebot interactions and differs across treatments only in the voice used (robotic voice vs. pre-recorded human voice). We also vary the gender of the voice in each treatment (feminine vs. masculine). We compare reporting to a machine (text, voicebot, or AIbot) with reporting to a human with a pre-recorded voice, and we compare reporting via text with reporting via voice. We also analyze the effect of voice gender and of the sophistication of the voicebot.
External Link(s)

Registration Citation

Zhang, Shanshan. 2022. "Voicebot and Deception." AEA RCT Registry. November 08. https://doi.org/10.1257/rct.10368-1.0
Experimental Details


We run an online experiment via Prolific. Subjects perform a coin-toss task and report their outcomes. Subjects are randomly assigned to one of four treatments. In the text treatment, subjects are told to report their coin-toss outcomes to a chatbot by answering a questionnaire. In the voicebot treatment, subjects are told to report the outcomes to a voicebot. In the AIbot treatment, subjects are told to report the outcomes to a sophisticated voicebot. In the human treatment, subjects are told to report the outcomes to an experimenter. At the end of the experiment, we also ask questions that measure the subject's attitudes towards AI and chatbots, trust, and demographics.
Intervention Start Date
Intervention End Date

Primary Outcomes

Primary Outcomes (end points)
The number of reported heads
Primary Outcomes (explanation)

Secondary Outcomes

Secondary Outcomes (end points)
The acoustic feature of the recorded voice
Secondary Outcomes (explanation)
Pitch, intonation, Jitter, shimmer, HNR
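
The jitter and shimmer listed above have standard local definitions: the mean absolute difference between consecutive glottal periods (for jitter) or peak amplitudes (for shimmer), divided by the corresponding mean. A minimal sketch of those two measures, assuming the period and amplitude sequences have already been extracted from the recording (e.g., with Praat); the function names and input values here are illustrative, not part of the registered analysis:

```python
def local_jitter(periods):
    """Mean absolute difference between consecutive glottal periods,
    divided by the mean period (expressed as a fraction)."""
    diffs = [abs(periods[i] - periods[i - 1]) for i in range(1, len(periods))]
    return (sum(diffs) / len(diffs)) / (sum(periods) / len(periods))

def local_shimmer(amplitudes):
    """The same ratio, computed on peak amplitudes instead of periods."""
    diffs = [abs(amplitudes[i] - amplitudes[i - 1]) for i in range(1, len(amplitudes))]
    return (sum(diffs) / len(diffs)) / (sum(amplitudes) / len(amplitudes))

# Illustrative values for a ~100 Hz voice.
periods = [0.0100, 0.0102, 0.0099, 0.0101, 0.0100]   # seconds
amplitudes = [0.80, 0.78, 0.81, 0.79, 0.80]

print(f"jitter:  {local_jitter(periods):.4f}")
print(f"shimmer: {local_shimmer(amplitudes):.4f}")
```

Pitch, intonation, and HNR would come straight from the same acoustic-analysis toolchain; only jitter and shimmer are sketched here because their formulas are simple enough to show inline.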

Experimental Design

Experimental Design
We run an online experiment via Prolific, an online participant-recruitment platform. We design an online voice-chat platform for this experiment and use Qualtrics together with this platform (the voice-interaction platform) to collect answers. The study consists of three parts. In the first part, participants are given instructions for the study. In the second part, participants toss a coin 7 times, record the results of the coin tosses on paper, and report the results to a human voice or a voicebot. Participants are told that they will get 30 cents for each head they toss. The link to the voicebot or the human voice is provided in Qualtrics. During the interaction with the voicebot or the human voice, the participants' voices are recorded. After this, participants are redirected to Qualtrics to answer the questions in the third part, for which they receive an additional $1. The questions include measurements of participants' perceptions of the chatbot, measurements of their personality, and demographic information.
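
Since each of the 7 tosses is fair and pays 30 cents per reported head, the honest benchmark is 3.5 expected heads per participant, and group-level over-reporting shows up as a mean above 3.5. A hedged sketch of that comparison using a normal approximation to the binomial; this is not the registered analysis plan, and the function names and the sample reports are illustrative:

```python
import math

N_TOSSES = 7
P_HEADS = 0.5
PAY_PER_HEAD = 0.30  # dollars, as stated in the design

def payoff(reported_heads):
    """Coin-toss earnings implied by a participant's report."""
    return round(reported_heads * PAY_PER_HEAD, 2)

def z_stat(reports):
    """z-statistic for mean reported heads against the honest
    benchmark of N_TOSSES * P_HEADS = 3.5 heads."""
    n = len(reports)
    mean = sum(reports) / n
    expected = N_TOSSES * P_HEADS
    se = math.sqrt(N_TOSSES * P_HEADS * (1 - P_HEADS) / n)
    return (mean - expected) / se

reports = [4, 5, 7, 3, 6, 5, 4, 6, 5, 7]  # hypothetical reported heads
print(payoff(7))                 # maximal claim pays $2.10
print(round(z_stat(reports), 2))
```

A positive z-statistic indicates that reported heads exceed the honest expectation; comparing the statistic (or mean reported heads) across the four reporting channels is what identifies the treatment effect.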
Experimental Design Details
Randomization Method
Randomization was done in Qualtrics.
Randomization Unit
Was the treatment clustered?

Experiment Characteristics

Sample size: planned number of clusters
Sample size: planned number of observations
Sample size (or number of clusters) by treatment arms
50 for each treatment
Minimum detectable effect size for main outcomes (accounting for sample design and clustering)

Institutional Review Boards (IRBs)

IRB Name
Claremont Graduate University Institutional Review Board
IRB Approval Date
IRB Approval Number
Analysis Plan

Analysis Plan Documents

Pre-analysis plan

MD5: c770e75768fca8a9d7ac09f01c435e56

SHA1: 9087b0ea14ac3d191be255f21880b4ace95cc0d4

Uploaded At: November 07, 2022


Post Trial Information

Study Withdrawal

There is information in this trial unavailable to the public.


Is the intervention completed?
Data Collection Complete
Data Collection Completion Date
July 01, 2022, 12:00 +00:00
Final Sample Size: Number of Clusters (Unit of Randomization)
Was attrition correlated with treatment status?
Final Sample Size: Total Number of Observations
Final Sample Size (or Number of Clusters) by Treatment Arms
Data Publication

Data Publication

Is public data available?

Program Files

Program Files
Reports, Papers & Other Materials

Relevant Paper(s)

Reports & Other Materials