Cheap Talk with the Bayesian Truth Serum

Last registered on June 23, 2023

Pre-Trial

Trial Information

General Information

Title
Cheap Talk with the Bayesian Truth Serum
RCT ID
AEARCTR-0011560
Initial registration date
June 12, 2023

Initial registration date is when the trial was registered.

It corresponds to when the registration was submitted to the Registry to be reviewed for publication.

First published
June 23, 2023, 4:18 PM EDT

First published corresponds to when the trial was first made public on the Registry after being reviewed.

Locations

Region

Primary Investigator

Affiliation
Stanford University

Other Primary Investigator(s)

Additional Trial Information

Status
Completed
Start date
2023-02-23
End date
2023-04-05
Secondary IDs
Prior work
This trial does not extend or rely on any prior RCTs.
Abstract
Biased responses in survey studies can seriously mislead economic decision-making. In this study, we test the efficacy of an alternative bias-mitigating strategy, the C-BTS, which combines cheap talk with the Bayesian Truth Serum (BTS). We implement three proof-of-concept experiments. First, as direct evidence of the efficacy of the C-BTS, we replicate the context of a previous study, Barrage and Lee (2010). Second, we test the efficacy of the C-BTS in eliciting the willingness-to-accept (WTA) for not using each of two popular social media apps, Facebook and Instagram, for one week, using binary choices. Third, we test the efficacy of the C-BTS in eliciting the willingness-to-pay (WTP) for continuing to use each of six social media apps, using the best-worst scaling format. Finally, applying the C-BTS, we measure the consumer value of 12 popular AI-powered services in daily life.
External Link(s)

Registration Citation

Citation
Lee, Jae Joon. 2023. "Cheap Talk with the Bayesian Truth Serum." AEA RCT Registry. June 23. https://doi.org/10.1257/rct.11560-1.0
Sponsors & Partners

There is information in this trial that is unavailable to the public.
Experimental Details

Interventions

Intervention(s)
We run this study on CloudResearch's recently launched platform, Connect. The study consists of 4 independent experiments, described below.

(1) Donation experiment (replication study of Barrage and Lee, 2010)

In this experiment, we examine whether each respondent is willing to donate $5 to a charity that helps children suffering from cancer. There are 5 groups in this experiment: (a) Real, (b) Hypothetical, (c) the BTS, (d) Cheap Talk, and (e) the C-BTS.

(a) For the Real group, we begin with a real-effort task to mitigate the house money effect: it is well known that the use of “free” initial money endowments can distort subsequent contributions in an experiment. The goal of this real-effort task is to avoid such a bias and elicit more accurate responses from respondents on their willingness to donate to a charity. As the real-effort task, we ask people to review the question wordings for another of our studies and to make suggestions. For each complete answer, they can earn $1; there are 6 questions, so they can expect to earn up to $6. Once they complete the real-effort task, they are asked to answer the main survey question. We explain that everyone participating in this experiment has 2 choices: keeping all the money they just earned, or donating $5 of it to St. Jude Children's Research Hospital to help children suffering from cancer. Following the previous literature, the decision is made by a majority voting process. After answering this main question, they are asked to answer an attention-check question.

(b) For the Hypothetical group, the context is the same as in the Real group, but the responses are hypothetical. We ask people to suppose they have earned an additional $6 by working hard and carefully on an additional survey in this experiment, and whether they would keep all the money they just earned or donate $5 of it.

(c) For the BTS group, participants are asked to answer 10 training questions after reading the instructions on the BTS. The BTS incentive assigns a Truth Score to each response based on how surprisingly common each answer is in the population. Subjects are instructed that truth-telling can increase the chance of getting higher Truth Scores and that the top 5% will get an additional bonus of $20. They are then asked to answer the main donation question described above. We additionally ask, "Out of 100 people who participate in this survey, how many people do you think would have chosen "YES" to the question you just answered?"

(d) For the Cheap Talk group, subjects are asked to read the cheap talk instructions carefully, reminding them to consider the real situation when answering the question. They are then asked to answer the main question described above.

(e) For the C-BTS group, the procedure combines groups (c) and (d). Respondents are asked to answer 10 random training questions after reading the BTS instructions, and are then also required to read the cheap talk instructions carefully. After answering the main question, we additionally ask, "Out of 100 people who participate in this survey, how many people do you think would have chosen "YES" to the question you just answered?", as we do for the BTS group.
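The registration does not spell out how the Truth Score is computed. As an illustration only, a minimal sketch assuming the standard BTS scoring rule of Prelec (2004), applied to the binary donation answer and the accompanying "out of 100 people" prediction, could look like the following (function name and data are hypothetical, not the study's actual scoring code):

```python
import numpy as np

def bts_scores(answers, predictions, alpha=1.0, eps=1e-9):
    """
    Illustrative Bayesian Truth Serum scores (Prelec, 2004).

    answers     : (R,) array of 0/1 responses to the binary donation question.
    predictions : (R,) array of each respondent's predicted share of "YES"
                  answers (their "out of 100 people" answer divided by 100).
    Returns one Truth Score per respondent: an information score plus
    alpha times a prediction score.
    """
    answers = np.asarray(answers, dtype=float)
    preds = np.clip(np.asarray(predictions, dtype=float), eps, 1 - eps)

    # Stack choices and predictions into two columns: [NO, YES].
    x = np.column_stack([1 - answers, answers])   # choice indicators
    y = np.column_stack([1 - preds, preds])       # predicted shares

    x_bar = np.clip(x.mean(axis=0), eps, None)    # actual shares of each answer
    log_y_bar = np.log(y).mean(axis=0)            # log geometric mean of predictions

    # Information score: rewards answers that are "surprisingly common",
    # i.e. more common than the population predicted.
    info = (x * (np.log(x_bar) - log_y_bar)).sum(axis=1)
    # Prediction score: rewards predictions close to the actual shares.
    pred = alpha * (x_bar * (np.log(y) - np.log(x_bar))).sum(axis=1)
    return info + pred

# Hypothetical example: 5 respondents; in the study, the top 5% by score
# receive the additional $20 bonus.
scores = bts_scores(answers=[1, 0, 1, 1, 0],
                    predictions=[0.60, 0.40, 0.55, 0.70, 0.30])
print(scores)
```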

(2) Binary choice experiment on 2 digital goods

Exactly as in the (1) Donation experiment, there are 5 groups: (a) Real, (b) Hypothetical, (c) the BTS, (d) Cheap Talk, and (e) the C-BTS. The difference is that we ask respondents their willingness-to-accept (WTA) for not using each of two social media apps, Facebook and Instagram, for a week.

In the (a) Real group, for 6 randomly drawn dollar amounts ($XX), respondents are asked whether they would prefer to keep access to Facebook (Instagram) or go without access to Facebook (Instagram) for a week and get paid $XX, as they have selected within the survey. In this group, the answer is consequential: we select 1 out of every 100 respondents, and if they chose to go without access to Facebook for a week and get paid $XX (whatever they indicated in the survey), they are actually required to stop using Facebook in exchange for their selected payment value (up to $50). For this purpose, if they are chosen for a bonus, we ask for participants' Facebook URL and Instagram handle, they are required to deactivate their Facebook (or Instagram) account for 1 week, and we check whether the account remains deactivated for the full week. The participation fee and bonus are paid through the Connect platform.

In the (b) Hypothetical group, the questions are the same as those given to the Real group, but the responses are purely hypothetical. For (c) the BTS group, (d) the Cheap Talk group, and (e) the C-BTS group, the procedures (instructions and training questions) are the same as those already described in the (1) Donation experiment, but respondents are asked to state their willingness-to-accept for not using each of the two social media apps, Facebook and Instagram, for a week, as just described above.

(3) BWS experiment on 6 digital goods

In this experiment, we examine how much respondents value each of 6 social media apps, Facebook, Instagram, Pinterest, Snapchat, Twitter, and TikTok, using a different question format called best-worst scaling (BWS). In best-worst scaling, people are repeatedly asked to choose the "best" and "worst" options from a list of 3 random items, including some dollar values. For instance, we show "Not using Instagram for the next 1 week", "Not using Pinterest for the next 1 week", and "Earning $30 less for the next 1 week", and then ask which is their best choice and which is their worst. As in the other experiments described above, there are 5 groups: (a) Real, (b) Hypothetical, (c) the BTS, (d) Cheap Talk, and (e) the C-BTS. In each group, respondents are asked to answer 10 random best-worst scaling questions.

In the (a) Real group, we first ask people to do the same real-effort tasks as the ones used for the donation experiment to mitigate the house money effect. In these real-effort tasks, respondents can earn $10 in experimental currency for each complete answer; there are 6 questions, so they can expect to earn up to $60 in experimental currency. We then instruct respondents that we will randomly pick 1 out of every 100 respondents and exchange the experimental currency they just earned for real money, so that their responses to the best-worst scaling questions can be fulfilled. We explain that, out of the 3 options in one randomly chosen question they answered, we will randomly choose one situation and implement it. The situation a respondent is most willing to experience is most likely to be chosen, while the situation a respondent is least willing to experience is least likely to be chosen. More specifically, the situation a respondent is most willing to experience is selected with a 67% (2/3) chance, the situation a respondent is least willing to experience is never selected, and the situation a respondent is neither most nor least willing to experience is selected with the remaining 33% (1/3) chance. We then ask a selected respondent to implement the chosen situation for a real stake. For instance, if "earning $5 less for the next 1 week" is chosen, we deduct $5 from the money the respondent earned earlier while he/she can keep using a social media app (e.g., Facebook) for the next 1 week. In contrast, if "not using a social media app (e.g., Facebook) for the next 1 week" is chosen, we do not deduct any money from what he/she earned earlier, but we ask him/her to deactivate the social media account for the next 1 week. For this purpose, we ask respondents to provide their social media page URL in the same way as in the (2) binary choice experiment on 2 digital goods.

In the (b) Hypothetical group, the questions are the same as those given to the Real group, but the responses are purely hypothetical. For (c) the BTS group, (d) the Cheap Talk group, and (e) the C-BTS group, the procedures (instructions and training questions) are the same as those already described in the (1) Donation experiment, but respondents are asked to indicate how much they value each of the 6 social media apps by answering best-worst scaling questions.
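The real-stakes selection rule described above (best-ranked option with a 2/3 chance, middle option with a 1/3 chance, worst-ranked option never) can be summarized in a short sketch; the function name and example options are purely illustrative, and the actual selection was carried out by the research team rather than by this code:

```python
import random

def draw_option_to_implement(best, middle, worst):
    """
    Pick which of the three options in a randomly chosen BWS question is
    carried out for a real-stakes respondent: the option they ranked best
    with probability 2/3, the middle option with probability 1/3, and the
    option they ranked worst never.
    """
    return random.choices([best, middle], weights=[2, 1])[0]

# Hypothetical example using one respondent's ranking of a question's options.
chosen = draw_option_to_implement(
    best="Earning $30 less for the next 1 week",
    middle="Not using Pinterest for the next 1 week",
    worst="Not using Instagram for the next 1 week",
)
print(chosen)
```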

(4) BWS experiment on the value of AI-powered services in everyday life

Based on the results from the three proof-of-concept experiments discussed above, this experiment is intended to measure the consumer value of 12 AI-powered services in daily life. We recruit only a single C-BTS group and, using the BWS format already discussed in (3), measure the value of each AI-powered service.
Intervention (Hidden)
Intervention Start Date
2023-02-23
Intervention End Date
2023-04-05

Primary Outcomes

Primary Outcomes (end points)
In experiment (1), the primary outcome is the percentage of people who choose to donate $5 to help children suffering from cancer in each of the 5 groups. In particular, the primary interest is the difference between the Real group and the C-BTS group. In experiment (2), the main outcome of interest is the willingness-to-accept (WTA) for not using each of two social media apps, Facebook and Instagram, for a week, across treatment groups. More specifically, we will calculate the median WTA for each app by fitting the observed responses to a binary logit model. In experiment (3), the main outcome of interest is the willingness-to-pay (WTP) for using each of six social media apps, Facebook, Instagram, Pinterest, Snapchat, Twitter, and TikTok, for a week, across treatment groups. We fit a conditional logit model to the responses from each treatment group and estimate the relative utility of each app. By fitting a specific utility functional form to the estimated relative utilities, we also interpolate an approximate dollar value (WTP) for each app. In experiment (4), the main outcome of interest is the value individuals assign to each of 12 AI-powered services. We additionally investigate demographic heterogeneity in the WTPs for AI-powered services by gender, age, income, and education level.
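As an illustration of the median-WTA calculation for experiment (2), a minimal sketch in Python using statsmodels, on simulated accept/reject responses to randomly assigned bids (all data here are hypothetical, not the study's), might look as follows. With a logit index a + b * bid, the median WTA is the bid at which the acceptance probability equals 0.5, i.e. -a/b.

```python
import numpy as np
import statsmodels.api as sm

# Hypothetical data: each row is one binary WTA question for one app.
# bid    : the randomly assigned dollar amount ($XX) offered for going
#          without the app for a week.
# accept : 1 if the respondent chose to go without access and take the
#          payment, 0 if they chose to keep access.
rng = np.random.default_rng(0)
bid = rng.choice([5, 10, 20, 30, 40, 50], size=600).astype(float)
accept = (bid + rng.logistic(scale=10, size=600) > 18).astype(int)

# Binary logit: P(accept) = Lambda(a + b * bid).
X = sm.add_constant(bid)
fit = sm.Logit(accept, X).fit(disp=0)
a, b = fit.params

# Median WTA solves a + b * WTA = 0.
median_wta = -a / b
print(f"Median WTA: ${median_wta:.2f}")
```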
Primary Outcomes (explanation)

Secondary Outcomes

Secondary Outcomes (end points)
Secondary Outcomes (explanation)

Experimental Design

Experimental Design
As described above, in experiments (1) through (3) we randomize participants into 5 groups: Real, Hypothetical, the BTS, Cheap Talk, and the C-BTS. The Hypothetical group can be considered the control group, while the groups with bias-mitigating strategies (real choices, the BTS, cheap talk, and the C-BTS) can be seen as treatment groups. In experiment (4), there is no RCT component, as we have only a single C-BTS group.
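For illustration only (the actual assignment was done by the survey platform's computer randomizer, not by this code), individual-level assignment to the 5 arms with roughly equal group sizes could be sketched as follows; participant IDs and the seed are hypothetical:

```python
import numpy as np

# The 5 experimental arms described above.
ARMS = ["Real", "Hypothetical", "BTS", "Cheap Talk", "C-BTS"]

def assign_arms(participant_ids, seed=0):
    """Shuffle a balanced block of arm labels over the participant list."""
    rng = np.random.default_rng(seed)
    n = len(participant_ids)
    labels = np.tile(ARMS, n // len(ARMS) + 1)[:n]  # near-equal arm sizes
    rng.shuffle(labels)                             # random permutation
    return dict(zip(participant_ids, labels))

# Hypothetical usage with placeholder participant IDs.
assignment = assign_arms([f"P{i:03d}" for i in range(400)])
print(assignment["P000"])
```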
Experimental Design Details
Randomization Method
Randomization done by a computer
Randomization Unit
The randomization occurs at the individual level. Each participant is assigned to one of the 5 treatment groups described above.
Was the treatment clustered?
No

Experiment Characteristics

Sample size: planned number of clusters
N/A
Sample size: planned number of observations
We expect to recruit 2,150 participants.
Sample size (or number of clusters) by treatment arms
-Experiment (1): 400 participants (80 per group × 5 groups; 80 in the control group, 320 across the 4 treatment groups)
-Experiment (2): 750 participants (150 per group × 5 groups; 150 in the control group, 600 across the 4 treatment groups)
-Experiment (3): 750 participants (150 per group × 5 groups; 150 in the control group, 600 across the 4 treatment groups)
-Experiment (4): 250 participants (250 participants × 1 group; no RCT component)
Minimum detectable effect size for main outcomes (accounting for sample design and clustering)
IRB

Institutional Review Boards (IRBs)

IRB Name
Stanford University
IRB Approval Date
2023-02-10
IRB Approval Number
68617

Post-Trial

Post Trial Information

Study Withdrawal

There is information in this trial that is unavailable to the public.

Intervention

Is the intervention completed?
Yes
Intervention Completion Date
April 05, 2023, 12:00 +00:00
Data Collection Complete
Yes
Data Collection Completion Date
April 05, 2023, 12:00 +00:00
Final Sample Size: Number of Clusters (Unit of Randomization)
N/A
Was attrition correlated with treatment status?
No
Final Sample Size: Total Number of Observations
We had 1,908 respondents for the analyses. The sample size is slightly smaller than the expected size due to screening questions and attention checks.
Final Sample Size (or Number of Clusters) by Treatment Arms
-Experiment (1): 380 respondents for the analyses (Real: 74, Hypothetical: 80, Cheap Talk: 73, BTS: 75, and C-BTS: 78)
-Experiment (2): 594 respondents (Real: 126, Hypothetical: 124, Cheap Talk: 113, BTS: 122, and C-BTS: 109)
-Experiment (3): 668 respondents (Real: 158, Hypothetical: 137, Cheap Talk: 121, BTS: 130, and C-BTS: 122)
-Experiment (4): 266 respondents
Data Publication

Data Publication

Is public data available?
No

Program Files

Program Files
No
Reports, Papers & Other Materials

Relevant Paper(s)

Reports & Other Materials