Correcting cognitive biases for safer online behaviors: experimental evidence from migrants across major cities in Vietnam

Last registered on October 06, 2025

Pre-Trial

Trial Information

General Information

Title
Correcting cognitive biases for safer online behaviors: experimental evidence from migrants across major cities in Vietnam
RCT ID
AEARCTR-0016942
Initial registration date
October 05, 2025


First published
October 06, 2025, 3:24 PM EDT


Locations

Primary Investigator

Affiliation
University of Economics Ho Chi Minh City (UEH)

Other Primary Investigator(s)

PI Affiliation
University of Economics and Law
PI Affiliation
University of Economics Ho Chi Minh City
PI Affiliation
Leibniz University Hannover

Additional Trial Information

Status
In development
Start date
2025-10-06
End date
2026-02-28
Secondary IDs
Prior work
This trial does not extend or rely on any prior RCTs.
Abstract
Digital connectivity offers migrants in developing-country cities vital access to work, communication, and services, but it also exposes them to significant cyber risks such as scams, phishing, and unsafe data sharing. This study examines whether cognitive biases contribute to these unsafe online behaviors and whether a short behavioral intervention can mitigate them.
Using a randomized controlled trial among adult migrants living in major urban areas in Vietnam (Ho Chi Minh City, Hanoi, Da Nang, and Bien Hoa), participants are randomly assigned to either a treatment or a control group. All participants complete baseline measures, including a list experiment and self-reported indices of high-risk online behavior, cognitive bias tasks (availability and anchoring), and a short Cognitive Reflection Test. The treatment group views a four-minute video explaining cognitive biases and practical strategies for safer digital decision-making, while the control group views a neutral video of the same length unrelated to online safety.
Primary outcomes include the likelihood of sharing verified and unverified information and the response time in a simulated sharing scenario, capturing both behavioral change and deliberation. The study tests two main hypotheses: (1) cognitive biases are positively associated with risky online behaviors, and (2) brief corrective information reduces such behaviors by promoting reflective (System 2) thinking. Findings will provide experimental evidence on low-cost behavioral tools for improving cyber safety and digital resilience among vulnerable urban populations.
External Link(s)

Registration Citation

Citation
Ho, Thong et al. 2025. "Correcting cognitive biases for safer online behaviors: experimental evidence from migrants across major cities in Vietnam." AEA RCT Registry. October 06. https://doi.org/10.1257/rct.16942-1.0
Experimental Details

Interventions

Intervention(s)
Our intervention aims to (1) help participants recognize how common thinking shortcuts, known as cognitive biases, can influence their online decisions and increase their exposure to cyber risks, and (2) provide practical tips and strategies to mitigate these risks by encouraging more rational and reflective decision-making.
Phase I:
1. Recognizing personal biases:
Participants first answer short questions designed to reveal two common biases: anchoring bias (being overly influenced by the first piece of information) and availability bias (relying too much on easily remembered events). Afterward, we show them how these biases may have influenced their own answers. This helps raise awareness that everyday judgments and online decisions can be shaped by subtle cues.

2. Learning practical strategies:
Next, participants watch a short, four-minute educational video explaining how to think more critically when consuming and sharing information online. The video provides simple, concrete tips, such as pausing before sharing content, verifying information through trusted sources, and discussing uncertain information with others. The goal is to encourage reflective thinking and safer online behavior.

Participants in the control group watch a neutral video of the same length with tips for attending wedding or anniversary parties, ensuring that only the content (not the format or duration) differs between groups.

Phase II: Follow-up Reminders

To reinforce the treatment message and encourage retention, participants in the treatment group will receive short reminder messages via SMS or Zalo every two weeks. Each message will briefly restate key points from the intervention (e.g., pausing before sharing, verifying information sources) and include a link to rewatch the four-minute intervention video.

These reminders are designed to maintain awareness of safe online behaviors and strengthen the long-term effect of the bias-correction intervention.
Intervention (Hidden)
Phase I: Bias measurement & correction
Our intervention consists of two parts. First, we show participants evidence of their own cognitive biases. Second, we provide practical tips and hints to help them become more aware and critical when processing information. Similar frameworks have been used in other studies across different contexts (Pham et al., 2024; Devine et al., 2012; Rahmawati & Santi, 2023), where raising awareness of the problem is considered an essential first step, followed by offering strategies to address it.
In the first part of the treatment, we address the existence of biases by showing examples from the bias-related questions that participants have just answered. Specifically, we ask two sets of questions designed to reveal anchoring bias and availability bias. For anchoring bias, we follow the procedure outlined by Berthet et al. (2022) and Berthet (2021); for availability bias, we follow Pachur et al. (2012). These questions elicit the presence of bias and help participants see its relevance before we provide the information intervention. During the first part of the intervention, we remind participants that we used techniques such as numerical anchoring or priming (e.g., mentioning a “traffic accident”) to influence their answers. By explicitly revealing these techniques, we encourage participants to reflect on how their responses may have been shaped by such cues and to reconsider their initial judgments.
In the second part of the treatment, our main goal is to raise awareness of the existence of cognitive biases and educate individuals on general tools to reduce them. We do not focus on any single type of bias. Since no single debiasing strategy works for all types of biases (Croskerry et al., 2013), we take a broader approach by promoting critical thinking through an educational video intervention. Critical-thinking strategies have been widely used in previous research to reduce bias (Smith & Peloghitis, 2024; Croskerry et al., 2013). In our intervention, we provide practical tips to encourage critical thinking in the context of online media consumption. For instance, we suggest that people pause before sharing information, or prompt them to use their existing ability to critically evaluate content (e.g., by checking multiple sources or talking to others) (Tang & Sergeeva, 2025). We chose a video format to ensure consistent delivery of information to all participants, following recommended guidelines (Haaland et al., 2023). This approach conveys the message more reliably and avoids the inconsistencies that may occur with in-person training.
For the placebo intervention, we designed a video that provides tips for attending wedding or anniversary parties. The placebo and treatment videos are the same length.
Phase II: Nudging
To reinforce the treatment message and encourage retention, participants in the treatment group will receive short reminder messages via SMS or Zalo every two weeks. Each message will briefly restate key points from the intervention (e.g., pausing before sharing, verifying information sources) and include a link to rewatch the four-minute intervention video. These reminders are designed to maintain awareness of safe online behaviors and strengthen the long-term effect of the bias-correction intervention.

Reference:

Berthet, V. (2021). The measurement of individual differences in cognitive biases: A review and improvement. Frontiers in Psychology, 12, 630177.
Berthet, V., Autissier, D., & de Gardelle, V. (2022). Individual differences in decision-making: A test of a one-factor model of rationality. Personality and Individual Differences, 189, 111485.
Croskerry, P., Singhal, G., & Mamede, S. (2013). Cognitive debiasing 2: Impediments to and strategies for change. BMJ Quality & Safety, 22(Suppl 2), ii65–ii72.
Devine, P. G., Forscher, P. S., Austin, A. J., & Cox, W. T. (2012). Long-term reduction in implicit race bias: A prejudice habit-breaking intervention. Journal of Experimental Social Psychology, 48(6), 1267–1278.
Haaland, I., Roth, C., & Wohlfart, J. (2023). Designing information provision experiments. Journal of Economic Literature, 61(1), 3–40.
Pachur, T., Hertwig, R., & Steinmann, F. (2012). How do people judge risks: Availability heuristic, affect heuristic, or both? Journal of Experimental Psychology: Applied, 18(3), 314.
Pham, T., Goto, D., & Tran, D. (2024). Child online safety education: A program evaluation combining a randomized controlled trial and list experiments in Vietnam. Computers in Human Behavior, 156, 108225.
Rahmawati, F., & Santi, F. (2023). A Literature Review on the Influence of Availability Bias and Overconfidence Bias on Investor Decisions. East Asian Journal of Multidisciplinary Research, 2(12), 4961–4976.
Smith, G., & Peloghitis, J. (2024). Approaching Cognitive Bias in Critical Thinking Instruction. JALT Postconference Publication, 2023, 339–346.
Tang, H., & Sergeeva, A. (2025). Shots and Boosters: Exploring the Use of Combined Prebunking Interventions to Raise Critical Thinking and Create Long-Term Protection Against Misinformation (No. arXiv:2505.07486). arXiv. https://doi.org/10.48550/arXiv.2505.07486
Thomson, K. S., & Oppenheimer, D. M. (2016). Investigating an alternate form of the cognitive reflection test. Judgment and Decision Making, 11(1), 99–113.
Zhang, Z., & Cheng, Z. (2024). Users’ unverified information-sharing behavior on social media: The role of reasoned and social reactive pathways. Acta Psychologica, 245(October 2023), 104215. https://doi.org/10.1016/j.actpsy.2024.104215


Intervention Start Date
2025-10-06
Intervention End Date
2025-12-31

Primary Outcomes

Primary Outcomes (end points)
In the within-subjects design, the outcome measures focus on changes in participants’ past and intended behaviors before the intervention (baseline survey) and after it (endline survey, implemented about one month after the intervention). Specifically, before the treatment, we measure participants’ behavior during the previous month related to sharing unverified online content, using both direct (self-reported) methods (Zhang & Cheng, 2024) and an indirect list experiment (Pham et al., 2024).
In the between-subjects design, we compare the treatment and control groups to evaluate the overall effectiveness of the bias-awareness and critical-thinking intervention. To assess the immediate treatment effect, we compare participants’ likelihood of sharing unverified information and record the time they take to respond when asked whether they would share the video content used in the treatment. This response time serves as an indicator of cognitive processing depth: faster responses may reflect heuristic thinking, while slower, more deliberate responses suggest greater cognitive reflection following the intervention. In addition, the study includes the Cognitive Reflection Test (Thomson & Oppenheimer, 2016) to examine whether the intervention enhances cognitive reflection, thereby reducing the likelihood of sharing unverified information.
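The indirect (list experiment) measure is typically analyzed with the standard difference-in-means estimator: respondents shown the baseline list report how many innocuous items apply to them, respondents shown the longer list see the same items plus the sensitive one (here, sharing unverified content), and the difference in mean counts estimates the behavior's prevalence without anyone admitting to it directly. A minimal sketch with hypothetical counts (not study data):

```python
from statistics import mean

def list_experiment_prevalence(counts_with_sensitive, counts_baseline):
    """Difference-in-means estimator for a list experiment.

    counts_with_sensitive: item counts from respondents shown the list
    that includes the sensitive item.
    counts_baseline: item counts from respondents shown the baseline
    list only. The mean difference estimates prevalence of the
    sensitive behavior.
    """
    return mean(counts_with_sensitive) - mean(counts_baseline)

# Hypothetical illustration only (not study data):
prevalence = list_experiment_prevalence([2, 3, 1, 4, 2, 3],
                                        [1, 2, 1, 3, 2, 2])
print(round(prevalence, 2))  # estimated share engaging in the behavior
```

The estimator is unbiased under random assignment of list versions, which is why the registry pairs it with the direct self-report as a robustness check.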

Reference
Pham, T., Goto, D., & Tran, D. (2024). Child online safety education: A program evaluation combining a randomized controlled trial and list experiments in Vietnam. Computers in Human Behavior, 156, 108225.
Thomson, K. S., & Oppenheimer, D. M. (2016). Investigating an alternate form of the cognitive reflection test. Judgment and Decision Making, 11(1), 99–113.
Zhang, Z., & Cheng, Z. (2024). Users’ unverified information-sharing behavior on social media: The role of reasoned and social reactive pathways. Acta Psychologica, 245(October 2023), 104215. https://doi.org/10.1016/j.actpsy.2024.104215

Primary Outcomes (explanation)

Secondary Outcomes

Secondary Outcomes (end points)
Secondary Outcomes (explanation)

Experimental Design

Experimental Design
This study employs a randomized controlled experimental design to assess the impact of a behavioral video intervention and follow-up nudges on reducing high-risk online information sharing. Participants were randomly assigned to either a treatment or control group. At baseline, all participants completed measures of high-risk online behavior using both direct self-reports and an indirect list experiment. They were also assessed for cognitive biases, specifically anchoring and availability. Following this, participants viewed a four-minute video tailored to their group: the treatment group watched a video promoting bias awareness and critical-thinking strategies, while the control group viewed a placebo video unrelated to online behavior, focusing instead on tips for attending wedding or anniversary parties.

Immediately after the video, participants in both groups completed attention and understanding checks, a Cognitive Reflection Test (CRT), and responded to outcome measures including the likelihood of sharing verified and unverified information and the timing of their responses to sharing prompts. These assessments aimed to capture both immediate cognitive engagement and behavioral intent following the intervention. Demographic characteristics, internet use frequency, and prior scam exposure were collected as control variables.

In Phase II, only participants in the treatment group received brief behavioral reminder messages every two weeks via SMS or Zalo. These nudges were designed to reinforce the bias-awareness content of the initial video. At the end of the study, an endline survey was administered to both groups to reassess high-risk behavior through the same indirect and direct methods used at baseline, as well as to re-measure information-sharing likelihood and response timing. This design allows for evaluating both the short-term and sustained impacts of the intervention and nudges on participants' online behavior.
Experimental Design Details
Randomization Method
At the individual level, using the automatic assignment feature of the Survey Solutions system.
Randomization Unit
Individual level
Was the treatment clustered?
No

Experiment Characteristics

Sample size: planned number of clusters
Randomization is at the individual level, so there are no clusters; approximately 500 individuals (about 250 per arm) participate.
Sample size: planned number of observations
Approximately 250 migrants are randomly assigned to each treatment arm, resulting in about 500 migrants in total. With two rounds of data collection (baseline and endline), this yields approximately 1,000 observations.
Sample size (or number of clusters) by treatment arms
Approximately 250 migrants per arm (treatment and control), for about 500 migrants in total.
Minimum detectable effect size for main outcomes (accounting for sample design and clustering)
Given a sample of 250 individuals per group (500 total), measured at baseline and endline, the minimum detectable effect size (MDES) is approximately 0.07–0.09 units, assuming an outcome standard deviation of 0.4–0.5. This corresponds to detecting a change of about 14%–18% of the mean outcome (if mean = 0.5), with 80% power and 5% significance, accounting for repeated measures and modest within-subject correlation.
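The registered 0.07–0.09 range can be reproduced under a standard power approximation for a two-arm trial with baseline covariate adjustment (ANCOVA). The baseline–endline outcome correlation of 0.7 used below is our assumption for illustration, not a registered parameter; a minimal sketch:

```python
from statistics import NormalDist

def mdes_ancova(n_per_arm, sd, rho, alpha=0.05, power=0.80):
    """Approximate MDES for a two-arm individually randomized trial
    with baseline covariate adjustment (ANCOVA):

        MDES = (z_{1-alpha/2} + z_{power}) * sd * sqrt(2/n) * sqrt(1 - rho^2)

    where rho is the baseline-endline correlation of the outcome.
    """
    z = NormalDist()
    z_crit = z.inv_cdf(1 - alpha / 2) + z.inv_cdf(power)
    return z_crit * sd * (2 / n_per_arm) ** 0.5 * (1 - rho ** 2) ** 0.5

# With rho = 0.7 (assumed), sd = 0.4-0.5 gives roughly the registered
# 0.07-0.09 range:
for sd in (0.4, 0.5):
    print(f"sd={sd}: MDES ~ {mdes_ancova(250, sd, rho=0.7):.3f}")
```

Without baseline adjustment (rho = 0), the same sample would detect only about 0.10–0.13, which is why the repeated-measures design matters for the stated range.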
Supporting Documents and Materials

There is information in this trial unavailable to the public.
IRB

Institutional Review Boards (IRBs)

IRB Name
Economy and Environment Partnership for Southeast Asia (EEPSEA)
IRB Approval Date
2025-09-30
IRB Approval Number
EEPSEA-HTH-300925-01
Analysis Plan

There is information in this trial unavailable to the public.

Post-Trial

Post Trial Information

Study Withdrawal

There is information in this trial unavailable to the public.

Intervention

Is the intervention completed?
No
Data Collection Complete
Data Publication

Data Publication

Is public data available?
No

Program Files

Program Files
Reports, Papers & Other Materials

Relevant Paper(s)

Reports & Other Materials