The Eyes Never Lie: Understanding Discrimination through Virtual Reality

Last registered on March 15, 2026

Pre-Trial

Trial Information

General Information

Title
The Eyes Never Lie: Understanding Discrimination through Virtual Reality
RCT ID
AEARCTR-0016945
Initial registration date
October 07, 2025

First published
October 13, 2025, 10:08 AM EDT

Last updated
March 15, 2026, 7:29 PM EDT

Locations

There is information in this trial unavailable to the public.

Primary Investigator

Affiliation
Monash University

Other Primary Investigator(s)

PI Affiliation
Monash University
PI Affiliation
Monash University
PI Affiliation
Monash University

Additional Trial Information

Status
In development
Start date
2026-03-16
End date
2026-12-31
Secondary IDs
Prior work
This trial does not extend or rely on any prior RCTs.
Abstract
A large body of research shows that social identity shapes how individuals are perceived, evaluated, and rewarded across a wide range of contexts. Yet, less is known about how these disparities emerge in real time—specifically, whether they stem from differences in attention and comprehension or from judgments made after information has been processed. This study investigates how characteristics of an information source influence the extent to which identical content is attended to and retained. Using a controlled experimental environment that equalizes delivery across conditions, we measure participants’ engagement and short-run learning under varying identity cues. By isolating early-stage mechanisms of information processing, the study contributes to understanding how subtle biases can affect learning and communication even when content quality is held constant.
External Link(s)

Registration Citation

Citation
Della Lena, Sebastiano et al. 2026. "The Eyes Never Lie: Understanding Discrimination through Virtual Reality." AEA RCT Registry. March 15. https://doi.org/10.1257/rct.16945-2.0
Experimental Details

Interventions

Intervention(s)
Participants will take part in a controlled experimental task designed to examine how characteristics of an information source influence attention and learning. Each participant is exposed to a series of brief informational messages delivered by speakers who vary in apparent identity. The study measures how participants engage with and retain information under different identity conditions. The intervention focuses on understanding mechanisms of attention and information processing rather than the content of the messages themselves.
Intervention Start Date
2026-03-16
Intervention End Date
2026-12-31

Primary Outcomes

Primary Outcomes (end points)
We will focus on two primary outcomes:
1) Face gaze share.
2) Avatar story recall.
Primary Outcomes (explanation)
1) Face gaze share measures the proportion of visual attention directed toward the avatar’s face during the interaction. It is defined as the share of total fixation time that falls within the predefined face region of interest (ROI) of the avatar. Fixations are identified using the eye-tracking algorithm implemented in the VR headset software. The outcome ranges from 0 to 1, with higher values indicating greater allocation of visual attention to the avatar’s face. The ROIs included in the definition of the face are the left and right eyes, the left and right cheeks, and the lips. The ROIs excluded from the definition are the face neighborhood, the left and right shoulders, the upper torso, the table, and the background.
2) Avatar story recall measures whether participants correctly recall key information contained in the story (randomly) associated with each avatar. After each interaction, participants answer four questions about the story presented by the avatar. Each answer is coded as an indicator equal to 1 if the participant answers the question about the avatar’s story correctly and 0 otherwise. We will then construct a variable representing the share of correct answers across the four questions.
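As an illustration, the face gaze share outcome could be computed along the following lines. This is a minimal sketch with a hypothetical fixation-record format (ROI label plus duration); the actual computation depends on the VR headset software's eye-tracking export.

```python
# ROI labels are illustrative stand-ins for the face regions listed above.
FACE_ROIS = {"left_eye", "right_eye", "left_cheek", "right_cheek", "lips"}

def face_gaze_share(fixations):
    """fixations: list of (roi_label, duration_ms) tuples for one interaction.
    Returns the share of total fixation time falling on face ROIs (0 to 1)."""
    total = sum(duration for _, duration in fixations)
    if total == 0:
        return 0.0
    on_face = sum(duration for roi, duration in fixations if roi in FACE_ROIS)
    return on_face / total
```

For example, a participant with 300 ms of fixation on the lips and 100 ms on the background would have a face gaze share of 0.75.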

Secondary Outcomes

Secondary Outcomes (end points)
The variables we will use as secondary outcomes are:
1) Trust behavior toward avatars: Amount of money allocated to each avatar in the trust game.
2) Beliefs about trustworthiness: Participant expectations about how much each avatar returned in the trust game.
3) Avatar and story evaluations: Self-reported enjoyment and liking of the speakers and stories.
4) Post-experience social attitudes: Responses to survey questions on immigration, gender norms, and broader public policy preferences measured after the VR experience.
Secondary Outcomes (explanation)
Secondary outcomes capture behavioral responses toward avatars, beliefs about their trustworthiness, subjective evaluations of the VR interaction, and changes in social attitudes following the experience.
• Trust behavior: Participants receive an endowment of $5 and decide how much to allocate to each of four avatars. Allocations are made independently for each avatar, and the amount sent is tripled before reaching the avatar. The amount allocated to each avatar constitutes the behavioral measure of trust toward that character.
• Expected reciprocity: After the trust game outcome is revealed, participants report how much they believe each avatar returned. These responses capture beliefs about the trustworthiness and expected reciprocity of each avatar.
• Avatar and story evaluations: Participants rate how much they enjoyed each speaker and each story on a 0–10 scale. These measures capture subjective reactions to the avatars and narratives presented during the VR experience.
• Post-experience social attitudes: After the VR interaction, participants respond to a set of attitudinal questions regarding immigration, gender roles, and broader public policy questions. Responses are measured on five-point Likert scales (-2 = strongly disagree; -1 = somewhat disagree; 0 = neither agree nor disagree; 1 = somewhat agree; 2 = strongly agree) and will be analyzed both individually and through composite indices constructed using principal component analysis.
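A common way to build the composite attitude indices mentioned above is to score each participant on the first principal component of the standardized Likert items. The sketch below (with made-up item data) shows one such construction; the registration does not specify the exact PCA implementation, so treat this as an assumption.

```python
import numpy as np

def first_pc_index(responses):
    """responses: (n_participants, n_items) array of Likert scores (-2..2).
    Returns each participant's score on the first principal component,
    a standard composite index for a battery of attitude items."""
    X = np.asarray(responses, dtype=float)
    X = (X - X.mean(axis=0)) / X.std(axis=0)  # standardize each item
    corr = np.corrcoef(X, rowvar=False)       # item correlation matrix
    eigvals, eigvecs = np.linalg.eigh(corr)   # eigenvalues in ascending order
    w = eigvecs[:, -1]                        # loadings of the first PC
    if w.sum() < 0:                           # fix sign for interpretability
        w = -w
    return X @ w
```

With this sign convention, higher index values correspond to stronger agreement across the (positively correlated) items.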

Experimental Design

Experimental Design
The core experimental variation follows a 2×2 design based on two observable characteristics of the speakers: gender (male, female) and ethnicity (White, Black). This results in four avatar types.

Each avatar delivers one of four standardized stories. The pairing between avatars and stories is randomized across participants. Because four avatars can be paired with four stories, the experiment generates 16 possible avatar–story combinations.

The primary treatment variation of interest operates at the avatar identity level (gender × ethnicity). Story content is included as a control through story fixed effects in the analysis.
Experimental Design Details
Not available
Randomization Method
Randomization occurs at the individual participant level within each experimental session. Participants attend laboratory sessions with up to 20 individuals completing the task simultaneously; however, sessions serve only as logistical groupings and are not units of treatment assignment.

Each participant is exposed to four avatars and four stories during the VR interaction stage. The mapping between avatars and stories is randomized at the participant level, such that each story is delivered by one of the four avatar types (male–White, male–Black, female–White, female–Black).

Because four avatars can be paired with four stories, the randomization generates 16 possible avatar–story combinations across participants. This design ensures that story content and avatar characteristics vary independently in the sample.

In the analysis, treatment effects are identified from variation in avatar identity within each story, with story fixed effects included to control for differences in narrative content.
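The participant-level randomization described above can be sketched as follows. Avatar and story labels are placeholders, and the order shuffle is an assumption motivated by the order fixed effects mentioned in the power calculation.

```python
import random

AVATARS = ["male_white", "male_black", "female_white", "female_black"]
STORIES = ["story_1", "story_2", "story_3", "story_4"]  # placeholder labels

def draw_assignment(rng=random):
    """For one participant, randomly map the four stories to the four
    avatars (a bijection) and shuffle the presentation order."""
    stories = STORIES[:]
    rng.shuffle(stories)              # which story each avatar delivers
    pairing = dict(zip(AVATARS, stories))
    order = AVATARS[:]
    rng.shuffle(order)                # order of the four interactions
    return pairing, order
```

Across many participants, every one of the 16 avatar–story cells is realized, which is what lets avatar identity and story content vary independently in the sample.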
Randomization Unit
Randomization occurs at the individual participant level within each experimental session. Participants attend laboratory sessions with up to 20 individuals completing the task simultaneously; however, the session serves only as a logistical grouping. Within each session, participants are independently assigned to a randomized mapping between the four avatars and the four stories presented during the VR interaction stage.
Was the treatment clustered?
No

Experiment Characteristics

Sample size: planned number of clusters
Not applicable. Randomization occurs at the individual participant level, not at a clustered level. Participants are recruited and participate in laboratory sessions of up to 20 individuals, but sessions serve only as logistical groupings and are not units of treatment assignment.
Sample size: planned number of observations
1,200 individual participants. This is the planned target sample size for the study. If recruitment and resources allow, the final sample may exceed this number, but the study is designed around a target of 1,200 observations.
Sample size (or number of clusters) by treatment arms
Participants are randomly assigned at the individual level to one of 16 avatar–story combinations, reflecting the crossing of avatar identity and message story. The primary treatment effects of interest operate at the avatar level. Accordingly, the main specifications pool observations across stories and include story fixed effects, so that treatment effects are identified from variation in avatar assignment within each story. Analyses at the avatar–story combination level are treated as secondary.
Minimum detectable effect size for main outcomes (accounting for sample design and clustering)
The study targets 1,200 participants, each contributing four avatar-level observations, for a total of 4,800 participant–avatar observations. The main power calculation is based on the primary avatar-level specification with participant fixed effects, story fixed effects, and order fixed effects, and with standard errors clustered at the participant level. In the absence of pilot data, the calculation relies on standard design assumptions. Specifically, we assume balanced treatment assignment, 80% power, a 5% two-sided significance level, a within-participant correlation of 0.4 to account for repeated observations within individuals, and an outcome standard deviation of 0.25 for primary outcomes measured on a 0–1 scale. Under these assumptions, the minimum detectable effect for the main avatar-level treatment effects is approximately 0.12 standard deviations, corresponding to about 0.03 points on the 0–1 scale, or roughly 3 percentage points.
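The stated minimum detectable effect can be checked with a back-of-the-envelope calculation under the assumptions listed above (balanced two-sided comparison, design effect for four correlated observations per participant). This is only a sketch; the authors' exact power computation may differ.

```python
from math import sqrt
from statistics import NormalDist

# Assumptions from the registration: 4,800 participant-avatar observations,
# 4 observations per participant, within-participant correlation 0.4,
# 80% power, 5% two-sided significance, outcome SD = 0.25 on a 0-1 scale.
n_obs, m, rho, sd = 4800, 4, 0.4, 0.25

deff = 1 + (m - 1) * rho                   # design effect for clustering
n_eff = n_obs / deff                       # effective sample size
z = NormalDist().inv_cdf(0.975) + NormalDist().inv_cdf(0.80)
mde_sd = z * sqrt(4 / n_eff)               # MDE in standard-deviation units
mde_points = mde_sd * sd                   # MDE on the 0-1 outcome scale
print(round(mde_sd, 2), round(mde_points, 2))  # prints: 0.12 0.03
```

This reproduces the registered figures: roughly 0.12 standard deviations, or about 3 percentage points on a 0–1 outcome.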
Supporting Documents and Materials

There is information in this trial unavailable to the public.

IRB

Institutional Review Boards (IRBs)

IRB Name
Monash University Human Research Ethics Committee
IRB Approval Date
2026-03-09
IRB Approval Number
49532
Analysis Plan

There is information in this trial unavailable to the public.