Beyond Becker And Phelps: Experimental Evidence On Hiring Discrimination From Saudi Arabia

Last registered on November 10, 2025

Pre-Trial

Trial Information

General Information

Title
Beyond Becker And Phelps: Experimental Evidence On Hiring Discrimination From Saudi Arabia
RCT ID
AEARCTR-0017052
Initial registration date
November 07, 2025

Initial registration date is when the trial was registered.

It corresponds to when the registration was submitted to the Registry to be reviewed for publication.

First published
November 10, 2025, 9:53 AM EST

First published corresponds to when the trial was first made public on the Registry after being reviewed.

Locations

There is information in this trial unavailable to the public.

Primary Investigator

Affiliation
King Faisal University; University of Exeter

Other Primary Investigator(s)

PI Affiliation
University of Exeter

Additional Trial Information

Status
In development
Start date
2026-01-01
End date
2028-09-28
Secondary IDs
Prior work
This trial does not extend or rely on any prior RCTs.
Abstract
A central puzzle in labour economics is not simply whether discrimination exists, but why. Decades of audit studies and field experiments show persistent bias in hiring (Bertrand and Mullainathan 2004; Oreopoulos 2011), yet the underlying mechanisms remain contested. Do employers discriminate because they dislike certain groups (Becker 1957)? Because they rationally infer productivity from noisy signals (Phelps 1972)? Or because they conform to perceived organizational norms (Arrow 1973; Rivera 2012)?
These mechanisms are not observationally equivalent, but they are rarely distinguished in empirical research. Existing audit studies and résumé experiments typically conflate them, because they present evaluators with résumés containing perfectly observed signals, thereby ruling out statistical discrimination from the outset. Social-norm influences are also difficult to isolate, as field settings often entangle peer effects with monetary or institutional incentives.
This study offers the first experimental design capable of jointly disentangling all three mechanisms within a single unified framework. We achieve this by integrating noisy productivity signals and norm-based treatments into a carefully controlled résumé-screening task. To our knowledge, no prior experiment has combined these elements, nor has any been conducted in the distinctive institutional setting of Saudi Arabia, a high-context, authoritarian labour market where religion, nationality, and cultural fit are particularly salient. This dual novelty, combining theoretical innovation with contextual originality, ensures the contribution is both globally relevant and locally significant.
External Link(s)

Registration Citation

Citation
Alsaleh, Bashayer and Stephane Hlaimi. 2025. "Beyond Becker And Phelps: Experimental Evidence On Hiring Discrimination From Saudi Arabia." AEA RCT Registry. November 10. https://doi.org/10.1257/rct.17052-1.0
Experimental Details

Interventions

Intervention(s)
In each round, participants are presented with five résumés and asked to select the two candidates whom they believe would maximize company profits. The résumés are drawn from a pool of approximately 120 profiles generated using a 2 × 2 × 2 × 2 factorial design that varies along four identity dimensions:
(1) Citizenship (Saudi vs. non-Saudi),
(2) Religion (Muslim vs. non-Muslim),
(3) Education (domestic vs. international university), and
(4) Cultural alignment (high vs. low, signaled via hobbies, extracurricular activities, and language fluency).

This intervention allows us to identify patterns of labor-market discrimination and preference formation across repeated discrete-choice rounds.
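The registration does not include implementation code; the following is a minimal illustrative sketch of how the 2 × 2 × 2 × 2 résumé pool and the five-résumé choice sets could be constructed. All names, and the replication count of 8 per factorial cell (16 × 8 = 128 ≈ 120 profiles), are assumptions for illustration only.

```python
from itertools import product
import random

# The four identity dimensions described in the intervention.
DIMENSIONS = {
    "citizenship": ["Saudi", "non-Saudi"],
    "religion": ["Muslim", "non-Muslim"],
    "education": ["domestic", "international"],
    "cultural_alignment": ["high", "low"],
}

# The 16 factorial cells; the ~120-profile pool presumably replicates each
# cell with varied non-identity details (names, work histories, etc.).
cells = [dict(zip(DIMENSIONS, combo)) for combo in product(*DIMENSIONS.values())]
pool = [dict(cell, replicate=r) for cell in cells for r in range(8)]  # 128 profiles

def draw_round(rng, pool, k=5):
    """Randomly draw one round's five-resume choice set from the pool."""
    return rng.sample(pool, k)

rng = random.Random(42)  # seeded for reproducibility
round_resumes = draw_round(rng, pool)
print(len(cells), len(pool), len(round_resumes))  # 16 128 5
```

In practice the choice sets would likely be constructed with balance constraints across rounds rather than by simple random sampling, but the factorial structure is the same.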
Intervention Start Date
2026-01-01
Intervention End Date
2028-09-28

Primary Outcomes

Primary Outcomes (end points)
Hiring Efficiency
Primary Outcomes (explanation)

Secondary Outcomes

Secondary Outcomes (end points)
1. Productivity Cost of Bias
2. Belief Updating
3. Norm Responsiveness
Secondary Outcomes (explanation)

Experimental Design

Experimental Design
Experimental survey with paired résumé evaluations (online experiment)
Experimental Design Details
Not available
Randomization Method
Randomization will be performed by a computer algorithm that randomly assigns treatment conditions (e.g., CV characteristics) across the 120 rounds for each participant.
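As a hedged sketch of the within-subject randomization described above (function names and the per-participant seeding scheme are assumptions, not the registered procedure), each participant could receive an independent, reproducible random ordering of the 120 CV-evaluation rounds:

```python
import random

N_ROUNDS = 120  # CV evaluations per participant, as stated in the registration

def assign_rounds(participant_id, n_rounds=N_ROUNDS):
    """Seed per participant so each ordering is reproducible yet independent."""
    rng = random.Random(participant_id)
    order = list(range(n_rounds))
    rng.shuffle(order)
    return order

a = assign_rounds(participant_id=1)
b = assign_rounds(participant_id=2)
print(len(a), sorted(a) == list(range(N_ROUNDS)))  # 120 True
```

Seeding by participant ID makes every assignment auditable after the fact, which is useful for a pre-registered design.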
Randomization Unit
Individual (within-subject randomization)
Was the treatment clustered?
No

Experiment Characteristics

Sample size: planned number of clusters
Each participant represents one independent decision-maker. The treatment is applied within-subject across 120 CV evaluations, so there is no clustering at a higher level.
Sample size: planned number of observations
Over 20,000 decision-level observations for each group, ensuring sufficient statistical power for within- and between-group analyses.
Sample size (or number of clusters) by treatment arms
• Participant Cohorts: The sample will include two balanced groups:
1. University students or early-career entrants (n = 200)
2. HR professionals with recruitment experience (n = 200)
Minimum detectable effect size for main outcomes (accounting for sample design and clustering)
• Power Analysis Details: With ~400 participants each completing 25–30 decision tasks, the study will generate over 20,000 individual hiring decisions. This provides at least 80% statistical power to detect effect sizes of 5 percentage points at a conventional 5% significance level (α = 0.05).
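The stated power target can be checked with a standard normal-approximation sample-size formula for a two-proportion comparison. The baseline selection rate of 50% below is an assumption for illustration (the registration does not state one), and the result counts decisions per group before any adjustment for within-participant clustering:

```python
from statistics import NormalDist
import math

def n_per_group(p1, p2, alpha=0.05, power=0.80):
    """Decisions per group to detect p2 - p1 via a two-sided two-proportion z-test."""
    z_a = NormalDist().inv_cdf(1 - alpha / 2)  # critical value, two-sided alpha
    z_b = NormalDist().inv_cdf(power)          # power quantile
    var = p1 * (1 - p1) + p2 * (1 - p2)
    return math.ceil((z_a + z_b) ** 2 * var / (p2 - p1) ** 2)

# Hypothetical 50% baseline vs a 5-percentage-point effect
print(n_per_group(0.50, 0.55))  # ~1,562 decisions per group
```

Under these assumptions, roughly 1,600 decisions per comparison group suffice, so 20,000+ decisions leave ample headroom even after inflating for a design effect from repeated decisions per participant.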
IRB

Institutional Review Boards (IRBs)

IRB Name
IRB Approval Date
IRB Approval Number