AI-supported Skills Elicitation, Personalized Information, and Youth Unemployment in South Africa

Last registered on November 25, 2025

Pre-Trial

Trial Information

General Information

Title
AI-supported Skills Elicitation, Personalized Information, and Youth Unemployment in South Africa
RCT ID
AEARCTR-0017231
Initial registration date
November 18, 2025

First published
November 25, 2025, 7:30 AM EST

Locations

There is information in this trial unavailable to the public.

Primary Investigator

Affiliation
University of Oxford

Other Primary Investigator(s)

PI Affiliation
Oxford Martin School
PI Affiliation
Oxford Martin School

Additional Trial Information

Status
In development
Start date
2025-11-20
End date
2026-06-30
Secondary IDs
Prior work
This trial does not extend or rely on any prior RCTs.
Abstract
We will run a randomized evaluation with unemployed youth (ages 18–35) on South Africa’s SAYouth.mobi platform, in partnership with Harambee, to test scalable ways of helping jobseekers identify their skills and act on better information. The study has two parts. First, we offer an AI-supported skills-elicitation and CV-builder conversation (powered by a large language model) to see whether it helps young people surface prior experience, describe their skills more clearly, and progress in job search. Second, we test how to present personalized labour-market facts so they motivate rather than discourage. Some participants receive only a concise “good-news” message that highlights which of their skill areas are in higher demand (“Pull-only”). Others receive the same recommendation plus an added “bad-news” message about all skill areas with lower demand (“Push+Pull”). Both versions recommend the same action, allowing us to isolate whether adding unpleasant facts reduces engagement (information avoidance) or whether more disclosure helps, as standard theory predicts. Primary outcomes include on-platform behaviour (clicks and applications), with follow-ups on beliefs, confidence, well-being, and near-term employment. We plan to enroll roughly 4,000 SAYouth users and use the results to inform how public employment services and digital job platforms should personalize advice and disclosure at scale.
External Link(s)

Registration Citation

Citation
Baier, Jasmin, Christian Meyer and Aarya Shinde. 2025. "AI-supported Skills Elicitation, Personalized Information, and Youth Unemployment in South Africa." AEA RCT Registry. November 25. https://doi.org/10.1257/rct.17231-1.0
Sponsors & Partners

There is information in this trial unavailable to the public.
Experimental Details

Interventions

Intervention(s)
We implement a two-part, individual-level randomized intervention on SAYouth.mobi (in partnership with Harambee) targeting unemployed South African youth (18–35).
1. AI-supported skills elicitation & CV builder (vs. status quo and a static skills-picking form):
Half of participants are offered an AI conversation (embedded in our Compass tool) that helps them surface prior experience (including informal work), translate it into a clearer skills profile, and generate a CV; the other half completes a control online task and proceeds with business-as-usual platform resources during the study window. We track clicks on and usage of the chatbot and the links it provides.
2. Personalized labour-market information (cross-randomized):
Participants are also randomized to one of two concise, personalized messages that provide information on current labour-market demand for their elicited skills.
• Pull-only (“good-news only”): highlights one of the user’s skills that is above their portfolio average in current vacancy demand.
• Push+Pull (“good-news + bad-news”): shows the same good-news line, plus an added “bad-news” line about the user's skill areas with below-average demand. This isolates whether adding unpleasant facts discourages action despite identical recommended next steps. We hold message length constant across arms.

Design notes and delivery: Recruitment and consent occur on-platform; up to ~4,000 users are enrolled. Messages are delivered in-app and/or by link and are brief, quantitative, and auditable (shares/counts derived from tagged vacancies). Participant incentives (not tied to treatment assignment): participants in both groups receive R30 for completing the online task, to offset internet costs; those who also complete the phone survey receive an extra R50.
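
For illustration, a minimal sketch of how the two message arms could be composed from tagged-vacancy shares is below. The function name, the input format, and the message wording are hypothetical, not the production copy, and the sketch does not implement the length-matching used in the actual arms; both arms recommend the same next step, as in the design.

```python
# Hypothetical sketch of the two message arms. `skill_demand` maps each
# elicited skill to its share of tagged vacancies; names and wording are
# illustrative. Unlike the actual arms, message length is not held constant.

def build_message(skill_demand: dict[str, float], arm: str) -> str:
    avg = sum(skill_demand.values()) / len(skill_demand)
    # Good news: one skill above the user's portfolio-average demand.
    best_skill, best_share = max(skill_demand.items(), key=lambda kv: kv[1])
    pull = (f"Good news: {best_share:.0%} of current vacancies ask for "
            f"{best_skill}, above your portfolio average.")
    action = f"Next step: explore openings that match {best_skill}."
    if arm == "pull_only":
        return f"{pull} {action}"
    # Push+Pull adds a bad-news line on below-average skill areas;
    # the recommended action is identical across arms.
    low = [s for s, share in skill_demand.items() if share < avg]
    push = f"Demand is below your average for: {', '.join(low)}."
    return f"{pull} {push} {action}"

demand = {"customer service": 0.31, "data entry": 0.12, "retail": 0.08}
print(build_message(demand, "pull_only"))
print(build_message(demand, "push_pull"))
```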
Intervention Start Date
2025-11-24
Intervention End Date
2025-12-23

Primary Outcomes

Primary Outcomes (end points)
These may still be reduced in the final PAP (uploaded before endline).
Primary treatment: job interviews/callbacks, job offers, employment (at ~3 months), earnings (at ~3 months), confidence in skill communication
Cross-randomization: information demand/avoidance, level of job search effort, variance in job search effort, job search directedness, posterior beliefs about demand for one’s skills
Primary Outcomes (explanation)

Secondary Outcomes

Secondary Outcomes (end points)
These may still be reduced in the final PAP (uploaded before endline).
Primary treatment: time use on unpaid work, employment quality, job satisfaction (adjusted Cantril ladder), reservation wage, aspirations
Cross-randomization: motivation, locus of control, recall of demand for one’s skills, job-market-specific self-efficacy, short CES-D, substitution away from non-recommended skill areas (applications to “X” falling vs. “Y” rising), and reporting of total vs. reallocated search
Secondary Outcomes (explanation)

Experimental Design

Experimental Design
We conduct an individual-level randomized evaluation with unemployed South African youth (ages 18–35) on the national employment platform SAYouth.mobi, in partnership with Harambee Youth Employment Accelerator. The goal is to test scalable, low-cost ways of helping jobseekers identify and communicate their skills, update beliefs about their labour-market prospects, and act on better information.
The experiment follows a 2 × 2 factorial design with two interventions cross-randomized at the individual level:

1. AI-supported skills elicitation & CV builder (vs. status quo).
Half of participants are offered access to an AI conversation that helps them recall prior experiences (including informal work), translate them into clearly expressed skills, and generate a CV. The comparison group completes a control online task (to make effort levels and compensation equal) and continues to use standard SAYouth.mobi tools during the same period.

2. Personalized labour-market information (cross-randomized).
Participants receive short personalized messages summarizing how their skills relate to current job demand. Both versions recommend the same next step (e.g., explore or apply for opportunities in line with highly demanded skill areas), but one presents only positive information, while the other combines positive and negative facts. This variation lets us test whether adding “bad news” discourages engagement or, because it provides a more complete picture, sharpens engagement and belief updating. The design ensures that all participants are exposed to realistic, actionable information.

Participants are randomly assigned by the platform at the time of onboarding. Recruitment, consent, and delivery occur online within SAYouth.mobi and at the start of the online task/conversation with the AI tool. We plan to enroll roughly 4,000 participants, with follow-up data collected from platform usage logs and phone surveys at ~3 months after onboarding.
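
For concreteness, a minimal sketch of this individual-level cross-randomization is below, using the planned arm sizes for the skills-elicitation factor (2,000 / 1,800 / 200). The even split of the information arms and the assignment mechanics are assumptions; the actual assignment happens inside the platform at onboarding.

```python
import random

# Minimal sketch of the 2 x 2 cross-randomization at the individual level.
# Allocation for the skills-elicitation factor follows the planned arm
# sizes (2,000 / 1,800 / 200); the 50/50 split of the information arms
# is an assumption, not confirmed by the registration.

random.seed(17231)  # RCT ID used as an illustrative seed

def assign(user_id: str) -> dict:
    skills_arm = random.choices(
        ["ai_chatbot", "bau_control", "static_form"],
        weights=[2000, 1800, 200],
    )[0]
    info_arm = random.choice(["pull_only", "push_pull"])
    return {"user_id": user_id, "skills_arm": skills_arm, "info_arm": info_arm}

assignments = [assign(f"user_{i:04d}") for i in range(4000)]
```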

Primary outcomes include employment measures, behavioural measures (click-throughs and job applications), and belief updating about job prospects. Secondary outcomes include self-confidence, well-being, and other downstream employment indicators.
Experimental Design Details
Not available
Randomization Method
Online, by the platform, after consent/acceptance of participation
Randomization Unit
individual
Was the treatment clustered?
No

Experiment Characteristics

Sample size: planned number of clusters
NA
Sample size: planned number of observations
4000
Sample size (or number of clusters) by treatment arms
Skills-elicitation factor: 2,000 AI treatment, 1,800 business-as-usual control, 200 static-form control. The two personalized-information arms are cross-randomized at the individual level.
Minimum detectable effect size for main outcomes (accounting for sample design and clustering)
The MDE is expressed at the individual level and standardized (units = standard deviations). Under our pre-analysis assumptions (two-sided α = 0.05, power = 0.80, equal allocation across arms, individual-level randomization, planned sample ≈ 4,000), the standardized minimum detectable effect is 0.089 standard deviations. In other words, we are powered to detect a treatment effect equal to 0.089 × SD of the outcome (0.089 × √[p(1−p)] for binary outcomes).
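
The stated figure can be reproduced with the standard two-sample power formula; a sketch is below, assuming two equal arms of 2,000 (the exact comparison arms may differ in the final analysis).

```python
from scipy.stats import norm

# Standardized MDE for a two-sided test comparing two arms:
# MDE = (z_{1-alpha/2} + z_{power}) * sqrt(1/n1 + 1/n2), in SD units.
alpha, power = 0.05, 0.80
n1 = n2 = 2000  # assumes a 50/50 split of the planned 4,000

z_alpha = norm.ppf(1 - alpha / 2)   # ~1.96
z_power = norm.ppf(power)           # ~0.84
mde = (z_alpha + z_power) * (1 / n1 + 1 / n2) ** 0.5
print(f"MDE = {mde:.3f} SD")        # ~0.089

# For a binary outcome with base rate p, the detectable effect in
# proportion units is mde * sqrt(p * (1 - p)).
p = 0.30
print(f"Binary outcome at p={p}: {mde * (p * (1 - p)) ** 0.5:.3f}")
```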
IRB

Institutional Review Boards (IRBs)

IRB Name
University of Oxford Blavatnik School of Government (BSG) DREC
IRB Approval Date
2025-05-01
IRB Approval Number
1742799
IRB Name
University of Cape Town Faculty of Commerce Research Ethics Committee (REC)
IRB Approval Date
2025-07-04
IRB Approval Number
COM/01926/2025