Can we influence teenagers' decision to study economics? Experimental evidence on stereotypes, self-efficacy and academic choice.

Last registered on October 24, 2022


Trial Information

General Information

Can we influence teenagers' decision to study economics? Experimental evidence on stereotypes, self-efficacy and academic choice.
Initial registration date
October 12, 2022


First published
October 17, 2022, 5:28 PM EDT


Last updated
October 24, 2022, 3:49 AM EDT




Primary Investigator

KU Leuven

Other Primary Investigator(s)

PI Affiliation
KU Leuven
PI Affiliation
KU Leuven

Additional Trial Information

In development
Start date
End date
Secondary IDs
Prior work
This trial does not extend or rely on any prior RCTs.
Important decisions, such as those related to academic or job choices, can have a huge impact on people's lives. Although the evidence on social choice theory is substantial, causal evidence within an economics framework is still scarce. In this paper we propose a causal framework to analyse the impact of self-efficacy, outcome expectations and social image on the decision of high school students to study economics at university. To this end, we propose a randomized controlled trial with two interventions: one focused on improving students' economic literacy, and a second focused on changing stereotypes often associated with economists. To analyse our framework, causal mediation analysis and heterogeneity analysis will be used. As a secondary outcome, we want to verify the effect of the treatment across political preferences, gender, and socio-economic status.
External Link(s)

Registration Citation

De Cort, Willem, Kristof De Witte and Francisco Pitthan. 2022. "Can we influence teenagers' decision to study economics? Experimental evidence on stereotypes, self-efficacy and academic choice." AEA RCT Registry. October 24.
Sponsors & Partners

There is information in this trial unavailable to the public.
Experimental Details


Schools are assigned to the following three experimental conditions:
- Baseline group: Students do not receive any treatment.
- Treatment group 1: Students receive an economics education course, which gives a sample of what students are expected to see in higher education economics classes.
- Treatment group 2: In addition to the content received by the treatment group 1, students receive videos, infographics and information about ‘non-standard’ careers in economics with a high social impact.
Intervention Start Date
Intervention End Date

Primary Outcomes

Primary Outcomes (end points)
(1) Likelihood of studying economics in higher education.
(2) Economic literacy.
(3) Self-efficacy.
(4) Outcome expectation.
(5) Economics stereotypes.
(6) Social image.
(7) Political views.
Primary Outcomes (explanation)
(1) The likelihood of studying economics in higher education will be asked directly of students on a scale from 0 to 100. Students will also be asked about the likelihood of studying their best or second-best subject (i.e., besides economics) in higher education.
(2) Economic literacy is based on questions that measure students' proficiency in the topics they learned during the course. This survey was developed together with the teachers who helped to create the course materials.
(3) Self-efficacy will be measured with scales based on May (2009), Kundu & Ghose (2016), and Lent et al. (2016), adapted to the decision of students to study economics.
(4) Outcome expectation on study and job outcomes is based on the scale from Betz & Voyten (1997).
(5) Economic stereotypes will be measured by a Likert scale on how much students associate common stereotypes with economists.
(6) Social image will be measured by a Likert scale based on how positively or negatively students' social circle would evaluate their decision to study economics, and on how positively or negatively students themselves evaluate economic stereotypes.
(7) Political views will be estimated by a two-dimensional scale based on Evans et al. (1996), which measures left-right and libertarian-authoritarian values.
Besides the tests of the main outcome variables, pre-existing characteristics of students (e.g. grades in maths and language, family situation, previous interest in economics, socio-economic status) and of schools (e.g. private/public, region, performance in past standardized tests, funding per student) will also be collected.

Our main hypothesis is that the first intervention (i.e. a sample course on what students will learn in higher education economics) will improve economic literacy, self-efficacy and outcome expectations. These variables will then serve as mediating variables, which we expect to increase the likelihood of studying economics in higher education. For students who find the course too hard or uninteresting, this effect might be zero or negative. For the second intervention (i.e. videos, images, infographics and testimonies about non-standard jobs in economics), our hypothesis is that it will reduce negative stereotypes towards studying economics, which can influence the subjective social image of studying economics. Both social image and stereotypes can also serve as mediating variables that increase the likelihood of studying economics.
Moreover, as a secondary outcome, we would like to analyse heterogeneous treatment effects. More specifically, the treatment effect depending on (i) political views (we believe that the second intervention might have a larger effect size for left-leaning students); (ii) gender (considering differences in competitiveness, as in Buser et al. (2014), we expect the first intervention to have a larger effect size for girls); and (iii) socio-economic status (our hypothesis is that students with low socio-economic status might benefit more from both interventions than students in more favourable environments).
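As a rough illustration of the mediation logic above, the following Python sketch simulates data in which the treatment shifts a mediator (e.g. self-efficacy) that in turn raises the outcome, and recovers indirect and direct effects with the common product-of-coefficients approach. The registration does not specify the exact estimator, and all numbers and variable names here are made up for illustration.

```python
import numpy as np

rng = np.random.default_rng(0)
n = 2000
T = rng.integers(0, 2, n)                    # treatment indicator (e.g. the sample course)
M = 0.5 * T + rng.normal(size=n)             # mediator (e.g. self-efficacy)
Y = 0.3 * M + 0.1 * T + rng.normal(size=n)   # outcome (e.g. likelihood of studying economics)

def ols(y, X):
    """OLS coefficients with an intercept, via least squares."""
    X = np.column_stack([np.ones(len(y))] + list(X))
    return np.linalg.lstsq(X, y, rcond=None)[0]

a = ols(M, [T])[1]           # effect of treatment on mediator
b, c = ols(Y, [M, T])[1:3]   # effect of mediator (b) and direct treatment effect (c)
print(f"indirect effect ~ {a * b:.2f}, direct effect ~ {c:.2f}")
```

With the simulated coefficients, the indirect effect a·b should be close to 0.5 × 0.3 = 0.15 and the direct effect close to 0.1.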

Betz, N. E., & Voyten, K. K. (1997). Efficacy and outcome expectations influence career exploration and decidedness. The Career Development Quarterly, 46(2), 179-189.
Buser, T., Niederle, M., & Oosterbeek, H. (2014). Gender, competitiveness, and career choices. The Quarterly Journal of Economics, 129(3), 1409-1447.
Evans, G., Heath, A., & Lalljee, M. (1996). Measuring left-right and libertarian-authoritarian values in the British electorate. British Journal of Sociology, 93-112.
Kundu, A., & Ghose, A. (2016). The relationship between attitude and self-efficacy in mathematics among higher secondary students. Journal of Humanities and Social Science, 21(4), 25-31.
Lent, R. W., Ezeofor, I., Morrison, M. A., Penn, L. T., & Ireland, G. W. (2016). Applying the social cognitive model of career self-management to career exploration and decision-making. Journal of Vocational Behavior, 93, 47-57.
May, D. K. (2009). Mathematics self-efficacy and anxiety questionnaire (Doctoral dissertation, University of Georgia).

Secondary Outcomes

Secondary Outcomes (end points)
Secondary Outcomes (explanation)

Experimental Design

Experimental Design
Schools that registered for participation were randomized to the aforementioned three experimental conditions right after completion of the pre-test. We assessed the likelihood of studying economics, economic literacy, self-efficacy, outcome expectations, economic stereotypes, and social image of all students both before and after the course. Questions on political views and socio-economic status were asked only in the pre-test. Students in the baseline group completed the same tests at the same time as students in the treatment groups, even though they received no intervention between the tests. A second post-test will also be administered after a waiting period.
Experimental Design Details
- The students in the baseline group will complete the pre-test at the beginning of the experiment and a first post-test after four weeks. No intervention takes place between the two tests, but after the end of the experiment, schools in the baseline condition can receive the course material if they wish (for ethical reasons, so that no school is left behind).
- Students in the first treatment condition will also complete the pre-test at the beginning of the experiment. The schools can then schedule a date to deliver the course material (between October 27th 2022 and February 22nd 2023), and students will answer the post-test immediately after the course. The intervention consists of junior college course material, through which students can experience content similar to what is taught in higher education economics.
- Students in the second treatment group will follow the same schedule as students in the treatment 1 condition. They will also receive the junior college economics content given to the treatment 1 group, but in addition they will watch videos and read texts and infographics about non-traditional careers in economics.

Randomization Method
Schools will be randomly assigned to the different experimental conditions by a random number generator in Stata after completion of the pre-tests. The randomization will be stratified by school characteristics.
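The actual randomization is done in Stata, but the logic can be sketched in a few lines of Python (hypothetical strata and school names; a seeded shuffle within each stratum keeps the arms balanced per stratum):

```python
import random

def assign_schools(schools_by_stratum,
                   arms=("baseline", "treatment1", "treatment2"),
                   seed=42):
    """Stratified random assignment of schools to experimental arms.

    Within each stratum, schools are shuffled and then dealt out to the
    arms in turn, so arms stay balanced within every stratum.
    """
    rng = random.Random(seed)
    assignment = {}
    for stratum, schools in schools_by_stratum.items():
        shuffled = schools[:]
        rng.shuffle(shuffled)
        for i, school in enumerate(shuffled):
            assignment[school] = arms[i % len(arms)]
    return assignment

# Hypothetical strata (e.g. by school characteristics) and school names.
strata = {"urban": ["school_A", "school_B", "school_C"],
          "rural": ["school_D", "school_E", "school_F"]}
print(assign_schools(strata))
```

With three schools per stratum, each arm receives exactly one school from every stratum.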
Randomization Unit
The treatment was randomized at the school level. All students and teachers in the same school were assigned to the same experimental group, and all teachers in the same school received the same teaching material and instructions, in order to minimize spill-over effects and contamination across experimental groups within a school.
Was the treatment clustered?

Experiment Characteristics

Sample size: planned number of clusters
Given previous randomized controlled trials with Flemish schools in Belgium (e.g. Iterbeke et al., 2020; Maldonado et al., 2019; Pitthan & De Witte, 2022), our work aims to reach a similar number of participating schools (i.e. between 40 and 60).

Iterbeke, K., De Witte, K., Declercq, K., & Schelfhout, W. (2020). The effect of ability matching and differentiated instruction in financial literacy education. Evidence from two randomised control trials. Economics of Education Review, 78, 101949.
Maldonado, J. E., De Witte, K., & Declercq, K. (2019). The effects of parental involvement in homework. Two randomised controlled trials in financial education. FEB Research Report Department of Economics DPS19. 14.
Pitthan, F. & De Witte, K. (2022). Improving Financial Literacy by Mitigating Behavioural Biases: A Causal Mediation Analysis on the Effects of Behavioural-Based Financial Education. G53 Network Working paper series.
Sample size: planned number of observations
With an average of 45 participating students per school, we plan to reach around 2,700 students.
Sample size (or number of clusters) by treatment arms
Baseline group = 630 pupils, 14 schools
Treatment group 1 = 1035 pupils, 23 schools
Treatment group 2 = 1035 pupils, 23 schools
Average number of schools per condition = 20
Average number of pupils per condition = 900
Average number of pupils per school = 45
Minimum detectable effect size for main outcomes (accounting for sample design and clustering)
The computation is based on List et al. (2011) and accounts for intra-cluster correlation in the calculation of the minimal detectable effect size. In our planned experimental setting, we expect 20 schools in each experimental condition, with on average 45 students per school.

Details of the calculation: According to List et al. (2011), in a clustered design, the minimum number of observations in each experimental group can be computed as

n = 2(t_(α/2) + t_β)²(σ/δ)²(1 + (m − 1)ρ)

This implies that the minimum detectable effect size is

δ = σ / √(n / (2(t_(α/2) + t_β)²(1 + (m − 1)ρ)))

or, expressed as a fraction of a standard deviation,

δ/σ = 1 / √(n / (2(t_(α/2) + t_β)²(1 + (m − 1)ρ)))

Following the standards of the literature and the rules of thumb from List et al. (2011), we set the significance level to 0.05 and the power to 0.80. Using the one-sided critical values t_α = 1.645 and t_β = 0.84 (from normal tables), and plugging in the planned sample size with an assumed intra-cluster correlation of 0.1, the minimal detectable effect size is

δ/σ = 1 / √(900 / (2(1.645 + 0.84)²(1 + (45 − 1) × 0.1))) ≈ 0.27

Reference: List, J., Sadoff, S. and Wagner, M. (2011). So you want to run an experiment, now what? Some simple rules of thumb for optimal experimental design. Experimental Economics, 14, 439-457.
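The plug-in above can be reproduced in a few lines of Python (a sketch of the List et al. (2011) formula; 1.645 and 0.84 are the one-sided 5% and 80%-power normal critical values used in the registration's calculation):

```python
import math

def mde_fraction_sd(n, m, rho, t_alpha=1.645, t_beta=0.84):
    """Minimum detectable effect size, in standard-deviation units,
    for a clustered design (List, Sadoff & Wagner, 2011).

    n: observations per experimental group
    m: cluster size (students per school)
    rho: intra-cluster correlation
    """
    return 1 / math.sqrt(n / (2 * (t_alpha + t_beta) ** 2 * (1 + (m - 1) * rho)))

# Planned design: 20 schools per arm, 45 pupils per school -> n = 900.
print(round(mde_fraction_sd(n=900, m=45, rho=0.1), 2))  # 0.27
```

Raising the critical value to the two-sided 1.96 would yield a somewhat larger minimum detectable effect, so the 0.27 figure hinges on the one-sided test.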

Institutional Review Boards (IRBs)

IRB Name
IRB Approval Date
IRB Approval Number


Post Trial Information

Study Withdrawal

There is information in this trial unavailable to the public.


Is the intervention completed?
Data Collection Complete
Data Publication

Data Publication

Is public data available?

Program Files

Program Files
Reports, Papers & Other Materials

Relevant Paper(s)

Reports & Other Materials