Experimental Design Details
Main survey
1. First stage - basic information
We first collect basic information from the graduates, such as student ID numbers, major, post-graduation intentions, and internship experience. Note that we do not ask about covariates such as GPA and certain demographic variables in the questionnaire, as they are already included in the school's (anonymized) administrative data; we use student ID numbers only to link questionnaire responses to that database.
2. Second stage - belief elicitation
The second stage of our survey consists of two parts. The first part focuses on participants' job-search plans for the upcoming month. We begin by asking for their expected monthly income and the minimum monthly income they would accept for a job. We then ask about their expected job-search effort, including the weekly hours they plan to devote to job searching and the number of applications they expect to submit each week. Finally, we ask participants to estimate their anticipated success rate in securing offers relative to the total number of applications submitted.
In the second part of this stage, we elicit participants' beliefs about key labor market parameters, particularly primitives of standard job search models. Based on their knowledge of the labor market, we ask them to estimate the employment rate, the median offer-to-application ratio, and the average monthly income of last year's graduates in their major field. We also ask about other expectations that may be relevant to the labor market. After each question, participants rate their confidence in the estimate on a 5-point Likert scale ranging from "very unsure" to "very sure."
3. Third stage - information provision
We prepare two types of intervention information: the average monthly income and the median offer-to-application ratio. The specific contents are as follows.
Average Monthly Income (AMI):
•Your initial estimate for the average monthly income of last year’s graduates in your major field was X1 RMB.
•A sample survey conducted among last year’s graduates reveals that the actual average monthly income is Y1 RMB.
•This means the average monthly income of last year’s graduates in your major field is higher/lower than your estimate by (Y1-X1) RMB.
(Comprehension Check)
•The average monthly income of last year’s graduates is (higher/lower) compared to your estimate.
Median Offer-to-Application Ratio (MOAR):
•Your initial estimate for the median offer-to-application ratio among last year's graduates in your major field was X2%.
•A sample survey conducted among last year's graduates reveals that the actual median success rate is Y2%, meaning that for 10 job applications submitted, 0.1*Y2 job offers were received.
•This means the median job application success rate of last year’s graduates in your major field is higher/lower than your estimate by (Y2-X2)%.
(Comprehension Check)
•The job application success rate of last year's graduates is (higher/lower) compared to your estimate.
Note that the two categories of information correspond to specific questions in the belief elicitation stage. The values of the variables within this information are tailored to each participant's major field of study and their prior responses. The specific values of Y1 and Y2 are drawn from the university’s survey data from the previous cohort of graduates.
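The personalization described above amounts to a simple comparison between each participant's prior (X1 or X2) and the corresponding value from the graduate survey (Y1 or Y2). A minimal sketch of how such feedback text could be assembled is below; the function names, wording, and rounding are illustrative assumptions, not the study's actual implementation.

```python
# Illustrative construction of the personalized feedback messages.
# Function names and exact wording are hypothetical.

def ami_feedback(prior_x1: float, actual_y1: float) -> str:
    """Build the Average Monthly Income (AMI) message for one participant."""
    direction = "higher" if actual_y1 > prior_x1 else "lower"
    gap = abs(actual_y1 - prior_x1)
    return (
        f"Your initial estimate was {prior_x1:.0f} RMB. "
        f"A sample survey of last year's graduates shows the actual average "
        f"monthly income is {actual_y1:.0f} RMB, which is {direction} than "
        f"your estimate by {gap:.0f} RMB."
    )

def moar_feedback(prior_x2: float, actual_y2: float) -> str:
    """Build the Median Offer-to-Application Ratio (MOAR) message."""
    direction = "higher" if actual_y2 > prior_x2 else "lower"
    offers_per_10 = 0.1 * actual_y2  # Y2% of 10 applications
    return (
        f"Your initial estimate was {prior_x2:.0f}%. The actual median "
        f"success rate is {actual_y2:.0f}%, i.e. about {offers_per_10:.1f} "
        f"offers per 10 applications, {direction} than your estimate by "
        f"{abs(actual_y2 - prior_x2):.0f} percentage points."
    )
```

The same comparison (sign of Y minus X) also determines the correct answer to the comprehension check.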
We implement stratified randomization at the class level. Participants are randomly assigned to four groups, forming a 2x2 factorial design. The interventions assigned to each group are as follows:
Control (T0): No information
Treatment 1 (T1): AMI
Treatment 2 (T2): MOAR
Treatment 3 (T3): MOAR+AMI
This design allows us to examine the individual effect of each intervention as well as their interaction. To ensure that participants fully understand the information provided, we include a comprehension check after the intervention details. After participants submit their responses, the correct answers are displayed in a pop-up window.
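One plausible reading of class-level stratification is that students are shuffled within each class and dealt evenly across the four arms, so arm sizes within a class differ by at most one. The sketch below illustrates that procedure; the data layout and seeding are assumptions, not the study's actual code.

```python
# Illustrative class-stratified randomization into the 2x2 design.
# Input format (dict of class ID -> student IDs) and seed are hypothetical.
import random

ARMS = ["T0", "T1", "T2", "T3"]  # control, AMI, MOAR, MOAR+AMI

def assign_within_class(student_ids, seed=0):
    """Shuffle the students of one class, then deal them into the four
    arms in round-robin order, balancing arm sizes within the class."""
    rng = random.Random(seed)
    ids = list(student_ids)
    rng.shuffle(ids)
    return {sid: ARMS[i % len(ARMS)] for i, sid in enumerate(ids)}

def assign_all(classes, seed=0):
    """classes: dict mapping class ID -> list of student IDs.
    Randomizes within each class (stratum) independently."""
    assignment = {}
    for k, (class_id, students) in enumerate(sorted(classes.items())):
        assignment.update(assign_within_class(students, seed + k))
    return assignment
```

Because each class serves as a stratum, treatment shares are balanced within classes by construction, not just in expectation.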
4. Fourth stage - belief updating
In this stage, we re-elicit participants' beliefs in a similar way as in the second stage. The only difference is that the reference group is now their peers in the same cohort who intend to work, rather than last year's graduates. These questions reveal job seekers' expectations about the current labor market.
In addition, we re-elicit the participants’ reservation wages and their job-search strategies for the upcoming month.
5. Fifth stage - supplementary information
In the final stage, we broaden the scope of the survey to cover a wider range of labor market beliefs and personality traits. The beliefs include participants' expectations about future economic conditions, how well their peers' jobs match their majors and geographic locations, and their self-assessed competitiveness relative to their peers. The personality traits include social network pressure, openness to new experiences, locus of control, risk preference, time preference, self-esteem, and optimism. These questions may facilitate a more thorough exploration of underlying mechanisms and heterogeneous effects.
Reminders
We send email and/or text-message reminders of the intervention content 4 weeks and 16 weeks after the main survey. To avoid participants mistaking the reminders for new information, we tell them that we are repeating the information from the previous survey.
Follow-up survey
The follow-up survey is administered before students' graduation as part of the Career Center's annual routine. It provides valuable data on students' job-search behavior and labor market outcomes. We also add questions on labor market beliefs, which offer insights into how these beliefs evolve over the course of the job search.
Since the follow-up survey involves no experimental intervention, its content is identical for all participants. The datasets from the two survey waves are merged using students' unique identifiers. Due to attrition, however, the follow-up dataset is typically smaller than the initial survey dataset.
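Because of attrition, the merge of the two waves is naturally a left join on the baseline sample, with attriters retained but flagged. A minimal sketch under assumed field names (`student_id` and hypothetical survey fields) is:

```python
# Illustrative left-join merge of the two survey waves on anonymized
# student IDs. All field names are hypothetical.

def merge_waves(baseline: list, followup: list) -> list:
    """Return one record per baseline respondent; follow-up fields are
    prefixed with 'fu_' and attriters are flagged."""
    followup_by_id = {row["student_id"]: row for row in followup}
    merged = []
    for row in baseline:
        match = followup_by_id.get(row["student_id"])
        combined = dict(row)
        if match is not None:
            combined.update({f"fu_{k}": v for k, v in match.items()
                             if k != "student_id"})
        combined["attrited"] = match is None  # in baseline, not in follow-up
        merged.append(combined)
    return merged
```

Keeping attriters in the merged data (rather than dropping them) makes it straightforward to test whether attrition is correlated with treatment assignment.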