The study program Economics and Business Administration at the university where the trial will be implemented requires students to complete 180 credit points to graduate. Students are expected to graduate after three years (six semesters). The study plan therefore assigns courses worth 30 credit points to each semester.
The intervention focuses on the first two semesters, each consisting of six compulsory courses. A business simulation course takes place in the first week of the first semester, leaving five courses to be completed in the remainder of the first semester.
Given that the institutional setting features a very salient rule of completing 30 credit points per semester, our experimental design is based on the notion that students set themselves a target (a number of credit points to be completed) for each semester. We expect that for most students this target equals 30 credit points (or 35 credit points if a student aims to complete the math course scheduled for the second semester already in the first semester). Our design furthermore builds on the notion that students, depending on their perceptions of how study effort translates into academic achievement, adjust their study effort to meet the target.
Survey data collected from an earlier cohort of students suggest that a large share of students do not complete 30 credit points per semester, delaying their graduation. At the same time, the survey also suggests that most students do not work full time even if one aggregates the hours spent studying and the hours worked to earn income. This pattern is consistent with the notion that many students set their individual target performance to the institutionally prescribed level of 30 credit points per semester, but underestimate the study effort needed to meet this target. The intervention is meant to provide information that should help students correct a possible misperception regarding the effort-performance link. Specifically, we inform students about survey-based estimates of the likelihood of meeting a target of 30 credit points, conditional on different effort levels. The webpage presenting the information makes explicit that these estimates report only correlations, and that the information provided is not sufficient to derive predictions about individual performance.

The purpose of the trial is to test whether students use the information provided to update their perception of how much study effort is needed to meet a target of 30 credit points, and whether this translates into a change in effort and performance. The surveys implemented in the treatment and control groups will enable us to analyze (a) whether students have fixed targets, (b) what these targets are (the number of credit points students plan to complete), and (c) students' perceptions regarding the study effort needed to meet a target of 30 credit points. Because we repeat the surveys on (b) and (c), we can further test (d) whether the treatment shifts the targets and (e) whether it shifts perceptions. One should note, however, that the surveys used to elicit targets and perceptions are not incentivized. Hence, it is possible that we will not be able to detect a shift in perceptions even if it is present.
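As a rough illustration of how such conditional estimates could be derived from the baseline survey, the following sketch bins self-reported weekly study hours and computes the share of students meeting the 30-credit-point target within each bin. All file, column, and bin definitions are hypothetical; the plan does not specify the survey's data layout.

```python
import pandas as pd

# Hypothetical file and column names; each row is one student from the
# baseline cohort.
survey = pd.read_csv("baseline_cohort_survey.csv")

# Bin weekly study hours into effort levels (cut points are illustrative).
effort_bins = pd.cut(
    survey["weekly_study_hours"],
    bins=[0, 10, 20, 30, 40, 80],
    labels=["<10h", "10-20h", "20-30h", "30-40h", ">40h"],
)

# Share of students meeting the 30-credit-point target within each bin;
# these are the conditional likelihoods shown to the treatment group.
met_target = survey["credits_completed"] >= 30
likelihood = met_target.groupby(effort_bins).mean()
print(likelihood)
```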
The webpage hosting the experiment is designed such that students can log in and view the information provided only once. This is meant to limit spillovers of the treatment to students in the control group.
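A minimal sketch of the one-time-view rule, assuming the login check consults a store of student IDs that have already viewed the page; the platform's actual implementation is not described in this plan.

```python
# In production this would be a persistent database table, not an
# in-memory set.
viewed: set[str] = set()

def can_view(student_id: str) -> bool:
    """Allow each student to open the information page exactly once."""
    if student_id in viewed:
        return False
    viewed.add(student_id)
    return True

assert can_view("s123")      # first login succeeds
assert not can_view("s123")  # any later attempt is blocked
```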
About 850 students enrolled in the study program Economics and Business Administration for the fall semester of 2019. We randomly assigned half of the students to treatment and the remaining half to control. The mailing of the invitations took place in the third week of the semester. The email invitations contained a link to a web page, which students could log in to using their student ID. The web page features the surveys and, in the treatment group only, the information described above. During the first (fall) semester, we track students' activities on the platform containing e-learning materials for some (but not all) of the courses that are part of the first-semester curriculum. After the end of the exam period (April 2020), we will collect the individual data on exam performance.

We plan to extend the experimental design to cover the second study semester. However, if the results from the first semester suggest that students did not respond to the intervention, we may decide to stop the trial after the first semester. If we continue into the second semester, the default plan is to split both the treatment and the control group into subtreatments for a second round of random treatment assignment: half of the first-semester treatment group is to receive the treatment again in the second semester, while the other half will not be treated again. The same split will be implemented in the first-semester control group, resulting in a total of four treatment arms (treatment status in the first semester - treatment status in the second semester: treatment - treatment; treatment - control; control - treatment; control - control). However, if the results from the first semester suggest that splitting the treatment arms would leave too little statistical power, we might instead choose a randomization scheme that simply continues the first-semester treatment assignment into the second semester. This would enable us to test how effective the information treatment is if applied in the first two semesters of the study program.

The second-semester treatment is supposed to be similar to the one in the first semester. Again, students will be invited to log in to the webpage, which features the surveys as described for the first semester and, if the student is assigned to treatment in the second semester, the treatment. The information provided to the treatment group (performance and the effort-performance link) will refer to the second semester, though. After the end of the second semester, the effort data (from the e-learning platform) and the exam data will be collected, and we will start the data analysis.
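The default two-round assignment could be implemented along the following lines. The roster and the seed are placeholders for illustration, not part of the registered procedure.

```python
import numpy as np
import pandas as pd

rng = np.random.default_rng(seed=2019)  # seed is illustrative only

# Placeholder roster; in practice this would be the enrollment list.
students = pd.DataFrame({"student_id": [f"s{i:04d}" for i in range(850)]})

def random_half(index):
    """Assign half of the given students to 'T' and half to 'C'."""
    k = len(index)
    arms = np.array(["T"] * (k // 2) + ["C"] * (k - k // 2))
    rng.shuffle(arms)
    return pd.Series(arms, index=index)

# First semester: 50/50 split of the full cohort.
students["sem1"] = random_half(students.index)

# Second semester (default plan): re-randomize within each first-semester
# group, yielding the four arms T-T, T-C, C-T, and C-C.
students["sem2"] = pd.concat(
    random_half(students.index[students["sem1"] == g]) for g in ("T", "C")
)

print(students.groupby(["sem1", "sem2"]).size())
```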
The information presented to students in the treatment group is collected as follows. We implemented a survey among the last cohort of students who started the program before the beginning of the intervention (the baseline cohort). The survey collects detailed information on hours worked in the first semester. These data are linked to administrative records to derive correlations between effort and performance. For the intervention in the second semester, we plan to repeat the survey with the same baseline cohort to derive the corresponding correlations for the program's second semester. This survey is planned to take place shortly before the beginning of the intervention in the fall of 2020.
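The linking step can be sketched as follows; file and column names are assumed for illustration, since the plan does not describe the exact data layout. Both tables are keyed on the student ID.

```python
import pandas as pd

survey = pd.read_csv("baseline_survey_sem1.csv")  # incl. weekly_study_hours
admin = pd.read_csv("admin_exam_records.csv")     # incl. credits_completed

# Keep only students present in both the survey and the admin records.
merged = survey.merge(admin, on="student_id", how="inner")

# Raw correlation between first-semester study effort and performance.
# As stressed in the treatment materials, this is a correlation only,
# not a causal estimate.
corr = merged["weekly_study_hours"].corr(merged["credits_completed"])
print(f"Effort-performance correlation: {corr:.2f}")
```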
We have mentioned the caveat that we might stop the trial after the first semester if the evidence suggests that the intervention does not have any effect. There is another caveat regarding the continuation of the intervention in the second semester. It concerns the correlations between study effort and performance that we plan to derive from the second survey described in the previous paragraph. Given that the results of the first survey show a strong correlation between study effort and performance, we expect to see a similar correlation in the second semester as well. However, if the survey results turn out not to display such a correlation, we might decide to stop the trial after the first semester.