Information on the Effort-Performance Link and Academic Achievement

Last registered on November 13, 2019

Pre-Trial

Trial Information

General Information

Title
Information on the Effort-Performance Link and Academic Achievement
RCT ID
AEARCTR-0004626
Initial registration date
November 13, 2019

First published
November 13, 2019, 11:11 AM EST

Locations

Region

Primary Investigator

Affiliation
University of Erlangen-Nuremberg

Other Primary Investigator(s)

PI Affiliation
University of Erlangen-Nuremberg

Additional Trial Information

Status
In development
Start date
2019-05-01
End date
2021-07-31
Secondary IDs
Abstract
We study a cohort of students at the Department of Economics and Business Administration at a German university in their first study year, spanning two semesters. Administrative data shows that many students underperform relative to the suggested curriculum in the first study year, consisting of six courses (each giving five credit points) per semester. Survey evidence collected from earlier cohorts of students is consistent with the notion that this is due to students underestimating the effort needed to earn 30 credit points per semester. We devise an intervention that informs students about the success probabilities of students with different effort levels. The intervention is implemented at the beginning of the first semester and repeated at the beginning of the second semester. To determine whether and how students respond to the intervention, we measure the students' study effort and their performance in exams.
External Link(s)

Registration Citation

Citation
Hardt, David and Johannes Rincke. 2019. "Information on the Effort-Performance Link and Academic Achievement." AEA RCT Registry. November 13. https://doi.org/10.1257/rct.4626-1.0
Sponsors & Partners

There is information in this trial unavailable to the public.
Experimental Details

Interventions

Intervention(s)
Students are invited via email to log in to a webpage designed for the trial. In the treatment group, the page elicits how students choose their study effort, what their individual target performance is (the number of credit points they plan to earn in the current semester), and what effort (hours worked per week) they think is necessary to pass exams worth 30 credit points per semester. The webpage then provides information about the distribution of the performance of earlier cohorts of students in the respective semester, and the success probabilities of students with different effort levels. Finally, the webpage again elicits the students' individual targets and their perceptions regarding the effort needed to pass exams worth 30 credit points. In the control group, the same surveys are implemented, but no information on actual performance is provided. We plan to implement the intervention in two consecutive semesters. This means we will study the student cohort starting the program in the fall semester of 2019 in their first and second semesters.
Intervention Start Date
2019-10-15
Intervention End Date
2020-07-31

Primary Outcomes

Primary Outcomes (end points)
- Several measures of study effort.
- Credit points of exams students register for.
- Credit points of exams students take.
- Credit points of exams students pass.
All outcomes are to be considered separately for the two semesters. In addition, we also plan to evaluate the aggregated outcomes over both semesters.
Primary Outcomes (explanation)
Measures of study effort: The measures will partly be constructed using data from e-learning platforms available for some (but not all) courses. As the structure and content of the e-learning courses are under the control of the individual lecturers and are commonly revised before and during the respective semester, we cannot detail in this pre-registration how exactly we will construct the measures. If possible, we would like to measure the time students spend working on e-learning materials. If this turns out to be impossible, we will construct alternative measures, such as the frequency of visits to the platforms or to certain parts of them (see the sketch below).

In addition to measures based on usage data from the e-learning platform, we will also collect attendance data in one of the exercise courses in the first semester. We aim to collect these data in three consecutive sessions of the respective course. Attendance will, however, be self-reported. We plan to use as an outcome each student's share of exercise sessions attended in person (taking on the values 0%, 33%, 66%, and 100%).
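
The following is a minimal sketch of how such effort measures could be constructed, assuming a hypothetical log export with columns student_id, timestamp, and resource; the actual construction will depend on what the e-learning platform makes available.

```python
import pandas as pd

# Hypothetical log export from the e-learning platform; the real column names
# and granularity will depend on what the platform provides.
logs = pd.read_csv("elearning_logs.csv", parse_dates=["timestamp"])
# expected columns: student_id, timestamp, resource

logs = logs.sort_values(["student_id", "timestamp"])

# Frequency measure: number of days on which a student accessed the platform.
visit_days = (
    logs.assign(date=logs["timestamp"].dt.date)
        .groupby("student_id")["date"]
        .nunique()
        .rename("active_days")
)

# Time measure: approximate time online by summing gaps between consecutive
# clicks, capping each gap at 30 minutes so idle periods are not counted.
logs["gap"] = logs.groupby("student_id")["timestamp"].diff()
logs["gap_capped"] = logs["gap"].clip(upper=pd.Timedelta(minutes=30))
time_online = (
    logs.groupby("student_id")["gap_capped"]
        .sum()
        .dt.total_seconds()
        .div(3600)
        .rename("hours_online")
)

effort_measures = pd.concat([visit_days, time_online], axis=1)
print(effort_measures.head())
```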

Secondary Outcomes

Secondary Outcomes (end points)
- Grade point averages over all exams taken (first and second semester).
- Indicator for whether students state that they have a fixed target performance for the current semester.
- Individual target performance (number of credit points students plan to earn in the current semester).
- Hours worked per week that students think are necessary to pass exams worth 30 credit points per semester.
Secondary Outcomes (explanation)

Experimental Design

Experimental Design
The study program Economics and Business Administration at the university where the trial is going to be implemented requires students to collect 180 credit points to graduate. Students are expected to graduate after three years (six semesters). The study plan therefore assigns courses worth 30 credit points to each semester.

The intervention focuses on the first two semesters, each consisting of six compulsory courses. A business simulation course takes place in the first week of the first semester, leaving five courses to be completed in the remainder of the first semester.

Given that the institutional setting features a very salient rule of completing 30 credit points per semester, our experimental design is based on the notion that students set themselves a target (number of credit points to be completed) for each semester. We expect that for most students, this target should be equal to 30 credit points (or 35 credit points if the respective student aims at completing the math course scheduled for the second semester already in the first semester). Our design furthermore builds on the notion that students, depending on their perceptions of how study effort translates into academic achievement, adjust their study effort in order to meet the target.

Survey data collected from an earlier cohort of students suggest that a large share of students do not complete 30 credit points per semester, delaying their graduation. At the same time, the survey also suggests that most students do not work full time even if one aggregates the hours worked to study and the hours worked to earn income. This pattern is consistent with the notion that many students set their individual target performance to the institutionally prescribed level of 30 credit points per semester, but underestimate the study effort needed to meet this target. The intervention is meant to provide information that should help students to correct a possible misperception regarding the effort-performance link. Specifically, we inform students about survey-based estimates of the likelihood to meet a target of 30 credit points, conditional on different effort levels. The webpage presenting the information makes explicit that these estimates report only correlations, and that the information provided is not sufficient to derive predictions about individual performance. The purpose of the trial is to test whether students make use of the information provided to update their perception of how much study effort is needed in order to meet a target of 30 credit points, and whether this translates into a change in effort and performance. The surveys implemented in treatment and control will enable us to analyze (a) if students have fixed targets, (b) what these targets are (number of credit points students plan to complete), and (c) the students' perceptions regarding the study effort needed to meet a target of 30 credit points. The fact that we repeat the survey on (b) and (c) will further allow us to test (d) if the treatment shifts the targets, and (e) if it shifts perceptions. One should note, however, that the surveys used to elicit targets and perceptions are not incentivized. Hence, it is possible that we will not be able to detect a shift in perceptions even if it is present.

The webpage hosting the experiment is designed such that students can log in and view the information provided only once. This is meant to limit spillovers of the treatment to students in the control group.

About 850 students enrolled in the study program Economics and Business Administration for the fall semester of 2019. We randomly assigned half of the students to treatment and the remaining half to control. The mailing of the invitation took place in the third week of the semester. The email invitations contained a link to a web page. Students could log in to the web page using their student ID. The web page features the surveys and information (only in treatment) as described above. During the first (fall) semester, we track students' activities on the platform containing e-learning materials for some (but not all) of the courses which are part of the curriculum of the first semester. After the end of the exam period (April 2020), we will collect the individual data on exam performance.

We plan to extend the experimental design to cover the second study semester. However, in case the results from the first semester suggest that students did not respond to the intervention, we may decide to stop the trial after the first semester. In case we continue into the second semester, the default plan is to split both treatment and control into subtreatments for a second round of random treatment assignment: Half of the first-semester treatment group is supposed to receive the treatment again in the second semester. The other half will not be treated again. The same split will be implemented in the first-semester control group, resulting in a total of four treatment arms (treatment status in first semester - treatment status in second semester: treatment - treatment; treatment - control; control - treatment; control - control). However, if the results from the first semester suggest that splitting the treatment arms will result in too little power, we might also choose a randomization scheme where we simply continue the first-semester treatment assignment into the second semester. This would enable us to test how effective the information treatment is if applied in the first two semesters of the study program.

The treatment is supposed to be similar to the one in the first semester. Again, students will be invited to log in to the webpage, featuring the surveys as described for the first semester, and the treatment (in case the student is assigned to treatment in the second semester). The information provided in the treatment group (performance and effort-performance link) will refer to the second semester, though. After the end of the second semester, the effort data (from the e-learning platform) and the exam data will be collected, and we will start the data analysis.

The information presented to students in the treatment is collected as follows. We implemented a survey among the last cohort of students that started the program before the beginning of the intervention (the baseline cohort). The survey collects detailed information on hours worked in the first semester. This data is linked to the administrative data to derive correlations between effort and performance. For the intervention in the second semester, we plan to repeat the survey with the same baseline cohort to derive corresponding correlations for the program's second semester. This survey is planned to take place shortly before the beginning of the intervention in the fall of 2020.
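
As a rough illustration of how such conditional success probabilities could be derived, the sketch below assumes a merged data set with hypothetical columns hours_per_week (from the baseline survey) and credits_earned (from the administrative records); the effort cut points are illustrative and not taken from the registration.

```python
import pandas as pd

# Hypothetical merged data set: baseline-cohort survey responses (hours worked
# per week) linked to administrative exam records (credit points earned).
df = pd.read_csv("baseline_cohort.csv")
# expected columns: student_id, hours_per_week, credits_earned

# Bin weekly study effort; the actual cut points shown to students are a
# design choice and not taken from the registration.
effort_bins = [0, 20, 30, 40, 100]
labels = ["<20h", "20-30h", "30-40h", ">40h"]
df["effort_group"] = pd.cut(df["hours_per_week"], bins=effort_bins, labels=labels)

# Share of students in each effort group who met the 30-credit-point target.
df["met_target"] = df["credits_earned"] >= 30
success_by_effort = (
    df.groupby("effort_group", observed=True)["met_target"]
      .mean()
      .mul(100)
      .round(1)
)
print(success_by_effort)  # probability (in %) of earning 30 credit points, by effort group
```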

We have mentioned the caveat that we might stop the trial after the first semester if the evidence suggests that the intervention does not have any effect. There is another caveat regarding the continuation of the intervention in the second semester. This caveat refers to the correlations between study effort and performance that we plan to derive from the second survey described in the previous paragraph. Given that the results of the first survey show a strong correlation between study effort and performance, we expect also to see a similar correlation in the second semester. However, if it turns out that the survey results do not display such a correlation, we might decide to stop the trial after the first semester.
Experimental Design Details
The study program Economics and Business Administration at the university where the trial is going to be implemented requires students to collect 180 credit points to graduate. Students are expected to graduate after three years (six semesters). The study plan therefore assigns courses worth 30 credit points to each semester.

The intervention focuses on the first two semesters, each consisting of six compulsory courses. A business simulation course takes place in the first week of the first semester, leaving five courses to be completed in the remainder of the first semester.

Given that the institutional setting prescribes a very strict rule of completing 30 credit points per semester, our experimental design is based on the notion that students set themselves a target (number of credit points to be completed) for each semester. We expect that for most students, this target should be equal to 30 credit points (or 35 credit points if the respective student aims at completing the math course scheduled for the second semester already in the first semester). Our design furthermore builds on the notion that students, depending on their perceptions of how study effort translates into academic achievement, adjust their study effort in order to meet the target.

Survey data collected from an earlier cohort of students suggest that a large share of students do not complete 30 credit points per semester, delaying their graduation. At the same time, the survey also suggests that most students do not work full time even if one aggregates the hours worked to study and the hours worked to earn income. This pattern is consistent with the notion that many students set their individual target performance to the institutionally prescribed level of 30 credit points per semester, but underestimate the study effort needed to meet this target. The intervention is meant to provide information that should help students to correct a possible misperception regarding the effort-performance link. Specifically, we inform students about survey-based estimates of the likelihood to meet a target of 30 credit points, conditional on different effort levels. The webpage makes explicit that these estimates report only correlations, and that the information provided is not sufficient to derive predictions about individual performance. The purpose of the trial is to test whether students make use of the information provided to update their perception of how much study effort is needed in order to meet a target of 30 credit points, and whether this translates into a change in effort and performance. The surveys implemented in treatment and control will enable us to analyze (a) if students have fixed targets, (b) what these targets are (number of credit points students plan to complete), and (c) the students' perceptions regarding the study effort needed to meet a target of 30 credit points. The fact that we repeat the survey on (b) and (c) will further allow us to test (d) if the treatment shifts the targets, and (e) if it shifts perceptions. One should note, however, that the surveys used to elicit targets and perceptions are not incentivized. Hence, it is possible that we will not be able to detect a shift in perceptions even if it is present.

The webpage hosting the experiment is designed such that students can log in and view the information provided only once. This is meant to limit spillovers of the treatment to students in the control group.

About 850 students enrolled in the study program Economics and Business Administration for the fall semester of 2019. We randomly assigned half of the students to treatment and the remaining half to control. The mailing of the invitation took place in the third week of the semester. The email invitations contained a link to a web page. Students could log in to the web page using their student ID. The web page features the surveys and information (only in treatment) as described above. During the first (fall) semester, we track students' activities on the platform containing e-learning materials for some (but not all) of the courses which are part of the curriculum of the first semester. After the end of the exam period (April 2020), we will collect the individual data on exam performance.

We plan to extend the experimental design to cover the second study semester. However, in case the results from the first semester suggest that students did not respond to the intervention, we may decide to stop the trial after the first semester. In case we continue into the second semester, the default plan is to split both treatment and control into subtreatments for a second round of random treatment assignment: Half of the first-semester treatment group is supposed to receive the treatment again in the second semester. The other half will not be treated again. The same split will be implemented in the first-semester control group, resulting in a total of four treatment arms (treatment status in first semester - treatment status in second semester: treatment - treatment; treatment - control; control - treatment; control - control). However, if the results from the first semester suggest that splitting the treatment arms will result in too little power, we might also choose a randomization scheme where we simply continue the first-semester treatment assignment into the second semester. This would enable us to test how effective the information treatment is if applied in the first two semesters of the study program.

The treatment is supposed to be similar to the one in the first semester. Again, students will be invited to log in to the webpage, featuring the surveys as described for the first semester, and the treatment (in case the student is assigned to treatment in the second semester). The information provided in the treatment group (performance and effort-performance link) will refer to the second semester, though. After the end of the second semester, the effort data (from the e-learning platform) and the exam data will be collected, and we will start the data analysis.

The information presented to students in the treatment is collected as follows. We implemented a survey among the last cohort of students that started the program before the beginning of the intervention (the baseline cohort). The survey collects detailed information on hours worked in the first semester. This data is linked to the administrative data to derive correlations between effort and performance. For the intervention in the second semester, we plan to repeat the survey with the same baseline cohort to derive corresponding correlations for the program's second semester. This survey is planned to take place shortly before the beginning of the intervention in the fall of 2020.

We have mentioned the caveat that we might stop the trial after the first semester if the evidence suggests that the intervention does not have any effect. There is another caveat regarding the continuation of the intervention in the second semester. This caveat refers to the correlations between study effort and performance that we plan to derive from the second survey described in the previous paragraph. Given that the results of the first survey show a strong correlation between study effort and performance, we expect also to see a similar correlation in the second semester. However, if it turns out that the survey results do not display such a correlation, we might decide to stop the trial after the first semester.
Randomization Method
The randomization was done in office by a computer. We used gender, high-school GPA (above vs. below median), type of high school completed (Gymnasium vs. other), and type of university enrollment (first-time enrollment vs. other) to construct a total of 16 strata.
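
A minimal sketch of such a stratified assignment is shown below; the file name, variable names, and seed are hypothetical, and the procedure actually used for the trial may differ in its details.

```python
import numpy as np
import pandas as pd

rng = np.random.default_rng(20191015)  # illustrative seed, not the one actually used

# Hypothetical enrollment data; the four binary stratification variables give
# 2^4 = 16 strata as described in the registration.
students = pd.read_csv("enrollment.csv")
# expected columns: student_id, female, gpa_above_median, gymnasium, first_enrollment

strata_vars = ["female", "gpa_above_median", "gymnasium", "first_enrollment"]

def assign_within_stratum(group: pd.DataFrame) -> pd.DataFrame:
    """Randomly assign half of a stratum to treatment, the other half to control."""
    order = rng.permutation(len(group))
    group = group.iloc[order].copy()
    n_treated = len(group) // 2
    group["treatment"] = [1] * n_treated + [0] * (len(group) - n_treated)
    return group

assigned = (
    students.groupby(strata_vars, group_keys=False)
            .apply(assign_within_stratum)
)
print(assigned.groupby("treatment").size())
```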
Randomization Unit
Individual student
Was the treatment clustered?
No

Experiment Characteristics

Sample size: planned number of clusters
Treatment is not clustered
Sample size: planned number of observations
The overall sample size is given by the number of students enrolled in the first semester of the study program Economics and Business Administration in the fall semester of 2019. This number is about 850. We will exclude from the experiment students who have been enrolled at the university before (or elsewhere) and who have already completed courses from the first or second semester of the study program. This exclusion can only be done ex post, because the respective credits show up in the administrative data only a few months after enrollment. Based on historic records, we expect that this restriction will leave us with about 600 students. The effective sample size is, however, smaller. This is because not all students invited to log in to the webpage will do so. We expect about 60 percent of the overall sample to respond to the invitation, giving us about 360 students taking part in the experiment.
Sample size (or number of clusters) by treatment arms
Note that the following figures refer to the overall sample of about 600 students.

First semester:
- Treatment: one half of overall sample, i.e., about 300 students
- Control: one half of overall sample, i.e., about 300 students

If we choose to split the first-semester treatment arms in the second semester:
number of students: treatment status first semester - treatment status second semester
- 150 students: treatment - treatment
- 150 students: treatment - no treatment
- 150 students: no treatment - treatment
- 150 students: no treatment - no treatment
Minimum detectable effect size for main outcomes (accounting for sample design and clustering)
Note that the following figures refer to the sample of about 600 students without credits earned earlier or elsewhere.

First semester: Our most relevant outcome is the total number of credit points earned. For the first semester, the baseline data from earlier cohorts show a mean of 26.4 credit points (SD 8.6). With a sample of 600 students, the minimum detectable effect size is 2 credit points, or 7.6 percent.

Second semester: The baseline data from earlier cohorts show a mean of 18.8 credit points (SD 9.6). With a sample of 300 students (i.e., after splitting up the treatment arms in the second semester), the minimum detectable effect size for performance in the second semester alone is 3.1 credit points, or 16.5 percent.

Overall performance: Another relevant test is that for an effect of receiving the treatment for two consecutive semesters, relative to not receiving the treatment in any semester. The mean overall performance over both semesters is 44.6 credit points (SD 17.9). The minimum detectable effect size with a sample of 300 students (i.e., after splitting treatment arms in the second semester) is 5.8 credit points, or 13.0 percent. Based on a sample of 600 students (i.e., conditional on not splitting up the treatment arms), the minimum detectable effect size is 4.1 credit points, or 9.1 percent.
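
For transparency, a small sketch that reproduces these figures is given below. It assumes a two-sided test at the 5 percent significance level, 80 percent power, and equal-sized arms; these assumptions are not stated explicitly in the registration but are consistent with the reported numbers.

```python
from scipy.stats import norm

def mde(sd: float, n_per_arm: int, alpha: float = 0.05, power: float = 0.80) -> float:
    """Minimum detectable effect for a two-sample comparison of means
    (two-sided test, equal-sized arms)."""
    z = norm.ppf(1 - alpha / 2) + norm.ppf(power)
    return z * sd * (2 / n_per_arm) ** 0.5

# First semester: SD 8.6 credit points, 600 students split into two arms.
print(round(mde(sd=8.6, n_per_arm=300), 1))   # ~2.0 credit points

# Second semester (split arms): SD 9.6, 150 students per arm.
print(round(mde(sd=9.6, n_per_arm=150), 1))   # ~3.1 credit points

# Overall performance: SD 17.9, 150 per arm (split) vs. 300 per arm (no split).
print(round(mde(sd=17.9, n_per_arm=150), 1))  # ~5.8 credit points
print(round(mde(sd=17.9, n_per_arm=300), 1))  # ~4.1 credit points
```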
IRB

Institutional Review Boards (IRBs)

IRB Name
Ethics Commission of the School of Business, Economics and Society at the University of Erlangen-Nuremberg
IRB Approval Date
2019-06-28
IRB Approval Number
N/A
Analysis Plan

There is information in this trial unavailable to the public.

Post-Trial

Post Trial Information

Study Withdrawal

There is information in this trial unavailable to the public.

Intervention

Is the intervention completed?
No
Data Collection Complete
Data Publication

Data Publication

Is public data available?
No

Program Files

Program Files
Reports, Papers & Other Materials

Relevant Paper(s)

Reports & Other Materials