
Feedback in University Education

Last registered on July 19, 2019

Pre-Trial

Trial Information

General Information

Title
Feedback in University Education
RCT ID
AEARCTR-0004457
Initial registration date
July 17, 2019

Initial registration date is when the trial was registered.

It corresponds to when the registration was submitted to the Registry to be reviewed for publication.

First published
July 19, 2019, 11:57 AM EDT

First published corresponds to when the trial was first made public on the Registry after being reviewed.

Locations

Region

Primary Investigator
David Hardt
Affiliation
University of Erlangen-Nuremberg

Other Primary Investigator(s)

Additional Trial Information

Status
Ongoing
Start date
2019-07-17
End date
2019-12-31
Secondary IDs
Abstract
It is of public interest that students achieve high returns from public education and perform well in exams. In a field experiment, I study the impact of randomized pre-exam test feedback on exam performance in an undergraduate macroeconomics class. Before and after the test, participants reveal beliefs regarding their individual effort-performance relationship. Only students in the treatment group receive feedback on their test performance before the second belief elicitation. In addition, I elicit students’ self-confidence regarding their performance in the test. I measure each student’s subsequently exerted study effort by tracking her activity on the e-study platform. Finally, I link the experiment and platform data to the students’ exam grades. This allows me to examine to what extent the exogenous feedback treatment shifted students’ believed effort-performance relationship, their exerted study effort, and their final exam performance.
External Link(s)

Registration Citation

Citation
Hardt, David. 2019. "Feedback in University Education." AEA RCT Registry. July 19. https://doi.org/10.1257/rct.4457-1.0
Former Citation
Hardt, David. 2019. "Feedback in University Education." AEA RCT Registry. July 19. https://www.socialscienceregistry.org/trials/4457/history/50376
Experimental Details

Interventions

Intervention(s)
I provide randomized feedback on a pre-exam test to undergraduate university students. I compare shifts in beliefs about expected exam performance (for a constant level of effort), actually exerted study effort, and actual exam performance between students who received feedback on the pre-exam test and those who did not. The feedback has a debiasing character with respect to participants’ believed individual effort-performance relationship and/or their self-confidence, and should therefore influence their study decisions (i.e., exerted effort) and outcomes (i.e., performance in the exam).
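For concreteness, the sketch below shows how such treatment-control differences could be estimated from a merged student-level dataset. It is purely illustrative, not the registered analysis plan: the file name and the variable names (treated, belief_shift, study_effort, exam_grade) are assumptions.

```python
import pandas as pd
import statsmodels.formula.api as smf

# Merged student-level data; file and variable names are illustrative
# assumptions, not the registered variables.
df = pd.read_csv("merged_data.csv")

# Simple treatment-control comparison for each main outcome; stratification
# controls from the randomization (described below) could be added.
for outcome in ["belief_shift", "study_effort", "exam_grade"]:
    fit = smf.ols(f"{outcome} ~ treated", data=df).fit(cov_type="HC1")
    print(fit.summary().tables[1])  # coefficient on `treated` is the effect
```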
Intervention Start Date
2019-07-17
Intervention End Date
2019-08-01

Primary Outcomes

Primary Outcomes (end points)
Each student’s elicited self-confidence regarding test performance.
Elicited beliefs about the individual effort-performance relationship and shifts therein.
Difference in actually exerted study effort on the platform between treatment and control group.
Difference in actual exam performance between treatment and control group.
Primary Outcomes (explanation)

Secondary Outcomes

Secondary Outcomes (end points)
Heterogeneity analyses (with respect to personality traits, A-levels GPA, etc.).
Secondary Outcomes (explanation)

Experimental Design

Experimental Design

All students who signed up for an undergraduate macroeconomics exam are invited to visit a newly implemented e-study platform with exam preparation material. In order to access the platform, students need to participate in a short test on exam-relevant material. Before that test, students reveal beliefs about their assumed individual effort-performance relationship, e.g. how much they believe they need to study in order to achieve their desired grade.
After the test, students guess how well they performed on it. Next, only students in the treatment group receive feedback on their test performance before beliefs about the individual effort-performance relationship are elicited once more. Students then get access to the e-study platform. The platform consists of an overview page with some instructions and hyperlinks to lessons 1 to 11. A higher lesson is unlocked only once all lower lessons have been completed by answering all of their exercises. I monitor each student’s activity on the platform closely. In doing so, not only the number of completed lessons but also the intensity of platform use can serve as an objective measure of study effort in the analyses. Finally, I link the experiment and platform data to the students’ exam grades. I can therefore examine to what extent the feedback treatment shifted students’ believed effort-performance relationship, their exerted study effort, and their final exam performance. In addition, I can determine students’ self-confidence regarding test performance.
Experimental Design Details
The experiment takes place at a German university. Subjects are undergraduate students taking an introductory macroeconomics class which, according to the study plan, takes place in the second semester. Subjects are not informed that they are participating in an experiment. The experiment is framed as the pilot of a new approach at the School of Business and Economics to offer new e-study infrastructure. The e-study platform is preceded by a compulsory test and survey questions, which seem natural in the context of a pilot. Test and questions take about 10-15 minutes. Immediately afterwards, students get access to the e-study platform.

The welcome page is followed by a first block of survey questions. First, I aim to identify different “types” of motivated students by asking them about their goals regarding the exam. The next block of questions aims at eliciting a student’s believed effort-performance relationship and approximating the functional form of a student’s believed required effort for different outcome levels in the exam. I also try to elicit the point on the effort-performance curve where a student sees herself.

On the next page, students start the test as soon as they click on a “start test” button. The test consists of 12 questions, and there is a maximum working time per question before the next question is shown. However, participants can always proceed to the next question before the time has elapsed. The order of the questions is random. The test covers exam material from all lessons. After finishing the test (but before the treatment), students are asked how many questions they believe they have answered correctly. It is plausible that feedback, which is likely to have a debiasing character, has a different effect on students with different levels of ex-ante self-confidence (i.e., over- vs. under-confidence).

On the next page, students in the treatment group are informed about the number of questions they answered correctly in the test. Then, while holding effort constant, I once more elicit students’ beliefs about their individual effort-performance relationship for the exam. It is crucial to hold the level of effort fixed because expected exam performance and planned exerted effort are endogenous. Because only the treatment group received information about their actual performance, I can measure to what extent the treatment shifted beliefs about participants’ marginal returns to education (i.e., their effort-performance relationship) and their subsequently exerted study effort.

Next, students get access to the e-study platform. The platform consists of an overview page with some instructions and hyperlinks to lessons 1 to 11. Mirroring the structure of the corresponding lecture, the content of the online lessons runs through lessons 1 to 11 in chronological order. Lesson 1 is unlocked by default; a higher lesson is unlocked only once all lower lessons have been completed by answering all of their exercises. Unlocked lessons can be repeated an unlimited number of times. I can track each student’s effort on the platform closely. Therefore, not only the number of completed lessons but also the intensity of platform use can serve as an objective measure of study effort in the analyses. Students do not get feedback on their performance in the lessons at any time.
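As a minimal sketch of how per-student effort proxies could be computed from a platform activity log: the file name and column names (platform_log.csv, student_id, lesson, timestamp) are assumptions for illustration; the registration does not specify the log schema.

```python
import pandas as pd

# Hypothetical activity log: one row per exercise submission on the platform.
# Column names are illustrative assumptions, not the actual schema.
log = pd.read_csv("platform_log.csv", parse_dates=["timestamp"])

effort = log.groupby("student_id").agg(
    lessons_touched=("lesson", "nunique"),  # breadth: distinct lessons worked on
    n_submissions=("timestamp", "size"),    # intensity: total exercise submissions
    first_active=("timestamp", "min"),
    last_active=("timestamp", "max"),
)
# Number of distinct days with platform activity, another intensity proxy.
effort["active_days"] = log.groupby("student_id")["timestamp"].apply(
    lambda s: s.dt.date.nunique()
)
```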
Randomization Method
Randomization done in office using Stata. Randomization into treatment and control takes place before students arrive on the platform.
Leading up to the exam, students have the option to hand in solved exercises and receive feedback afterwards. I make sure that the number of students who handed in material is balanced between the treatment and control groups. I also stratify on students’ A-levels grade point average.
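The registration states that randomization was done in Stata, stratified on hand-in status and A-levels GPA. A rough Python sketch of such stratified assignment, under assumed file and variable names (roster.csv, abitur_gpa, handed_in), might look like:

```python
import numpy as np
import pandas as pd

rng = np.random.default_rng(4457)  # arbitrary seed for reproducibility

# Student roster; column names are illustrative assumptions standing in
# for the actual stratification variables.
roster = pd.read_csv("roster.csv")
roster["gpa_bin"] = pd.qcut(roster["abitur_gpa"], q=4, labels=False)

def assign(group: pd.DataFrame) -> pd.Series:
    # Within each stratum, assign half of the students to treatment at
    # random (odd-sized strata get one extra control).
    treated = np.zeros(len(group), dtype=int)
    treated[: len(group) // 2] = 1
    return pd.Series(rng.permutation(treated), index=group.index)

roster["treated"] = roster.groupby(
    ["gpa_bin", "handed_in"], group_keys=False
).apply(assign)
```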
Randomization Unit
Individual Student
Was the treatment clustered?
No

Experiment Characteristics

Sample size: planned number of clusters
400-800 students
Sample size: planned number of observations
400-800 students
Sample size (or number of clusters) by treatment arms
Symmetric sample sizes between treatment and control.
Minimum detectable effect size for main outcomes (accounting for sample design and clustering)
It was not possible to run a pilot. The standard deviations of the outcome variables could therefore not be estimated, and no power calculation is possible.
IRB

Institutional Review Boards (IRBs)

IRB Name
Ethics Commission of the School of Business and Economics of University of Erlangen-Nuremberg
IRB Approval Date
2019-06-28
IRB Approval Number
N/A
Analysis Plan

There is information in this trial unavailable to the public.

Post-Trial

Post Trial Information

Study Withdrawal

There is information in this trial unavailable to the public.

Intervention

Is the intervention completed?
No
Data Collection Complete
Data Publication

Data Publication

Is public data available?
No

Program Files

Program Files
Reports, Papers & Other Materials

Relevant Paper(s)

Reports & Other Materials