
The Impact of Online Lectures on Student Learning
Last registered on October 26, 2020

Pre-Trial

Trial Information
General Information
Title
The Impact of Online Lectures on Student Learning
RCT ID
AEARCTR-0006538
Initial registration date
October 22, 2020
Last updated
October 26, 2020 8:20 AM EDT
Location(s)

This section is unavailable to the public.
Primary Investigator
Affiliation
University of Zurich
Other Primary Investigator(s)
Additional Trial Information
Status
Ongoing
Start date
2020-09-14
End date
2021-04-01
Secondary IDs
Abstract
Digitization of higher education is fundamentally changing students’ academic experience and their learning process. Traditional live lectures are increasingly being replaced with online lectures. We conduct a field experiment at a major research university where students are randomly assigned to live lectures versus watching these exact same lectures online. We estimate whether assignment to online lecture affects students’ exam performance and academic experience.
External Link(s)
Registration Citation
Citation
Zölitz, Ulf. 2020. "The Impact of Online Lectures on Student Learning." AEA RCT Registry. October 26. https://doi.org/10.1257/rct.6538-1.0.
Experimental Details
Interventions
Intervention(s)
In the exceptional circumstances prompted by COVID-19, universities around the world have massively increased the number of courses taught online. Even before 2020, the digitization of higher education began to transform the academic experience and learning process for university students. However, very limited research exists on how students are impacted by the shift to online lectures.
In this paper, we investigate how the introduction of online lectures affects student performance and university experience. We study a European research university that was forced to introduce online lectures as part of social distancing measures in 2020. In this setting, we assign students to a rotating attendance schedule that determines whether students attend live lectures or watch the same lectures online. Assignment to live or online lectures is determined by a random number – the last digit of the student identification number.
We aim to answer the following research questions:

1. Do students perform better on exam questions when the content of the question was covered in an online lecture compared to a traditional in-person lecture? (Exam performance)
2. Do online lectures affect subsequent attendance, course dropout, course passing, and student grades? (Academic trajectory)
3. Do online lectures affect students’ social interactions, satisfaction with different course features, study habits, and overall learning experience? (Subjective outcomes)
4. Does assignment to online attendance affect study dropout, completion of the first year, elective course choices, and major switching? (Longer run outcomes)
Intervention Start Date
2020-09-14
Intervention End Date
2020-12-24
Primary Outcomes
Primary Outcomes (end points)
- Exam question answered correctly
- Subsequent lecture attendance
- Course dropout
- Course passing
- Student grades
Primary Outcomes (explanation)
The primary outcome of our exam question analysis is whether a student answered a given multiple choice question correctly. We link each exam question to lecture content to determine whether a given student-course-question observation is treated. This will allow us to directly estimate the impact of online lecture assignment on the probability of successfully answering a specific exam question.
The primary outcomes for our student-course level analysis are subsequent lecture attendance, course dropout, course passing, and student grades. These outcomes capture students’ academic trajectory. Course dropout and passing are binary outcomes, indicating if a student remained in the course or passed the course. The overall course grade is a continuous measure.
All primary outcomes will be measured through administrative data.
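The question-level treatment indicator described above can be sketched as follows. The function and variable names are hypothetical and purely illustrative; they are not the authors' actual code.

```python
# Hypothetical sketch of the student-course-question treatment indicator:
# an observation is "treated" when the lecture covering that exam question
# was one the student was assigned to watch online.
def is_treated(question_lecture_id: int, online_lecture_ids: set) -> bool:
    """True if the exam question's source lecture was assigned online."""
    return question_lecture_id in online_lecture_ids

# Example: a student assigned to watch lectures 2 and 4 online is
# "treated" only on questions drawn from those lectures.
online = {2, 4}
print(is_treated(2, online), is_treated(3, online))  # True False
```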
Secondary Outcomes
Secondary Outcomes (end points)
- Student subjective outcomes
- Study dropout
- Completion of first year
- Elective course choices
- Major switching
Secondary Outcomes (explanation)
Secondary outcomes are student subjective outcomes as well as longer run educational outcomes. Subjective outcomes are students' reported social interactions, satisfaction with different course features, study habits, and overall learning experience. These outcomes will be measured through an endline survey.
Longer run educational outcomes will be study dropout, completion of the first year, elective course choices, and major switching. These outcomes will be measured through administrative data – if we receive the necessary data access.
Experimental Design
Experimental Design
Our field experiment takes place at a European research university. All first-year bachelor's and master's students are randomly assigned to five attendance groups. The assignment is based on a random number – the last digit of the student ID number. Following a rotating block schedule, each attendance group is permitted to attend some lectures in person and others online via Zoom. More specifically, each attendance group is allowed to come to the university two days per week. The randomization into attendance groups implies that, for any given lecture, some students receive the content in person and others receive it via an online lecture. Our within-subject design means that a given student attends some lectures online and others in person. The experimental design therefore creates exogenous variation in whether a given lecture was attended online and in the share of lectures a student attended online.
By linking lecture content to exam questions, our design allows us to compare student performance based on whether they attended a lecture online or in person. In other words, we will evaluate the impact of online lecture assignment on student performance through exam questions covering material taught in the corresponding lecture.
Experimental Design Details
Not available
Randomization Method
All first-year students are randomly assigned into five attendance groups based on the last digit of their student ID number. Each week, each group can come to university for two full days, according to a rotating block schedule. This randomization implies that for any given lecture, some students receive the content in person and others receive it via an online lecture.
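A minimal sketch of this assignment rule follows. The registry states only that the last digit of the student ID determines the group, so the specific digit-to-group mapping below is our assumption for illustration:

```python
# Illustrative assignment of students to five attendance groups based on
# the last digit of the student ID. The mapping used here (digits 0-1 ->
# group 0, 2-3 -> group 1, ..., 8-9 -> group 4) is an assumption; the
# registry specifies only that the last digit determines the group.
def attendance_group(student_id: int) -> int:
    last_digit = student_id % 10
    return last_digit // 2  # two digits per group -> groups 0..4

# Example: IDs ending in 0 or 1 share a group; IDs ending in 8 or 9
# form another.
for sid in (10230, 10231, 10239):
    print(sid, "-> group", attendance_group(sid))
```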
Randomization Unit
Randomization into live and online lectures takes place within students. A given student will attend some lectures live and some online.
Was the treatment clustered?
No
Experiment Characteristics
Sample size: planned number of clusters
About 1,000 students.
Sample size: planned number of observations
We expect to observe at least 1,000 students. For each of these students we expect to observe at least 50 responses to different multiple choice exam questions resulting in an overall sample of about 50,000 observations.
Sample size (or number of clusters) by treatment arms
We expect a sample size of 50,000 exam questions: about 10,000 question-level observations in the control (in-person) group and 40,000 in the treated (online) group.
Minimum detectable effect size for main outcomes (accounting for sample design and clustering)
A simple power calculation suggests that we have sufficient statistical power to detect fairly small effect sizes. We assume a sample of 50,000 exam questions, power of 0.8, and a treatment arm comprising 80% of the sample. For the binary outcome, we assume a mean of 0.7 correct answers and a standard deviation of 0.45. We conduct a two-sample means test to calculate the minimum detectable effect (MDE) of a treatment significant at the five percent level, where δ refers to the difference between the in-person group mean and the online group mean for a given exam question. With a power of 0.8, we can detect differences between the treatment and control groups larger than 0.0141. This corresponds to a minimum detectable treatment effect of 0.031 standard deviations.
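The stated MDE can be reproduced with a short calculation. This sketch uses only the assumptions given above (40,000 treated and 10,000 control observations, SD of 0.45, two-sided 5% test, power 0.8); the function name is ours:

```python
# Reproduce the minimum detectable effect (MDE) for a two-sample means
# test, using the parameters stated in the registration.
from statistics import NormalDist


def mde_two_sample(n_treat, n_control, sd, alpha=0.05, power=0.8):
    """MDE = (z_{1-alpha/2} + z_{power}) * SE of the difference in means."""
    z = NormalDist()
    z_alpha = z.inv_cdf(1 - alpha / 2)  # critical value, two-sided test
    z_beta = z.inv_cdf(power)           # power requirement
    se = sd * (1 / n_treat + 1 / n_control) ** 0.5
    return (z_alpha + z_beta) * se


mde = mde_two_sample(n_treat=40_000, n_control=10_000, sd=0.45)
print(round(mde, 4))         # 0.0141, matching the registered MDE
print(round(mde / 0.45, 3))  # 0.031 standard deviations
```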
IRB
INSTITUTIONAL REVIEW BOARDS (IRBs)
IRB Name
IRB Approval Date
IRB Approval Number
Analysis Plan

There are documents in this trial unavailable to the public.