Beyond Moving Pictures: The Added Value of Interactivity in Learning Economics

Last registered on April 29, 2026

Pre-Trial

Trial Information

General Information

Title
Beyond Moving Pictures: The Added Value of Interactivity in Learning Economics
RCT ID
AEARCTR-0018224
Initial registration date
April 26, 2026

First published
April 29, 2026, 3:45 PM EDT

Locations

Location information for this trial is not publicly available.

Primary Investigator

Luthfi Bin Muhsin Balbeid
Affiliation
Monash University

Other Primary Investigator(s)

Additional Trial Information

Status
Ongoing
Start date
2026-02-24
End date
2026-11-06
Secondary IDs
Prior work
This trial does not extend or rely on any prior RCTs.
Abstract
This study examines how different types of digital learning tools affect students’ understanding of introductory economics. In many courses, students learn using videos that show economic graphs and concepts. Newer tools allow students to interact with these graphs directly by adjusting values and seeing how outcomes change. However, it is not yet clear whether this interactivity improves learning or whether watching the same content in video form is equally effective.

The goal of the study is to determine whether interactive learning tools improve student outcomes compared with traditional video resources, and whether encouraging students to use these tools leads to greater engagement and performance. The findings may help inform how digital resources are designed and used in economics education.
External Link(s)

Registration Citation

Citation
Balbeid, Luthfi Bin Muhsin. 2026. "Beyond Moving Pictures: The Added Value of Interactivity in Learning Economics." AEA RCT Registry. April 29. https://doi.org/10.1257/rct.18224-1.0
Experimental Details

Interventions

Intervention(s)
Students in an introductory microeconomics course at a large public university in Australia are offered supplementary online materials that present the same economic content in two formats: an interactive version in which students change parameters and see graphs update, and a video version showing the same graphs and scenarios without student control. Content is held constant across formats. Materials are released in step with teaching across the semester. Students are also randomised so that some receive standard weekly course communications and others receive communications that more prominently direct them to their assigned version of the tool. All students retain access to their assigned format for the semester; nothing is withheld for experimental purposes. A general announcement on the learning platform early in the semester ensures everyone can find the resource.
Intervention Start Date
2026-03-16
Intervention End Date
2026-11-06

Primary Outcomes

Primary Outcomes (end points)
Official in-class test scores and the final examination score, each taken from university course records, will be analysed on each assessment's recorded scale.
Primary Outcomes (explanation)
These endpoints are administrative. Any transformations for robustness or secondary analysis will be described in the analysis documentation.

The pre-registered hypotheses, in priority order, are as follows.

Hypothesis 1 (format effect on learning): students assigned to the interactive format achieve higher assessment scores than students assigned to the video format. This is the central policy question: whether assigning interactive visualisations rather than equivalent video demonstrations improves learning outcomes on average among those offered each format.

Hypothesis 2 (usage intensity): greater usage intensity improves assessment outcomes, with encouragement assignment used as an instrument for usage. This tests whether the behavioural mechanism matters: first, how much encouragement increases usage (first stage), then how much usage affects outcomes (second stage).

Hypothesis 3 (interaction of modality and usage): the effect of usage intensity on learning outcomes is larger for students assigned to the interactive format than for students assigned to the video format. This tests whether the marginal return to engagement differs between the two modalities.
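For concreteness, these hypotheses map onto estimating equations of the following schematic form. The notation is illustrative rather than taken from the registration: Y_i is an assessment score, D_i indicates assignment to the interactive format, Z_i indicates encouragement assignment, U_i is the usage measure, and gamma_s(i) is a workshop (stratum) fixed effect.

```latex
% Schematic estimating equations; notation illustrative, not from the registration.
% H1 (intention to treat, format effect):
Y_i = \alpha_1 + \beta_1 D_i + \gamma_{s(i)} + \varepsilon_i
% H2 (2SLS; usage instrumented by encouragement):
U_i = \pi_0 + \pi_1 Z_i + \delta_{s(i)} + \nu_i              % first stage
Y_i = \alpha_2 + \beta_2 \hat{U}_i + \gamma_{s(i)} + e_i     % second stage
% H3 (interaction of modality and usage):
Y_i = \alpha_3 + \beta_3 U_i + \beta_4 D_i + \beta_5 (D_i \times U_i) + \gamma_{s(i)} + \varepsilon_i
```

Under this schematic, Hypothesis 1 concerns beta_1, Hypothesis 2 concerns beta_2 (with pi_1 the first-stage effect of encouragement on usage), and Hypothesis 3 concerns beta_5.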

Secondary Outcomes

Secondary Outcomes (end points)
Engagement with the online materials, summarised from automated usage data: for example, time spent, frequency of use, breadth of topics accessed, and simple interaction or playback measures for each format. These support the interpretation of mechanisms and uptake; learning is judged on the administrative assessments listed as primary outcomes.
Secondary Outcomes (explanation)
Usage measures are derived from system logs according to rules specified in the pre-analysis plan (sessions and format-specific events). They describe observed behaviour, not latent traits. Pre-randomisation regressors are treated as controls, not outcomes; covariates with high rates of missingness are dropped as specified.

Where usage is treated as endogenous, encouragement assignment serves as the instrument. The main specification uses log(1 + total minutes); other predefined aggregates (e.g., sessions, modules visited, revisit-related summaries) or cumulative usage up to each assessment date may appear in pre-specified robustness checks or extensions, each with the same instrument. First-stage strength is checked before emphasising instrumental estimates, as in the sketch below.
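A minimal sketch of this instrumental-variables step, not the study's actual code. The column names (score, minutes, encouraged, workshop) are hypothetical stand-ins for the administrative and log data described above.

```python
# Illustrative 2SLS for the usage-intensity hypothesis; column names are
# hypothetical, and the registered specification is defined in the PAP.
import numpy as np
import pandas as pd
from linearmodels.iv import IV2SLS

def usage_iv(df: pd.DataFrame):
    # Registered main usage aggregate: log(1 + total minutes).
    log_usage = np.log1p(df["minutes"]).rename("log_usage")

    # Workshop-stratum dummies plus a constant as exogenous controls.
    exog = pd.get_dummies(df["workshop"], prefix="ws", drop_first=True, dtype=float)
    exog["const"] = 1.0

    # Second stage: score on instrumented usage;
    # first stage: usage on encouragement assignment.
    res = IV2SLS(df["score"], exog, log_usage, df["encouraged"]).fit(cov_type="robust")

    # Inspect first-stage strength before emphasising the IV estimate.
    print(res.first_stage)
    return res
```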

Experimental Design

Experimental Design
Randomised field experiment in a large credit-bearing introductory microeconomics unit at a public university in Australia, conducted over two semesters (Semester 1 and Semester 2, 2026). Each student is randomly assigned to an interactive versus a video format and, independently, to encouraged versus standard messaging about the tool, yielding four groups. Assignment is stratified by teaching group (workshop). Randomisation is performed by computer in Stata on the weekend at the end of Week 2, after the deadline for withdrawing without academic penalty, so the assigned roster aligns with students who remain enrolled at that census point; assignments are then fixed for the remainder of each semester. The same design, timing structure, and procedures are implemented independently in each semester.

The supplementary tool and the weekly encouragement contrast begin from Week 3 (see intervention text). The main analysis compares outcomes by assigned format (intention to treat), regardless of actual use. Encouragement is also used, as pre-specified, to study how usage relates to outcomes for students whose use is shifted by the messages. Pre-specified heterogeneity includes prior mathematical preparation and gender where administrative data allow.
Experimental Design Details
Not available
Randomization Method
Randomisation is done by computer using Stata on the weekend at the end of Week 2 (after the last date to withdraw without academic penalty), with stratification by workshop group.
Randomization Unit
Individual students. Each student receives two independent random assignments: format (interactive or video) and encouragement (encouraged or control). Workshop groups are strata, not the unit of assignment.
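A minimal sketch of this stratified, two-factor individual assignment, for illustration only: the registered procedure runs in Stata, and the roster columns (student_id, workshop) and the seed here are hypothetical.

```python
# Illustrative workshop-stratified factorial assignment (the registered
# randomisation is performed in Stata); column names and seed are hypothetical.
import numpy as np
import pandas as pd

def assign_arms(roster: pd.DataFrame, seed: int = 12345) -> pd.DataFrame:
    rng = np.random.default_rng(seed)
    pieces = []
    for _, grp in roster.groupby("workshop"):
        # Format: shuffle within the workshop stratum, then split roughly 50/50.
        g = grp.sample(frac=1, random_state=rng)
        g["format"] = np.where(np.arange(len(g)) < len(g) // 2, "interactive", "video")
        # Encouragement: an independent shuffle and a second roughly 50/50 split.
        g = g.sample(frac=1, random_state=rng)
        g["encouraged"] = (np.arange(len(g)) < len(g) // 2).astype(int)
        pieces.append(g)
    return pd.concat(pieces).sort_values("student_id").reset_index(drop=True)
```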
Was the treatment clustered?
No

Experiment Characteristics

Sample size: planned number of clusters
N/A
Sample size: planned number of observations
Approximately 1,800 to 2,200 students enrolled across the two semesters (consistent with recent cohorts).
Sample size (or number of clusters) by treatment arms
Four factorial cells from independent randomisation of format (interactive vs video, roughly 50/50) and encouragement (encouraged vs control, roughly 50/50). Intended allocation: about one quarter of the analysis sample per cell (e.g. roughly 450–550 students per cell if total N is about 1,800–2,200), subject to realised enrolment and the Week 2 sample definition.
Minimum detectable effect size for main outcomes (accounting for sample design and clustering)
Outcome unit: assessment scores are measured on each component's official raw scale (that is, the points recorded for the unit). For the purpose of illustrating statistical power, minimum detectable effects (MDEs) below are expressed in standard deviation units.

Main continuous outcomes (format and encouragement effects, intention to treat): the power calculations follow the pre-analysis plan: a two-sided test at the 5 per cent significance level, 80 per cent power, a balanced 2x2 factorial design, and individual-level randomisation. Main effects are estimated by pooling across the other factor, so each arm contains roughly half of the total sample. Without covariates, the minimum detectable difference between arms is about 0.13 standard deviations at total N = 1,800, 0.13 at N = 2,000, and 0.12 at N = 2,200.

Interaction (format x encouragement): the interaction is estimated using roughly one quarter of the sample in each cell, so the corresponding MDEs are about 0.26 at N = 1,800, 0.25 at N = 2,000, and 0.24 at N = 2,200.

Covariate adjustment: including pre-randomisation covariates (such as baseline assessment performance) should reduce residual variance. If these controls explain around 20 to 30 per cent of the variation in outcomes, the MDEs fall proportionally, implying reductions of roughly 10 to 15 per cent. This would place the main-effect MDEs at around 0.10 to 0.12 and the interaction MDEs at around 0.21 to 0.23.

Usage (instrumental): if encouragement shifts the usage measure (for example, log of one plus total minutes) by around 0.3 standard deviations in the first stage, the minimum detectable local average effect of usage on assessment scores is roughly three times the intention-to-treat MDE expressed in the same units. With main-effect MDEs around 0.12 to 0.13, this corresponds to about 0.35 to 0.40 standard deviations. A weaker first stage would increase this considerably and may leave the instrumental estimates imprecise. The MDE arithmetic is reproduced in the sketch below.
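The MDE figures above follow from the standard two-arm formula; a minimal check under exactly the design parameters stated in this section:

```python
# Reproduces the MDE figures above: two-sided test at the 5 per cent level,
# 80 per cent power, balanced arms, individual randomisation, SD units.
from scipy.stats import norm

def mde(n_per_arm: int, alpha: float = 0.05, power: float = 0.80) -> float:
    multiplier = norm.ppf(1 - alpha / 2) + norm.ppf(power)  # approx. 2.80
    return multiplier * (2 / n_per_arm) ** 0.5

for total in (1800, 2000, 2200):
    main = mde(total // 2)                # main effects pool across the other factor
    interaction = 2 * main                # a 2x2 interaction uses quarter-size cells
    adjusted = main * (1 - 0.25) ** 0.5   # controls explaining ~25% of variance
    print(f"N={total}: main {main:.2f}, interaction {interaction:.2f}, "
          f"covariate-adjusted main {adjusted:.2f} (SD units)")
```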
IRB

Institutional Review Boards (IRBs)

IRB Name
Monash University Human Research Ethics Committee (MUHREC)
IRB Approval Date
2026-02-13
IRB Approval Number
50712