Assess@Learning

Last registered on April 28, 2022

Pre-Trial

Trial Information

General Information

Title
Assess@Learning
RCT ID
AEARCTR-0009314
Initial registration date
April 28, 2022

The initial registration date is when the trial was registered; it corresponds to when the registration was submitted to the Registry to be reviewed for publication.

First published
April 28, 2022, 6:20 PM EDT

First published corresponds to when the trial was first made public on the Registry after being reviewed.

Locations

Estonia
Finland
Greece
Portugal
Spain

Primary Investigator

Affiliation
FBK-IRVAPP

Other Primary Investigator(s)

PI Affiliation
FBK-IRVAPP
PI Affiliation
FBK-IRVAPP, Università di Padova
PI Affiliation
FBK-IRVAPP

Additional Trial Information

Status
Ongoing
Start date
2021-03-01
End date
2022-12-31
Secondary IDs
Prior work
This trial does not extend or rely on any prior RCTs.
Abstract
A growing number of studies highlight the potential of formative assessment for improving student achievement and increasing students' self-regulated learning. New technologies offer the possibility to enhance the scope and effective employment of formative assessment practices in schools. However, little is known about which interventions effectively increase the systematic and purposeful adoption of digital formative assessment (DFA) practices at the whole-school level. The A@L trial evaluates a newly developed online toolkit aimed at increasing lower secondary schools' readiness to implement digital formative assessment. The toolkit is made available to a sample of school heads, teachers and students across five European Union Member States (Estonia, Finland, Greece, Portugal and Spain). A total of 208 schools were sampled and recruited across the five countries and randomly assigned to either the treatment or the control group. Participants in the treatment group are invited to use the toolkit between December 2021 and April 2022. The trial evaluates the effects of the intervention on a set of short-term primary outcomes collected on teachers and related to their awareness, attitudes and knowledge about DFA. Secondary outcomes include school heads' awareness, attitudes and intention to support DFA practices; teachers' actual use of DFA; and students' attitudes toward DFA, learning experience, motivation and self-perceived performance. Outcome data are collected through online surveys administered at baseline and at follow-up (May 2022).

External Link(s)

Registration Citation

Citation
Azzolini, Davide et al. 2022. "Assess@Learning." AEA RCT Registry. April 28. https://doi.org/10.1257/rct.9314-1.0
Sponsors & Partners

There is information in this trial that is unavailable to the public.
Experimental Details

Interventions

Intervention(s)
The treatment consists of an online toolkit (A@L Systemic Toolkit), which is made available to school heads, teachers, and students in lower secondary schools, as well as to their parents, according to their different needs. The toolkit is also made available to policy makers in participating countries upon request. The goal of the Toolkit is to show how digital formative assessment (DFA) can be integrated into school, teaching, and learning practices in order to empower students to self-direct their learning.

For each target group (including policy makers), a dedicated landing page introduces what DFA is, why one should use it, and what role that group plays in DFA. These landing pages are meant to be concise and engaging; they include interview videos, animations, infographics, and cartoons. Teachers can find inspiration in the Teaching Scenarios section, a gallery with a wide range of DFA practices, from online quizzes and classroom polling to concept maps and e-portfolios, along with instructional design models for a given subject and situation. School heads are invited to consult the Case Studies section, which illustrates numerous challenges to DFA and the enablers that make it possible. They can also browse the Theory of Change section, designed to help them introduce new approaches to teaching and learning by defining goals and the steps to achieve them. Finally, there are glossaries of tools that support DFA and definitions of many terms related to the topic.
Intervention (Hidden)
Intervention Start Date
2021-12-01
Intervention End Date
2022-05-01

Primary Outcomes

Primary Outcomes (end points)
The following eight teacher outcomes are considered primary outcomes:

Dimension 1 Awareness about DFA
Outcome 1.1 Awareness about assessment approaches and roles in assessment
Outcome 1.2 Conception of assessment

Dimension 2 Attitudes toward DFA
Outcome 2.1 Attitudes toward FA
Outcome 2.2 Attitudes toward DFA

Dimension 3 Knowledge about DFA
Outcome 3.1 Self-reported DFA competence
Outcome 3.2 DFA-specific knowledge (quiz)

Dimension 4 Teachers' Use of DFA
Outcome 4.1 Adoption of (and intention to adopt) FA practices
Outcome 4.2 Use of (and intention to use) digital tools in assessment
Primary Outcomes (explanation)
The outcomes are collected through online questionnaires that include single questions, multiple-item batteries (reduced to indices via principal component analysis), and a quiz (to capture some aspects of teachers’ DFA competence).

The outcomes are also measured in the pre-intervention survey; the baseline measures will be used to improve the statistical precision of the estimates.
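As an illustration of the index construction, the sketch below reduces a hypothetical multi-item battery to a single index by taking the first principal component of the standardized items. The variable names (att1 ... att4) and the sign convention are illustrative assumptions, not taken from the trial's analysis plan.

```python
import numpy as np
import pandas as pd
from sklearn.decomposition import PCA
from sklearn.preprocessing import StandardScaler

def pca_index(df: pd.DataFrame, items: list[str]) -> pd.Series:
    """Reduce a multi-item battery to a single index: the first
    principal component of the standardized items."""
    X = StandardScaler().fit_transform(df[items])
    scores = PCA(n_components=1).fit_transform(X)[:, 0]
    # Flip the sign if needed so the index increases with the items' mean,
    # which makes the direction of the index interpretable.
    if np.corrcoef(scores, X.mean(axis=1))[0, 1] < 0:
        scores = -scores
    return pd.Series(scores, index=df.index, name="pca_index")

# Hypothetical battery of Likert-scale items (1-5), e.g. attitudes toward DFA.
rng = np.random.default_rng(0)
teachers = pd.DataFrame(rng.integers(1, 6, size=(100, 4)),
                        columns=["att1", "att2", "att3", "att4"])
teachers["index"] = pca_index(teachers, ["att1", "att2", "att3", "att4"])
```

Taking the first component of standardized items is the usual convention for such indices; the sign flip only ensures that higher index values correspond to higher item responses.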

Secondary Outcomes

Secondary Outcomes (end points)
The following secondary outcomes are analyzed for school heads, teachers, and students:

School head outcomes (awareness about assessment approaches and roles in assessment; recognition of DFA potential; presence of a DFA strategy in the school; professional development opportunities in DFA offered by the school)

Teachers' participation (and intention to participate) in training on DFA

Student outcomes (attitudes toward DFA; attitudes toward FA; learning experience; study motivation; self-perceived performance)
Secondary Outcomes (explanation)
The outcomes are collected through online questionnaires that include single questions and multiple-item batteries (reduced to indices via principal component analysis).

Experimental Design

Experimental Design
The trial follows a parallel two-arm design.

Schools assigned to the treatment group receive access to the A@L Toolkit, while those assigned to the control condition do not receive access. However, the Toolkit will be accessible to them after the follow-up survey has taken place.

The unit of randomization is the school. The allocation ratio is .49 (T = 103; C = 105).

Experimental Design Details
Randomization Method
Randomization was performed using the statistical package Stata.

To improve statistical precision, a stratified randomization design is implemented. Two variables, in addition to country, are used in the stratification. Given that our primary focus is on teachers, we identified the variables most strongly correlated with our outcomes of interest at the teacher level: the share of teachers who use FA, as reported by the principal, and the number of devices available to students and teachers relative to the school's population of students and teachers.

Hence, randomization was performed within four strata in each country: above/below the median number of digital devices, crossed with above/below the median share of teachers who practice FA. Median values are calculated within countries.
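The registration states that the randomization was implemented in Stata; the Python sketch below reproduces the same stratification logic under assumed column names (country, devices_per_capita, share_teachers_fa), which are hypothetical. Within each country x above/below-median stratum, schools are sorted in a random order and assigned alternately.

```python
import numpy as np
import pandas as pd

def stratified_assignment(schools: pd.DataFrame, seed: int = 2021) -> pd.DataFrame:
    """Assign schools to treatment/control within strata defined by
    country x (above/below median devices) x (above/below median FA share),
    with medians computed within each country as in the registration."""
    df = schools.copy()
    for var in ["devices_per_capita", "share_teachers_fa"]:
        med = df.groupby("country")[var].transform("median")
        df[f"hi_{var}"] = (df[var] > med).astype(int)
    strata = ["country", "hi_devices_per_capita", "hi_share_teachers_fa"]

    rng = np.random.default_rng(seed)
    df["u"] = rng.random(len(df))  # random sort key
    df = df.sort_values(strata + ["u"])
    # Alternate T/C down the random order within each stratum, so the two
    # arms are balanced to within one school per stratum.
    df["arm"] = (df.groupby(strata).cumcount() % 2).map({0: "T", 1: "C"})
    return df.drop(columns="u")
```

Alternating assignment within randomly ordered strata yields a near 50/50 split overall, consistent with the realized allocation of 103 treatment and 105 control schools.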
Randomization Unit
The unit of randomization is the school.
Was the treatment clustered?
Yes

Experiment Characteristics

Sample size: planned number of clusters
208 schools, 188 with at least one teacher, 174 with at least one student.
Sample size: planned number of observations
208 school heads; 894 teachers in 188 schools; 2,589 students in 174 schools
Sample size (or number of clusters) by treatment arms
T = 103; C = 105
Minimum detectable effect size for main outcomes (accounting for sample design and clustering)
The minimum detectable effect size for our primary outcomes on teachers is computed using the “3ie Sample size and minimum detectable effect calculator” (Djimeu & Houndolo 2016). The parameters used to calculate the minimum detectable effects of the experiment are: a) statistical significance level (p-value) = .05; b) statistical power = 80%; c) proportion of randomization units assigned to treatment = 49%; d) an intra-cluster correlation of 0.1 estimated on baseline data. The calculation also accounts for clustering (188 schools with at least one teacher enrolled in the project, with an average number of teachers of 4) and for a proportion of variance in the outcome explained by covariates (R-squared) of .3. This calculation assumes complete compliance with random assignment and no attrition, and it does not correct for multiple comparisons. The resulting minimum detectable effect size is .184 SD for teacher-level outcomes. This value is obtained using a two-tailed test. If a one-tailed test is implemented (which would be reasonable, as there is no reason to expect a negative effect of the intervention), the resulting minimum detectable effect size is lower (.163 SD).
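The registered values can be approximately reproduced with the standard normal-approximation MDES formula for cluster-randomized designs, using a design-effect correction for clustering. This is a sketch rather than the 3ie calculator itself: it assumes the covariate R-squared applies to the full outcome variance and uses the realized average cluster size (894 teachers / 188 schools ≈ 4.8), assumptions under which it returns .184 (two-tailed) and .163 (one-tailed).

```python
from scipy.stats import norm

def mdes(n_total, n_clusters, p_treat, icc, r2,
         alpha=0.05, power=0.80, two_tailed=True):
    """Minimum detectable effect size (in SD units) for a cluster-randomized
    design, using the normal approximation and a design-effect correction."""
    z_alpha = norm.ppf(1 - alpha / 2) if two_tailed else norm.ppf(1 - alpha)
    z_beta = norm.ppf(power)
    m = n_total / n_clusters                 # average cluster size
    deff = 1 + (m - 1) * icc                 # design effect for clustering
    var = (1 - r2) * deff / (p_treat * (1 - p_treat) * n_total)
    return (z_alpha + z_beta) * var ** 0.5

# Parameters from the registration: 894 teachers in 188 schools, 49% treated,
# ICC = 0.1, covariate R-squared = 0.3.
print(round(mdes(894, 188, 0.49, 0.1, 0.3), 3))                    # ~0.184
print(round(mdes(894, 188, 0.49, 0.1, 0.3, two_tailed=False), 3))  # ~0.163
```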
IRB

Institutional Review Boards (IRBs)

IRB Name
IRB Approval Date
IRB Approval Number
Analysis Plan

There is information in this trial that is unavailable to the public.

Post-Trial

Post Trial Information

Study Withdrawal

There is information in this trial that is unavailable to the public.

Intervention

Is the intervention completed?
No
Data Collection Complete
Data Publication

Is public data available?
No

Program Files
Reports, Papers & Other Materials

Relevant Paper(s)

Reports & Other Materials