Arcipelago Educativo

Last registered on August 25, 2022

Pre-Trial

Trial Information

General Information

Title
Arcipelago Educativo
RCT ID
AEARCTR-0009958
Initial registration date
August 24, 2022


First published
August 25, 2022, 2:11 PM EDT


Locations

Region

Primary Investigator

Affiliation
FBK-IRVAPP

Other Primary Investigator(s)

PI Affiliation
FBK-IRVAPP
PI Affiliation
Fondazione Agnelli
PI Affiliation
Fondazione Agnelli
PI Affiliation
University of Padova, FBK-IRVAPP
PI Affiliation
FBK-IRVAPP

Additional Trial Information

Status
Ongoing
Start date
2022-05-02
End date
2022-11-30
Secondary IDs
I21, I24
Prior work
This trial does not extend or rely on any prior RCTs.
Abstract
In an effort to attenuate the learning loss that is likely to occur during summer vacation and to sustain the learning achievements of the most vulnerable students, summer learning programs provide students with targeted learning support between two school years. Experimental evidence backing these programs comes mainly from studies conducted in the United States. Our study contributes to this literature by evaluating the 2022 edition of a summer learning program carried out in Italy. The program, called Arcipelago Educativo (trad. Educational Archipelago), is run by Save the Children Italia with the support of Fondazione Agnelli. It consists of two components, educational labs (88 hours) and small-group tutoring (12 hours), delivered from the second half of June through the first week of September 2022 in 11 sites located in 10 Italian cities in the North (Milano, Torino, Marghera), Center (Aprilia, Ancona) and South (Bari, Rosarno, Napoli, Palermo). Between May and June 2022, 1,035 students (of which 423 primary and 612 secondary students) were referred by their schools as in need of learning support and agreed to participate in the study. After filling out a baseline survey, the students were randomly assigned to either the treatment or the control group. While the treatment group received access to the summer learning program described above, control group students received access to a learning support program offered in fall, after the beginning of the 2022/2023 school year. The trial evaluates the effects of the program on student achievement, as measured through four standardized tests on text comprehension, grammar, arithmetic and geometry. The study also investigates the effects of the program on secondary outcomes, such as non-cognitive dimensions related to learning. Outcome data are collected through surveys administered at baseline and at follow-up (beginning of September 2022) in controlled environments.
External Link(s)

Registration Citation

Citation
Azzolini, Davide et al. 2022. "Arcipelago Educativo." AEA RCT Registry. August 25. https://doi.org/10.1257/rct.9958-1.0
Experimental Details

Interventions

Intervention(s)
The 11 archipelagos were physically located in dedicated educational places known to the students, in most cases school premises. The activities followed a daily schedule, starting on June 20 and lasting until September 9, with a break in August.

Despite slightly different schedules across sites (three sites started on June 27, and one did not offer any intervention after August), all students assigned to an archipelago received two types of intervention, following a centrally defined protocol: educational labs (88 hours) and small-group tutoring (12 hours).

The educational labs, also called isole (trad. islands), were organized in groups of 10 students. The composition of the groups was established by the local educational staff, with the aim of supporting students’ learning and social development through peer education and cooperative learning. Whenever possible, the islands were built to ensure internal homogeneity in terms of age and educational level attended. The labs followed a ‘learning by playing’ methodology. Children were also able to participate in two educational trips in the area.

The small-group personalized support consisted of a tutoring activity for groups of two students and focused on the subjects or learning domains in which they reported major gaps (e.g., math skills, literacy skills, Italian L2). The study-pairs were constructed by matching children with similar specific educational needs.
Intervention Start Date
2022-06-20
Intervention End Date
2022-09-09

Primary Outcomes

Primary Outcomes (end points)
Using tests specific to each educational level, we observe cognitive achievement in the following areas:
reading (comprehension of written texts);
grammar;
arithmetic;
geometry.
The tests are administered under the supervision of the local educational staff involved in the project. In 10 experimental sites, the follow-up tests are administered in the last week of the summer program for treated students and the week after (i.e., the first week of the new school year) for control group students and treated students who dropped out. In one experimental site, the follow-up tests are administered jointly for all students during the first two weeks of school. In both cases, the educational spaces and the staff involved are the same, so as not to alter the environmental conditions, which may affect the way students respond to the tests. Students with ‘special needs’ (as reported by school teachers) take slightly simplified versions of the tests.
Primary Outcomes (explanation)

Secondary Outcomes

Secondary Outcomes (end points)
We use a self-reported questionnaire with validated scales to observe non-cognitive dimensions related to learning:
Ability self-concepts and Intrinsic values (Math, Language);
Subjective task values (all subjects);
Grit: Consistency of interest, Perseverance of effort;
Adaptive learning: Mastery goal orientation, Performance-approach goal orientation, Performance-avoid goal orientation;
Intrinsic and extrinsic motivation towards studying;
Resilience.
The non-cognitive tests are administered following the same schedule and protocol as the cognitive tests.
Secondary Outcomes (explanation)

Experimental Design

Experimental Design
At the beginning of May 2022, teachers from 17 schools in the ten cities referred 1,634 primary and secondary students in need of learning support to participate in the intervention. Referrals were made using a common grid covering relevant student characteristics such as school grades, school absences, educational needs and a qualitative assessment of their material, relational and educational deprivation.

1,035 of these students and their parents agreed to participate in the study (including the random allocation rule assigning them to either the summer or the fall edition of the program) and signed a written informed consent document covering data collection and processing.

These students were randomly allocated to the treatment and the control group. Students assigned to the treatment group received an invitation to take part in the summer program, while control group students were automatically entitled to take part in the fall edition of the program.

To avoid assigning siblings to different treatment conditions, and the related risk of contamination of the control group, randomization was performed at the household level: siblings are therefore either all in the treatment group or all in the control group.

Randomization was performed within strata, defined within each experimental site by the specific school institution and the educational level (primary or secondary) attended by students. In Italy, these two factors do not coincide because primary and secondary education are provided by the same so-called “Istituti Comprensivi” (trad. comprehensive schools).

Overall, the allocation ratio is .60 (cluster level: T = 579, C = 380). Randomization was performed so as to keep this ratio constant across strata. Because of the small sample size of some strata and the need to ensure that ‘islands’ of 10 students each could be formed, the actual ratio may differ across strata.

To mitigate the problem of refusals to participate in the summer program, a list of ‘reserves’ (15%) was randomly drawn from the control group, taking into account the educational level (i.e., reflecting the proportions of subjects belonging to the two levels). In total, the identified reserves amounted to 61 students. This number was increased by 33 units in the last days before the launch of the interventions, to meet the demands of local operators in some experimental sites, which were experiencing unexpectedly high rates of refusals.
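As a rough illustration of the design described above, the following sketch performs stratified, family-level randomization with a fixed allocation ratio and draws a reserve list from the control group within each stratum. This is a hypothetical Python reconstruction, not the trial's code: the registered randomization was performed in R, and the function name, input format and `"C-reserve"` label are all invented for illustration.

```python
import random
from collections import defaultdict

def randomize(families, ratio=0.60, reserve_share=0.15, seed=2022):
    """Stratified family-level randomization with a reserve list (sketch).

    families: list of (family_id, stratum) pairs, where the stratum
    encodes school institution x educational level. Returns a dict
    mapping family_id -> "T", "C", or "C-reserve".
    """
    rng = random.Random(seed)  # fixed seed for reproducibility
    by_stratum = defaultdict(list)
    for fam_id, stratum in families:
        by_stratum[stratum].append(fam_id)

    assignment = {}
    for fam_ids in by_stratum.values():
        rng.shuffle(fam_ids)
        # same target allocation ratio within every stratum
        n_treat = round(len(fam_ids) * ratio)
        controls = fam_ids[n_treat:]
        # reserves drawn from the control group within each stratum,
        # which keeps the educational-level proportions by construction
        n_reserve = round(len(controls) * reserve_share)
        for fam_id in fam_ids[:n_treat]:
            assignment[fam_id] = "T"
        for fam_id in controls[:n_reserve]:
            assignment[fam_id] = "C-reserve"  # may replace treated refusals
        for fam_id in controls[n_reserve:]:
            assignment[fam_id] = "C"
    return assignment
```

Because the unit assigned is the family, siblings automatically share the same arm, and rounding within small strata is exactly why the realized ratio can deviate from .60 in some strata.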
Experimental Design Details
Randomization Method
Randomization is done using the statistical package R. To improve statistical precision, a stratified randomization design is implemented. Two variables are used in the stratified randomization procedure: the specific school institution and the educational level (primary or secondary) attended by students.
Randomization Unit
The unit of randomization is the family, hence siblings are either all in the treatment or in the control group.
Was the treatment clustered?
Yes

Experiment Characteristics

Sample size: planned number of clusters
Clusters (families): 961 (T = 580, C = 381)
Sample size: planned number of observations
Students: 1,035 (T = 627; C= 408)
Sample size (or number of clusters) by treatment arms
Treatment = 580 families; Control = 381 families
Minimum detectable effect size for main outcomes (accounting for sample design and clustering)
Power analysis is conducted both in terms of minimum detectable effect (MDE) and minimum detectable effect size (MDES). The calculations are performed using the “3ie Sample size and minimum detectable effect calculator” (Djimeu & Houndolo 2016) and PowerUp! (Dong and Maynard 2013). The assumptions are: a) statistical significance level (p-value) = .05; b) statistical power = 80%; c) proportion of randomization units assigned to treatment = 60%. The resulting MDE(S) for the four primary (continuous) outcomes are: reading .508 (.19 SD); grammar .584 (.19 SD); arithmetic .526 (.19 SD); geometry .379 (.19 SD). The MDE estimates are qualitatively the same whether we consider a two-level cluster randomized trial with individual-level outcomes or a single-level trial with continuous outcome variables (where the family is the unit of analysis).

The above estimates assume zero attrition; attrition would increase them. At the moment it is not possible to predict how large attrition will be: assuming a response rate as low as 60%, the resulting MDES are around .24 SD. The estimates also do not account for the fact that treatment effects will be estimated via regression adjustment including randomization strata, covariates and pre-treatment outcomes. Assuming an R2 of .4 obtained by modeling randomization strata, covariates and pre-treatment outcomes, the resulting MDES are .15 SD for reading, grammar and arithmetic and .16 SD for geometry.

Beyond estimating the intent-to-treat and local average treatment effects on these four outcomes, the analysis will also explore heterogeneity of the effects by key factors such as educational level and the pre-treatment level of the outcome variable. Multiple hypothesis tests will be performed using validated approaches from the literature (e.g., Young 2019).

References:
Young, A. (2019). Channeling Fisher: Randomization tests and the statistical insignificance of seemingly significant experimental results. The Quarterly Journal of Economics, 134(2), 557-598.
Djimeu, E. W., & Houndolo, D. G. (2016). Power calculation for causal inference in social science: sample size and minimum detectable effect determination. Journal of Development Effectiveness, 8(4), 508-527.
Dong, N., & Maynard, R. A. (2013). PowerUp!: A tool for calculating minimum detectable effect sizes and sample size requirements for experimental and quasi-experimental designs. Journal of Research on Educational Effectiveness, 6(1), 24-67. doi: 10.1080/19345747.2012.673143
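The MDES figures above can be approximated with the textbook formula MDES = (z_{1-α/2} + z_{power}) · sqrt((1 − R²) / (p(1 − p) · n)). The sketch below is a simplified Python version of that formula, not the 3ie or PowerUp! calculators used for the registration, so small rounding differences against the registered values are expected; the function name and the `response` attrition parameter are invented for illustration.

```python
from math import sqrt
from statistics import NormalDist

def mdes(n_clusters, p_treat, alpha=0.05, power=0.80, r2=0.0, response=1.0):
    """Minimum detectable effect size (in SD units) for a single-level
    trial: MDES = (z_{1-a/2} + z_{power}) * sqrt((1 - R^2) / (p(1-p) n)).
    `response` scales the effective sample to mimic attrition scenarios."""
    z = NormalDist().inv_cdf
    multiplier = z(1 - alpha / 2) + z(power)  # 1.96 + 0.84 ~ 2.80
    n_eff = n_clusters * response
    return multiplier * sqrt((1 - r2) / (p_treat * (1 - p_treat) * n_eff))
```

With the registered design (961 family clusters, 60% assigned to treatment), `mdes(961, 0.6)` gives roughly .18 SD, `mdes(961, 0.6, r2=0.4)` roughly .14 SD, and `mdes(961, 0.6, response=0.6)` roughly .24 SD, in line with the ~.19, ~.15 and ~.24 SD figures reported above.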
IRB

Institutional Review Boards (IRBs)

IRB Name
IRB Approval Date
IRB Approval Number

Post-Trial

Post Trial Information

Study Withdrawal


Intervention

Is the intervention completed?
No
Data Collection Complete
Data Publication

Data Publication

Is public data available?
No

Program Files

Program Files
Reports, Papers & Other Materials

Relevant Paper(s)

Reports & Other Materials