
Fighting the learning loss: Evaluating C-SEF for university students and staff.

Last registered on May 23, 2022

Pre-Trial

Trial Information

General Information

Title
Fighting the learning loss: Evaluating C-SEF for university students and staff.
RCT ID
AEARCTR-0009466
Initial registration date
May 19, 2022

Initial registration date is when the trial was registered; it corresponds to when the registration was submitted to the Registry to be reviewed for publication.

First published
May 23, 2022, 5:16 PM EDT

First published corresponds to when the trial was first made public on the Registry after being reviewed.

Locations

Primary Investigator

Affiliation
UNU-MERIT and Maastricht University

Other Primary Investigator(s)

PI Affiliation
Nuffield College, Department of Economics, Oxford University
PI Affiliation
Nuffield College, Department of Economics, Oxford University and University of Hagen
PI Affiliation
Harvard School of Engineering and Applied Sciences

Additional Trial Information

Status
In development
Start date
2022-06-01
End date
2022-06-30
Secondary IDs
Prior work
This trial does not extend or rely on any prior RCTs.
Abstract
We introduce the COVID-19 Safe Education Framework (C-SEF), a novel mechanism for monitoring and safeguarding the population of an educational institution. The mechanism aims to mitigate the effect of COVID-19 restrictions on economic, psychological, and educational well-being, a phenomenon referred to as ‘learning loss’ in other similar instances. We propose a utility-weighted algorithmic approach to pooled testing of individuals in a scarce-resource setting. The mechanism uses pooled testing to identify healthy individuals, who are permitted to return on-site, while guaranteeing that infected individuals continue to work off-site. The utility-based optimisation underlying our solution takes into account factors such as psychological well-being, need for academic resources, and socio-economic status in order to allocate testing optimally. To evaluate the efficacy and robustness of our proposed solution, we conduct a randomised controlled trial at the Potosinian Institute of Scientific and Technological Research (IPICYT). The outcome of our experiment will be measured in terms of performance and productivity, mental health, and infection probabilities. Moreover, the evaluation results will allow us to make policy recommendations for better governance of educational institutions during pandemic times.
External Link(s)

Registration Citation

Citation
Finster, Simon et al. 2022. "Fighting the learning loss: Evaluating C-SEF for university students and staff." AEA RCT Registry. May 23. https://doi.org/10.1257/rct.9466-2.0
Sponsors & Partners

There is information in this trial unavailable to the public.
Experimental Details

Interventions

Intervention(s)
Intervention (Hidden)
To help educational institutions reopen safely and effectively for in-person learning, we have devised C-SEF, a monitoring protocol with an algorithmic base. The protocol's main objective is to make optimal use of scarce testing resources and to provide a systematic approach to using the institution's physical facilities. C-SEF uses a utility-based approach to group and test the relevant population in pools so that testing resources are used efficiently while addressing educational inequality and fairness considerations. The utility-based algorithm behind the C-SEF protocol also addresses equity concerns that may arise from random or human-based decision-making. We evaluate the efficacy of C-SEF on the full population of students, researchers, and staff at IPICYT in a two-group randomised controlled trial: the treatment group will follow the protocol, and the control group will continue with the current (remote-work) institutional policy.
Intervention Start Date
2022-06-01
Intervention End Date
2022-06-30

Primary Outcomes

Primary Outcomes (end points)
Posterior probabilities of infection, perceived stress scale, subjective well-being, and performance and productivity.
Primary Outcomes (explanation)
Posterior probabilities of infection: for each individual, we hypothesise a uniform prior probability of infection and a corresponding probability of being healthy; after each test, we update the probability of infection according to the test result. Perceived stress scale: measured via the validated 4-item Perceived Stress Scale by Sheldon Cohen (1994). Subjective well-being: we use a variation of the European Quality of Life Survey measure of subjective well-being, following a life/subject evaluation approach. Performance and productivity: we use a composite score based on a 5-item self-reported scale ranging from poor to high.
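The posterior update for infection probabilities described above can be sketched with Bayes' rule. This is a minimal illustration only: it assumes infections are independent, that a pool is truly positive iff at least one member is infected, and uses illustrative sensitivity/specificity values that are not taken from the registration.

```python
import math

def update_posteriors(priors, pool_negative, sensitivity=0.95, specificity=0.99):
    """Bayesian update of each pool member's infection probability
    after one pooled test result.

    priors: list of prior infection probabilities, one per pool member.
    pool_negative: True if the pooled test came back negative.
    Sensitivity/specificity defaults are illustrative assumptions."""
    p_all_healthy = math.prod(1 - p for p in priors)
    p_pool_infected = 1 - p_all_healthy
    if pool_negative:
        # Probability of observing a negative result, over both pool states.
        p_result = (1 - sensitivity) * p_pool_infected + specificity * p_all_healthy
        # If member i is infected, the pool is truly infected, so a
        # negative result occurs with probability (1 - sensitivity).
        return [(1 - sensitivity) * p / p_result for p in priors]
    p_result = sensitivity * p_pool_infected + (1 - specificity) * p_all_healthy
    return [sensitivity * p / p_result for p in priors]
```

A negative pool result shrinks every member's posterior below their prior, while a positive result raises it; starting from a uniform prior, repeated rounds concentrate probability on the likely-infected individuals.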

Secondary Outcomes

Secondary Outcomes (end points)
Secondary Outcomes (explanation)

Experimental Design

Experimental Design
We implement C-SEF, a mechanism for ensuring the safety of an educational institution, on the academic population at IPICYT in San Luis Potosí, Mexico. We randomly assign individuals to a treatment and a control group. Subjects in the treatment group follow a pool-testing protocol and may freely use the university's facilities so long as they test negative. Individuals in the control group are expected to work remotely and to use the facilities only in emergencies [the definition of an emergency is at the discretion of the head of the department to which the student/staff member belongs]. The physical isolation of the treatment and control groups is essential for our protocol to work, both to avoid health spillovers and to disentangle psychological dynamics.

The IPICYT campus lends itself, for the most part, to a two-group randomisation approach. The campus consists of two similar buildings for research in the natural sciences, two similar buildings for research in computer science and mathematics, one building for classes, and one building for administration. The individuals participating in our trial are randomly assigned to the treatment and control groups; note, however, that some individuals in the control group will still work on-site under the exceptions above. We therefore use the following strategy: one of the natural science buildings is randomly assigned to treatment, as is one of the CS and mathematics buildings. For this approach to be effective, individuals in both buildings must be comparable. Given IPICYT's reports about their staff, we know that research students and researchers are assigned to each building contingent only on their academic discipline, and not on any individual characteristics important for the evaluation of our framework. Hence, we may consider the assignment pseudo-random.
Nevertheless, we further collect a number of covariates to conduct a balance analysis [should there be a clear imbalance, we will adopt a post-experimental matching approach as proposed by Bruhn and McKenzie (2009)]. Within the staff building, we also need to physically separate the space allocated to the treatment and control groups. Our strategy here is to split the four-storey building by floors: we randomly select either the top two or the bottom two floors for treatment, and the remaining two floors serve as control. As with the other buildings, we treat this assignment as pseudo-random. Students can be assigned to classrooms depending on availability and suitability for the course taught, and students in the control group will participate exclusively in online learning. Therefore, each individual in the student population is assigned randomly to either treatment or control.
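The assignment strategy above (one building per pair, a floor block in the staff building, and individual-level randomisation for students) can be sketched as follows. Building and floor labels are placeholders, not IPICYT's actual names, and the seed is arbitrary.

```python
import random

# Reproducible cluster assignment sketch; labels are hypothetical.
rng = random.Random(2022)

# One building of each research pair is randomly assigned to treatment.
natural_sciences = ["NatSci-A", "NatSci-B"]
cs_and_math = ["CSMath-A", "CSMath-B"]
treated_buildings = [rng.choice(natural_sciences), rng.choice(cs_and_math)]

# In the four-storey staff building, either the top two or the
# bottom two floors are treated; the remaining floors are control.
floors = [1, 2, 3, 4]
treated_floors = rng.choice([floors[:2], floors[2:]])

# Students are randomised individually to treatment or control.
students = [f"student-{i}" for i in range(10)]
treated_students = {s for s in students if rng.random() < 0.5}
```

Fixing the seed makes the assignment auditable after the fact, which matters when the randomisation itself may be scrutinised in a balance analysis.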

Individuals assigned to the treatment group are invited to participate in the C-SEF protocol. All information about the institute-wide study is shared with the IPICYT population via an online communication campaign co-produced by the principal investigators and the Institute's communications department. Consent is obtained from each individual via a form that precedes a survey collecting baseline data. There is no natural disincentive to participate in the study; on the contrary, participation may indirectly give participants more freedom of movement, so we expect study compliance to be close to 100%. The C-SEF protocol uses a utility-weighted selection algorithm that groups individuals for pool testing. Individuals who test negative are allowed to attend their work or study activities on campus; individuals who test positive are asked to isolate and work remotely until their next test. For a more detailed explanation of the testing mechanism, please refer to the Pre-Analysis Plan document.
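As a rough illustration of utility-weighted pool formation: the registered mechanism is specified in the Pre-Analysis Plan, so the greedy scoring rule below is an assumption for exposition, not the actual algorithm. It prioritises individuals whose expected released utility (utility times probability of being healthy) is highest, so that pools likely to test negative free up the most on-site activity.

```python
def greedy_pools(people, pool_size):
    """Greedy sketch of utility-weighted pooling.

    people: list of (name, utility, infection_probability) tuples.
    Returns pools of at most `pool_size`, filled in order of
    expected released utility = utility * (1 - infection_probability)."""
    ranked = sorted(people, key=lambda t: t[1] * (1 - t[2]), reverse=True)
    return [ranked[i:i + pool_size] for i in range(0, len(ranked), pool_size)]

# Hypothetical example: high-utility, low-risk individuals are pooled first.
pools = greedy_pools(
    [("a", 1.0, 0.10), ("b", 0.5, 0.10), ("c", 1.0, 0.50), ("d", 0.2, 0.05)],
    pool_size=2,
)
```

In practice the registered mechanism also weighs fairness and equity considerations, which a single scalar score like this does not capture.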
Experimental Design Details
Randomization Method
Cluster randomization (computerized).
Randomization Unit
Unit of cluster: academic discipline (buildings); unit of analysis: students and staff.
Was the treatment clustered?
Yes

Experiment Characteristics

Sample size: planned number of clusters
1 university population with 5 clusters (disciplines).
Sample size: planned number of observations
500 students and staff
Sample size (or number of clusters) by treatment arms
250 students and staff per treatment arm
Minimum detectable effect size for main outcomes (accounting for sample design and clustering)
To perform the cluster-randomised power calculation, we use an ICC of 0.22 and a cluster standard deviation of 0.5, as per the literature on educational institutions. Moreover, we assess power on the basis of odds ratios for the Likert-type outcomes (mental health and productivity), which can be interpreted as follows: an odds ratio of 1.48 can be considered a small effect size, equivalent to a Cohen's d of 0.2; an odds ratio of 3.45 a moderate effect size, equivalent to a Cohen's d of 0.5; and an odds ratio of 9 a large effect size, equivalent to a Cohen's d of 0.8. Our final results are as follows: for probabilities of infection, with n = 500 and alpha = 0.05, we detect a Cohen's d of 0.47 at power 0.80; for mental health outcomes, with n = 500 and alpha = 0.05, an odds ratio of 2 at power 0.80; and for productivity and performance measures, with n = 500 and alpha = 0.05, an odds ratio of 2 at power 0.80. For a more detailed explanation, please refer to the Pre-Analysis Plan.
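Two quantities referenced above can be computed directly: the variance inflation ("design effect") implied by an ICC, and an odds-ratio-to-Cohen's-d conversion. The conversion below uses the common ln(OR)·√3/π rule of thumb (Chinn, 2000); the benchmark mapping quoted in this registration may follow a different convention, and the cluster size of 100 is an illustrative assumption, not a figure from the Pre-Analysis Plan.

```python
import math

def design_effect(cluster_size, icc):
    """Variance inflation factor for cluster randomisation:
    1 + (m - 1) * ICC, where m is the average cluster size."""
    return 1 + (cluster_size - 1) * icc

def odds_ratio_to_cohens_d(odds_ratio):
    """Chinn's (2000) rule of thumb: d = ln(OR) * sqrt(3) / pi."""
    return math.log(odds_ratio) * math.sqrt(3) / math.pi
```

Under this convention, for example, an odds ratio of 1.48 corresponds to d ≈ 0.22, close to the small-effect benchmark cited above; the effective sample size of a clustered design is the raw n divided by the design effect.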
IRB

Institutional Review Boards (IRBs)

IRB Name
IRB Approval Date
IRB Approval Number
Analysis Plan

Analysis Plan Documents

Evaluating C-SEF for university students and staff

MD5: ffefc6e73a4cc84244b8aec483738120

SHA1: 9b92ad283d90f1009829fe6151cd6bcbbd24eac6

Uploaded At: May 19, 2022

Post-Trial

Post Trial Information

Study Withdrawal

There is information in this trial unavailable to the public.

Intervention

Is the intervention completed?
Yes
Intervention Completion Date
October 15, 2022, 12:00 +00:00
Data Collection Complete
Yes
Data Collection Completion Date
October 15, 2022, 12:00 +00:00
Final Sample Size: Number of Clusters (Unit of Randomization)
57
Was attrition correlated with treatment status?
No
Final Sample Size: Total Number of Observations
122
Final Sample Size (or Number of Clusters) by Treatment Arms
Treated participants = 59; Control participants = 63.
Data Publication

Data Publication

Is public data available?
No

There is information in this trial unavailable to the public.

Program Files

Program Files
Yes
Reports, Papers & Other Materials

Relevant Paper(s)

Abstract
Large-scale testing is crucial in pandemic containment, but resources are often prohibitively constrained. We study the optimal application of pooled testing for populations that are heterogeneous with respect to an individual's infection probability and utility that materializes if included in a negative test. We show that the welfare gain from overlapping testing over non-overlapping testing is bounded. Moreover, non-overlapping allocations, which are both conceptually and logistically simpler to implement, are empirically near-optimal, and we design a heuristic mechanism for finding these near-optimal test allocations. In numerical experiments, we highlight the efficacy and viability of our heuristic in practice. We also implement and provide experimental evidence on the benefits of utility-weighted pooled testing in a real-world setting. Our pilot study at a higher education research institute in Mexico finds no evidence that performance and mental health outcomes of participants in our testing regime are worse than under the first-best counterfactual of full access for individuals without testing.
Citation
Finster, Simon, Michelle González Amador, Edwin Lock, Francisco Marmolejo-Cossío, Evi Micha, and Ariel D. Procaccia. "Welfare-Maximizing Pooled Testing." arXiv preprint arXiv:2206.10660 (2022).

Reports & Other Materials