Effectiveness of Teacher Adaptations to an Online Science Content Literacy Intervention to Improve Third-Graders' Reading Comprehension: A Randomized Controlled Trial During COVID-19 Pandemic

Last registered on November 09, 2020

Pre-Trial

Trial Information

General Information

Title
Effectiveness of Teacher Adaptations to an Online Science Content Literacy Intervention to Improve Third-Graders' Reading Comprehension: A Randomized Controlled Trial During COVID-19 Pandemic
RCT ID
AEARCTR-0006718
Initial registration date
November 07, 2020

Initial registration date is when the trial was registered.

It corresponds to when the registration was submitted to the Registry to be reviewed for publication.

First published
November 09, 2020, 10:43 AM EST

First published corresponds to when the trial was first made public on the Registry after being reviewed.

Locations

Region

Primary Investigator

Affiliation
Harvard University, Graduate School of Education

Other Primary Investigator(s)

PI Affiliation
Harvard Graduate School of Education

Additional Trial Information

Status
In development
Start date
2020-10-29
End date
2022-08-01
Secondary IDs
Abstract
The purpose of this study is to test the effectiveness of allowing teachers to use local context to make structured adaptations to an evidence-based online science content literacy curriculum, the Model of Reading Engagement (MORE). The MORE lessons are an in-person curriculum organized thematically to provide a framework for helping students connect new learning to a meaningful, domain-specific schema. We adapt the in-person MORE curriculum to an online version delivered both synchronously and asynchronously. To support students' at-home learning, we also provide print books and reading comprehension activities. Within our 26 schools, we then randomize teachers into two conditions: a core condition, in which we specify a set of implementation procedures based on the in-person MORE plus the common-sense adaptations needed for an online setting, and a structured-adaptations condition. In the adaptations condition, teachers can collaborate within their school or across schools to make practical improvements using their knowledge, experience, and local context. The randomized trial is designed to test the effectiveness of the adaptive implementation on third-graders' reading comprehension using researcher-developed transfer measures and standardized measures of broad reading comprehension.
External Link(s)

Registration Citation

Citation
Kim, James. 2020. "Effectiveness of Teacher Adaptations to an Online Science Content Literacy Intervention to Improve Third-Graders' Reading Comprehension: A Randomized Controlled Trial During COVID-19 Pandemic." AEA RCT Registry. November 09. https://doi.org/10.1257/rct.6718-1.0
Experimental Details

Interventions

Intervention(s)
The randomized controlled trial aims to test the efficacy of allowing teachers to make structured adaptations to an evidence-based online science content literacy intervention, the Model of Reading Engagement (MORE). To prepare teachers to make these adaptations and to increase their knowledge of the science of reading, the Adaptive MORE teachers will participate in a series of four asynchronous modules in November. The modules are based on Team-Based Learning (TBL), which aims to tightly couple knowledge acquisition with knowledge application (Michaelsen & Sweet, 2008). The November modules focus on helping teachers acquire knowledge about the core components of MORE and about science of reading research on effective instruction for improving vocabulary, morphology, and science content comprehension.

The knowledge application phase occurs in December, as teachers apply their learning to make structured and productive adaptations to MORE for their contexts. Building on the asynchronous work in November, the December sessions will work with teams of teachers synchronously to use data to address potential barriers to student and family engagement. These teachers will have the flexibility to make both procedural and content-based changes to implementation while keeping the core components of the intervention unchanged. Then, in early January, all teachers will receive training on the MOREOnline curriculum, with a focus on MORE's theory of change, the details of the lessons, and best practices drawn from previous years' in-person results.

The Online MORE curriculum provides teachers with five weeks of science lessons, including 10 synchronous lessons (two per week) as well as three 30-minute asynchronous lessons per week that students are expected to complete on their own. Science lessons will focus on human body systems (the muscular, skeletal, and nervous systems). The online version adapts the in-person science MORE curriculum, which (a) gives students access to complex and conceptually related science content through engaging texts, (b) extends content learning through integrated, standards-aligned reading and writing activities, (c) attends to student motivation and engagement through a variety of strategies (e.g., choice, collaboration), and (d) increases exposure to content vocabulary. Students' families will receive access to the application, two print books, and approximately 10-15 print reading comprehension activities in mid-January, to be completed before the teacher-led portion of the intervention begins. Teachers will implement the lessons during February and March of 2021, and students will be sent a second batch of books and print activities, tied to the school lessons, during this time.
Intervention (Hidden)
Intervention Start Date
2020-11-09
Intervention End Date
2021-05-03

Primary Outcomes

Primary Outcomes (end points)
There are two primary outcomes. First, we will administer researcher-developed measures of students' knowledge of science concepts taught in the MORE lessons. Second, we will use administrative data on a vertically scaled reading test, the MAP.
Primary Outcomes (explanation)
Researcher Developed Measures
We will administer three researcher-developed measures to assess students’ vocabulary knowledge depth in isolation and in context (i.e., silent reading and listening).

Vocabulary Knowledge Depth in Science
We developed a 12-item measure to assess students' science vocabulary knowledge depth. The 12-item semantic association task assesses students' definitional knowledge of taught science words and their ability to identify relations between the target word and other known words (Collins & Loftus, 1975; Stahl & Fairbanks, 1986). We replicated a semantic association task (Kim et al., 2020) for our study to assess third-graders' ability to identify semantically related words and their knowledge of how words are networked to each other. The task includes seven domain-specific words taught in the Grade 3 MORE science lessons (i.e., taught words): skeletal, muscular, nervous, diagnosis, structure, system, function. The task also includes five associated words that are not directly taught in the MORE lessons (i.e., untaught words): signal, repair, organ, fracture, sensory. For example, the prompt asks students to "circle all of the words that go with the word signal," and the options include "metal, messenger, transmit, similar." Each item is scored 0 to 4, where students also receive credit for not circling unrelated words. Cronbach alpha reliabilities were .85 for taught words and .77 for untaught words in our earlier efficacy study involving Grade 1 students (Kim et al., 2020).
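To make the 0-to-4 scoring rule concrete, a minimal scoring sketch in Python follows. It assumes each item presents four answer options and awards one point per option handled correctly (circled when related, left uncircled when unrelated); the example answer key and student response are hypothetical, not the operational scoring key.

# Illustrative 0-4 scoring for one semantic association item.
# Assumption: one point per option -- circled and related, or not circled and unrelated.

def score_item(circled: set, related: set, options: list) -> int:
    """Return the 0-4 score for a single four-option item."""
    score = 0
    for word in options:
        if (word in circled) == (word in related):
            score += 1
    return score

# Hypothetical example using the prompt word "signal" from the description above;
# which options count as related is assumed here for illustration only.
options = ["metal", "messenger", "transmit", "similar"]
related = {"messenger", "transmit"}          # assumed answer key
student_circled = {"messenger", "similar"}   # assumed student response
print(score_item(student_circled, related, options))  # -> 2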

Content Comprehension, Grade 3 Science (Silent Reading Comprehension)
Students will take a test with 29 multiple-choice items and one open-response item that assesses their ability to read near-, mid-, and far-transfer passages. The near and mid passages include taught words (i.e., word associations) in context, whereas the far passage does not include taught words. The passages will focus on the skeletal, muscular, and nervous systems of living things (primates, mammals) and non-living things (skyscrapers). In pilots of the science comprehension test, the Cronbach's alpha reliability was 0.79.

Content Comprehension, Grade 3 Science (Listening Comprehension)
Students will listen to a nonfiction passage about "studying the mystery of the great pandemic" and then answer 10 multiple-choice questions. The passage includes target words that were part of the Grade 1, 2, and 3 MORE science curriculum. The 500-word passage has a Lexile measure of 700. The assessment will be administered to the whole class, with the passage and question items read aloud to students. The reliability (Cronbach's alpha) for this measure was .35 (Kim et al., 2020).

Standardized Measures from Administrative Data
We will use three standardized measures from administrative data to assess improvement on broad, domain general reading comprehension outcomes.

Northwest Evaluation Association's Measure of Academic Progress (MAP). MAP is a computer-adaptive, early literacy assessment that uses an interval scale, the Rasch unit (RIT) scale score, to capture student growth in reading. The MAP yields a total reading score and subtest scores for each of the five strands that comprise the assessment. The literature strands assess whether children can read and comprehend literature, make inferences and predictions, draw conclusions, analyze the structure of literary texts, and evaluate the author's craft and purpose; two corresponding strands cover informational texts. The word meaning and vocabulary strand assesses students' ability to decode words and to recognize and understand word relationships and structures. Performance across the five strands yields an overall RIT score, which will be used in this analysis as a pretest covariate and posttest outcome measure.

North Carolina Statewide Assessments
In addition, the third graders will be eligible to take the North Carolina beginning-of-grade (BOG) and end-of-grade (EOG) exams. The North Carolina Beginning-of-Grade 3 (BOG3) English Language Arts (ELA)/Reading Test is linked to the Read to Achieve Program and is aligned to the NC Standard Course of Study (NCSCS) (North Carolina Department of Public Instruction, 2017). Students read authentic literary and informational selections and then answer related questions. The North Carolina End-of-Grade Tests are designed to measure student performance on the goals, objectives, and grade-level competencies specified in the North Carolina Standard Course of Study. The BOG will be used as a pretest covariate and the EOG as a posttest outcome measure.

Secondary Outcomes

Secondary Outcomes (end points)
There are four secondary measures of student/family engagement and teacher adaptation: student engagement with educational apps, student engagement with paper books, parent and family use of apps and books, and teacher knowledge of MORE core components and of science of reading research on vocabulary and morphology instruction.
Secondary Outcomes (explanation)
Student Engagement with Educational Apps: MOREOnline Data
Students will complete their curriculum work online, and their progress, time, accuracy, and measures of affective engagement will be recorded. The backend data provide basic measures of whether and when each student logged in to the application, the student's current state (how far they have progressed), and responses to check-in quizzes within the curriculum. These data capture both behavioural and cognitive engagement with the curriculum. In addition, we record three indicators of engagement. Cognitive engagement will include measures of accuracy and speed, i.e., the number of items students answer correctly and their response time. Motivational engagement will include students' perceptions of the task, i.e., how much they liked doing the activities, and their perceptions of self-competence. Behavioural engagement will include total progress through the curriculum, the number of books completed, and time on the app.
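For concreteness, a minimal Python sketch of how per-student behavioural and cognitive indicators might be aggregated from the app's backend log follows; the record fields used here (student_id, minutes, items_correct, items_attempted, books_completed) are hypothetical stand-ins, not the actual MOREOnline schema.

# Illustrative aggregation of engagement indicators from hypothetical app log
# records (field names are assumptions, not the actual MOREOnline backend schema).
from collections import defaultdict

def aggregate_engagement(log_records):
    """Summarise per-student time on app, accuracy, and books completed."""
    summary = defaultdict(lambda: {"minutes": 0, "correct": 0, "attempted": 0, "books": 0})
    for rec in log_records:
        s = summary[rec["student_id"]]
        s["minutes"] += rec["minutes"]
        s["correct"] += rec["items_correct"]
        s["attempted"] += rec["items_attempted"]
        s["books"] += rec["books_completed"]
    for s in summary.values():
        s["accuracy"] = s["correct"] / s["attempted"] if s["attempted"] else None
    return dict(summary)

logs = [
    {"student_id": "S1", "minutes": 32, "items_correct": 8, "items_attempted": 10, "books_completed": 1},
    {"student_id": "S1", "minutes": 28, "items_correct": 6, "items_attempted": 9, "books_completed": 0},
]
print(aggregate_engagement(logs))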

Student Engagement with Paper Books: Reading Comprehension Trifold Data
We developed MORE trifolds to guide students through the comprehension routine taught during the MORE lessons and to support independent work. The trifold serves as a proxy for the amount of print-based home behavioural and cognitive engagement with the MORE books. Prior work on a successful summer intervention, READS for Summer Learning (READS), found that trifold completion was a predictor of reading comprehension (Kim et al., 2016). We adapted the READS trifold for our study. Each trifold is tied to one of the five print books that we will send to the student's home. The trifold consists of five parts and engages students with the print book while also assessing their reading comprehension. Students will be mailed approximately 20 trifolds to complete: 15 for one book and 1 or 2 for each of the four other books. To assess students' behavioural engagement, we count the number of trifolds returned. To assess students' cognitive engagement, we will assess the overall accuracy of the trifolds returned.

Parent and Family Survey: Use of Educational Apps and Paper Books
We will also survey the parents of the students involved in the intervention to better understand engagement with the print materials as well as changes in parent activities and behaviours. Specifically, we will ask parents about students' engagement with the books and perceptions of difficulty and enjoyment. In addition, we will ask about parents' beliefs and activities in the home, as well as their experience communicating with our Lab, the district, the school, and their child's teacher. The goal is to understand how families "use" both digital educational apps and paper books at home.

Teacher Survey: Knowledge of Science of Reading Research on Vocabulary and Morphology Instruction and MORE Theory of Change. Teachers will be given a pre- and post-intervention survey to assess the curriculum and family engagement adaptations as well as changes to their teaching practices. The survey will specifically examine instructional practices and content. The survey will also explore how teachers engaged with parents and their students during the course of the intervention. The posttest survey will include measures that assess teachers' knowledge of the intervention theory of change for MORE and of science of reading-based practices. In particular, the posttest measure of teacher knowledge will include items that assess (a) teachers' knowledge of effective practices for teaching vocabulary and morphology (Davis et al., under review), and (b) teachers' perceptions of their learning about the core components of MORE (Quinn & Kim, 2017).

Teacher Practice: Fidelity of Implementation of Synchronous Instruction. Teachers' synchronous lessons will be recorded via Zoom. A random sample of 40 to 60 teachers' synchronous lessons (both Core and Adaptive MORE) will be assessed on the following dimensions of fidelity of implementation: adherence to lesson scripts for the core components of MORE, and program differentiation with respect to the openness of literacy tasks and the proportion of whole-group and teacher-led instructional time (Dane & Schneider, 1998).

Experimental Design

Experimental Design
Researchers at the READS Lab at the Harvard Graduate School of Education will conduct the randomized controlled trial. Grade 3 teachers will be blocked by school and randomly assigned to Adaptive MORE or Core MORE. The number of teachers within each cell varies, but there is a minimum of two teachers within each block. The treatment is clustered at the classroom level. Grade 3 students whose parents provide consent will be study participants; all students will receive the MORE lessons, but some will be taught by Adaptive MORE teachers while others will be taught by Core MORE teachers.

Baseline comparisons of mean math and reading scores on the Measure of Academic Progress (MAP) and of demographic characteristics revealed no statistically significant differences between conditions. For all tests, we ran multilevel models nesting students in classrooms and classrooms in schools, blocking on school, and p-values were greater than .05. The standardized mean difference on MAP between Adaptive MORE and Core MORE was 0.06 (p > .05). A slightly larger proportion of students was identified as gifted in one condition, but all other demographic covariates showed no significant differences (p > .05).
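As an illustration of the balance checks described above, here is a hedged Python/statsmodels sketch using a simplified two-level specification (students nested in classrooms, with school blocks as fixed effects) rather than the full three-level model; the data file and column names (map_rit, treat, school, classroom) are hypothetical.

# Illustrative baseline-equivalence check: regress a pretest covariate on the
# treatment indicator with school (block) fixed effects and a classroom random
# intercept. File and column names are assumptions, not the study's actual code.
import pandas as pd
import statsmodels.formula.api as smf

df = pd.read_csv("baseline_students.csv")    # hypothetical student-level file

model = smf.mixedlm(
    "map_rit ~ treat + C(school)",           # treatment contrast, blocked on school
    data=df,
    groups=df["classroom"],                  # random intercept for classroom clusters
)
result = model.fit()
print(result.summary())                      # inspect the coefficient and p-value on 'treat'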
Experimental Design Details
Randomization Method
Randomization was conducted in an office by a computer and implemented using Stata code.
Randomization Unit
The unit of randomization is the classroom, blocked by school, in the 2020-21 school year.
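The registered assignment was implemented in Stata; the following is a minimal Python sketch of the same blocked design (classrooms randomized to conditions within school blocks), with a hypothetical input file and seed.

# Illustrative blocked randomization: within each school, randomly assign
# half of the classrooms to Adaptive MORE and the rest to Core MORE.
# (The trial used Stata; the file name and seed here are hypothetical.)
import numpy as np
import pandas as pd

rng = np.random.default_rng(20201109)        # hypothetical seed
classrooms = pd.read_csv("classrooms.csv")   # columns: school, classroom

def assign_block(block: pd.DataFrame) -> pd.DataFrame:
    shuffled = block.sample(frac=1, random_state=int(rng.integers(1_000_000)))
    n_treat = len(shuffled) // 2             # odd-sized blocks place the extra classroom in control
    shuffled["condition"] = ["Adaptive MORE"] * n_treat + ["Core MORE"] * (len(shuffled) - n_treat)
    return shuffled

assignments = classrooms.groupby("school", group_keys=False).apply(assign_block)
print(assignments.groupby("condition").size())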
Was the treatment clustered?
Yes

Experiment Characteristics

Sample size: planned number of clusters
97 classroom clusters
Sample size: planned number of observations
Of the 1,748 students in participating classrooms, we expect approximately 80% to receive parental consent, yielding an analytic sample of approximately 1,398 Grade 3 students.
Sample size (or number of clusters) by treatment arms
48 classrooms with 855 students in the Adaptive MORE condition

49 classrooms with 893 students in the Core MORE condition
Minimum detectable effect size for main outcomes (accounting for sample design and clustering)
We estimate our power using prior data on interventions within our site. To estimate the minimum detectable effect size at 80% power, we simulate randomizations within our data. For each simulated randomization, we run a regression on the simulated treatment assignments and construct the sampling distribution of the treatment estimates. We then use the sampling distribution to calculate the minimum detectable effect for each iteration. From the 1,000 iterations, we construct a distribution of MDES and identify the 80th percentile of minimum detectable effects. Based on these results, the minimum detectable effect is 0.08 standard deviations for MAP, 0.15 standard deviations for our vocabulary depth measure, and 0.14 standard deviations for domain-specific reading comprehension.
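A compact Python sketch of this simulation procedure follows (not the Lab's actual code): re-randomize classrooms within school blocks many times, estimate a placebo treatment coefficient with clustered standard errors each time, convert each standard error into an MDES, and take the 80th percentile. The 2.8 multiplier is the standard factor for 80% power with a two-sided 5% test; the data file and variable names are hypothetical.

# Illustrative simulation-based MDES: repeatedly re-randomize classrooms within
# school blocks, regress the (standardized) prior outcome on the placebo treatment
# with school fixed effects and classroom-clustered errors, and convert each
# run's standard error into an MDES. File and column names are assumptions.
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

rng = np.random.default_rng(0)
df = pd.read_csv("prior_students.csv")   # columns: school, classroom, outcome_z

def rerandomize(data: pd.DataFrame) -> pd.Series:
    """Assign a placebo treatment to half of the classrooms within each school."""
    assignment = {}
    for _, block in data.groupby("school"):
        rooms = block["classroom"].unique()
        treated = rng.choice(rooms, size=len(rooms) // 2, replace=False)
        assignment.update({room: int(room in treated) for room in rooms})
    return data["classroom"].map(assignment)

mdes_draws = []
for _ in range(1000):
    df["treat"] = rerandomize(df)
    fit = smf.ols("outcome_z ~ treat + C(school)", data=df).fit(
        cov_type="cluster", cov_kwds={"groups": df["classroom"]}
    )
    mdes_draws.append(2.8 * fit.bse["treat"])   # MDES multiplier for 80% power, alpha = .05

print(np.percentile(mdes_draws, 80))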
Supporting Documents and Materials

There is information in this trial unavailable to the public.
IRB

Institutional Review Boards (IRBs)

IRB Name
Harvard University-Area Committee on the Use of Human Subjects
IRB Approval Date
2018-08-03
IRB Approval Number
IRB18-1094
Analysis Plan

There is information in this trial unavailable to the public.

Post-Trial

Post Trial Information

Study Withdrawal

There is information in this trial unavailable to the public.

Intervention

Is the intervention completed?
No
Data Collection Complete
Data Publication

Data Publication

Is public data available?
No

Program Files

Program Files
Reports, Papers & Other Materials

Relevant Paper(s)

Reports & Other Materials