The Effect of Providing Joint Feedback to Students and Educators to Facilitate Active Learning

Last registered on July 29, 2024

Pre-Trial

Trial Information

General Information

Title
The Effect of Providing Joint Feedback to Students and Educators to Facilitate Active Learning
RCT ID
AEARCTR-0014050
Initial registration date
July 18, 2024

Initial registration date is when the trial was registered.

It corresponds to when the registration was submitted to the Registry to be reviewed for publication.

First published
July 29, 2024, 4:25 PM EDT

First published corresponds to when the trial was first made public on the Registry after being reviewed.

Locations

Region

Primary Investigator

Affiliation
Stanford University

Other Primary Investigator(s)

Additional Trial Information

Status
Completed
Start date
2024-04-29
End date
2024-07-17
Secondary IDs
Prior work
This trial does not extend or rely on any prior RCTs.
Abstract
This project builds on studies we conducted in 2021 and 2023 on Code in Place, an online programming course, where we found that automated feedback to instructors can improve their instruction and student satisfaction. The current study was conducted in the spring of 2024 on the Schoolhouse.world platform; its goal is to understand whether providing feedback to students, in addition to instructors, influences the quality of session discourse and student outcomes. Feedback to teachers is known to be an effective way to improve their instruction, but few empirical studies have examined the effect of joint feedback to both teachers and students. To answer this question, the study leverages a randomized controlled trial design and computational natural language processing techniques.
External Link(s)

Registration Citation

Citation
Demszky, Dora. 2024. "The Effect of Providing Joint Feedback to Students and Educators to Facilitate Active Learning." AEA RCT Registry. July 29. https://doi.org/10.1257/rct.14050-1.0
Experimental Details

Interventions

Intervention(s)
The study was conducted in a free, 4-week online peer SAT math tutoring bootcamp on the Schoolhouse.world platform. Anyone with an SAT subject score of 650 or above could apply to serve as a peer tutor for that subject. Once they completed the Schoolhouse asynchronous tutor training, they were eligible to teach their first bootcamp. Our participant sample consists of all instructors and students in the April/May 2024 and June 2024 bootcamps.

In the May bootcamp, tutors in the treatment arms of the RCT received an email prior to the start of the bootcamp informing them that they would receive feedback and explaining the relevant parts of the feedback modal. For the June bootcamp, tutors were not primed to expect feedback.

Feedback to Instructors
Instructors in the TutorFeedback and TutorStudentFeedback conditions received automated feedback with the following components:
- Introduction to the feedback
- Summary statistics for the session, with a comparison to the previous session:
  - tutor talk percentage
  - proportion of students engaged
- Description of the talk move in focus for that session
- Their talk moves in action (a list of talk moves drawn from their transcript)
- For the first two sessions: a link to the relevant training module
- Actionable suggestions for the next session, generated by GPT-4 Turbo
- Reflection opportunity
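As an illustration of the first two summary statistics, the sketch below computes the tutor talk percentage and the proportion of students engaged from a session transcript. The Utterance structure and its fields are assumptions made for this sketch, not the study's actual data model:

```python
from dataclasses import dataclass

@dataclass
class Utterance:
    speaker: str        # speaker identifier (hypothetical field)
    role: str           # "tutor" or "student" (hypothetical field)
    text: str
    duration_sec: float

def tutor_talk_percentage(transcript: list[Utterance]) -> float:
    """Share of total talk time attributable to the tutor, in percent."""
    total = sum(u.duration_sec for u in transcript)
    tutor = sum(u.duration_sec for u in transcript if u.role == "tutor")
    return 100.0 * tutor / total if total else 0.0

def proportion_students_engaged(transcript: list[Utterance],
                                roster: set[str]) -> float:
    """Fraction of enrolled students who spoke at least once."""
    speakers = {u.speaker for u in transcript if u.role == "student"}
    return len(speakers & roster) / len(roster) if roster else 0.0
```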

The talk moves in focus changed over the course of the bootcamp, following a pre-defined curriculum of talk moves for each of the 8 sessions:
- Session 1: Eliciting ideas from students (due to an error, 344/363 tutors in the treatment conditions did not receive feedback for session 1 of the June bootcamp)
- Session 2: Eliciting ideas from students
- Session 3: Revoicing student ideas
- Session 4: Revoicing student ideas
- Session 5: No feedback
- Session 6: Prompting for reasoning
- Session 7: Prompting for reasoning
- Session 8: No feedback; tutors instead received the end-of-bootcamp survey on the AI feedback
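For concreteness, this curriculum could be encoded as a simple lookup from session number to the talk move in focus; this encoding is our illustration, not the study's actual configuration:

```python
# None marks sessions with no automated feedback.
TALK_MOVE_CURRICULUM = {
    1: "Eliciting ideas from students",
    2: "Eliciting ideas from students",
    3: "Revoicing student ideas",
    4: "Revoicing student ideas",
    5: None,
    6: "Prompting for reasoning",
    7: "Prompting for reasoning",
    8: None,  # end-of-bootcamp survey instead of feedback
}
```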


The talk moves are defined as follows:
- Inviting learner ideas: This talk move was identified by first filtering session transcripts with a fine-tuned question detection model, which isolates utterances resembling questions. The questions were then passed to an Electra-base model fine-tuned to identify the "pressing for reasoning" and "pressing for accuracy" labels from the TalkMoves Dataset by Suresh et al. (2022).
- Building on learner ideas: To identify this talk move, we used the uptake model developed by Demszky et al. (2021). This model analyzes utterances from the session transcripts to pinpoint when tutors effectively engage with and extend student contributions.
- Pressing for reasoning: For sessions with this focus, we used the same eliciting model as in the "inviting learner ideas" sessions.
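A minimal sketch of the two-stage detection described above, using Hugging Face text-classification pipelines. The checkpoint names and label strings are placeholders: the registration names the model families (a fine-tuned question detector and a fine-tuned Electra-base classifier) but not published identifiers:

```python
from transformers import pipeline

# Placeholder checkpoint names (hypothetical, not the study's models).
question_detector = pipeline("text-classification",
                             model="path/to/question-detector")
talk_move_classifier = pipeline("text-classification",
                                model="path/to/electra-base-talkmoves")

def detect_talk_moves(tutor_utterances: list[str]) -> list[tuple[str, str]]:
    """Stage 1: keep question-like utterances; stage 2: label them."""
    # The label strings below are assumptions about the models' label sets.
    questions = [u for u in tutor_utterances
                 if question_detector(u)[0]["label"] == "QUESTION"]
    labeled = []
    for q in questions:
        pred = talk_move_classifier(q)[0]
        if pred["label"] in {"pressing for reasoning", "pressing for accuracy"}:
            labeled.append((q, pred["label"]))
    return labeled
```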

After a tutor taught their section on Zoom, their transcript was analyzed through our automated analysis pipeline; the analysis was usually completed within a few hours. Once the feedback for the most recent session became available, tutors received an email notifying them and encouraging them to log into the Schoolhouse.world platform to view it. The feedback appeared as a pop-up modal the next time they logged into Schoolhouse.world. All previous feedback could be accessed again from the tutor's personal profile page.

Edge cases: if a tutor missed a session, they did not receive any feedback for it; they received feedback after the next session they taught. Tutors did not substitute-teach in other cohorts.

All instructors were required to complete training on Schoolhouse's MARS rubric (Mastery, Active Learning, Respectful Community, and Safety), general SAT knowledge, and new information about the digital SAT. They also participated in a 1-hour live onboarding session.

Feedback to Students

Students in the TutorStudentFeedback groups received automated feedback on their engagement in the tutoring session. The feedback included the following components:
- Student talk time ratio in the section
- A motivational message encouraging the student to participate in the session

Students were randomized to receive one of two types of motivational messages. Students in the TutorStudentFeedbackSelf group received a message that encouraged them to participate in order to optimize their own learning.

Students in the TutorStudentFeedbackSocial group received a message that encouraged them to participate in order to help everyone else learn.

Students received the feedback from their previous session as a pop-up modal right before joining their next session.
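A minimal sketch of how the student-facing feedback could be assembled, assuming per-speaker talk times are available. The function and parameter names are illustrative, and the actual message texts are not reproduced here, so they are passed in as arguments:

```python
def student_talk_ratio(talk_times: list[tuple[str, float]],
                       student_id: str) -> float:
    """talk_times: (speaker_id, seconds) pairs for one session."""
    total = sum(sec for _, sec in talk_times)
    own = sum(sec for sid, sec in talk_times if sid == student_id)
    return own / total if total else 0.0

def build_student_feedback(ratio: float, tutor_condition: str,
                           self_msg: str, social_msg: str) -> str:
    """The message variant follows the tutor's treatment arm."""
    message = (self_msg if tutor_condition == "TutorStudentFeedbackSelf"
               else social_msg)
    return f"You spoke for {ratio:.0%} of your section. {message}"
```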

If a student did not attend their section, they did not receive feedback. If a student attended a session of a different tutor, they received the treatment condition assigned to that new tutor (i.e. if they were in control but then dropped-in on another session in the treatment group, they received feedback for that session) -- this did not happen often, if at all.


At the end of the study

After the last session, the automated feedback to tutors included a few survey questions to probe their perceptions of the automated feedback.

We also interviewed a sample of tutors and students to gauge their perceptions of the feedback. A random sample of tutors and students from each treatment arm was emailed after the Bootcamp with an invitation to sign up for an interview in exchange for a $15 Amazon gift card. In total, 17 tutors were interviewed (5, 6, and 6 from the three arms), and 9 learners were interviewed (3, 2, and 4 from the three arms). The interviews were conducted virtually over Zoom by a member of the Schoolhouse team. In the first phase of the interview, the interviewee was asked about their overall experience in the SAT Bootcamp; in the second phase, the interviewee was shown their automated feedback and asked a set of questions about how they felt about it. The interviews with tutors differed from those with students; each was tailored to the specifics of the feedback received and the interviewee's role in the Bootcamp.
Intervention Start Date
2024-05-01
Intervention End Date
2024-07-01

Primary Outcomes

Primary Outcomes (end points)
RQ1: Instructor practice
- tutor talk ratio
- proportion of students participating
- hourly rate of each of the 3 key talk moves (inviting, building, reasoning)
- student talk percentage
- potentially other discourse features (distal; we do not expect impact)
- number of questions asked by the tutor
- number of problems discussed

RQ2: Student engagement in section
- student attendance
- student participation (via chat or spoken)
- number of questions asked by the student
- potentially other discourse features (distal; we do not expect impact)
- student reasoning

RQ2: Student practice test scores

RQ2: Student experience
- section rating
- NPS

RQ3: Same outcomes as RQ1 and RQ2

RQ4: Instructor and student perception of feedback
- final survey results for instructors
- qualitative interview responses
Primary Outcomes (explanation)

Secondary Outcomes

Secondary Outcomes (end points)
Secondary Outcomes (explanation)

Experimental Design

Experimental Design
Before the first tutoring session, tutors were randomized into one of four conditions:
- Control (30%): business as usual, with no automated feedback
- TutorFeedback (30%): tutors received automated feedback on their instruction
- TutorStudentFeedbackSelf (15%): tutors received automated feedback on their instruction, and their students also received automated feedback with self-oriented messaging about the importance of engaging in section
- TutorStudentFeedbackSocial (15%): tutors received automated feedback on their instruction, and their students also received automated feedback with pro-social messaging about the importance of engaging in section
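A minimal sketch of this assignment, assuming an independent weighted draw per tutor (consistent with the "coin flip" randomization method below); the tutor IDs and seed are illustrative:

```python
import random

ARMS = ["Control", "TutorFeedback",
        "TutorStudentFeedbackSelf", "TutorStudentFeedbackSocial"]
WEIGHTS = [0.30, 0.30, 0.15, 0.15]

def assign_condition(rng: random.Random) -> str:
    """Independent weighted draw per tutor (30/30/15/15 split)."""
    return rng.choices(ARMS, weights=WEIGHTS, k=1)[0]

# Example usage with hypothetical tutor IDs.
rng = random.Random(0)  # arbitrary seed, so the sketch is reproducible
assignments = {tid: assign_condition(rng) for tid in ["t001", "t002", "t003"]}
```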
Experimental Design Details
Randomization Method
Coin flip
Randomization Unit
Instructor (tutor)
Was the treatment clustered?
No

Experiment Characteristics

Sample size: planned number of clusters
N/A
Sample size: planned number of observations
N/A
Sample size (or number of clusters) by treatment arms
May Bootcamp: 697 tutors
June Bootcamp: 517 tutors

Control = 30%
TutorFeedback = 30%
TutorStudentFeedbackSelf = 15%
TutorStudentFeedbackSocial = 15%
Minimum detectable effect size for main outcomes (accounting for sample design and clustering)
IRB

Institutional Review Boards (IRBs)

IRB Name
Stanford IRB
IRB Approval Date
2023-12-18
IRB Approval Number
68376
Analysis Plan

Analysis Plan Documents

Schoolhouse May-June 2024 Study Pre-registration

MD5: bcf204bfacf12aed20d0c5c143aa0994

SHA1: 9bc2db50f45146837be52ea440955675fad4f960

Uploaded At: July 18, 2024

Post-Trial

Post Trial Information

Study Withdrawal

There is information in this trial that is unavailable to the public.

Intervention

Is the intervention completed?
No
Data Collection Complete
Data Publication

Data Publication

Is public data available?
No

Program Files

Program Files
Reports, Papers & Other Materials

Relevant Paper(s)

Reports & Other Materials