Primary Outcomes (end points)
Most outcomes come from deidentified course gradebooks, course learning management system records (e.g., student time spent reading the online American Government textbook), and GSU administrative records, all provided directly to the research team for analysis. We also aimed to understand the effect of chatbot communication on students’ class experiences and perceptions of the instructor. To do so, in the Government course we added questions in the following domains to an existing end-of-course survey administered to students in both experimental conditions: organizational support, self-efficacy, and belonging (adapted from the PERTS Ascend and Elevate surveys; see Boucher et al., 2021; Paunesku & Farrington, 2020); instructor expectations (adapted from Smith, 2020); perception of achievable challenge (adapted from Mendes et al., 2007); and novel adaptive expectation scenario items developed for the current study. Appendix C reports the specific attitudinal questions we asked students.
Finally, we included a set of survey items asking treatment participants specifically about their experience with the course chatbot, including the extent to which they found the communication helpful, whether they read the text messages, whether they knew about and/or used the #quizme function (where applicable), and whether they would recommend future use of the chatbot in this and other GSU courses. As we detail below, two limitations of our survey analysis are low response rates and differential survey participation by student characteristics. Other measures of engagement come from the Mainstay message logs. We code incoming student text messages to identify whether and how frequently students messaged the platform, as well as characteristics of their messages (e.g., opt-outs vs. questions).
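As a minimal sketch of what such coding of the message logs could look like, the example below tallies incoming messages per student and flags opt-outs and questions using simple keyword rules. This is illustrative only and is not the study's actual coding scheme; the column names ("student_id", "direction", "body") and the keyword lists are assumptions about the log export, not details taken from the Mainstay platform.

```python
# Illustrative sketch (not the study's coding scheme): count incoming Mainstay
# messages per student and flag opt-outs vs. questions with keyword rules.
# Column names and keywords are assumptions for demonstration purposes.
import csv
from collections import Counter, defaultdict

OPT_OUT_TERMS = {"stop", "unsubscribe", "quit"}  # assumed opt-out keywords


def code_message(body: str) -> str:
    """Assign a single category to one incoming message."""
    text = body.strip().lower()
    if any(term in text for term in OPT_OUT_TERMS):
        return "opt_out"
    if "?" in text or text.startswith(("who", "what", "when", "where", "why", "how")):
        return "question"
    return "other"


def summarize(log_path: str) -> dict:
    """Return per-student counts of message categories from a CSV log export."""
    per_student = defaultdict(Counter)
    with open(log_path, newline="", encoding="utf-8") as f:
        for row in csv.DictReader(f):
            if row["direction"] != "incoming":  # keep only student-sent messages
                continue
            per_student[row["student_id"]][code_message(row["body"])] += 1
    return per_student


# Example use: summarize("mainstay_messages.csv") yields counts such as
# {"12345": Counter({"question": 3, "other": 1})}.
```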