
Feasibility and effectiveness of an artificial intelligence enhanced application for student wellbeing: pilot trial of the Mind Tutor.

Last registered on February 03, 2022

Pre-Trial

Trial Information

General Information

Title
Feasibility and effectiveness of an artificial intelligence enhanced application for student wellbeing: pilot trial of the Mind Tutor.
RCT ID
AEARCTR-0008933
Initial registration date
February 03, 2022


First published
February 03, 2022, 5:56 PM EST


Locations

Primary Investigator

Affiliation
Oxford Brookes University

Other Primary Investigator(s)

PI Affiliation
Oxford Brookes University
PI Affiliation
Oxford Brookes University

Additional Trial Information

Status
In development
Start date
2022-02-14
End date
2022-04-15
Secondary IDs
Prior work
This trial does not extend or rely on any prior RCTs.
Abstract
The current study aims to test a digital tool, the Mind Tutor, which was developed in conjunction with students and university wellbeing services to provide students with tools to manage their wellbeing whilst at university. The Mind Tutor integrates academic support with wellbeing support based on feedback from students. Users interact with it via an artificial intelligence (AI) chatbot. It was developed to address five key areas relating to student wellbeing and attainment.
We aim to recruit a minimum of 400 first-year undergraduate students into the study.
The primary objectives of this study are:
a) to determine the impact of the Mind Tutor on subjective wellbeing over a 6-week period in comparison to an inactive control group; and
b) to determine the overall feasibility of delivering a 6-week RCT to assess the effectiveness of the Mind Tutor on subjective wellbeing in university students.
External Link(s)

Registration Citation

Citation
Davies, Emma, Christian Ehrlich and Sarah Hennelly. 2022. "Feasibility and effectiveness of an artificial intelligence enhanced application for student wellbeing: pilot trial of the Mind Tutor." AEA RCT Registry. February 03. https://doi.org/10.1257/rct.8933-1.1
Sponsors & Partners

There is information in this trial unavailable to the public.
Experimental Details

Interventions

Intervention(s)
The Mind Tutor integrates academic support with wellbeing support based on feedback from students. Users interact with it via an artificial intelligence (AI) chatbot. It was developed to address five key areas relating to student wellbeing and attainment.
Intervention Start Date
2022-02-14
Intervention End Date
2022-04-01

Primary Outcomes

Primary Outcomes (end points)
a) Wellbeing as measured by the Short Warwick Edinburgh Mental Wellbeing Scale (SWEMWBS) (Ng Fat et al., 2017).
b) Feasibility of a 6-week RCT
Primary Outcomes (explanation)

Secondary Outcomes

Secondary Outcomes (end points)
a) Life satisfaction, PANAS, Mindfulness, self-efficacy
b) Recruitment, retention, engagement, acceptability
Secondary Outcomes (explanation)

Experimental Design

Experimental Design
This is a two arm randomised controlled trial.
Experimental Design Details
Randomization Method
Randomisation occurs automatically in Qualtrics.
Randomization Unit
Individual.
Was the treatment clustered?
No
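In the trial itself, Qualtrics performs the assignment automatically. Purely as an illustration, a hypothetical sketch of the equivalent individual-level 1:1 allocation to the two arms (200 per condition out of 400 participants) might look like this; none of the names below come from the trial software:

```python
# Hypothetical sketch of individual-level 1:1 randomisation to two arms.
# In the actual trial this is handled by Qualtrics' built-in randomizer.
import random

def allocate(n_participants, seed=2022):
    """Assign individuals to 'intervention' or 'control' in a fixed 1:1 ratio."""
    rng = random.Random(seed)  # seeded for reproducibility of the example
    # Build a balanced list of arm labels, then shuffle it so that the
    # order of arrivals determines each participant's arm at random.
    arms = ["intervention", "control"] * (n_participants // 2)
    rng.shuffle(arms)
    return arms

allocation = allocate(400)
print(allocation.count("intervention"), allocation.count("control"))  # 200 200
```

Because the arm list is built balanced before shuffling, this scheme guarantees exactly equal group sizes, unlike simple coin-flip assignment.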

Experiment Characteristics

Sample size: planned number of clusters
400 individual students
Sample size: planned number of observations
400 individual students
Sample size (or number of clusters) by treatment arms
200 in each condition
Minimum detectable effect size for main outcomes (accounting for sample design and clustering)
This sample size is based on a sample size calculation conducted in G*Power for a linear multiple regression analysis to address differences between the intervention and control groups on the primary outcome measure, assuming a small-to-medium effect size (f2 = 0.1), 95% power and an alpha level of p = .001. The model includes up to 6 predictors, to allow for the time-one score on the primary outcome measure, group (intervention/control) and up to four other covariates (gender/institution/degree subject/age) to be entered. This also allows for incomplete cases to be dropped from the analysis if needed.
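The G*Power calculation above can be reproduced approximately from first principles. For a fixed-model linear multiple regression F test, the noncentrality parameter is lambda = f2 x N. The sketch below is a hypothetical reconstruction (not the registry's own code) assuming an omnibus test of all 6 predictors; it uses scipy's noncentral F distribution:

```python
# Hypothetical reconstruction of the stated G*Power calculation using
# scipy's noncentral F distribution (fixed-model multiple regression).
from scipy.stats import f as f_dist, ncf

def regression_power(n, n_predictors, f2, alpha):
    """Power of the F test for n_predictors in a linear regression with n cases."""
    df1 = n_predictors           # numerator df: predictors being tested
    df2 = n - n_predictors - 1   # residual (denominator) df
    ncp = f2 * n                 # noncentrality parameter, lambda = f2 * N
    f_crit = f_dist.ppf(1 - alpha, df1, df2)
    return 1 - ncf.cdf(f_crit, df1, df2, ncp)

# Parameters stated in the registration: N = 400, up to 6 predictors,
# small-to-medium effect f2 = 0.1, alpha = .001.
power = regression_power(n=400, n_predictors=6, f2=0.1, alpha=0.001)
print(round(power, 3))
```

Under these assumptions the computed power lands in the vicinity of the registration's 95% figure; exact agreement with G*Power would depend on which of its regression test variants (R2 deviation from zero vs. R2 increase) was used.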
IRB

Institutional Review Boards (IRBs)

IRB Name
Oxford Brookes University
IRB Approval Date
2021-09-21
IRB Approval Number
L21256

Post-Trial

Post Trial Information

Study Withdrawal

There is information in this trial unavailable to the public.

Intervention

Is the intervention completed?
Yes
Intervention Completion Date
November 30, 2015, 12:00 +00:00
Data Collection Complete
Yes
Data Collection Completion Date
December 31, 2015, 12:00 +00:00
Final Sample Size: Number of Clusters (Unit of Randomization)
N/A
Was attrition correlated with treatment status?
No
Final Sample Size: Total Number of Observations
402 participants
Final Sample Size (or Number of Clusters) by Treatment Arms
One too many = 99; Imagery = 97; Drinks meter = 104; Control = 102
Data Publication

Data Publication

Is public data available?
No

There is information in this trial unavailable to the public.

Program Files

Program Files
No
Reports, Papers & Other Materials

Relevant Paper(s)

Abstract
Aim: To assess the effectiveness of two personalised digital interventions (OneTooMany and Drinks Meter) compared to controls.
Method: Randomised controlled trial (AEARCTR-0001082). Volunteers for the study, aged 18-30, were randomly allocated to one of two interventions or one of two control groups and were followed up four weeks later. Primary outcomes were AUDIT-C, drinking harms and preloading.
Drinks Meter provided participants with brief screening and advice for alcohol in addition to normative feedback, information on calories consumed and money spent. OneTooMany presented a series of socially embarrassing scenarios that may occur when drinking, and participants were scored according to if/how recently they had experienced them.
Results: The study failed to recruit and obtain sufficient follow-up data to reach the a priori estimated power for detecting a difference between groups, and there was no indication in the analysable sample of 402 subjects of a difference on the primary outcome measures (Drinks Meter: AUDIT-C IRR=0.98 (0.89-1.09); Pre-loading IRR=1.01 (0.95-1.07); Harms IRR=0.97 (0.79-1.20); OneTooMany: AUDIT-C IRR=0.96 (0.86-1.07); Pre-loading IRR=0.99 (0.93-1.06); Harms IRR=1.16 (0.94-1.43)).
Conclusion: Further research is needed on the efficacy of such instruments and their ingredients. However, recruitment and follow-up are a challenge.
Citation
Emma L Davies, Adam J Lonsdale, Sarah E Hennelly, Adam R Winstock, David R Foxcroft, Personalized Digital Interventions Showed no Impact on Risky Drinking in Young Adults: A Pilot Randomized Controlled Trial, Alcohol and Alcoholism, Volume 52, Issue 6, November 2017, Pages 671–676, https://doi.org/10.1093/alcalc/agx051

Reports & Other Materials