Privacy in Mental Wellness Apps
Last registered on August 19, 2020

Pre-Trial

Trial Information
General Information
Title
Privacy in Mental Wellness Apps
RCT ID
AEARCTR-0005721
Initial registration date
April 14, 2020
Last updated
August 19, 2020 10:56 AM EDT
Location(s)
Region
Primary Investigator
Affiliation
Florida State University
Other Primary Investigator(s)
Additional Trial Information
Status
Ongoing
Start date
2020-04-15
End date
2020-11-27
Secondary IDs
Abstract
Before the rise of COVID-19, the US was already in the midst of health crises largely suggestive of worsening mental health. Case and Deaton (2015, 2017) document worsening mortality rates for segments of the US population, which appear to be driven by "deaths of despair" caused by opioids, alcohol, or suicide. Unfortunately, fewer than half of people with mental illness in the US receive the treatment they need (National Institute of Mental Health, 2019). I consider a pandemic-appropriate strategy for improving wellbeing by broadening access to proven app-based mental health treatment (Firth et al., 2017a,b). Following Torous and Roberts (2017), I hypothesize that a lack of clear information about data privacy distorts the market for mental health apps. I propose and test a simple five-star privacy-policy rating system as a possible solution to this problem.
External Link(s)
Registration Citation
Citation
Magee, Ellis. 2020. "Privacy in Mental Wellness Apps." AEA RCT Registry. August 19. https://doi.org/10.1257/rct.5721-2.0.
Experimental Details
Interventions
Intervention(s)
I have evaluated privacy policies for 16 popular mental wellness apps and assigned each a rating on a scale of one to five stars. Control participants will make decisions between pairs of apps based only on the information from the app stores. Treatment participants will make decisions between pairs of apps with the information provided to the control participants plus the privacy-policy ratings. I will evaluate the impact of the privacy-policy ratings on participant choices as part of an effort to quantify the social welfare lost to difficult-to-interpret privacy policies.
Intervention Start Date
2020-04-15
Intervention End Date
2020-09-25
Primary Outcomes
Primary Outcomes (end points)
Which app is chosen from each pair; offers made for access to the privacy-policy ratings; whether the selected app is installed (as evidenced by a screenshot); and whether the individual plans to keep the app after three weeks have passed
Primary Outcomes (explanation)
None of the primary outcomes are constructed.
Secondary Outcomes
Secondary Outcomes (end points)
Secondary Outcomes (explanation)
Experimental Design
Experimental Design
I will recruit adults living in the United States to participate in a survey-type economic experiment. Participants will be randomly assigned to an experimental condition. All participants in all conditions will be asked to rank their interest in four categories of mental wellness apps. They will then be asked to choose between pairs of apps from their preferred categories. Participants in the treatment conditions will receive access to my privacy-policy ratings in addition to the information provided to the control conditions. One of the app choices will be selected for reimbursement of a one-month subscription. I will follow up over the next few weeks about the use of that app, and I will pay participants for their time. Data analysis will be based on the experimental condition, the app information visible to participants, and controls for the demographic characteristics participants provide.
Experimental Design Details
I have recruited just over 100 (of the target 400) adults living in the United States from mental-wellness-related communities on Reddit and Facebook to participate in my survey-type economic experiment. To enable answering questions of uptake and persistence, I will augment the sample by recruiting 250 participants from the xs/fs subject pool. In the initial sample, participants were randomly assigned to an experimental condition, with a targeted 25% assigned to control and 25% assigned to treatment (for a total of 50% in the randomized controlled trial (RCT) arm of the study) and 50% assigned to the willingness-to-pay (WTP) arm of the study. In the supplemental sample, participants will be randomly assigned to an experimental condition, with 50% assigned to control and 50% assigned to treatment, so that all supplemental participants are in the RCT arm. All participants in all conditions will be asked to rank their interest in four categories of mental wellness apps. They will then be asked to choose between pairs of apps from their top two preferred categories. All participants in the treatment condition and a subset of participants in the WTP condition will receive access to my privacy-policy ratings in addition to the information provided to the control participants. One of the app choices will be selected for reimbursement of a one-month subscription. Finally, subjects will be asked to answer a few demographic questions. I will follow up over the next few weeks about the use of the selected app, and I will pay participants for their time. Data analysis will be based on the experimental condition, the app information visible to participants, and controls for the demographic characteristics participants provide.
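The assignment probabilities above can be summarized with a minimal sketch in Python. The arm labels, the seed, and the helper function are illustrative assumptions for exposition only; the actual randomization is carried out by Qualtrics at the start of the survey.

    import random

    def assign_arm(sample: str, rng: random.Random) -> str:
        """Draw an experimental condition for one participant."""
        if sample == "initial":
            # Initial sample: 25% control, 25% treatment (the RCT arm), 50% WTP arm.
            return rng.choices(["control", "treatment", "wtp"],
                               weights=[0.25, 0.25, 0.50])[0]
        # Supplemental sample: 50% control, 50% treatment (all in the RCT arm).
        return rng.choice(["control", "treatment"])

    rng = random.Random(5721)  # seed chosen only to make this sketch reproducible
    supplemental = [assign_arm("supplemental", rng) for _ in range(250)]
    print({arm: supplemental.count(arm) for arm in ("control", "treatment")})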
Randomization Method
Randomization will be performed by Qualtrics at the beginning of the survey-type experiment
Randomization Unit
Randomization will be at the individual level
Was the treatment clustered?
No
Experiment Characteristics
Sample size: planned number of clusters
100 adults in the initial sample (out of 400 targeted) and 250 adults in the supplemental sample
Sample size: planned number of observations
100 adults in the initial sample (out of 400 targeted) and 250 adults in the supplemental sample
Sample size (or number of clusters) by treatment arms
125 adults in control and 125 adults in treatment in the supplemental sample
Minimum detectable effect size for main outcomes (accounting for sample design and clustering)
15 percentage points for the paired-proportions test; 20 percentage points for the two-proportions tests
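As a rough check on these figures, one can back out the minimum detectable effect implied by the planned supplemental arms. The sketch below is a hedged illustration assuming 125 participants per arm, a two-sided 5% test, 80% power, and a 50% baseline choice rate; the registered figures may rest on different inputs (for example, smaller analysis samples or more conservative baselines), so exact agreement should not be expected.

    import math
    from statsmodels.stats.power import NormalIndPower

    # Solve for the detectable effect size (Cohen's h) given 125 per arm.
    h = NormalIndPower().solve_power(nobs1=125, alpha=0.05, power=0.80, ratio=1.0)

    # Invert the arcsine transform to express h in percentage points
    # relative to an assumed 50% baseline proportion.
    p0 = 0.50
    p1 = math.sin(math.asin(math.sqrt(p0)) + h / 2) ** 2
    print(f"Cohen's h: {h:.3f}; MDE at a 50% baseline: {(p1 - p0):.1%}")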
IRB
IRB Name
Florida State University Office of Human Subjects Protection (FSU OHSP)
IRB Approval Date
2020-01-27
IRB Approval Number
Determined Exempt. IRB Study ID: STUDY00000970
Analysis Plan

There are documents in this trial that are unavailable to the public.
Post-Trial
Post Trial Information
Study Withdrawal
Intervention
Is the intervention completed?
No
Is data collection complete?
Data Publication
Is public data available?
No
Program Files
Reports, Papers & Other Materials
Relevant Paper(s)