Privacy in Mental Wellness Apps
Last registered on July 16, 2020

Pre-Trial

Trial Information
General Information
Title
Privacy in Mental Wellness Apps
RCT ID
AEARCTR-0005721
Initial registration date
April 14, 2020
Last updated
July 16, 2020 10:25 PM EDT
Location(s)

This section is unavailable to the public.
Primary Investigator
Affiliation
Florida State University
Other Primary Investigator(s)
Additional Trial Information
Status
In development
Start date
2020-04-15
End date
2020-10-26
Secondary IDs
Abstract
Before the rise of COVID-19, the US was already in the midst of health crises largely suggestive of worsening mental health. Case and Deaton (2015, 2017) document worsening mortality rates for segments of the US population, which appear to be driven by "deaths of despair" caused by opioids, alcohol, or suicide. Unfortunately, fewer than half of mentally ill people in the US receive the treatment they need (National Institute of Mental Health, 2019). I consider a pandemic-appropriate strategy for improving wellbeing by broadening access to proven app-based mental health treatment (Firth et al., 2017a,b). Following Torous and Roberts (2017), I hypothesize that a lack of clear information about data privacy distorts the market for mental health apps. I propose and test a simple five-star privacy-policy rating system as a possible solution to this problem.
External Link(s)
Registration Citation
Citation
Magee, Ellis. 2020. "Privacy in Mental Wellness Apps." AEA RCT Registry. July 16. https://doi.org/10.1257/rct.5721-1.1.
Experimental Details
Interventions
Intervention(s)
I have evaluated privacy policies for 16 popular mental wellness apps, and I have assigned them ratings on a scale of one to five stars. Control participants will make decisions between pairs of apps based only on the information from the app stores. Treatment participants will make decisions between pairs of apps with the information provided to the control participants plus the privacy-policy ratings. I will evaluate the impact of privacy-policy ratings on participant choices as part of an effort to estimate the social welfare lost to difficult-to-interpret privacy policies.
Intervention Start Date
2020-04-15
Intervention End Date
2020-08-24
Primary Outcomes
Primary Outcomes (end points)
Which app is chosen from each pair; whether the participant takes up an offer of access to the privacy-policy ratings; whether the selected app is installed (as evidenced by a screenshot); and whether the individual plans to keep the app after 3 weeks have passed
Primary Outcomes (explanation)
None of the primary outcomes are constructed.
Secondary Outcomes
Secondary Outcomes (end points)
Secondary Outcomes (explanation)
Experimental Design
Experimental Design
I will recruit adults living in the United States from mental-wellness-related communities to participate in a survey-type economic experiment. Participants will be randomly assigned to an experimental condition. All participants in all conditions will be asked to rank their interest in four categories of mental wellness apps. They will then be asked to choose between pairs of apps from their preferred categories. Participants in the treatment conditions will receive access to my privacy-policy ratings in addition to the information provided to the control conditions. One of the app choices will be selected for reimbursement of a one-month subscription. I will follow up over the next few weeks about the use of that app, and I will pay participants for their time. Data analysis will be based on the experimental condition, the app information visible to participants, and controls for the demographic characteristics participants provide.
Experimental Design Details
Not available
Randomization Method
Randomization will be performed by Qualtrics at the beginning of the survey-type experiment.
Randomization Unit
Randomization will be at the individual level.
Was the treatment clustered?
No
Experiment Characteristics
Sample size: planned number of clusters
400 adults
Sample size: planned number of observations
400 adults
Sample size (or number of clusters) by treatment arms
100 adults control, 100 adults treatment, 200 adults willingness-to-pay
Minimum detectable effect size for main outcomes (accounting for sample design and clustering)
0.2 standard deviations for one-mean tests, 15 to 20 percentage points for two-proportions tests (depending on attrition)
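These MDEs are consistent with standard closed-form power calculations for the stated sample sizes. As a minimal sketch (assuming a two-sided 5% test, 80% power, and a conservative baseline proportion of 0.5, none of which are stated in the registration):

```python
import math

# Standard-normal quantiles (assumed: alpha = 0.05 two-sided, power = 0.80)
Z_ALPHA = 1.96    # z_{alpha/2}
Z_BETA = 0.8416   # z_{beta}

def mde_one_mean(n):
    """MDE in standard-deviation units for a one-sample mean test."""
    return (Z_ALPHA + Z_BETA) / math.sqrt(n)

def mde_two_proportions(n_per_arm, p=0.5):
    """MDE for a two-sample proportions test, using the conservative
    variance at baseline proportion p (largest at p = 0.5)."""
    return (Z_ALPHA + Z_BETA) * math.sqrt(2 * p * (1 - p) / n_per_arm)

# 200 adults (e.g. the willingness-to-pay arms pooled):
print(round(mde_one_mean(200), 2))          # ~0.20 standard deviations

# 100 adults per arm (control vs. treatment):
print(round(mde_two_proportions(100), 2))   # ~0.20, i.e. 20 percentage points

# Attrition shrinks n per arm and widens the MDE accordingly, which is
# consistent with the stated 15-20 percentage-point range.
```

A lower baseline proportion or less attrition would push the two-proportions MDE toward the bottom of the stated range.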
IRB
INSTITUTIONAL REVIEW BOARDS (IRBs)
IRB Name
Florida State University Office of Human Subjects Protection (FSU OHSP)
IRB Approval Date
2020-01-27
IRB Approval Number
Determined Exempt. IRB Study ID: STUDY00000970
Analysis Plan

There are documents in this trial unavailable to the public.