Privacy in Mental Wellness Apps
Initial registration date
April 14, 2020
July 16, 2020 10:25 PM EDT
Florida State University
Other Primary Investigator(s)
Additional Trial Information
Before the rise of COVID-19, the US was already in the midst of health crises suggestive of worsening mental health. Case and Deaton (2015, 2017) document worsening mortality rates for segments of the US population, driven by "deaths of despair": deaths from opioids, alcohol, or suicide. Unfortunately, fewer than half of mentally ill people in the US receive the treatment they need (National Institute of Mental Health, 2019). I consider a pandemic-appropriate strategy for improving wellbeing by broadening access to proven app-based mental health treatment (Firth et al., 2017a,b). Following Torous and Roberts (2017), I hypothesize that a lack of clear information about data privacy distorts the market for mental health apps. I propose and test a simple five-star privacy-policy rating system as a possible solution to this problem.
I have evaluated the privacy policies of 16 popular mental wellness apps and assigned each a rating on a scale of one to five stars. Control participants will choose between pairs of apps based only on the information from the app stores. Treatment participants will choose between pairs of apps using the information provided to control participants plus the privacy-policy ratings. I will evaluate the impact of privacy-policy ratings on participant choices to help quantify the social welfare lost to difficult-to-interpret privacy policies.
Intervention Start Date
Intervention End Date
Primary Outcomes (end points)
Primary Outcomes (explanation)
None of the primary outcomes are constructed.
Secondary Outcomes (end points)
Secondary Outcomes (explanation)
I will recruit adults living in the United States from mental-wellness-related communities to participate in a survey-type economic experiment. Participants will be randomly assigned to an experimental condition. All participants in all conditions will be asked to rank their interest in four categories of mental wellness apps. They will then be asked to choose between pairs of apps from their preferred categories. Participants in the treatment conditions will receive access to my privacy-policy ratings in addition to the information provided to the control conditions. One of the app choices will be selected for reimbursement of a one-month subscription. I will follow up over the next few weeks about the use of that app, and I will pay participants for their time. Data analysis will be based on the experimental condition, the app information visible to participants, and controls for the demographic characteristics participants provide.
Experimental Design Details
Randomization will be performed by Qualtrics at the beginning of a survey-type experiment.
Randomization will be at the individual level
Was the treatment clustered?
Sample size: planned number of clusters
Sample size: planned number of observations
Sample size (or number of clusters) by treatment arms
100 adults control, 100 adults treatment, 200 adults willingness-to-pay
Minimum detectable effect size for main outcomes (accounting for sample design and clustering)
0.2 standard deviations for one-mean tests, 15 to 20 percentage points for two-proportions tests (depending on attrition)
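The stated minimum detectable effects are roughly what a standard power calculation yields for these sample sizes. As an illustrative sketch (the registration does not state the test parameters, so the 5% two-sided significance level, 80% power, and the conservative p = 0.5 variance for proportions are my assumptions), a normal-approximation MDE calculation reproduces the ~0.2 SD figure for a one-mean test with 200 participants and the ~20 percentage-point figure for a two-proportions test with 100 participants per arm:

```python
from statistics import NormalDist

def mde_one_mean(n, alpha=0.05, power=0.80):
    # MDE in standard-deviation units for a one-sample z-test:
    # (z_{1-alpha/2} + z_{power}) / sqrt(n)
    z = NormalDist().inv_cdf
    return (z(1 - alpha / 2) + z(power)) / n ** 0.5

def mde_two_props(n_per_arm, p=0.5, alpha=0.05, power=0.80):
    # MDE (in proportion units) for a two-sample proportions z-test,
    # using the conservative variance at p = 0.5
    z = NormalDist().inv_cdf
    se = (2 * p * (1 - p) / n_per_arm) ** 0.5
    return (z(1 - alpha / 2) + z(power)) * se

print(round(mde_one_mean(200), 2))    # ~0.20 SD with 200 participants
print(round(mde_two_props(100), 2))   # ~0.20, i.e. 20 percentage points per 100-person arm
```

Attrition shrinks the effective n, which is presumably why the proportions MDE is quoted as a 15-to-20-percentage-point range rather than a point value.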
INSTITUTIONAL REVIEW BOARDS (IRBs)
Florida State University Office of Human Subjects Protection (FSU OHSP)
IRB Approval Date
IRB Approval Number
Determined Exempt. IRB Study ID: STUDY00000970