Privacy in Mental Wellness Apps
Last registered on April 17, 2020


Trial Information
General Information
Privacy in Mental Wellness Apps
Initial registration date
April 14, 2020
Last updated
April 17, 2020 1:02 PM EDT
Primary Investigator
Florida State University
Other Primary Investigator(s)
Additional Trial Information
In development
Start date
End date
Secondary IDs
Before the rise of COVID-19, the US was already in the midst of health crises that are largely suggestive of worsening mental health. Case and Deaton (2015, 2017) document worsening mortality rates for segments of the US population, which appear to be driven by "deaths of despair" caused by opioids, alcohol, or suicide. Unfortunately, fewer than half of people with mental illness in the US receive the treatment they need (National Institute of Mental Health, 2019). I consider a pandemic-appropriate strategy for improving wellbeing by broadening access to proven app-based mental health treatment (Firth et al., 2017a,b). Following Torous and Roberts (2017), I hypothesize that a lack of clear information about data privacy distorts the market for mental health apps. I propose and test a simple five-star privacy-policy rating system as a possible solution to this problem.
External Link(s)
Registration Citation
Magee, Ellis. 2020. "Privacy in Mental Wellness Apps." AEA RCT Registry. April 17. https://doi.org/10.1257/rct.5721-1.0.
Experimental Details
I have evaluated privacy policies for 16 popular mental wellness apps, and I have assigned them ratings on a scale of one to five stars. Control participants will make decisions between pairs of apps based only on the information from the app stores. Treatment participants will make decisions between pairs of apps with the information provided to the control participants plus the privacy-policy ratings. I will evaluate the impact of privacy-policy ratings on participant choices as part of an effort to evaluate the level of social welfare lost to difficult-to-interpret privacy policies.
Intervention Start Date
Intervention End Date
Primary Outcomes
Primary Outcomes (end points)
Which app is chosen from each pair; acceptance of the offer of access to the privacy-policy ratings; whether the selected app is installed (as evidenced by a screenshot); and whether the individual plans to keep the app after 3 weeks have passed
Primary Outcomes (explanation)
None of the primary outcomes are constructed.
Secondary Outcomes
Secondary Outcomes (end points)
Secondary Outcomes (explanation)
Experimental Design
Experimental Design
I will recruit adults living in the United States from mental-wellness-related communities to participate in a survey-type economic experiment. Participants will be randomly assigned to an experimental condition. All participants in all conditions will be asked to rank their interest in four categories of mental wellness apps. They will then be asked to choose between pairs of apps from their preferred categories. Participants in the treatment conditions will receive access to my privacy-policy ratings in addition to the information provided to the control conditions. One of the app choices will be selected for reimbursement of a one-month subscription. I will follow up over the next few weeks about the use of that app, and I will pay participants for their time. Data analysis will be based on the experimental condition, the app information visible to participants, and controls for the demographic characteristics participants provide.
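The planned analysis (experimental condition plus demographic controls) can be sketched as a linear probability model on simulated data. Everything here is illustrative: the variable names (`treated`, `age`, `chose_higher_rated`), the simulated effect size, and the use of ordinary least squares are my assumptions, not details from the registration.

```python
import numpy as np

rng = np.random.default_rng(42)
n = 400  # planned number of observations

treated = rng.integers(0, 2, n)   # 1 = shown the privacy-policy ratings (hypothetical coding)
age = rng.integers(18, 65, n)     # stand-in demographic control

# Simulated outcome: treated participants pick the higher-rated app more often.
# The 0.2 effect size is invented purely for illustration.
chose_higher_rated = (rng.random(n) < 0.5 + 0.2 * treated).astype(float)

# Linear probability model: outcome ~ intercept + treated + age
X = np.column_stack([np.ones(n), treated, age])
beta, *_ = np.linalg.lstsq(X, chose_higher_rated, rcond=None)
print(f"Estimated treatment effect: {beta[1]:.3f}")
```

In practice one would likely use a logit or probit specification and the participants' actual choice data; the linear probability model is used here only to keep the sketch dependency-free.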
Experimental Design Details
Not available
Randomization Method
Randomization will be performed by Qualtrics at the beginning of the survey-type experiment.
Randomization Unit
Randomization will be at the individual level.
Was the treatment clustered?
Experiment Characteristics
Sample size: planned number of clusters
400 adults
Sample size: planned number of observations
400 adults
Sample size (or number of clusters) by treatment arms
100 adults control, 100 adults treatment, 200 adults willingness-to-pay
Minimum detectable effect size for main outcomes (accounting for sample design and clustering)
0.2 standard deviations for one-mean tests, 15 to 20 percentage points for two-proportions tests (depending on attrition)
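These minimum detectable effect sizes follow from standard normal-approximation power formulas. As a check, the sketch below computes them for 80% power at a 5% two-sided significance level; those power and significance parameters, and the assumed baseline proportion of 0.5, are my assumptions rather than values stated in the registration.

```python
from scipy.stats import norm


def mde_two_proportions(n_per_arm, p=0.5, alpha=0.05, power=0.80):
    """Approximate MDE (difference in proportions) for a two-sample
    proportions test, using the normal approximation at baseline p."""
    z = norm.ppf(1 - alpha / 2) + norm.ppf(power)
    return z * (2 * p * (1 - p) / n_per_arm) ** 0.5


def mde_one_mean(n, alpha=0.05, power=0.80):
    """Approximate MDE in standard-deviation units for a one-mean test."""
    z = norm.ppf(1 - alpha / 2) + norm.ppf(power)
    return z / n ** 0.5


# 100 participants per arm -> roughly a 20-percentage-point MDE
print(f"{mde_two_proportions(100):.3f}")
# 200 participants -> roughly 0.2 standard deviations
print(f"{mde_one_mean(200):.3f}")
```

With 100 participants per arm the two-proportions MDE is about 0.198 (roughly 20 percentage points), shrinking toward 15 points as effective sample sizes grow, consistent with the 15-to-20-point range stated above; 200 observations give a one-mean MDE of about 0.198 standard deviations.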
IRB Name
Florida State University Office of Human Subjects Protection (FSU OHSP)
IRB Approval Date
IRB Approval Number
Determined Exempt. IRB Study ID: STUDY00000970
Analysis Plan

There are documents in this trial that are unavailable to the public; access to this information may be requested through the registry.
