The Behavioral Economics of Social Media: A Study of Self Commitment Devices

Last registered on August 09, 2019

Pre-Trial

Trial Information

General Information

Title
The Behavioral Economics of Social Media: A Study of Self Commitment Devices
RCT ID
AEARCTR-0004519
Initial registration date
August 08, 2019

First published
August 09, 2019, 9:42 AM EDT

Locations

Primary Investigator

Affiliation
Stanford University

Other Primary Investigator(s)

Additional Trial Information

Status
Completed
Start date
2019-01-20
End date
2019-03-03
Secondary IDs
Abstract
Despite mounting public concern over social media overuse, there is limited evidence that the time individuals spend on social media platforms is excessive from a welfare standpoint. I implement a randomised intervention over six weeks, encouraging a subset of 629 participants recruited over Facebook to adopt a voluntary soft commitment device designed to help limit their phone, Facebook, and Instagram usage. Utilising data from four surveys and direct measurement of social media use, I find that: (i) individuals persistently underestimate how much time they actually spend on social media; (ii) they spend much more time on their phones and on Facebook than they profess to ideally desire; (iii) users are willing to set application limits even in the absence of incentives to do so; and (iv) the adoption of such limits significantly reduces phone and Facebook use, with reductions in the latter persisting even a month later. Together, these results suggest that individuals spend more time on social media than is optimal because of their limited ability to exercise self-control.
Additionally, in a second experimental arm of the study, I investigate the effect of nudging individuals to review and report their Facebook privacy settings. I find that the nudge results in limited changes to privacy-protective choices; instead, individuals respond by reducing their usage of Facebook.
External Link(s)

Registration Citation

Citation
Hoong, Juan Ru. 2019. "The Behavioral Economics of Social Media: A Study of Self Commitment Devices." AEA RCT Registry. August 09. https://doi.org/10.1257/rct.4519-1.0
Former Citation
Hoong, Juan Ru. 2019. "The Behavioral Economics of Social Media: A Study of Self Commitment Devices." AEA RCT Registry. August 09. https://www.socialscienceregistry.org/trials/4519/history/51478
Experimental Details

Interventions

Intervention(s)
Intervention Start Date
2019-01-27
Intervention End Date
2019-03-03

Primary Outcomes

Primary Outcomes (end points)
Actual usage of phone/Facebook/Instagram (mins), self-predicted usage, self-reported preferred usage
Primary Outcomes (explanation)

Secondary Outcomes

Secondary Outcomes (end points)
Secondary Outcomes (explanation)

Experimental Design

Experimental Design
The experiment consisted of five parts: recruitment and pre-screen, baseline, app-treatment survey, midline, and endline. It was administered through four surveys on the Qualtrics platform over a period of six weeks, from January 20th to March 3rd, 2019. The study period was intentionally timed to avoid any major holidays that might affect smartphone and social media usage.

I recruited participants through Facebook ads over the course of the first day of the experiment. The ad was targeted at users in the United States aged 18-34 -- the age range of the majority of Facebook users (eMarketer, 2018). In order to achieve a gender-balanced initial sample, I additionally targeted the ads by demographic cells, allocating triple the recruitment budget to males, because females are roughly three to four times more likely to click through on ads. During the campaign, 223,104 people were shown the ad and 3,617 clicked on it, a click-through rate of approximately 1.6%. The ad consisted of a generic picture of Stanford University and was captioned: "Participate in a Stanford online research study and earn $20! Contribute to economic research at Stanford by participating in an online study on social media use." It made no mention of "Screen Time" or self-control issues, in order to minimise priming and sample selection bias.
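As a back-of-envelope check (my arithmetic, not part of the registration), the reported figures imply the following, assuming a roughly similar cost per impression across demographic cells so that impressions scale with budget:

\[
\mathrm{CTR} = \frac{3{,}617}{223{,}104} \approx 1.6\%,
\qquad
\underbrace{3B \times r}_{\text{expected male clicks}} \;\approx\; \underbrace{B \times 3r}_{\text{expected female clicks}},
\]

where \(B\) denotes the budget allocated to females and \(r\) the male click-through rate; tripling the male budget thus roughly offsets a threefold click-through gap.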

Upon clicking on the ad, participants entered a background demographic pre-screen hosted on stanforduniversity.qualtrics.com. I screened participants for: (i) active usage of the Facebook platform on their mobile devices (more than 50% of Facebook usage on mobile, as well as more than 50% of that time spent within the Facebook app as opposed to web-based browsers); (ii) usage of iOS devices, as the system contains the "Screen Time" feature, which provides the phone usage data harnessed in this study; and (iii) age (above 18). A consent form was also shown as part of the pre-screen. In total, 629 participants consented to the study, passed the pre-screen, and completed the entirety of the baseline survey.

After the pre-screen, qualified participants were immediately prompted into the baseline survey. Contact information, additional demographics, and a range of outcome variables were recorded. During the baseline survey, I asked respondents to estimate the time they and their peers spent on their phones (and on Facebook/Instagram), before asking them to open the "Screen Time" feature to access their time usage data. As in every survey, participants were then asked to upload a screenshot of their phone, Facebook, and Instagram time usage for the last 7 days, if available. Of the 629 participants, 81% already had "Screen Time" enabled on their phones. Since "Screen Time" is only available on iOS 12.0 or later, I asked participants with earlier versions of the system to update their phones so they could provide "Screen Time" data the following week. I also prompted the remaining participants to enable "Screen Time" on their phones so as to facilitate data collection in subsequent surveys.

All baseline participants then answered questions about their ideal usage and predicted usage for the following week. Finally, respondents answered a series of questions traditionally used to measure self-control: the 13 questions of Tangney's Brief Self-Control Scale were displayed, as well as three questions aimed at eliciting participants' time-discounting parameters. The latter questions were hypothetical (e.g. "If we paid you in one month, what's the lowest amount that you would be willing to accept, instead of receiving $20 today?"). I considered implementing a second-price auction or another technique to elicit preferences in an incentivised manner; however, asking the questions hypothetically is sufficient for my purposes, as it tends to be less confusing than other techniques and avoids the added cost of paying participants.
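As a sketch of how such a hypothetical response maps to a discounting parameter (this back-of-envelope mapping is mine, not spelled out in the registration): if a respondent reports being indifferent between receiving $20 today and $X in one month, the implied one-month discount factor is

\[
\delta = \frac{20}{X},
\]

so a stated willingness-to-accept of, say, $X = 25$ implies \(\delta = 0.8\), i.e. a monthly discount rate of 25%.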

Of the 629 participants who completed the baseline, 52 responses on time usage were either manually overridden or omitted from the experimental "Screen Time" usage data because of invalid data (for example, a series of zeros for time usage due to recent activation of "Screen Time" or a system glitch) or inaccurate responses (for example, a discrepancy between "Screen Time" screenshots and reported survey data). Where possible, I overrode inaccurate survey responses with accurate "Screen Time" data harvested from participants' screenshots.
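Cleaning rules of this kind lend themselves to simple programmatic flags. A minimal sketch in Python, assuming hypothetical column names and an illustrative discrepancy tolerance (none of these appear in the registration):

import pandas as pd

TOLERANCE_MIN = 30  # illustrative cutoff (minutes) for screenshot/self-report gaps

def flag_usage_response(row: pd.Series) -> str | None:
    """Flag invalid or inaccurate Screen Time responses; return None if the row looks fine."""
    usage_cols = ["phone_min", "facebook_min", "instagram_min"]
    # A run of zeros suggests Screen Time was only recently activated, or glitched.
    if all(row[c] == 0 for c in usage_cols):
        return "all_zero"
    # A large gap between the typed answer and the screenshot value is suspect;
    # where a valid screenshot exists, its value overrides the typed response.
    if abs(row["phone_min"] - row["screenshot_phone_min"]) > TOLERANCE_MIN:
        return "discrepancy"
    return None

# Example: flags = survey_df.apply(flag_usage_response, axis=1)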

I then randomly assigned participants to the control or app-intervention group for the second survey, which was administered via email exactly a week after the baseline, on January 27th. Again, the survey was hosted on the Qualtrics platform, and each individual received a unique link to it. Assignment to control and treatment was carried out within 16 strata defined by age, gender, education, and active Instagram usage. To minimise differential attrition between the treatment and control arms, control participants were still asked about their time usage data and related questions, to maintain a similar survey length.
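A minimal sketch of a stratified 50/50 assignment of this kind (the column names and binary cuts are illustrative assumptions; the registration states only the four stratifiers and the 16 cells):

import numpy as np
import pandas as pd

def stratified_assignment(df: pd.DataFrame, strata_cols: list[str], seed: int = 0) -> pd.DataFrame:
    """Randomly assign half of each stratum to treatment (1), the rest to control (0)."""
    rng = np.random.default_rng(seed)
    out = df.copy()
    out["treatment"] = 0
    for _, stratum in out.groupby(strata_cols):
        shuffled = rng.permutation(stratum.index.to_numpy())
        out.loc[shuffled[: len(shuffled) // 2], "treatment"] = 1
    return out

# Four binary stratifiers give the 2^4 = 16 cells described above.
rng = np.random.default_rng(42)
participants = pd.DataFrame(
    rng.integers(0, 2, size=(629, 4)),
    columns=["age_above_median", "female", "college_educated", "active_instagram"],
)
assigned = stratified_assignment(
    participants, ["age_above_median", "female", "college_educated", "active_instagram"]
)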

As with the previous survey, screenshotted time usage data was collected for the previous 7 days, and all respondents again answered questions about their ideal usage and predicted usage for the following week. For clarity, I call the actual data collected from screenshots in the second survey "Week 1 Actual" data. Note that the questions about ideal and predicted usage answered in the second survey correspond to anticipations for the following week; thus, for expositional purposes, I refer to them loosely as "Week 2 Ideal" and "Week 2 Predicted" data.

I then nudged participants in the app-treatment group who did not already have app limits on their phones to adopt time limits equivalent to (or lower than) the ideal times they had previously specified for their phone, Facebook, and Instagram. I informed them that iOS 12's "Screen Time" includes a feature called "App Limits" that allows users to set daily time limits for individual apps.

Seventeen participants reported that they already had app limits set on their phones, whilst 83 participants declined the suggestion to adopt app limits. Setting a time limit on the iOS system is not binding: the option exists to "Ignore Limit" for the next 15 minutes, or for the rest of the day. Respondents were made aware of this fact when they were asked to adopt time limits, and it was made clear that they did not have to agree to the adoption of limits in order to continue with the study. Additionally, they were asked to provide screenshots of the app limits they had set, in order to hold them accountable to those limits. I double-checked a subsample of these screenshots and found that the majority of individuals set their app limits to be equivalent to the ideal times they had previously specified for their phone, Facebook, and Instagram.

In the third survey, administered the following week on February 3rd, participants were further stratified by demographic characteristics into the privacy intervention groups. Basic screenshots of "Screen Time" data for the previous 7 days ("Week 2 Actual") were collected as in previous surveys, and participants allocated to the app treatment were asked follow-up questions about whether they had ignored or switched off any of their app limits in the past week. Participants allocated to the privacy treatment were first asked about their current perception of the privacy settings of their own Facebook account, through questions that map easily onto specific Facebook privacy settings. Secondly, I elicited their self-reported preferred settings by posing "simulated actual" questions that resemble in phrasing the actual privacy options on the Facebook platform. I phrased the questions in this manner in order to come as close to an "active choice" as possible: since participants inevitably already had an incumbent privacy setting, it was not possible to elicit a true "active choice" independent of status quo bias. As such, asking for self-reported preferences in a simulated manner, before participants entered their real Facebook interface, was the best way to elicit privacy preferences without introducing the distortions of status quo bias. Thirdly, respondents were asked to log into their Facebook account and report their actual privacy and ad settings. Again, they were asked to screenshot their settings to ensure truthful reporting. Lastly, to identify any preference reversal owing to attention bias or other factors, participants were asked for their self-reported preferred settings again, in exactly the same manner as before, after having reported their actual settings. They were asked whether their preferences over their privacy settings had changed and, if so, to describe qualitatively how they had changed.

In order to measure the longer-term impact of the app treatment on participants, the endline was administered four weeks after the third survey, on March 3rd. As in the third survey, basic "Screen Time" screenshot and time usage data was collected, and participants allocated to the app treatment were asked follow-up questions about whether they had ignored or switched off their app limits in the past week. Participants who had not previously been allocated to the privacy treatment were then asked questions about their privacy settings identical to those the privacy treatment group received in the previous survey. Respondents in the privacy treatment group were asked whether they had changed their privacy settings since the last survey. Additionally, all participants were asked whether they (or their friends) had experienced any privacy violations on social media in the past.
Experimental Design Details
Randomization Method
Randomization done through computer (stratified by demographic characteristics)
Randomization Unit
Individual
Was the treatment clustered?
No

Experiment Characteristics

Sample size: planned number of clusters
629 Individuals
Sample size: planned number of observations
629 Individuals
Sample size (or number of clusters) by treatment arms
50% control, 50% treatment
Minimum detectable effect size for main outcomes (accounting for sample design and clustering)
IRB

Institutional Review Boards (IRBs)

IRB Name
Stanford University IRB
IRB Approval Date
2018-10-18
IRB Approval Number
48060

Post-Trial

Post Trial Information

Study Withdrawal

Intervention

Is the intervention completed?
Yes
Intervention Completion Date
March 03, 2019, 12:00 +00:00
Data Collection Complete
Yes
Data Collection Completion Date
March 03, 2019, 12:00 +00:00
Final Sample Size: Number of Clusters (Unit of Randomization)
Was attrition correlated with treatment status?
No
Final Sample Size: Total Number of Observations
Final Sample Size (or Number of Clusters) by Treatment Arms
Data Publication

Data Publication

Is public data available?
No

Program Files

Program Files
Reports, Papers & Other Materials

Relevant Paper(s)

Reports & Other Materials