Measuring Competition in the Attention Economy: Evidence from Social Media

Last registered on May 09, 2021


Trial Information

General Information

Initial registration date
April 02, 2021


First published
April 05, 2021, 10:19 AM EDT


Last updated
May 09, 2021, 5:04 PM EDT




Primary Investigator

Columbia University

Other Primary Investigator(s)

Additional Trial Information

Start date
End date
Secondary IDs
We study the competition for consumer attention between social media platforms. We run a field experiment that collects comprehensive data on individuals' time usage and periodically collects psychological measures of well-being. We randomize restrictions to several prominent social media applications on subjects' phones and use this variation to measure changes in subjects' well-being and time usage. This randomization allows us to overcome a primary challenge in measuring substitution patterns in markets where firms compete for consumer attention and products are characterized by zero monetary prices. We use our experiment to characterize such substitution patterns, define relevant markets, and estimate a demand system for consumer attention.
External Link(s)

Registration Citation

Aridor, Guy. 2021. "Measuring Competition in the Attention Economy: Evidence from Social Media." AEA RCT Registry. May 09.
Experimental Details


We use parental control software installed on participants' phones to restrict their access to the applications on their phones. This prevents them both from opening the application directly and from visiting the website associated with the application in any web browser on the phone.
Intervention Start Date
Intervention End Date

Primary Outcomes

Primary Outcomes (end points)
The primary outcome variable that we are interested in is how participants substitute their time in response to the restrictions.
Primary Outcomes (explanation)

Secondary Outcomes

Secondary Outcomes (end points)
Secondary Outcomes (explanation)

Experimental Design

Experimental Design
We restrict access to certain applications on participants' phones. The full details are revealed at the completion of the study (i.e., in the hidden field).
Experimental Design Details
We do a block randomization as follows. We install parental control software on participants' phones as well as a Chrome extension on their computers that logs how much time they spend on different websites and applications. This installation occurs from 3/19 to 3/26. On 4/2, we pull the participants' usage of the different applications over the past week. We classify each participant's usage of the treated applications -- Instagram and YouTube -- as falling in the 0-25, 25-50, 50-75, or 75-100 percentile range. We then form blocks according to YouTube percentile block x Instagram percentile block. Within each block, we allocate participants uniformly at random across the two treatment arms and the control group.
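As an illustration of the assignment procedure described above, the following sketch implements block randomization on the two usage quartiles. This is our own illustrative reimplementation, not the study's actual code; the function names, arm labels, and percentile cutoffs are hypothetical.

```python
import random


def quartile(value, cutoffs):
    """Map a usage value to its quartile bin (0-3) given three percentile cutoffs."""
    for i, c in enumerate(cutoffs):
        if value <= c:
            return i
    return 3


def block_randomize(usage, yt_cutoffs, ig_cutoffs, seed=0):
    """Block-randomize participants into control and two treatment arms.

    usage: dict mapping participant id -> (youtube_minutes, instagram_minutes)
    yt_cutoffs, ig_cutoffs: 25th/50th/75th percentile cutoffs of baseline usage
    Returns a dict mapping participant id -> assigned arm.
    """
    rng = random.Random(seed)

    # Form 4x4 blocks: YouTube quartile x Instagram quartile.
    blocks = {}
    for pid, (yt, ig) in usage.items():
        key = (quartile(yt, yt_cutoffs), quartile(ig, ig_cutoffs))
        blocks.setdefault(key, []).append(pid)

    # Within each block, assign arms uniformly at random (near-equal split).
    arms = ["control", "treat_1", "treat_2"]
    assignment = {}
    for pids in blocks.values():
        rng.shuffle(pids)
        for i, pid in enumerate(pids):
            assignment[pid] = arms[i % len(arms)]
    return assignment
```

Shuffling within each block before cycling through the arms gives a uniformly random assignment while keeping arm sizes within each block as equal as possible.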

Within each treatment arm, we randomize the length of the restrictions between one week and two weeks, also uniformly at random. Our primary analysis compares the impact of the treatment restrictions after one week, but we allow for variation in length in order to better understand any long-run impacts of the restrictions. After the treatments there is a 2-3 week period in order to identify any longer-term effects of the restrictions.

Finally, the study has an additional week at the end for two randomly selected participants. We elicit valuations for prominent social media applications using a switching multiple price list procedure at the beginning of the study and at the end of the study (05/02). The valuation question asks how much the participant would be willing to accept in exchange for a week-long restriction of each of these applications. We randomly select two participants from the pool of participants. For each of these participants, we select one application and one offer at random. If they chose to accept the offer and lose the application, we restrict the application for them. If they chose to reject the offer, then nothing happens.
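The resolution step of a switching multiple price list can be sketched as follows. This is our own illustration of the general mechanism, not the study's implementation; the function name and offer values are hypothetical. A participant's switch point implies acceptance of every offer at or above it, and one offer from the list is drawn at random and implemented.

```python
import random


def resolve_mpl(switch_point, offers, rng=None):
    """Resolve a switching multiple price list (MPL).

    switch_point: smallest offer the participant accepts in exchange for
        a week-long restriction of the app (they reject all smaller offers)
    offers: the list of offers on the price list

    One offer is drawn at random: if it is at or above the switch point,
    the participant is paid that amount and the app is restricted;
    otherwise nothing happens.
    """
    rng = rng or random.Random()
    offer = rng.choice(offers)
    accepted = offer >= switch_point
    return offer, accepted
```

Because the realized row is drawn at random and implemented as stated, reporting one's true switch point is optimal, which is what makes the elicitation incentive compatible.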

In summary, the timing of the experiment is as follows:
Baseline period: 03/26 - 04/02
Restrictions: 04/03 - 04/17
Post-Restriction Period: 04/17 - 05/02
Incentive-compatible additional restrictions: 05/02-05/09
Randomization Method
Randomization done in office by a computer
Randomization Unit
Was the treatment clustered?

Experiment Characteristics

Sample size: planned number of clusters
No clustering, but we are using block randomization based on usage.
Sample size: planned number of observations
373 participants
Sample size (or number of clusters) by treatment arms
Control - 124 participants
Treatment Arm 1- 124 participants
Treatment Arm 2 - 125 participants
Minimum detectable effect size for main outcomes (accounting for sample design and clustering)

Institutional Review Boards (IRBs)

IRB Name
Columbia University IRB
IRB Approval Date
IRB Approval Number


Post Trial Information

Study Withdrawal



Is the intervention completed?
Data Collection Complete
Data Publication


Is public data available?

Program Files

Reports, Papers & Other Materials

Relevant Paper(s)

Reports & Other Materials