Media and Motivation: The effect of performance pay on writers and content
Last registered on August 28, 2017

Pre-Trial

Trial Information
General Information
Title
Media and Motivation: The effect of performance pay on writers and content
RCT ID
AEARCTR-0002396
Initial registration date
August 25, 2017
Last updated
August 28, 2017 9:43 PM EDT
Location(s)
Region
Primary Investigator
Affiliation
University of Wisconsin - Madison
Other Primary Investigator(s)
PI Affiliation
University of Melbourne
PI Affiliation
University of Wisconsin - Madison
Additional Trial Information
Status
In development
Start date
2017-09-05
End date
2018-08-05
Secondary IDs
Abstract
Performance contracts have been implemented in a variety of settings, yet there is mixed empirical evidence on whether performance pay actually matters for overall firm profits. Despite a recent shift towards performance-based contracts for online journalists, and oft-cited concerns about the impacts of this business model on the quality of journalism, no studies to date investigate the relationship between performance contracts and firm profits or journalistic quality in the market for news. In this project, we use a natural field experiment to study the impacts of pay-per-click contracts for journalists on writer performance, firm profits, and journalistic quality in an online news firm. We seek to provide evidence on the tradeoffs between the strength of incentives and the effectiveness of contracting on outputs in a context where agents have (or can gain) specific knowledge about the returns to various actions. Our research design focuses on two main contributions: First, by combining random allocation to contract type with individual-level productivity beliefs, we can test whether stronger incentives increase the acquisition and use of information about marginal returns to writer effort. Second, by classifying article-level political and ethnic bias, we will study the effects of contract type on media bias, in an attempt to gain insights into the importance of supply- and demand-side sources of media bias.
External Link(s)
Registration Citation
Citation
Balbuzanov, Ivan, Jared Gars and Emilia Tjernstrom. 2017. "Media and Motivation: The effect of performance pay on writers and content." AEA RCT Registry. August 28. https://doi.org/10.1257/rct.2396-3.0.
Former Citation
Balbuzanov, Ivan et al. 2017. "Media and Motivation: The effect of performance pay on writers and content." AEA RCT Registry. August 28. http://www.socialscienceregistry.org/trials/2396/history/20950.
Experimental Details
Interventions
Intervention(s)
The study will be conducted using within-firm data from a digital news platform. The website is an online newspaper that sources all its news stories from local reporters who are currently paid a fixed rate via mobile-money for each article they publish. The project aims to examine the effects of different types of incentives for writers on the quality and quantity of their output. We will randomize writers into three different types of contracts:
1) Control group: The control group contracts will remain the same. The current reward structure is a fixed fee of W per article that is published.
2) Treatment group A / Pay-per-view: This contract will reward writers with a fee structure composed of a smaller fixed fee and a bonus for every click that the article receives. The variable fee is calibrated using the prior month’s views, such that writers' ex ante expected earnings equal the status quo.
3) Treatment group B / Contract choice: The third group will be allowed to select, at the beginning of the treatment period, whether to stay with the pay-per-article contract (status quo) or to switch to a pay-per-view system. If they choose the pay-per-view contract, the contract will be identical to Treatment group A.
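The calibration described for Treatment group A pins down the per-view bonus arithmetically: if the status quo pays a fixed fee W per published article, the new contract pays a smaller fixed fee F plus a bonus b per view, and b is chosen so that F + b · E[views] = W. A minimal sketch, with hypothetical function and parameter names and illustrative numbers (the registration does not disclose the actual fee levels):

```python
def calibrate_per_view_bonus(status_quo_fee, new_fixed_fee, expected_views):
    """Choose the per-view bonus b so that expected pay under the
    pay-per-view contract equals the status quo fee W:
        new_fixed_fee + b * E[views] = status_quo_fee
    `expected_views` would come from the prior month's views per article.
    """
    if expected_views <= 0:
        raise ValueError("need positive expected views to calibrate")
    return (status_quo_fee - new_fixed_fee) / expected_views

# Illustrative numbers only: W = 100, smaller fixed fee = 40,
# prior month's average views per article = 300.
b = calibrate_per_view_bonus(100, 40, 300)
# b = 0.2 per view, so expected pay is 40 + 0.2 * 300 = 100 = W
```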
Intervention Start Date
2017-09-05
Intervention End Date
2018-03-05
Primary Outcomes
Primary Outcomes (end points)
1) Number of articles submitted, 2) Views/week, 3) Social media sharing (as an alternative measure of 'quality'), 4) Writer earnings, 5) Article topic choice, 6) Article and writer bias (political and ethnic), 7) Prediction error, 8) Contract choice and 9) Writer effort
Primary Outcomes (explanation)
6) Article and writer bias: For each article, we will enlist auditors (local to the country in which the experiment takes place) on MTurk to first classify the articles into one of 10 potential categories. Auditors will then be asked to identify the political parties mentioned in the article. For each party that is mentioned, auditors will answer the question "Is this article generally positive, neutral, or negative towards this political party?" on a 5-point scale. Answers will then be averaged across auditors and normalized to generate party-favorability scores for each article in the politics section. The overall negativity of an article could be due either to the writer’s ideological slant or to the true nature of the event/topic. However, since writers choose which topics to write about, a writer who consistently chooses topics that are negative about a specific ethnicity/party is likely biased. Since negative news or events are time-specific, we will define writers as biased if their articles are consistently more negative about a party than the average article that week.
Specifically, we will define a party-bias dummy variable, party_bias_it, which equals 1 if writer i’s average article is more negative about [party] than the average article that week.
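The scoring and the party_bias_it dummy described above can be sketched as follows. This is a minimal illustration with hypothetical data structures (a ratings dictionary keyed by article and party, and an article-to-writer/week lookup), not the authors' actual pipeline, and it omits the normalization step:

```python
from statistics import mean

def party_bias_flags(ratings, article_meta):
    """ratings: dict (article_id, party) -> list of auditor scores on a
    5-point scale (1 = very negative ... 5 = very positive).
    article_meta: dict article_id -> (writer_id, week).
    Returns dict (writer_id, week, party) -> 1 if the writer's average
    article that week is more negative about the party than the average
    article that week, else 0."""
    # Average auditor scores per (article, party).
    art_score = {k: mean(v) for k, v in ratings.items()}
    # Weekly average score per party across all articles.
    weekly = {}
    for (aid, party), s in art_score.items():
        _, week = article_meta[aid]
        weekly.setdefault((week, party), []).append(s)
    weekly_avg = {k: mean(v) for k, v in weekly.items()}
    # Writer-week average score per party, compared against the weekly mean.
    ww = {}
    for (aid, party), s in art_score.items():
        writer, week = article_meta[aid]
        ww.setdefault((writer, week, party), []).append(s)
    return {k: int(mean(v) < weekly_avg[(k[1], k[2])])
            for k, v in ww.items()}
```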

7) Prediction error: Prior to submitting each article, writers will be asked to state their level of effort and to predict the number of views that the article will receive in the following week. This will provide us with an estimate of the writer’s subjective mean about the ‘productivity’ of the current article. We will construct the prediction error by comparing the predictions to the article's actual views.
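The prediction error above is simply the gap between a writer's stated forecast and realized views. A minimal sketch; the log-scale option is an assumption of this illustration (the registration does not specify the functional form), included because pageviews are typically heavily skewed:

```python
import math

def prediction_error(predicted_views, actual_views, log_scale=True):
    """A writer's forecast error for one article. On a log scale the
    error is roughly a proportional miss; log1p guards against
    zero-view articles."""
    if log_scale:
        return math.log1p(predicted_views) - math.log1p(actual_views)
    return predicted_views - actual_views
```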
Secondary Outcomes
Secondary Outcomes (end points)
Secondary Outcomes (explanation)
Experimental Design
Experimental Design
The study will be conducted using within-firm data from a digital news platform. The website is an online newspaper that sources all its news stories from local reporters who are currently paid a fixed rate via mobile-money for each article they publish. The project aims to examine the effects of different types of incentives for writers on the quality and quantity of their output.

We will randomize writers into three different types of contracts:
1) Control group: The control group contracts will remain the same. The current reward structure is a fixed fee of W per article that is published.
2) Treatment group A / Pay-per-view: This contract will reward writers with a fee structure composed of a smaller fixed fee and a bonus for every click that the article receives. The variable fee is calibrated using the prior month’s views, such that writers' ex ante expected earnings equal the status quo.
3) Treatment group B / Contract choice: The third group will be allowed to select, at the beginning of the treatment period, whether to stay with the pay-per-article contract (status quo) or to switch to a pay-per-view system. If they choose the pay-per-view contract, the contract will be identical to Treatment group A.
Experimental Design Details
Randomization Method
Stratified randomization (stratifying on average pageviews, whether or not the writer has published in the past 6 weeks, and the number of articles published per week; 8 total strata). If new writers sign up after the launch of the experiment, they will be randomized into one of the two treatment arms or into control based on simple randomization.
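The stratified assignment above (three binary stratifiers giving 2 × 2 × 2 = 8 strata, three arms within each stratum) can be sketched as below. Field names and the rotation scheme are hypothetical; the actual cutoffs for "high" pageviews and articles per week are not stated in the registration:

```python
import random

def stratified_assign(writers, seed=0):
    """writers: list of dicts with an 'id' key and boolean keys
    'high_pageviews', 'active_last_6wk', 'high_articles_per_wk'
    (2*2*2 = 8 strata). Shuffles within each stratum, then deals the
    three arms in rotation so arm sizes stay balanced within strata."""
    rng = random.Random(seed)
    arms = ["control", "pay_per_view", "contract_choice"]
    strata = {}
    for w in writers:
        key = (w["high_pageviews"], w["active_last_6wk"],
               w["high_articles_per_wk"])
        strata.setdefault(key, []).append(w)
    assignment = {}
    for members in strata.values():
        rng.shuffle(members)
        for i, w in enumerate(members):
            assignment[w["id"]] = arms[i % len(arms)]
    return assignment
```

Late joiners would bypass the strata entirely and receive a simple random draw among the three arms, as the registration specifies.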
Randomization Unit
Individual writer-level randomization
Was the treatment clustered?
No
Experiment Characteristics
Sample size: planned number of clusters
200 individual writers
Sample size: planned number of observations
200 individual writers
Sample size (or number of clusters) by treatment arms
Roughly 67 writers per treatment arm. If new writers sign up after the launch of the experiment, they will be randomized into one of the two treatment arms or into control based on simple randomization.
Minimum detectable effect size for main outcomes (accounting for sample design and clustering)
We have baseline covariates (such as lagged pageviews per article) that explain up to 0.8 of the variation in that outcome variable. For pageviews per article, we therefore have MDEs of around 0.31 of a standard deviation (calculations done with Optimal Design). For number of articles published, we have similar MDEs, although the lagged variables explain more like 0.7 of the variation, and so we can detect MDEs closer to 0.37 of a standard deviation.
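The role of the baseline R² in these power calculations follows the standard two-arm approximation MDE = M · sqrt((1 − R²) / (p(1 − p) n)), in standard-deviation units, where M ≈ 2.80 for a 5% two-sided test at 80% power. The sketch below illustrates that formula only; it is a textbook approximation, not the Optimal Design computation cited above, and will not exactly reproduce the registered 0.31 and 0.37 figures:

```python
import math

def mde_sd_units(n, r_squared, p_treat=0.5, m=2.80):
    """Approximate minimum detectable effect in SD units for a two-arm
    comparison with n total units, a share p_treat treated, and baseline
    covariates explaining r_squared of the outcome variance.
    m ~ 2.80 corresponds to alpha = 0.05 (two-sided), power = 0.80."""
    return m * math.sqrt((1 - r_squared) / (p_treat * (1 - p_treat) * n))

# e.g. a pairwise comparison of two ~67-writer arms with R^2 = 0.8:
# mde_sd_units(134, 0.8) is roughly 0.22 SD under this approximation.
```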
IRB
INSTITUTIONAL REVIEW BOARDS (IRBs)
IRB Name
Amref Ethics and Scientific Review Committee
IRB Approval Date
2017-08-25
IRB Approval Number
P369-2017
IRB Name
University of Wisconsin - Madison Education and Social/Behavioral Science IRB
IRB Approval Date
2017-05-23
IRB Approval Number
2017-0527
Analysis Plan

There are documents in this trial unavailable to the public.
Post-Trial
Post Trial Information
Study Withdrawal
Intervention
Is the intervention completed?
No
Is data collection complete?
Data Publication
Data Publication
Is public data available?
No
Program Files
Program Files
Reports, Papers & Other Materials
Relevant Paper(s)