
Fields Changed

Registration

Field: Abstract
Before: This study will add to the body of research on incentive-induced behavior by quantifying the effect of various incentives on output. While a wide range of incentives has been shown to be effective (see Campbell, 2006), comparisons between different incentive types have been rare. This study hopes to quantify some of the differences between types of incentives, particularly in non-monetary forms (such as goal setting and comparison to others). The results of this study will form the basis for the elicitation of expert forecasts as registered in "Expert Forecasts of Amazon Turk Treatments".
After: This study will add to the body of research on incentive-induced behavior by quantifying the effect of various incentives on output. While a wide range of incentives has been shown to be effective (see Campbell, 2006), comparisons between different incentive types have been rare. This study hopes to quantify some of the differences between types of incentives, particularly in non-monetary forms (such as goal setting and comparison to others). The results of this study will form the basis for the elicitation of expert forecasts as registered in "Expert Forecasts of Amazon Turk Treatments" (AEARCTR-0000731).
Field: Trial End Date
Before: July 14, 2015
After: September 30, 2015

Field: Last Published
Before: April 05, 2017 10:15 AM
After: April 07, 2017 10:14 AM

Field: Study Withdrawn
After: No

Field: Intervention Completion Date
After: June 14, 2015

Field: Data Collection Complete
After: Yes

Field: Final Sample Size: Number of Clusters (Unit of Randomization)
After: N/A

Field: Was attrition correlated with treatment status?
After: No

Field: Final Sample Size: Total Number of Observations
After: The final sample includes 9,861 subjects.

Field: Final Sample Size (or Number of Clusters) by Treatment Arms
After: Approximately 550 subjects for each treatment arm (with 18 treatment arms).

Field: Is there a restricted access data set available on request?
After: No

Field: Program Files
After: No

Field: Data Collection Completion Date
After: September 30, 2015

Field: Is data available for public use?
After: No

Field: Intervention End Date
Before: July 14, 2015
After: June 14, 2015
Field: Planned Number of Clusters
Before: The number of clusters is the same as the number of observations. The ideal sample size is 10,000 subjects; the minimum target is 5,500. The task will be kept open on Amazon Mechanical Turk until either (i) two weeks have passed or (ii) 10,000 subjects have completed the study, whichever comes first. If two weeks pass without 5,500 subjects completing the task, the task will be kept open (up to six weeks) until 5,500 subjects are obtained.
After: n/a

Field: Planned Number of Observations
Before: Please see above.
After: The number of clusters is the same as the number of observations. The ideal sample size is 10,000 subjects; the minimum target is 5,500. The task will be kept open on Amazon Mechanical Turk until either (i) two weeks have passed or (ii) 10,000 subjects have completed the study, whichever comes first. If two weeks pass without 5,500 subjects completing the task, the task will be kept open (up to six weeks) until 5,500 subjects are obtained.
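The stopping rule above combines a fixed time window with two sample-size thresholds. A minimal sketch of that logic in Python (illustrative only; the function name and structure are assumptions, not the study's actual code):

    # Sketch of the registered stopping rule; names are hypothetical.
    def keep_task_open(weeks_elapsed: float, n_completed: int) -> bool:
        """True while the Mechanical Turk task should remain open."""
        TARGET = 10_000   # ideal sample size
        MINIMUM = 5_500   # minimum acceptable sample size
        BASE_WEEKS = 2    # planned collection window
        MAX_WEEKS = 6     # six-week cap on total time open

        if n_completed >= TARGET:
            return False  # ideal sample reached: close immediately
        if weeks_elapsed < BASE_WEEKS:
            return True   # still inside the two-week window
        # Past two weeks: extend only while below the minimum target.
        return n_completed < MINIMUM and weeks_elapsed < MAX_WEEKS

For example, keep_task_open(3, 5200) returns True (extend, since the sample is below 5,500), while keep_task_open(3, 6000) returns False (the two-week window has passed and the minimum is met).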

Papers

Field: Paper Abstract
After: Academic experts frequently recommend policies and treatments. But how well do they anticipate the impact of different treatments? And how do their predictions compare to those of non-experts? We analyze how 208 experts forecast the results of 15 treatments involving monetary and non-monetary motivators in a real-effort task. We compare these forecasts to those made by PhD students and non-experts: undergraduates, MBAs, and an online sample. We document seven main results. First, the average forecast of experts predicts the experimental results quite well. Second, there is a strong wisdom-of-crowds effect: the average forecast outperforms 96 percent of individual forecasts. Third, correlates of expertise (citations, academic rank, field, and contextual experience) do not improve forecasting accuracy. Fourth, experts as a group do better than non-experts, but not if accuracy is defined as rank-ordering treatments. Fifth, measures of effort, confidence, and revealed ability are predictive of forecast accuracy to some extent, especially for non-experts. Sixth, using these measures we identify ‘superforecasters’ among the non-experts who outperform the experts out of sample. Seventh, we document that these results on forecasting accuracy surprise the forecasters themselves. We present a simple model that organizes several of these results, and we stress the implications for the collection of forecasts of future experimental results.

Field: Paper Citation
After: "Predicting Experimental Results: Who Knows What?" This version: August 16, 2016.
Field: Paper Abstract
After: How much do different monetary and non-monetary motivators induce costly effort? Does the effectiveness line up with the expectations of researchers and with results in the literature? We conduct a large-scale real-effort experiment with 18 treatment arms. We examine the effect of (i) standard incentives; (ii) behavioral factors like social preferences and reference dependence; and (iii) non-monetary inducements from psychology. We find that (i) monetary incentives work largely as expected, including a very low piece rate treatment which does not crowd out effort; (ii) the evidence is partly consistent with standard behavioral models, including warm glow, though we do not find evidence of probability weighting; (iii) the psychological motivators are effective, but less so than incentives. We then compare the results to forecasts by 208 academic experts. On average, the experts anticipate several key features, like the effectiveness of psychological motivators. A sizeable share of experts, however, expects crowd-out, probability weighting, and pure altruism, counterfactually. As a further comparison, we present a meta-analysis of similar treatments in the literature. Overall, predictions based on the literature are correlated with, but underperform, the expert forecasts.

Field: Paper Citation
After: "What Motivates Effort? Evidence and Expert Forecasts." This version: March 15, 2017.