
Fields Changed

Registration

Trial Status
  Before: in_development
  After: completed

Abstract
  Before: same text as below, without the registry ID "(AEARCTR-0000714)" after the project title.
  After: This study makes use of the data on the response of output to various behavioral conditions in a real-effort experiment, which was gathered as a part of the project registered as "Response of Output to Varying Incentive Structures on Amazon Turk" (AEARCTR-0000714). The behavioral treatments include examinations of the response to incentives, altruistic motives, loss aversion, and gift exchange, among others. A group of forecasters, including experts in economics, psychology, and decision-making, will be asked to predict the output resulting from each of the conditions. We then compare these forecasts to the actual results to examine the relevance of expertise to forecasting experimental results.

Trial Start Date
  Before: July 09, 2015
  After: May 14, 2015

Last Published
  Before: July 09, 2015 11:43 PM
  After: April 07, 2017 10:14 AM

Study Withdrawn
  Before: (blank)
  After: No

Intervention Completion Date
  Before: (blank)
  After: December 31, 2015

Data Collection Complete
  Before: (blank)
  After: Yes

Final Sample Size: Number of Clusters (Unit of Randomization)
  Before: (blank)
  After: N/A

Was attrition correlated with treatment status?
  Before: (blank)
  After: No

Final Sample Size: Total Number of Observations
  Before: (blank)
  After: The final sample includes 9,861 subjects.

Final Sample Size (or Number of Clusters) by Treatment Arms
  Before: (blank)
  After: Approximately 550 subjects for each treatment arm (with 18 treatment arms)

Is there a restricted access data set available on request?
  Before: (blank)
  After: No

Program Files
  Before: (blank)
  After: No

Data Collection Completion Date
  Before: (blank)
  After: September 30, 2015

Is data available for public use?
  Before: (blank)
  After: No

Intervention Start Date
  Before: July 09, 2015
  After: May 14, 2015

Intervention End Date
  Before: December 31, 2015
  After: July 09, 2015

External Links

External Link URL
  Before: (blank)
  After: https://www.povertyactionlab.org/node/22136

External Link Description
  Before: (blank)
  After: J-PAL evaluation summary

Papers

Paper Abstract
  Before: (blank)
  After: How much do different monetary and non-monetary motivators induce costly effort? Does the effectiveness line up with the expectations of researchers and with results in the literature? We conduct a large-scale real-effort experiment with 18 treatment arms. We examine the effect of (i) standard incentives; (ii) behavioral factors like social preferences and reference dependence; and (iii) non-monetary inducements from psychology. We find that (i) monetary incentives work largely as expected, including a very low piece rate treatment which does not crowd out effort; (ii) the evidence is partly consistent with standard behavioral models, including warm glow, though we do not find evidence of probability weighting; (iii) the psychological motivators are effective, but less so than incentives. We then compare the results to forecasts by 208 academic experts. On average, the experts anticipate several key features, like the effectiveness of psychological motivators. A sizeable share of experts, however, expects crowd-out, probability weighting, and pure altruism, counterfactually. As a further comparison, we present a meta-analysis of similar treatments in the literature. Overall, predictions based on the literature are correlated with, but underperform, the expert forecasts.

Paper Citation
  Before: (blank)
  After: "What Motivates Effort? Evidence and Expert Forecasts." This version: March 15, 2017.

Paper URL
  Before: (blank)
  After: https://eml.berkeley.edu/~sdellavi/wp/BehavioralForecastsMar17withOnlApp.pdf

Paper Abstract
  Before: (blank)
  After: Academic experts frequently recommend policies and treatments. But how well do they anticipate the impact of different treatments? And how do their predictions compare to the predictions of non-experts? We analyze how 208 experts forecast the results of 15 treatments involving monetary and non-monetary motivators in a real-effort task. We compare these forecasts to those made by PhD students and non-experts: undergraduates, MBAs, and an online sample. We document seven main results. First, the average forecast of experts predicts quite well the experimental results. Second, there is a strong wisdom-of-crowds effect: the average forecast outperforms 96 percent of individual forecasts. Third, correlates of expertise (citations, academic rank, field, and contextual experience) do not improve forecasting accuracy. Fourth, experts as a group do better than non-experts, but not if accuracy is defined as rank ordering treatments. Fifth, measures of effort, confidence, and revealed ability are predictive of forecast accuracy to some extent, especially for non-experts. Sixth, using these measures we identify 'superforecasters' among the non-experts who outperform the experts out of sample. Seventh, we document that these results on forecasting accuracy surprise the forecasters themselves. We present a simple model that organizes several of these results and we stress the implications for the collection of forecasts of future experimental results.

Paper Citation
  Before: (blank)
  After: "Predicting Experimental Results: Who Knows What?" This version: August 16, 2016.

Paper URL
  Before: (blank)
  After: https://eml.berkeley.edu/~sdellavi/wp/expertsJul16.pdf