Field | Before | After |
---|---|---|
Study Withdrawn | | No |
Intervention Completion Date | | March 31, 2018 |
Data Collection Complete | | Yes |
Final Sample Size: Number of Clusters (Unit of Randomization) | | NA |
Was attrition correlated with treatment status? | | No |
Final Sample Size: Total Number of Observations | | 1,730 experiment participants |
Final Sample Size (or Number of Clusters) by Treatment Arms | | 1,352 subjects completed the experiment online (Amazon MTurk), 378 subjects in a university laboratory |
Public Data URL | | https://github.com/MarcRagin/PredictingInsDemand |
Is there a restricted access data set available on request? | | No |
Program Files | | Yes |
Program Files URL | | https://github.com/MarcRagin/PredictingInsDemand |
Data Collection Completion Date | | March 31, 2018 |
Is data available for public use? | | Yes |
Field | Before | After |
---|---|---|
Paper Abstract | | We analyze an insurance demand experiment conducted in two different settings: in-person at a university laboratory and online using a crowdworking platform. Subject demographics differ across the samples, but average insurance demand is similar. However, choice patterns suggest online subjects are less cognitively engaged—they have more variation in their demand and react less to changes in exogenous factors of the insurance situation. Applying data quality filters does not lead to more comparable demand patterns between the samples. Additionally, while online subjects pass comprehension questions at the same rate as in-person subjects, they show more random behavior in other questions. We find that online subjects are more likely to engage in “coarse thinking,” choosing from a reduced set of options. Our results justify caution in using crowdsourced subjects for insurance demand experiments. We outline some best practices which may help improve data quality from experiments conducted via crowdworking platforms. |
Paper Citation | | Jaspersen, J. G., Ragin, M. A., & Sydnor, J. R. (2022). Insurance demand experiments: Comparing crowdworking to the lab. Journal of Risk and Insurance, 89, 1077–1107. https://doi.org/10.1111/jori.12402 |
Paper URL | | https://doi.org/10.1111/jori.12402 |
Field | Before | After |
---|---|---|
Paper Abstract | | Can measured risk attitudes and associated structural models predict insurance demand? In an experiment (n = 1,730), we elicit measures of utility curvature, probability weighting, loss aversion, and preference for certainty and use them to parameterize seventeen common structural models (e.g., expected utility, cumulative prospect theory). Subjects also make 12 insurance choices over different loss probabilities and prices. The insurance choices show coherence and some correlation with various risk-attitude measures. Yet all the structural models predict insurance poorly, often less accurately than random predictions. This is because established structural models predict opposite reactions to probability changes and more sensitivity to prices than people display. Approaches that temper the price responsiveness of structural models show more promise for predicting insurance choices across different conditions. |
Paper Citation | | Jaspersen, J. G., Ragin, M. A., & Sydnor, J. R. (2022). Predicting insurance demand from risk attitudes. Journal of Risk and Insurance, 89, 63–96. https://doi.org/10.1111/jori.12342 |
Paper URL | | https://doi.org/10.1111/jori.12342 |
Field | Before | After |
---|---|---|
Paper Abstract | | The “general risk question” (GRQ) has been established as a quick way to meaningfully elicit subjective attitudes toward risk and correlates well with real-world behaviors involving risk. However, little is known about what aspects of attitudes toward financial risk are captured by the GRQ. We examine how answers to the GRQ correlate with different preference motives and biases toward financial risk using an incentivized choice task (n = 1,730). We find that the GRQ has a meaningful correlation with loss aversion and attitudes toward variation in financial losses, but much weaker to non-existent correlations with attitudes toward variation in financial gains, likelihood insensitivity, and certainty preferences. These results suggest that practical applications using the GRQ as an index for financial risk preferences may be most appropriate in settings where decisions rest on attitudes toward financial losses. |
Paper Citation | | Jaspersen, J. G., Ragin, M. A., & Sydnor, J. R. (2020). Linking subjective and incentivized risk attitudes: The importance of losses. Journal of Risk and Uncertainty, 60, 187–206. https://doi.org/10.1007/s11166-020-09327-4 |
Paper URL | | https://doi.org/10.1007/s11166-020-09327-4 |