Forecasting the Results of Experiments: Piloting an Elicitation Strategy
Last registered on January 06, 2020

Pre-Trial

Trial Information
General Information
Title
Forecasting the Results of Experiments: Piloting an Elicitation Strategy
RCT ID
AEARCTR-0005211
Initial registration date
December 26, 2019
Last updated
January 06, 2020 7:13 PM EST
Location(s)
Primary Investigator
Affiliation
University of California, Berkeley
Other Primary Investigator(s)
PI Affiliation
University of California, Berkeley
PI Affiliation
Australian National University
Additional Trial Information
Status
Ongoing
Start date
2019-12-18
End date
2020-01-09
Secondary IDs
Abstract
This study collects forecasts of the results of three field experiments. We randomly vary four features when eliciting forecasts: (1) small versus large reference values in an example preceding the predictions; (2) whether forecasts are in standard deviations or raw units; (3) slider versus text-entry responses; and (4) small versus large slider bounds.
External Link(s)
Registration Citation
Citation
DellaVigna, Stefano, Nicholas Otis and Eva Vivalt. 2020. "Forecasting the Results of Experiments: Piloting an Elicitation Strategy." AEA RCT Registry. January 06. https://doi.org/10.1257/rct.5211-1.1.
Experimental Details
Interventions
Intervention(s)
We randomly vary four features when eliciting forecasts: (1) small (0.1 SD) versus large (0.3 SD) reference values in an example preceding the predictions; (2) whether forecasts are in standard deviations or raw units; (3) slider versus text-entry responses; and (4) small (±0.5 SD) versus large (±1.0 SD) slider bounds.
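A minimal sketch of this 2 × 2 × 2 × 2 randomization in Python, assuming the four features are assigned independently at the individual level; the helper name, dictionary keys, and seed are illustrative, not from the registration:

```python
import random

def assign_conditions(rng: random.Random) -> dict:
    """Draw one participant's elicitation conditions.

    The four features are assumed to be randomized independently of one
    another; key names and this helper are illustrative, not from the
    registration.
    """
    return {
        # (1) Reference value shown in the example preceding the predictions.
        "reference_value_sd": rng.choice([0.1, 0.3]),
        # (2) Units in which forecasts are elicited.
        "forecast_units": rng.choice(["standard deviations", "raw units"]),
        # (3) Response mode for entering forecasts.
        "response_format": rng.choice(["slider", "text entry"]),
        # (4) Slider bounds (only binding when the slider format is drawn).
        "slider_bound_sd": rng.choice([0.5, 1.0]),
    }

# Randomization happens at the individual level when the survey is initiated.
rng = random.Random(2020)  # seed chosen only to make this sketch reproducible
conditions = [assign_conditions(rng) for _ in range(80)]  # planned N of at least 80
```

Because each feature is assigned independently, each arm of each treatment contains roughly half of the participants, consistent with the N/2 arm sizes described under "Experiment Characteristics" below.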
Intervention Start Date
2019-12-18
Intervention End Date
2020-01-09
Primary Outcomes
Primary Outcomes (end points)
Forecasts of experimental results for three studies preliminarily accepted as Registered Reports at the Journal of Development Economics.
Primary Outcomes (explanation)
Secondary Outcomes
Secondary Outcomes (end points)
Secondary Outcomes (explanation)
Experimental Design
We collect forecasts of the results of three experiments that were preliminarily accepted as Registered Reports at the Journal of Development Economics before October 2019, that had publicly posted their proposals, and that had not yet released any experimental results. We draw on a pool of academics, researchers, and practitioners. Consenting participants provide predictions for the results of up to three studies. At the start of the survey, participants are randomized across the four conditions described in the “Interventions” section above.
Experimental Design Details
Randomization Method
Randomization takes place automatically when the survey is initiated.
Randomization Unit
Randomization is at the individual level.
Was the treatment clustered?
No
Experiment Characteristics
Sample size: planned number of clusters
We plan to collect forecasts from at least 80 individuals.
Sample size: planned number of observations
We plan to collect forecasts from at least 80 individuals.
Sample size (or number of clusters) by treatment arms
Each of the four features is randomized independently, so each arm of each treatment will have approximately N/2 participants, where N is our total sample size (at least 80).
Minimum detectable effect size for main outcomes (accounting for sample design and clustering)
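This field is left blank in the registration. Purely as an illustrative sketch, the standard two-sample formula gives the effect size detectable when comparing two equal arms of N/2 = 40 forecasters; every parameter below is an assumption, not a registered value:

```python
from scipy.stats import norm

alpha = 0.05      # assumed: two-sided test at the 5% level
power = 0.80      # assumed: 80% power
n_per_arm = 40    # assumed: the planned N of 80 split N/2 per arm
sigma = 1.0       # assumed: forecasts standardized to unit SD

# Standard two-sample MDE: (z_{1 - alpha/2} + z_{power}) * sqrt(2 * sigma^2 / n)
z_alpha = norm.ppf(1 - alpha / 2)  # about 1.96
z_power = norm.ppf(power)          # about 0.84
mde = (z_alpha + z_power) * (2 * sigma**2 / n_per_arm) ** 0.5
print(f"MDE under these assumptions: {mde:.2f} SD")  # about 0.63 SD
```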
Institutional Review Boards (IRBs)
IRB Name
Committee for the Protection of Human Subjects, University of California, Berkeley
IRB Approval Date
2019-11-20
IRB Approval Number
2019-10-12690
Analysis Plan

There are documents in this trial that are unavailable to the public. Access to these documents can be requested through the registry.
Post-Trial
Post Trial Information
Study Withdrawal
Intervention
Is the intervention completed?
No
Is data collection complete?
Data Publication
Is public data available?
No
Program Files
Reports and Papers
Preliminary Reports
Relevant Papers