Experimental Design
We conduct three surveys on the platform Prolific. We begin with a benchmarks survey of participants’ own experiences, which we use to generate ‘ground truths’. Using an explicit and consistent definition of sexual harassment, this survey asks participants whether they have been sexually harassed at work. We ask participants who report having been harassed whether they believe the harassment affected their career and mental health in specific ways, whether they reported the case, whether they were satisfied with the outcome if they did, and whether they were aware of their legal rights. We also ask participants whether they would take a lower-paying job to avoid the risk of sexual harassment. This survey is run on a sample of 800 participants (both male and female) who do not participate in the subsequent surveys.
Our second, main survey elicits participants’ prior beliefs about the prevalence and harms of sexual harassment in the broader population and their perceptions of policy effectiveness. It randomizes information treatments across respondents and then measures policy support outcomes and posterior beliefs. Our third survey is an obfuscated follow-up with the same participants as our main survey, conducted up to one month later, which measures the same outcomes again to assess the persistence of any treatment effects. The follow-up questions are embedded among questions about unrelated matters to limit concerns about experimenter demand.
We also run our perceptions survey on policymakers. For this purpose, we define policymakers broadly to include not only legislators, but also those who advise and prepare policy briefs for legislators (and therefore may influence the way they vote), as well as civil servants who may have leeway in determining policy implementation, monitoring, and regulation. Our sample will be drawn from the survey pool recently created at the Policymakers Lab at Warwick Business School. This pool includes approximately 250 policymakers, principally from the UK, US, and Australia, who have previously expressed interest in taking surveys for academic research. In this survey, firstly, we will administer the same questions on prior beliefs about the prevalence and harms of sexual harassment (in the policymakers’ own country) as for our Prolific sample. Secondly, we will ask policymakers to predict the proportion of people in our Prolific sample who indicated support for each of the policies described in 'Primary Outcomes' above. Policymakers will be asked to predict this quantity for those in the pure control group and, if not from the UK, whether they expect the proportion in their own country to be substantially higher or lower. Thirdly, for UK-based policymakers we will elicit perceptions of the public’s knowledge of sexual harassment victims’ legal rights and the employment tribunal process. Specifically, policymakers will guess the percentage of true/false questions about the tribunal process that the average Prolific respondent answered correctly in our benchmarks survey. We also obtain a hypothetical measure of how policymakers expect information on sexual harassment to influence legislation. After asking policymakers to estimate sexual harassment prevalence, its harms, public knowledge of legal rights, and policy support, we ask each policymaker the largest value they think each quantity could plausibly take.
We then ask them to imagine that their country’s legislators learned credible information that this was the true value, and whether, in that case, they predict new legislation would be passed.
Further details on our experimental design are available in our pre-results registered report (attached).