Gender and expert advice 2

Last registered on August 28, 2024

Pre-Trial

Trial Information

General Information

Title
Gender and expert advice 2
RCT ID
AEARCTR-0014234
Initial registration date
August 21, 2024

First published
August 28, 2024, 3:04 PM EDT


Locations

Region

Primary Investigator

Affiliation
University of Bristol

Other Primary Investigator(s)

PI Affiliation

Additional Trial Information

Status
In development
Start date
2024-09-01
End date
2024-09-30
Secondary IDs
Prior work
This trial is based on or builds upon one or more prior RCTs.
Abstract
We will conduct a survey experiment to test whether male and female experts have a different influence on the opinions of members of the public. Specifically, we will compare the influence of male and female experts on the opinions formed by members of the public on a set of topical economic policy issues. Our previous study (AEARCTR-0011764) found, contrary to expectation and previous evidence of discrimination against female expertise, that female experts were more persuasive. One hypothesis is that their credentials (title and institutional affiliation) may have been signals of expertise. This study will replicate the previous analysis but remove these signals in order to test this hypothesis.
External Link(s)

Registration Citation

Citation
Sievertsen, Hans and Sarah Smith. 2024. "Gender and expert advice 2." AEA RCT Registry. August 28. https://doi.org/10.1257/rct.14234-1.0
Experimental Details

Interventions

Intervention(s)
This is an information provision experiment (Haaland, Roth, and Wohlfart, 2021) that will test the effect of seeing an expert’s opinion on the opinions expressed by survey respondents. Specifically, survey respondents will be asked to give their opinion on a set of topical economic policy issues and will be shown the opinion of a (randomly selected) male or female expert economist. The study will test the effect of the expert's gender on whether members of the public are influenced by the expert's opinion.

The design of the study builds on previous studies.

Sapienza and Zingales (2013) survey members of the public on a set of topical economic policy issues for which they have the opinions of EEP members. They find that there is sizeable disagreement between expert and public opinions, and that telling members of the public about the consensus expert opinion has very little effect. Following a very similar approach, Ballard and Johnston (2016) find that citizen views are affected by being told the consensus expert opinion when it comes to technical issues (e.g. the Gold Standard), but not when it comes to salient and politicized issues (e.g. immigration).

Our study differs from these in that we will show the opinion of a single expert, together with a name and photo. Showing the identity of a single expert follows the approach taken by D’Acunto, Fuster and Weber (2021). They vary the salience of the demographic diversity of the Fed’s monetary policy committee by including one of three photos of same-rank committee members at random (white man/black man/white woman). They test the effect on the formation of unemployment and inflation expectations among survey respondents, finding, first, that the information shifts expectations and, second, that seeing a female/black photo increases the information effect among women/ethnic-minority respondents. Our study is different in that it is interested in the effect of the identities of individual experts, rather than the effect of (perceptions of) the identity of an institution.

A recent, closely related study, Greve-Poulsen, Larsen, Pedersen and Albæk (2023), finds that the gender of an expert does not matter when people are exposed to a newspaper article on either euthanasia or entrepreneurship: male and female experts were rated equally competent and had the same effect on opinions. Compared to this study, our design makes gender more salient by showing a photo of the expert.
Intervention Start Date
2024-09-01
Intervention End Date
2024-09-30

Primary Outcomes

Primary Outcomes (end points)
The design of the survey questions will follow that of the Chicago Booth Expert Economist Panel (EEP). Survey respondents will be shown statements on a set of topical economic policy issues and will be asked whether they agree/disagree with the statements using a Likert scale (1 = strongly agree, 2 = agree, 3 = uncertain, 4 = disagree, 5 = strongly disagree). The opinions of EEP members on the same issues are (already) measured using the same scale.
The primary outcome will be a binary indicator Match_iq = 1 if the opinion of respondent i matches with the expert opinion on question q (= 0 otherwise).
Primary Outcomes (explanation)
We will construct the match variable by comparing the response of the survey respondent (5-point Likert scale) to the opinion of the expert (5-point Likert scale).
Exact match: Likert_respondent = Likert_expert
We will also consider an alternative, broader definition that collapses the 5-point Likert scale to a 3-point scale (agree, uncertain, disagree).
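The match construction described above can be sketched as follows; function and variable names are illustrative, not from the registry:

```python
def exact_match(resp_likert: int, expert_likert: int) -> int:
    """Match_iq = 1 if the respondent's 5-point answer equals the expert's."""
    return int(resp_likert == expert_likert)

def collapse_to_3(likert: int) -> str:
    """Collapse the 5-point Likert scale to agree / uncertain / disagree."""
    if likert in (1, 2):          # strongly agree, agree
        return "agree"
    if likert == 3:               # uncertain
        return "uncertain"
    return "disagree"             # disagree, strongly disagree

def broad_match(resp_likert: int, expert_likert: int) -> int:
    """Broader match definition on the collapsed 3-point scale."""
    return int(collapse_to_3(resp_likert) == collapse_to_3(expert_likert))
```

Under the broader definition, a respondent who answers "agree" (2) matches an expert who answered "strongly agree" (1), whereas they would not match under the exact definition.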

Secondary Outcomes

Secondary Outcomes (end points)
Secondary Outcomes (explanation)

Experimental Design

Experimental Design
The survey will be run through Prolific. It will be based on a sample representative of the US in terms of age, education and origin. We will survey 2,000 respondents.

Each respondent will be asked for their opinions on a range of ten topical economic policy issues. Respondents will be shown a set of statements and asked, for each statement, whether they agree/disagree or are uncertain, using the Likert scale. This design, and the statements themselves, are taken from the Chicago Booth Economist Expert Panel (https://www.kentclarkcenter.org/us-economic-experts-panel/).

The ten statements have been chosen to reflect a range of different policy areas. They also represent a mix of technical/non-technical issues (Ballard and Johnston find that citizens give more weight to expert opinion on more technical issues). They are also issues where there is a mix of opinions among the expert economists (agree/uncertain/disagree), enabling us to show different opinions on the same issue.
The ten statements are:
1. Use of artificial intelligence over the next ten years will lead to a substantial increase in the growth rates of real per capita income in the US and Western Europe over the subsequent two decades.
2. There needs to be more government regulation around Twitter’s content moderation and personal data protection.
3. It would serve the US economy well to make it unlawful for companies with revenues over $1 billion to offer goods or services for sale at an excessive price during an exceptional market shock.
4. Efforts to achieve the goal of reaching net-zero emissions of greenhouse gases by 2050 will be a major drag on global economic growth.
5. Given the centrality of semiconductors to the manufacturing of many products, securing reliable supplies should be a key strategic objective of national policy.
6. A significant factor behind today’s higher US inflation is dominant corporations in uncompetitive markets taking advantage of their market power to raise prices.
7. Financial regulators in the US and Europe lack the tools and authority to deter runs on banks by uninsured depositors.
8. When economic policy-makers are unable to commit credibly in advance to a specific decision rule, they will often follow a poor policy trajectory.
9. A windfall tax on the profits of large oil companies, with the revenue rebated to households, would provide an efficient means to protect the average US household.
10. A ban on advertising junk foods (those that are high in sugar, salt, and fat) would be an effective policy to reduce child obesity.

The instructions to the respondents will be the following:
This is a survey to collect opinions from members of the public on economic policy issues. Everyone is affected by what is happening in the economy – so we want to know what you think about the issues. The real-world topics covered by the questions include climate change, working from home, Twitter and artificial intelligence, so you don’t need a background in economics – just give us your opinion.
The answers will be used for research purposes. Your participation in the survey is voluntary and you can stop at any stage. You may withdraw your participation at any time by sending an email with your participant ID to [email protected] and we will delete all data recorded on this ID. We will ask you questions about your background, including age, ethnic origin, and gender. You can choose not to answer a specific question if you prefer not to. All data will be stored anonymously and securely at the University of Bristol, and only members of the research team will be able to access it. The legal basis for collecting and processing this information is legitimate interest (Article 6(1)(f) of the General Data Protection Regulation 2018). We will never publish any information that could let people figure out who you are. In line with best practice for research, once our study is finished, we will securely archive your information (without any personal information) for other researchers to use in the future.

Before asking for their opinions, we will collect the following demographic information:
• Age (bands)
• Gender
• Ethnicity
• Highest education qualification
• Self-assessed economics knowledge.

Respondents will then be asked their opinions, with the following instructions:
You will now see ten statements about topical economic policy issues and you will see the opinion of an expert economist who has been asked the same questions, for comparison. We would like to know your opinion on the issues. There are no right or wrong answers. Tell us, for each of the statements, whether you agree/disagree or are uncertain.

The respondents will see the opinion of a (randomly selected) male or female expert from a sub-sample of the EEP panel. We will also provide the expert's name and a photo. We will not show the name of the institution.

Treatment
The expert opinions will not be selected at random from the entire panel – this would lead to relatively few female expert opinions being selected. Instead, we will pre-select a sub-sample of female and male expert opinions from which to randomly select opinions. This will be based on the following rules:
• First, select all the opinions from female experts who responded to the question.
• Second, select male matches (by opinion) for each of the female expert views. This matched sample is gender-balanced by construction, with male and female experts holding the same views on average.
From this matched sample, we will randomly select 10 expert opinions (one per question) for each respondent to see.
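A minimal sketch of this matching rule, assuming a simple list-of-dicts representation of expert opinions (the field names are hypothetical, not from the registry):

```python
import random

def build_matched_sample(opinions, seed=0):
    """For each female expert opinion, pick a male expert who gave the
    same opinion on the same question, yielding a gender-balanced,
    opinion-matched sub-sample.

    opinions: list of dicts with keys question, gender, opinion, expert_id.
    """
    rng = random.Random(seed)
    females = [o for o in opinions if o["gender"] == "F"]
    males = [o for o in opinions if o["gender"] == "M"]
    matched = []
    used = set()  # avoid reusing the same male expert opinion twice
    for f in females:
        candidates = [m for m in males
                      if m["question"] == f["question"]
                      and m["opinion"] == f["opinion"]
                      and m["expert_id"] not in used]
        if candidates:
            m = rng.choice(candidates)
            used.add(m["expert_id"])
            matched.append((f, m))
    return matched
```

Each returned pair shares a question and an opinion by construction, so randomly drawing one expert per pair gives treatment and control arms with the same distribution of views.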
Experimental Design Details


Randomization Method
Random question-level selection on Prolific.
For each question, there is a gender-balanced male/female expert sample. Respondents will be shown one expert, drawn at random, per question.
Randomization Unit
Randomization at the individual question level.
Was the treatment clustered?
Yes

Experiment Characteristics

Sample size: planned number of clusters
2,000 individuals
Sample size: planned number of observations
20,000 observations
Sample size (or number of clusters) by treatment arms
10 questions per respondent.
Approximately equal split between treatment (see a female expert) and control (see a male expert), with within-respondent variation.
Minimum detectable effect size for main outcomes (accounting for sample design and clustering)
Power calculations indicate that we need a sample size of 1,920 respondents (19,200 observations) to detect an effect size of 1.0 percentage points (i.e. seeing a female expert opinion increases the probability of a match by 1.0 percentage points). In our previous study, the estimated effect size was 1.1 percentage points. This assumes a baseline match rate of 33% and an intra-cluster correlation of 0.01; the estimated ICC in our previous study was 0.005.
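With 10 questions per respondent and an ICC of 0.01, the implied design effect is 1 + (10 - 1) × 0.01 = 1.09. The simulation below is a simplified, hypothetical sketch of such a power calculation, not the authors' procedure: it uses a naive two-proportion z-test and a normal respondent effect to induce within-respondent correlation.

```python
import math
import random

def simulate_power(n_resp=1920, q_per_resp=10, base=0.33, effect=0.01,
                   resp_sd=0.05, sims=200, seed=0):
    """Fraction of simulated experiments in which a two-sample z-test on
    the match rate rejects at the 5% level. resp_sd is a respondent-level
    random effect (on the probability scale) that induces clustering."""
    rng = random.Random(seed)
    rejections = 0
    for _ in range(sims):
        t_succ = t_n = c_succ = c_n = 0
        for _ in range(n_resp):
            u = rng.gauss(0.0, resp_sd)  # respondent random effect
            for _ in range(q_per_resp):
                treated = rng.random() < 0.5  # female-expert arm
                p = min(max(base + u + (effect if treated else 0.0), 0.0), 1.0)
                y = rng.random() < p          # match indicator
                if treated:
                    t_n += 1
                    t_succ += y
                else:
                    c_n += 1
                    c_succ += y
        p1, p2 = t_succ / t_n, c_succ / c_n
        pool = (t_succ + c_succ) / (t_n + c_n)
        se = math.sqrt(pool * (1 - pool) * (1 / t_n + 1 / c_n))
        if abs(p1 - p2) / se > 1.96:
            rejections += 1
    return rejections / sims
```

With a large, easily detectable effect the simulated power approaches one; smaller effects and a larger `resp_sd` reduce it.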
IRB

Institutional Review Boards (IRBs)

IRB Name
University of Bristol, School of Economics Ethics Committee
IRB Approval Date
2023-06-29
IRB Approval Number
N/A

Post-Trial

Post Trial Information

Study Withdrawal

There is information in this trial that is unavailable to the public.

Intervention

Is the intervention completed?
No
Data Collection Complete
Data Publication


Is public data available?
No

Program Files

Reports, Papers & Other Materials

Relevant Paper(s)

Reports & Other Materials