
Gender and the demand and supply of advice

Last registered on August 25, 2022

Pre-Trial

Trial Information

General Information

Title
Gender and the demand and supply of advice
RCT ID
AEARCTR-0009638
Initial registration date
August 24, 2022

Initial registration date is when the trial was registered.

It corresponds to when the registration was submitted to the Registry to be reviewed for publication.

First published
August 25, 2022, 2:19 PM EDT

First published corresponds to when the trial was first made public on the Registry after being reviewed.

Locations

Region

Primary Investigator

Affiliation
Bocconi University

Other Primary Investigator(s)

PI Affiliation
University of Virginia
PI Affiliation
Lahore School of Economics

Additional Trial Information

Status
In development
Start date
2022-07-20
End date
2024-12-31
Secondary IDs
Prior work
This trial does not extend or rely on any prior RCTs.
Abstract
Individuals often improve their performance significantly by accessing knowledge provided by others. However, individuals can also be deterred from seeking help by social image concerns or (mis)perceptions about others' willingness to help. At the same time, individuals may differ in the extent to which they are willing to help others, and such willingness may depend on the characteristics of the advice-seeker or on the knowledge area. The purpose of this research is to understand how people's gender and related factors explain the ease of asking for and/or receiving help to achieve a better performance. We explore this in a setting where participants face a task for which they do not have full knowledge or information, but which has an objectively correct answer. In particular, we want to understand how the gender of the participant and the gender of those providing help affect both the demand and supply of advice.
This study seeks to show whether:
- there is a gap in performance when a participant is helped by a (randomly matched) person of the same vs. a different gender
- on the supply side, people differentially provide advice to people of a different gender (and this contributes to explaining the performance gap)
- on the demand side, people differentially ask people of a different gender for advice (and this contributes to explaining the performance gap)
- people hold misperceptions about the supply of advice by people of different genders
Thus the study aims to uncover the existence of gender gaps in performance and information acquisition in gender-mixed teams, and to distinguish whether those gaps come from demand-side frictions (e.g., one gender not asking enough for information or expecting little help) or supply-side frictions (e.g., one gender actually giving little help). We will experimentally hold fixed the quality of advice provided (the helper has no control over this) and focus only on the quantity of advice supplied vs. demanded.
Additional exercises will help us:
- quantify the extent to which aligning incentives between advice-seekers and advice-suppliers mitigates some of these gaps by improving the supply of advice
- provide evidence on whether choosing one's own advice-supplier (male or female) instead of being assigned a random one mitigates some of the gaps in performance and information acquisition
- provide evidence on whether gender gaps in performance and information acquisition are related to the gender stereotype associated with a particular knowledge area
- quantify participants' willingness-to-pay to avoid being seen asking for advice by the advice-supplier
- assess the role of expectations of own and others' performance (conditional and unconditional on getting advice) in explaining the supply of and demand for advice
- assess the role of expectations of advice needed vs. advice supplied in explaining the gender gap in performance
- explore heterogeneity in the effects by baseline ability, gender attitudes and stereotypes, personality traits, and economic preferences such as risk aversion, trust, altruism/pro-sociality, and over-confidence
- categorise participants into types, such as i) always asking for advice, ii) never asking for advice, and iii) reacting to the environment
- assess the role of experience and information on others' behaviour in the supply of advice
This knowledge will help explain previously documented gender differences in the benefits of working in a group. The interaction of gender and group performance is an important topic to study, especially as workforces become more integrated and diverse.

External Link(s)

Registration Citation

Citation
Aman-Rana, Shan, Alexia Delfino and Shamyla Shamyla. 2022. "Gender and the demand and supply of advice." AEA RCT Registry. August 25. https://doi.org/10.1257/rct.9638-1.0
Sponsors & Partners

There is information in this trial unavailable to the public.
Experimental Details

Interventions

Intervention(s)
Intervention (Hidden)
Intervention Start Date
2022-08-25
Intervention End Date
2023-04-30

Primary Outcomes

Primary Outcomes (end points)
Our first overall outcome of interest is participants' performance on the knowledge questions: whether a person provided the right answer to a question (as well as the number and % of correct answers to knowledge questions), across and within knowledge areas.
For the demand for advice, our main outcome variable is hint-taking: whether a person requested a hint on a question (as well as the total number and % of hints the test-taker asked for when answering knowledge questions), across and within knowledge areas.
For the supply of advice, our main outcome variables are the total number and % of hints that a helper decided to provide to a test-taker, across and within knowledge areas.
All of these variables will be analysed for the pooled sample, by gender of the test-taker and by gender of the helper, as well as by experimental condition.
Primary Outcomes (explanation)
For the variables on performance and demand for advice, we will proceed as follows. In each round, a test-taker is matched with one partner (the helper) and has to reply to 12 knowledge questions (4 in each subject: cooking, sports and subject of study). First, we will assign an indicator variable equal to one to each question answered correctly (for performance) and a second indicator for whether the person asked for a hint, when a hint was available (for demand for advice). Second, to construct the % of correct answers in a knowledge area, we will divide the number of correct answers in that area by 4. Similarly, to construct the number or % of hints asked by the test-taker overall, we will count the number of hints the test-taker asked for across all the subjects.
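As a concrete illustration, the per-area construction above can be sketched as follows. This is a minimal sketch with hypothetical field names (`correct`, `asked_hint`), not the study's actual analysis code:

```python
# Minimal sketch of the per-area outcome construction, with hypothetical
# field names: each question carries `correct` (bool) and, when a hint
# was available, `asked_hint` (bool).

def area_stats(questions):
    """Number and % of correct answers, and hints asked, in one 4-question area."""
    n_correct = sum(1 for q in questions if q["correct"])
    n_hints = sum(1 for q in questions if q.get("asked_hint"))
    return {
        "n_correct": n_correct,
        "pct_correct": 100 * n_correct / 4,  # 4 questions per knowledge area
        "n_hints_asked": n_hints,
    }

cooking = [
    {"correct": True, "asked_hint": True},
    {"correct": False, "asked_hint": True},
    {"correct": True, "asked_hint": False},
    {"correct": True},  # the one question with no hint available
]
print(area_stats(cooking))  # {'n_correct': 3, 'pct_correct': 75.0, 'n_hints_asked': 2}
```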
To elicit the supply of hints, we use a strategy method: we show each helper a set of 4 test-takers they could be matched with during the game. For each knowledge area (e.g., cooking), we ask them to allocate a maximum of 10 hints across the test-takers. If they are matched with fewer than 4 test-takers, we ask them to allocate hints as follows: a maximum of 7 hints if they are matched with 3 test-takers, 5 hints if matched with 2, and 2 hints if matched with just 1. Helpers can allocate from a minimum of 0 to a maximum of 3 hints per person in that particular knowledge area, and can also decide to allocate less than the total available. We will count the percentage of hints given to a certain test-taker over all the hints available to distribute (overall and within a certain knowledge area). We will also count the number of hints, as well as the share of hints given to a certain test-taker over the maximum that could be given to him/her (overall and within a certain knowledge area). We will further explore whether helpers are willing to distribute fewer than the total for the sake of giving an equal number of hints to everyone (or following other distribution criteria).
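The supply-side shares described above can be sketched as follows. The caps (10/7/5/2 hints for 4/3/2/1 test-takers) and the 0-3 per-person range are taken from the design; the function and variable names are hypothetical:

```python
# Sketch of the supply-of-hints bookkeeping within one knowledge area.
# `allocation` maps test-taker id -> hints given (0-3 each); the total cap
# depends on how many test-takers the helper is matched with.

CAPS = {4: 10, 3: 7, 2: 5, 1: 2}
PER_PERSON_MAX = 3

def supply_shares(allocation):
    cap = CAPS[len(allocation)]
    assert all(0 <= h <= PER_PERSON_MAX for h in allocation.values())
    assert sum(allocation.values()) <= cap  # helpers may allocate less than the cap
    return {
        tt: {
            "n_hints": h,
            "share_of_total": h / cap,           # over all hints available to distribute
            "share_of_max": h / PER_PERSON_MAX,  # over the max this test-taker could get
        }
        for tt, h in allocation.items()
    }

shares = supply_shares({"tt1": 3, "tt2": 2, "tt3": 0, "tt4": 3})
```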

Secondary Outcomes

Secondary Outcomes (end points)
We have two families of secondary outcomes:

- Expectations of supply: we will compare the expected supply of hints with actual supply, to understand whether there are misperceptions related to the helpers’ behaviour. We will tell test-takers that the helper had to allocate 10 hints across the given 4 test-takers and ask them to predict how they distributed the hints, within each knowledge area. We incentivize a correct answer with a monetary bonus.

- Willingness to pay to avoid being seen asking for advice. We have two questions on this: one uses a BDM mechanism and the other a multiple price list, both eliciting the willingness to pay to avoid being seen asking for advice, in order to quantify frictions in knowledge flows within teams.

To look into mechanisms, we will also measure the following (and use them in heterogeneity analyses):

- Expectations of own and others' performance, conditional and unconditional on getting advice: we will examine whether performance expectations explain the supply of advice as well as the expected supply of advice from others.

- Expectations of hint-taking: we will ask test-takers how many hints they expect to take, and helpers how many hints they expect the test-takers they are matched with to ask for.

- Confidence in own performance: this will be measured by asking “I think my answer has a ___ % chance of being right. A randomly drawn computer should answer for me if its accuracy is greater than that.”

- Confidence in the expectations of supply: this will be measured by directly asking for the confidence level in the expected-supply question.

- Stereotypical associations between a knowledge area and gender

- People’s preferences for helpers and test-takers to be matched with during the game

As confounding mechanisms, we will measure:
- How useful people thought the hints were

Secondary Outcomes (explanation)

Experimental Design

Experimental Design
The experiment uses a within-subject design with three conditions. The first condition is whether a person is matched with a male or a female helper. The second is whether the helper is random or among those preferred by the test-taker (see below). The third is whether there is uncertainty about the availability of the hints. In addition, helpers play under two conditions: whether they receive a fixed payment for helping or a payment that depends on the test-taker's performance.
Experimental Design Details
The experiment uses a within-subject design with three conditions. The first condition is whether a person is matched with a male or a female helper. The second is whether the helper is random or among those preferred by the test-taker (see below). The third is whether there is uncertainty about the availability of the hints. In addition, helpers play under two conditions: whether they receive a fixed payment for helping or a payment that depends on the test-taker's performance.

The following paragraphs describe the design in more detail.

Participants in the study will be allocated to two different roles: test-takers and helpers. Everyone plays both roles. Test-takers have to reply to multiple-choice knowledge questions, and helpers have to provide hints. The knowledge questions are divided into three knowledge areas: Economics/International Relations (IR)/Political Science, Cooking and Sports. The first knowledge area can be economics, IR or political science, depending on the major the participants are enrolled in at the university. Helpers can decide whether to give a hint or not, but have no control over what the hint says: all hints are taken from a pool prepared by the researchers, in order to guarantee the same average quality of advice across questions.

The experiment is implemented through tablets without any physical face-to-face interaction between participants.

Individuals who participate in this study will go through six parts, after a first round of practice. Each part of the experiment is described below:

1. In part 1, participants are test-takers and solve 12 multiple-choice questions (4 in each knowledge area). In this part, participants are helped by the computer. Three out of four questions in each knowledge area have a hint available; one does not. For all the questions with hints available, participants are sure to get a hint if they ask for it. The order of the questions and knowledge areas will be block-randomized.

2. In part 2, participants are test-takers and solve 12 multiple-choice questions (4 in each knowledge area). In this part, participants are helped by the computer. Three out of four questions in each knowledge area have a hint available; one does not. For all the questions with hints available, participants are NOT sure to get a hint if they ask for it; they know there is a 66% chance that the computer will provide a hint, across knowledge areas. The order of the questions and knowledge areas will be block-randomized.

3. In part 3, participants will be helpers and will be randomly matched with 4 different test-takers. Using a strategy method, for each match we ask the helper to choose how many hints (between 0 and 3) to provide to each test-taker in each knowledge area. They have a maximum of 10 hints to allocate across the four test-takers in a given knowledge area, but they can choose to allocate less than that amount. We tell the helper that their choice will be implemented should the match be realized in one of the following rounds of the game, and that they will be able to see the number of hints asked for by the test-takers. If the match is realized, the helper will be paid a flat rate of 450 PKR. We additionally ask the helper to predict i) the performance of the test-taker in each knowledge area without hints available, ii) the number of hints the test-taker will ask for, and iii) the performance of the test-taker in each knowledge area when hints are available. The order of the questions and knowledge areas will be block-randomized.

4. Part 4 is exactly like part 3, but with one main difference: if a given match is realized, the helper will be paid 75 PKR for each correct answer given by the test-taker.

The order of parts 3 and 4 will be randomized.

5. In part 5, participants will be test-takers and will be matched with 4 different helpers whose choices were elicited in parts 3 or 4 (we will pick part 3 or 4 randomly). While matched with a given helper, test-takers will again solve 12 multiple-choice questions (4 in each knowledge area); in total, they will solve 48 questions in this part. Three questions in each knowledge area will have a hint available; one will not. If the test-taker wants to ask for a hint, he/she has to press a button. The test-taker knows that the helper will know the number of hints s/he asked for. In this part, the hint is always available if the person asks for it, so there is no uncertainty about the supply of the hint. The order of the questions and knowledge areas will be block-randomized.

6. Part 6 of the game is exactly like part 5, but with one main difference: in this part, the hint may not be released even when the test-taker asks for it. This depends on the choice made by the helper in parts 3 or 4. For instance, suppose that A is a test-taker, B is the helper, and B declared in part 3 that he/she wants to give 0 hints in Cooking to A. Then, when A presses the button to ask for a hint in Cooking, the system will tell him/her that the helper has not released the hint. Thus there is uncertainty on the supply of hints coming from the helpers' choices. The order of the questions and knowledge areas will be block-randomized.

The order of parts 5 and 6 will be randomized.
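The part-6 release rule (a hint is released only if the matched helper allocated hints to that test-taker in that knowledge area) can be sketched as follows, with hypothetical names:

```python
# Sketch of the part-6 hint-release rule: a request succeeds only while the
# test-taker has not yet used up the hints the helper allocated in parts 3/4.

def hint_released(allocation, test_taker, area, n_already_taken):
    """`allocation` maps (test_taker, area) -> hints the helper chose to give (0-3)."""
    return n_already_taken < allocation.get((test_taker, area), 0)

# B gave A zero Cooking hints in part 3, so A's request is denied:
print(hint_released({("A", "cooking"): 0}, "A", "cooking", 0))  # False
```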

In parts 5 and 6, the matches will be such that each test-taker is paired with four types of helpers: a random woman, a random man, a preferred female helper and a preferred male helper. All helpers will be selected from the same class where the experiment takes place. “Preferred female helpers” and “preferred male helpers” are determined in the survey conducted prior to the experiment, where we ask participants to rank 10 classmates they would like to have as helpers.

A single part and pairing is randomly drawn for the final payment.

Randomization Method
The matches between test-takers and helpers are implemented as soon as people express their preferences for helpers in the survey conducted prior to the experiment, through code programmed in Python. The order of questions, knowledge areas and parts is randomized through oTree.
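The block randomization of question order mentioned throughout the design could look like the following illustrative sketch (not the study's actual oTree code): shuffle the order of knowledge areas, then shuffle the questions within each area, so questions stay grouped in area blocks.

```python
import random

def block_randomize(questions_by_area, rng=random):
    """Return all questions in block-randomized order:
    random area order, random question order within each area."""
    areas = list(questions_by_area)
    rng.shuffle(areas)  # randomize the order of knowledge areas
    order = []
    for area in areas:
        qs = list(questions_by_area[area])
        rng.shuffle(qs)  # randomize questions within the area
        order.extend(qs)
    return order

questions = {
    "cooking": ["c1", "c2", "c3", "c4"],
    "sports": ["s1", "s2", "s3", "s4"],
    "study": ["e1", "e2", "e3", "e4"],
}
order = block_randomize(questions, random.Random(42))
```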
Randomization Unit
Individual. The design is within-subject, so every person goes through every condition.
Was the treatment clustered?
No

Experiment Characteristics

Sample size: planned number of clusters
NA
Sample size: planned number of observations
With our current partner, we plan to reach 300 observations. Upon agreement with another university, we target a final sample size of 550 observations.
Sample size (or number of clusters) by treatment arms
300 with current partner.
Target of 550.
Minimum detectable effect size for main outcomes (accounting for sample design and clustering)
IRB

Institutional Review Boards (IRBs)

IRB Name
Institutional Review Board for the Social and Behavioral Sciences at the University of Virginia
IRB Approval Date
2022-06-13
IRB Approval Number
5183
Analysis Plan

Analysis Plan Documents

Post-Trial

Post Trial Information

Study Withdrawal

There is information in this trial unavailable to the public.

Intervention

Is the intervention completed?
No
Data Collection Complete
Data Publication

Data Publication

Is public data available?
No

Program Files

Program Files
Reports, Papers & Other Materials

Relevant Paper(s)

Reports & Other Materials