Discrimination against the mentally ill

Last registered on May 04, 2022


Trial Information

General Information

Discrimination against the mentally ill
Initial registration date
February 10, 2021

Initial registration date is when the trial was registered.

It corresponds to when the registration was submitted to the Registry to be reviewed for publication.

First published
February 11, 2021, 11:59 AM EST

First published corresponds to when the trial was first made public on the Registry after being reviewed.

Last updated
May 04, 2022, 11:07 AM EDT

Last updated is the most recent time when changes to the trial's registration were published.



Primary Investigator

University of Warwick

Other Primary Investigator(s)

Additional Trial Information

Ongoing
Start date
End date
Secondary IDs
Prior work
This trial does not extend or rely on any prior RCTs.
People with depression or anxiety, the most common mental illnesses, often deal with prejudice and unfavorable treatment at work. This could have economic costs if they are discriminated against relative to other equally-productive workers, or treated differently by coworkers in ways that harm productivity. How much does this happen, as opposed to discrimination just reflecting these disorders' direct effects on productivity? This project uses an online experiment to investigate discriminatory behavior towards depressed or anxious coworkers in a collaborative problem-solving task. I estimate discrimination in the preference for working with a person and in in-task behaviors, investigate the mechanisms behind such discrimination and its effects on earnings, and finally consider how this relates to the willingness of participants to reveal information about their mental illness.
External Link(s)

Registration Citation

Ridley, Matthew. 2022. "Discrimination against the mentally ill." AEA RCT Registry. May 04. https://doi.org/10.1257/rct.7100-6.0
Experimental Details


There are two roles in the experimental task: tourist and guide. The main intervention is whether I reveal a tourist's depression or anxiety symptoms to the guide, when the tourist in fact has these symptoms. Additionally, I randomize whether I reveal that a tourist lacks these symptoms, when they in fact do not have them.

A secondary intervention is whether, when choosing roles in the task, participants are told (truthfully) that information on these same symptoms might be revealed to their coworker if they choose one of the roles.

I also randomize what other demographic information is revealed alongside mental health. Guides and tourists are also matched on a first-come, first-served basis as they enter the study, so the matching is quasi-random in that sense.
Intervention Start Date
Intervention End Date

Primary Outcomes

Primary Outcomes (end points)
- Willingness to pay to work with a given coworker, rather than receive a random new one, both before and after actually working with this person
- Earnings per hour in the task
- Willingness to pay to reveal/hide symptoms of mental illness
Primary Outcomes (explanation)
Willingness to pay is measured using a Becker-deGroot-Marschak procedure in which the guide is asked the minimum bonus payment that would persuade them to get a new tourist rather than work with this one, or vice versa, depending on which option they intrinsically prefer.
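As an illustration of how such a Becker-deGroot-Marschak elicitation can be implemented, the sketch below draws a random bonus offer and switches tourists only if the offer meets the guide's stated minimum; the function name, the offer range, and the payoff details are assumptions, not taken from the registration.

```python
import random

def bdm_switch(stated_min_bonus, max_bonus=2.0, rng=None):
    """Hypothetical BDM step: draw a random bonus offer and switch
    tourists only if the offer meets the guide's stated minimum.
    max_bonus and the uniform draw are illustrative assumptions."""
    rng = rng or random.Random()
    offer = rng.uniform(0.0, max_bonus)
    if offer >= stated_min_bonus:
        return ("switch", offer)  # guide gets a new tourist plus the offer
    return ("keep", 0.0)          # guide stays with the current tourist
```

Because the realized offer, not the stated number, determines the payment, reporting one's true minimum is the dominant strategy.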

Secondary Outcomes

Secondary Outcomes (end points)
- Effort in the task
- Politeness/kindness to the other person
- Beliefs about the other tourists' ability in the task
- Altruism toward the tourist
- Enjoyment of the task
Secondary Outcomes (explanation)
I measure effort in the task using measures such as the number, length and frequency of messages that the guide sends to the tourist. To measure politeness I use the occurrence of key words such as 'please' and 'thank you' as well as sentiment analysis.

Beliefs on the likelihood that other participants succeeded or failed are elicited using a log-scoring rule.
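A logarithmic scoring rule pays more the more probability the respondent places on the outcome that actually occurs, which makes truthful probability reports optimal in expectation. The sketch below shows one common parameterization; the base payment, scale, and clipping bounds are my own illustrative assumptions, not values from the registration.

```python
import math

def log_score_payment(prob_success, succeeded, base=1.0, scale=0.2):
    """Log-scoring rule: payment rises with the probability assigned to
    the realized outcome. base/scale are hypothetical payment parameters;
    reports are clipped so payments stay bounded."""
    p = min(max(prob_success, 0.01), 0.99)
    return base + scale * math.log(p if succeeded else 1.0 - p)
```

The rule is proper: if the true success probability is q, expected payment is maximized by reporting q itself.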

Altruism is measured through a brief simple dictator game at the end of the experiment in which the guide can send some of their earnings to their tourist as a 'thank you'. Enjoyment of the task is measured through asking the guide directly to rate this at the end of the task.

Experimental Design

Experimental Design
My experiment revolves around an online, collaborative navigation task. Players on Amazon’s Mechanical Turk (MTurk) complete the task of guiding a tourist to a destination. They do four rounds of the navigation task divided into two batches of two rounds. In every round, guides see information about the tourist, with the key experimental treatment being whether this includes mental health. I measure this information in a baseline survey.

I elicit guides' willingness to pay to work with specific tourists before and after working with them and investigate how signals of mental health affect this demand. I also estimate how showing tourist mental health information to the guide, conditional on the tourist's actual mental health status, affects the guide's behavior in the task and ultimate earnings in the task.

I also elicit willingness to pay to hide or reveal signals of mental health to potential guides, alongside willingness to pay to reveal other information. This is done in between the first and second batch of two rounds.
Experimental Design Details
Participants first complete a baseline survey including screening questionnaires for mental health, several days before the rest of the experiment. In the main experiment, they do four rounds of the navigation task divided into two batches of two rounds. In between these rounds, they answer a survey on their beliefs about other participants in the task and their willingness to pay to be the guide versus the tourist in subsequent rounds.

In each round of the task, participants are matched in pairs and assigned to be guides or tourists. The guide then sees a profile of the tourist, which may or may not include a mental health signal, and reports their willingness to pay to work with the tourist they're matched to. They then complete the task. Afterwards, if both players have another round to play in this batch, the guide is again asked their willingness to pay to continue with this tourist. Willingness to pay is elicited via a Becker-deGroot-Marschak (BDM) mechanism which is implemented with 20% probability.

In the task itself, adapted from de Vries et al. (2018), the tourist is (virtually) dropped at an unknown New York intersection and can talk to the guide via a chat window. The guide has a simplified map of the tourist’s immediate area, with a destination marked, but does not know where the tourist is on the map. To earn a payoff, the tourist must get to the destination and the guide must confirm their arrival. The challenge of the task is finding where the tourist is on the map to start with as well as keeping track of the tourist's orientation relative to the map; the task thus requires careful, clear communication. Both participants also have an opportunity to give up.

I measure mental health in the baseline survey using two standard short-form survey instruments, the PHQ-8 for depression (specifically, major depressive disorder) and the GAD-7 for anxiety (specifically, generalized anxiety disorder). These instruments contain the PHQ-2 and GAD-2, very short questionnaires that are also used as screening tools in their own right and which I use as my signals of mental health that are shown to the guide.
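For context, the PHQ-2 and GAD-2 each consist of two items scored 0-3, and a summed score at or above 3 is the conventional positive-screen cutoff for these instruments. The sketch below encodes that scoring logic; the registration does not state which cutoff the study itself applies, so the default here is an assumption.

```python
def screens_positive(item1, item2, cutoff=3):
    """PHQ-2 / GAD-2 style screen: two items scored 0-3 are summed and
    compared to a cutoff (the conventional value of 3 is assumed here;
    the registration does not specify the study's cutoff)."""
    for v in (item1, item2):
        if not 0 <= v <= 3:
            raise ValueError("items are scored 0-3")
    return item1 + item2 >= cutoff
```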

In this design, I test the following hypotheses:
1. Do people discriminate against those with depression/anxiety symptoms when choosing who to work with? To answer this question, I estimate how the inclusion of these symptoms in a tourist's profile affects the guide's willingness to pay for the tourist before starting the task.

2. Is this discrimination statistical? To answer this, I compare the willingness to pay effect above with the average effect on actual earnings in the task of being assigned a tourist with vs. without symptoms in their profile. To estimate the earnings effect I use the 80% subsample for which the BDM mechanism is not implemented and tourists are thus exogenously assigned.
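The comparison in this hypothesis can be sketched as two gaps computed on different samples: the willingness-to-pay gap on all pairings, and the earnings gap only on the exogenously matched subsample where the BDM mechanism was not implemented. The record fields below (`revealed`, `wtp`, `earnings`, `bdm_implemented`) are hypothetical names for illustration.

```python
from statistics import mean

def discrimination_gaps(records):
    """Compare the WTP penalty for revealed symptoms with the earnings
    penalty, the latter computed only on the BDM-not-implemented
    subsample where tourist assignment is exogenous. Field names
    are hypothetical."""
    exo = [r for r in records if not r["bdm_implemented"]]

    def gap(key, rows):
        treat = [r[key] for r in rows if r["revealed"]]
        ctrl = [r[key] for r in rows if not r["revealed"]]
        return mean(treat) - mean(ctrl)

    return gap("wtp", records), gap("earnings", exo)
```

If the WTP gap is close to the earnings gap, discrimination looks statistical; a larger WTP gap would suggest a taste-based or belief-error component.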

3. Do guides effectively discriminate in their in-task behavior, in ways that affect earnings? The earnings penalty from working with someone with revealed depression or anxiety symptoms may reflect both the direct correlation of depression or anxiety with ability at the task, and the effect of seeing these symptoms on the guide's behavior in the task. For instance, a belief that such people are more likely to give up may mean guides choose to give up sooner themselves. I therefore estimate how the appearance of depression/anxiety symptoms in the tourist profile affects earnings in the task, conditional on whether the tourist actually does have these symptoms. Additionally, I can look into how these earnings effects might come about by looking at the same effect on guide behavior, such as number and frequency of messages, and whether they click 'give up'.

4. What are the dynamics of discrimination -- do guides discriminate against those with depression/anxiety symptoms conditional on past performance in the task? To investigate this, I look at the willingness to pay to keep working with a given tourist after a round of the task. I ask whether guides are willing to pay less for tourists with a symptom of depression/anxiety conditional on having just failed at the task with that tourist. As performance is endogenous, I leverage randomized features of the task which affect its difficulty (in ways that could be misattributed to the tourist) such as the number of easily recognizable landmarks. I use these to create instrumental variables for success or failure.
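With a single randomized instrument (such as an indicator for a landmark-rich map), the IV estimate of the effect of success on subsequent willingness to pay reduces to the Wald ratio of covariances. A minimal sketch, under the assumption of one binary instrument:

```python
def wald_iv(z, x, y):
    """Single-instrument IV (Wald) estimate of the effect of endogenous
    success x on outcome y, instrumented by a randomized task-difficulty
    feature z. Inputs are equal-length numeric lists."""
    n = len(z)
    mz, mx, my = sum(z) / n, sum(x) / n, sum(y) / n
    cov_zy = sum((zi - mz) * (yi - my) for zi, yi in zip(z, y)) / n
    cov_zx = sum((zi - mz) * (xi - mx) for zi, xi in zip(z, x)) / n
    return cov_zy / cov_zx
```

In practice this would be run as two-stage least squares with controls, but the ratio above captures the identification logic.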

5. What is the willingness to reveal mental health information in this task? A major goal of many awareness campaigns is to increase openness around mental health at work, but fear of discrimination could be a major barrier to 'coming out' as feeling depressed/anxious. I therefore investigate in my setting the willingness to pay to reveal mental illness symptoms. To measure this, I ask participants halfway through to choose via a BDM mechanism what information they would like or not like to be in their 'profile' that their guide will see if they are in fact the tourist in later rounds. This BDM mechanism is also only implemented for 20% of the sample.

I use additional features of the experiment to investigate mechanisms behind any discrimination result I might find. In particular, if discrimination is not wholly statistical, is this because people have on average inaccurate beliefs about depression/anxiety and performance at this task? To investigate this I elicit participants' beliefs about the performance of other participants in a separate survey mid-way through the experiment. I show profiles of actual participants, use a log-scoring rule to elicit subjective probabilities that these participants succeeded, and estimate how including depression/anxiety symptoms in the profile changes these beliefs.

Another potential mechanism is that guides feel less (or more) altruistic towards tourists showing symptoms of depression or anxiety. As the task is collaborative, altruism towards one's partner should motivate higher effort in the task. I measure the effect of revealing symptoms on altruism using a simple dictator game: at the end of the experiment, the guide is asked if they would like to send any of the bonus payment they have earned to the tourist, 'as a thank you'. The guide is told truthfully that the tourist will not find out if they send nothing.
Randomization Method
All randomization is done by computer using a random number generator.
Randomization Unit
Randomization is at the tourist-batch level. That is, in each batch of two rounds I randomize what information is included in the profile of the tourist that will be shown to all potential guides. Thus, randomization is within guide as guides may see multiple tourists during a batch. Treatment is clustered insofar as a pair of guide and tourist may play multiple rounds within a batch, and across these rounds the treatment assignment will not vary.
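A minimal sketch of tourist-batch-level randomization, assuming (hypothetically) that each tourist's profile treatment is drawn once per batch and then held fixed for every guide who sees that tourist within the batch:

```python
import random

def assign_profiles(tourist_ids, n_batches=2, p_reveal=0.5, seed=0):
    """Draw one treatment flag per (tourist, batch) cell: whether the
    mental-health signal appears in the profile shown to all of that
    tourist's guides in that batch. Holding the draw fixed within a
    batch clusters treatment across a pair's repeated rounds.
    Parameter names are hypothetical."""
    rng = random.Random(seed)
    return {(t, b): rng.random() < p_reveal
            for t in tourist_ids for b in range(n_batches)}
```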
Was the treatment clustered?

Experiment Characteristics

Sample size: planned number of clusters
~880 pairings of participants. There will be 800 participants, who are twice (once at the start of each batch) put into 400 pairs, making 800 initial pair-ups. In some of these pair-ups the WTP elicitation mechanism will be implemented, which may lead to the guide and tourist rematching, likely in about 10% of cases (this depends on guides' answers in the mechanism). This implies I will observe about 880 distinct pairings of participants.
Sample size: planned number of observations
1600 rounds of the task (each person plays two rounds per batch).
Sample size (or number of clusters) by treatment arms
I will recruit participants and assign them to roles such that about 512 (64%) of the pairs have tourists with depression or anxiety symptoms. In 50% of the pairs I will reveal this information to the guide. This gives:
- ~256 pairs in which the tourist has depression or anxiety symptoms and this is revealed (main treatment)
- ~256 pairs in which the tourist has depression or anxiety symptoms and this is not revealed (main control)
- ~144 pairs in which the tourist does not have depression or anxiety symptoms and this is revealed
- ~144 pairs in which the tourist does not have depression or anxiety symptoms and this is not revealed
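The cell sizes above follow from the stated shares: 64% of the 800 pair-ups have a symptomatic tourist, and the reveal treatment splits each group in half. A quick arithmetic check (function and key names are my own):

```python
def arm_sizes(n_pairups=800, share_symptoms=0.64, p_reveal=0.5):
    """Back out the planned cell sizes from the registry's stated
    shares: 800 pair-ups, 64% with symptomatic tourists, 50% revealed."""
    with_sym = round(n_pairups * share_symptoms)   # 512 pair-ups
    without = n_pairups - with_sym                 # 288 pair-ups
    return {
        "symptoms_revealed": round(with_sym * p_reveal),
        "symptoms_hidden": with_sym - round(with_sym * p_reveal),
        "no_symptoms_revealed": round(without * p_reveal),
        "no_symptoms_hidden": without - round(without * p_reveal),
    }
```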
Minimum detectable effect size for main outcomes (accounting for sample design and clustering)
Supporting Documents and Materials


Document Name
Second Experiment
Document Type
Document Description
This document pre-registers a second, follow-up experiment designed to investigate potential mechanisms behind the results from the main trial.
Second Experiment

MD5: 95b27253c498404979d6f517a7a03caf

SHA1: 7884e56909a89422a4df02bb1f9fae29f0efb256

Uploaded At: October 04, 2021


Institutional Review Boards (IRBs)

IRB Name
MIT Committee on the Use of Humans as Experimental Subjects (COUHES)
IRB Approval Date
IRB Approval Number


Post Trial Information

Study Withdrawal



Is the intervention completed?
Data Collection Complete
Data Publication

Data Publication

Is public data available?

Program Files

Program Files
Reports, Papers & Other Materials

Relevant Paper(s)

Reports & Other Materials