Understanding discrimination on Facebook Marketplace: A field experiment

Last registered on March 15, 2024


Trial Information

General Information

Understanding discrimination on Facebook Marketplace: A field experiment
Initial registration date
March 07, 2024


First published
March 15, 2024, 5:39 PM EDT



There is information in this trial unavailable to the public.

Primary Investigator

Middle Tennessee State University

Other Primary Investigator(s)

Additional Trial Information

In development
Start date
End date
Secondary IDs
Prior work
This trial does not extend or rely on any prior RCTs.
Facebook Marketplace was created in 2016 to compete with Craigslist in the market for in-person sales of used goods. Within two years, Facebook Marketplace had over 800 million active users worldwide. One of the main advantages of Marketplace over Craigslist is that buyers can attempt to infer product quality and the safety of in-person meetings from sellers' Facebook profiles. A seller's pictures, gender, age, race, and marital status, among other characteristics, can be seen or inferred from their profile. Buyers can use this information when deciding whether to buy and how to negotiate for a product. This research examines the extent, and potential causes, of discrimination on Marketplace via a field experiment. We build fictitious Facebook profiles that are identical except for race, gender, or age. We then make offers for a set of commonly sold products through these profiles and compare the effect of both sellers' and buyers' demographic characteristics on the number and size of offers.
External Link(s)

Registration Citation

Jensen, Nicholas. 2024. "Understanding discrimination on Facebook Marketplace: A field experiment." AEA RCT Registry. March 15. https://doi.org/10.1257/rct.13094-1.0
Experimental Details


Intervention Start Date
Intervention End Date

Primary Outcomes

Primary Outcomes (end points)
We test for the existence and the degree of racial discrimination present on Facebook Marketplace. Our experimental design mirrors the correspondence studies commonly used to test for discrimination in labor markets, and consists of using created profiles to make offers on classified advertisements on Facebook Marketplace throughout the US as a potential buyer. We start by posing as potential buyers: within a specific market and time period, we respond to a set of sellers' listings for a set of comparable products for sale. For each inquiry, we utilize a matched pair of two nearly identical fictitious Facebook profiles. The primary difference between the buyer profiles within a matched pair is the buyer's race (black or white), which will be implied using fictitious names and photos. We also utilize a set of matched pairs for each distinct combination of age, sex, and race. We then compare the number of offers accepted and the value of the best offer accepted for our paired fictitious buyers, and estimate whether there are significant differences in these responses between the otherwise identical black and white buyers. While all buyer characteristics besides race are the same within each matched pair in a given market and time period, other characteristics such as educational attainment, marital status, and age are varied across matched pairs and over time.
Any observed racial discrimination against sellers could be the result of statistical discrimination, in which buyers use race as a signal of unobserved product quality and of the safety associated with meeting in person. It is also possible that buyers are engaging in taste-based discrimination, in which differences in buyer responses result from animus toward a particular race, regardless of how they perceive the trustworthiness of the seller. To better understand the nature of this discrimination, we also use our fictitious profiles to pose as potential buyers: within a specific market and time period, equivalent offers below the asking price are made to multiple sellers by a matched pair of two profiles identical except for race. Significant differences in the number of accepted offers between the buyers within a matched pair are indicative of racial discrimination against the buyers. An important difference in observed discrimination against buyers, however, is that sellers might use race as a signal of the safety associated with meeting in person, but not as a signal of product quality. Thus, differences in discrimination against buyers and sellers may be informative in analyzing the use of race as a signal of product quality. For example, if there is no difference in the magnitude of discrimination against sellers and buyers, any observed discrimination is likely not due to buyers using seller race as a signal of product quality. If, however, there is no significant discrimination against buyers, then perhaps most or all discrimination against sellers results from statistical discrimination in which buyers use a seller's race as a signal of product quality.

To test for discrimination, we conduct a field experiment in which fictitious buyer profiles inquire about the availability of a set of products listed by sellers. These products include furniture and identifiable brand-name goods such as the Xbox Series X, KitchenAid mixers, the iPhone 13 Pro, and Bowflex weight systems. We plan to collect data on the availability of seller listings from the following major U.S. city areas: Atlanta, Georgia; Chicago, Illinois; Denver, Colorado; San Francisco & Los Angeles, California; New York City & Buffalo, New York; Charlotte, North Carolina; Dallas, Texas; Kansas City, Missouri; Washington D.C.; Miami, Florida; Saint Louis, Missouri; Phoenix, Arizona; Salt Lake City, Utah; Seattle, Washington; Portland, Oregon; Las Vegas, Nevada; Nashville, Tennessee; and Birmingham, Alabama.

For each of the fictitious Facebook profiles, we assign distinctively white and distinctively black names to each of the matched pairs of potential buyers, along with a photo with a clearly perceptible race corresponding to the profile's assigned race. To select the names and photos, we base our methodology on prior economic field experiments, notably those conducted by Bertrand & Mullainathan (2004) and Edelman et al. (2017).

We collect information for each listing across a number of available related variables of interest. For information on the sellers contacted, we record the number of listings each seller has listed at the time of inquiry, and their review rating on Facebook Marketplace. In addition, we record the price of the listing, the condition (used - good, used - fair, and used - like new), and the type of product listed. We restrict our search radius for sellers' listings systematically across products based on their availability.

For our primary analysis, we use four main treatment groups based on each prospective buyer's perceived race and gender: African American males, African American females, white males, and white females. Our name choice methodology is listed in the experimental design section. For photos, we use a series of public domain images corresponding to each category. Here, we also include each profile's perceived age category and listed education level as additional secondary components of variation.
We create a set of profiles for each of these categories. Each profile sends a series of inquiries to sellers within the experiment's time window. We track all sellers' responses over the week following the initial buyer inquiry. We subsequently classify each set of responses based on whether a final agreement is reached, whether the seller responded within the time window, their willingness to accept a price lower than the listed price, the final agreed-upon price, and the type of exchange location (pickup, dropoff, et cetera).

Upon completion of data collection, we report the frequency of each response by race. Our primary specification utilizes a probit model to estimate the effect of a buyer's race (in our specification, the effect of being white) on the probability of the seller responding to a buyer's inquiry. We report the coefficients as average marginal effects, comparing response rates between black and white buyers. The outcome is defined as whether the seller responded (=1) or not (=0). Preliminary item categories include electronics, furniture, and exercise equipment. Seller demographics include perceived age, perceived gender, and perceived race. The number of observations is lower in some specifications due to missing values for some covariates. In a similar manner, we plan to report average marginal effects from probit models for sex across relevant subsamples in the experiment; for sex, we define the outcome of interest as the effect of being male on the probability of a seller responding to the inquiry.
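As a hedged illustration of the primary comparison: in a probit of seller response on a single binary race indicator (a saturated model), the average marginal effect computed as the average discrete change coincides with the raw difference in group response rates. The sketch below uses invented records and hypothetical labels, not the study's data or results.

```python
# Hypothetical sketch: each record is one buyer inquiry, recording the
# buyer's race and whether the seller responded (1/0). In a probit with
# only an intercept and a binary race dummy, the average marginal effect
# of the dummy equals this raw difference in response rates.

def response_rate_gap(records):
    """Return (white_rate, black_rate, gap) from (race, responded) pairs."""
    counts = {"white": [0, 0], "black": [0, 0]}  # [responses, inquiries]
    for race, responded in records:
        counts[race][0] += responded
        counts[race][1] += 1
    white = counts["white"][0] / counts["white"][1]
    black = counts["black"][0] / counts["black"][1]
    return white, black, white - black

# Illustrative data only (not actual results).
records = [("white", 1), ("white", 1), ("white", 0), ("white", 1),
           ("black", 1), ("black", 0), ("black", 0), ("black", 1)]
white, black, gap = response_rate_gap(records)
print(white, black, gap)  # 0.75 0.5 0.25
```

With additional covariates (item category, seller demographics, market, time period), the marginal effects no longer reduce to a simple difference in means, which is where the probit estimation itself matters.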

In addition to the above intervention process, we plan to check whether our findings are dependent upon the identity of the sellers. In doing so, we hope to account for any potential same-race preferences of sellers, which might contribute to the differences in sellers' response rates across groups.

In addition to this primary outcome, we plan to expand on sellers' responsiveness by measuring and reporting differences in the mode of transaction (listed previously). We also plan to collect information about final agreed-upon 'sales' (including verbal agreements to sell, and any available information about whether the item was confirmed to be sold to a different buyer). We do not intend to make actual purchases; we simply record verbal agreements to exchange as sale agreements. Variations in such agreements due to factors such as location, seller and buyer characteristics, and listing (product) characteristics will be examined, where data completeness allows.
Primary Outcomes (explanation)

Secondary Outcomes

Secondary Outcomes (end points)
Secondary Outcomes (explanation)

Experimental Design

Experimental Design
We collect data on the responses of sellers to prospective buyers' profiles, in order to measure differences in sellers' responses based on race, sex, and education, among other demographic characteristics. For each profile we create, we assign an identifiable race, sex, and age in their profile photos, along with information on their education level. Each profile's inquiry is systematically varied and matched with another profile whose race is different. We vary race across two main categories (white and black) across the categories of buyer profiles. Similarly, we vary sex between male and female profiles, and education between college and no college education across profiles. For each Marketplace ad each buyer profile responds to, we plan to collect and report available data on:
1) the type of product
2) product's listed price
3) seller's listed location (individuals' information is not publicly shared)
4) item's condition
5) willingness to haggle for a lower price
6) whether the seller responded to the initial inquiry about the Facebook Marketplace advertisement
7) the gender, age category, and perceived race of the seller
8) the type of exchange arrangement listed in the sellers' listing. For instance, whether the item is pickup only, dropoff only, shipping only, et cetera.
9) Whether sellers responded within 24 hours to the message.
10) The names of the generated profiles used for the experiment.
11) The age of the advertisement, by category listed on Facebook Marketplace. These include the options of under 24 hours, or the number of days if fewer than seven. We do not collect observations that were older than a week at the time of first inquiry.
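The eleven fields above could be stored as one record per listing inquiry. The sketch below is a hypothetical layout with invented field names (not the study's actual codebook), showing how a record might be checked for completeness before analysis:

```python
# Hypothetical record layout for one seller listing / buyer inquiry;
# field names are illustrative, not the study's actual codebook.
REQUIRED_FIELDS = {
    "product_type", "listed_price", "seller_location", "condition",
    "open_to_haggling", "seller_responded", "seller_gender",
    "seller_age_category", "seller_perceived_race", "exchange_type",
    "responded_within_24h", "buyer_profile_name", "ad_age_days",
}

VALID_CONDITIONS = {"used - good", "used - fair", "used - like new"}

def validate_listing(record):
    """Check a listing record is complete and within allowed values."""
    missing = REQUIRED_FIELDS - record.keys()
    if missing:
        raise ValueError(f"missing fields: {sorted(missing)}")
    if record["condition"] not in VALID_CONDITIONS:
        raise ValueError("unexpected condition category")
    if not 0 <= record["ad_age_days"] < 7:
        raise ValueError("ads older than a week are excluded")
    return True

example = {f: None for f in REQUIRED_FIELDS}
example.update(condition="used - good", ad_age_days=2)
print(validate_listing(example))  # True
```

The age check mirrors the exclusion rule in item 11: listings older than a week at first inquiry are not collected.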

We plan to use some of this additional information in our regression analysis to control for both individual sellers' characteristics and the characteristics of the advertisement. Our ability to implement these controls partially depends on the completeness of sellers' information, information on the listing itself, the location of the product listing, and the type of product in question. Consequently, the sample size for some specifications included in our primary outcomes may be smaller where covariates have missing values.

Name Choice Methodology information:
For first names, we used the 13 most popular boys' and girls' names given to babies in the 1990s (as reported by the Social Security Administration), after excluding any names (e.g., Emily) that appear in the list from Gaddis (2017) of distinctively Black or White names commonly used in audit studies.

For last names, we used the top 26 last names from the 2000 Census (from Crabtree et al. (2023), posted on GitHub), after excluding any names (e.g., Hernandez and Lee) that were more than 10% Hispanic/Latino, 10% Asian/Pacific Islander, or 10% American Indian/Native Alaskan. We also decided to exclude the last name "White" to prevent mixed signals from names.
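The surname filter described above can be sketched as follows. The names and percentages below are invented for illustration, not the actual Census figures used in the study:

```python
# Hypothetical sketch of the last-name filter: keep surnames under 10%
# for each listed group, and drop "White" to avoid mixed racial signals.
# Percentages are invented for illustration, not Census figures.
surnames = {
    "Smith":   {"hispanic": 2.0,  "asian": 0.5,  "native": 0.9},
    "Garcia":  {"hispanic": 92.0, "asian": 1.4,  "native": 0.6},
    "Lee":     {"hispanic": 2.0,  "asian": 40.0, "native": 1.0},
    "Johnson": {"hispanic": 2.3,  "asian": 0.4,  "native": 0.9},
    "White":   {"hispanic": 2.5,  "asian": 0.4,  "native": 1.0},
}

def eligible_surnames(names, threshold=10.0, drop=frozenset({"White"})):
    """Keep surnames below the threshold for every listed group,
    excluding names that could themselves signal a race."""
    return [n for n, pct in names.items()
            if n not in drop and all(v < threshold for v in pct.values())]

print(eligible_surnames(surnames))  # ['Smith', 'Johnson']
```

Here "Garcia" and "Lee" fail the 10% threshold and "White" is excluded by name, mirroring the selection rule in the text.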

References for the above:
Crabtree, C., Kim, J. Y., Gaddis, S. M., Holbein, J. B., Guage, C., & Marx, W. W. (2023). Validated names for experimental studies on race and ethnicity. Scientific Data, 10(1), 130.

Gaddis, S. M. (2017). How black are Lakisha and Jamal? Racial perceptions from names used in correspondence audit studies. Sociological Science, 4, 469-489.

The full information on the names used for the study, including the randomly assigned buyer names, will be provided in a subsequent document accompanying the final version of the experiment's results.

Education Methodology:

In constructing our education variable, we create a list of universities that approximate the 'average' public university education. For postsecondary school choice, we exclude schools with religious or Ivy League affiliations, and limit our selection to public universities and colleges near the median in terms of tuition and national rankings. A full description of the school name sample selection process will be available in supplemental files upon completion of the experiment. We derive our initial selection of university names from the U.S. News & World Report's 2017 National Universities Rankings, a comprehensive dataset of over 1,800 schools across the United States, which provides data on admissions criteria, cost of tuition and fees, enrollment numbers, and overall rankings.
Experimental Design Details
Not available
Randomization Method
We randomize which prospective buyer messages the seller first using matched pairs of buyers who respond to the same advertisement; buyers alternate who responds first across ads throughout the experiment. Each pair varies systematically along only one of the major categories. For instance, we match the profile of a young white man with a profile that is identical in education and all other characteristics except for race, name, and Facebook profile picture for a given set of buyer inquiries. We repeat this process systematically across sets of matched observations in multiple locations throughout the United States.
Our order of inquiries to sellers is based on the matches returned by Facebook Marketplace search queries; our choice of sellers to message is driven by search filters related to the categorical variables listed in the experimental design (for instance, limiting item condition to used, restricting listings to the maximum search distance applied for each city, and limiting the age of the listing). We plan to message the listings according to the order returned by the search query, given the filters applied. If necessary, we may exclude sellers whose perceived sex and race cannot be identified - for instance, where the profile picture includes multiple individuals of different races or genders, is missing, or does not include an adult person at all.
Randomization Unit
We plan to randomly assign education and names to profiles at the individual level. We also vary the timing of buyer inquiries: within each matched pair, we alternate which buyer inquires about a listing first, so that each individual in the pair goes first half the time. In other words, in a pair containing a black woman and a white woman whose profile information is near-identical aside from the racial characteristics listed, each fictitious buyer makes the first offer on a given set of sellers' listings half the time. Details of the planned method of randomization for buyer education and names are listed in the experimental design.
Consequently, each seller should be contacted first by only one of a matched pair of buyers for a given response observation, and we record the order in each case. In our final sample, each individual in the set of matched pairs of buyers should be the first to message the seller approximately half the time.
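The first-contact rotation can be sketched as a simple alternation within each matched pair. Profile and listing names below are hypothetical placeholders:

```python
import itertools

# Hedged sketch of the first-contact rotation: within each matched pair,
# the member who messages a given seller first alternates across listings,
# so each member goes first for half of the listings.

def assign_first_contact(pair, listings):
    """Return {listing_id: profile} alternating which member goes first."""
    order = itertools.cycle(pair)
    return {listing: next(order) for listing in listings}

pair = ("buyer_A", "buyer_B")  # e.g. a black/white matched pair
listings = [f"listing_{i}" for i in range(10)]
first = assign_first_contact(pair, listings)
counts = {p: sum(1 for v in first.values() if v == p) for p in pair}
print(counts)  # {'buyer_A': 5, 'buyer_B': 5}
```

Strict alternation is one way to guarantee the 50/50 split described above; randomizing the order within blocks of two listings would achieve the same balance.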
Was the treatment clustered?

Experiment Characteristics

Sample size: planned number of clusters
There are no planned clusters for this experiment.
Sample size: planned number of observations
The planned number of observations ranges from a minimum of 2,000 up to 10,000. The unit of observation is the seller's response to a buyer's inquiry.
Sample size (or number of clusters) by treatment arms
The planned number of observations ranges from a minimum of 2,000 up to 10,000.
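Given the planned sample bounds, a back-of-the-envelope minimum detectable effect for the difference in response rates can be sketched with the standard two-proportion normal approximation. The 50% baseline response rate, equal group split, 5% significance level, and 80% power below are assumptions for illustration, not figures from the registration:

```python
from math import sqrt
from statistics import NormalDist

# Hedged sketch: approximate minimum detectable effect (MDE) for the
# difference in seller response rates between two equal-sized buyer
# groups, via the normal approximation. Baseline rate p=0.5 is assumed.

def mde_two_proportions(n_per_group, p=0.5, alpha=0.05, power=0.8):
    z_a = NormalDist().inv_cdf(1 - alpha / 2)  # two-sided critical value
    z_b = NormalDist().inv_cdf(power)          # power quantile
    return (z_a + z_b) * sqrt(2 * p * (1 - p) / n_per_group)

# At the planned bounds (2,000 and 10,000 total observations, split evenly):
for total in (2000, 10000):
    print(total, round(mde_two_proportions(total // 2), 3))
# prints: 2000 0.063 / 10000 0.028
```

Under these assumptions, the design could detect differences in response rates of roughly 6 percentage points at the lower sample bound and about 3 points at the upper bound.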
Minimum detectable effect size for main outcomes (accounting for sample design and clustering)

Institutional Review Boards (IRBs)

IRB Name
SUU Institutional Review Board
IRB Approval Date
IRB Approval Number