Eliciting thresholds for collective behavior

Last registered on March 30, 2023


Trial Information

General Information

Eliciting thresholds for collective behavior
Initial registration date
March 03, 2023


First published
March 30, 2023, 2:54 PM EDT




Primary Investigator

NYU Abu Dhabi

Other Primary Investigator(s)

PI Affiliation
University of Texas at Dallas
PI Affiliation
NYU Abu Dhabi

Additional Trial Information

In development
Start date
End date
Secondary IDs
Prior work
This trial does not extend or rely on any prior RCTs.
We run an incentivized, representative survey in the US in which we elicit individual thresholds for switching from one decision to another. Eliciting covariates on attitudes and personality traits allows us to test and estimate our model for predicting individual thresholds.
External Link(s)

Registration Citation

Janas, Moritz, Nikos Nikiforakis and Simon Siegenthaler. 2023. "Eliciting thresholds for collective behavior." AEA RCT Registry. March 30. https://doi.org/10.1257/rct.10895-1.0
Experimental Details


Intervention Start Date
Intervention End Date

Primary Outcomes

Primary Outcomes (end points)
The thresholds participants report. Each participant enters two thresholds: one for a broad reference group and one for a narrower reference group.
Primary Outcomes (explanation)
We do not need to construct this variable; we elicit it directly.

Secondary Outcomes

Secondary Outcomes (end points)
We elicit many covariates. The hypothesis section of this pre-registration document lists them one by one.
Secondary Outcomes (explanation)

Experimental Design

Experimental Design
See Experimental Design (Hidden) for details on the experimental design.
Experimental Design Details
See the uploaded survey for exact wordings on all the questions.

The basis of this project is an incentivized survey. We hired the survey company IPSOS to carry out the data collection covering 4,000 US residents. This sample is divided into 500 participants from each of the eight gender-race combinations we study: Asian Male, Asian Female, Black Male, Black Female, Hispanic Male, Hispanic Female, White Male, White Female. Participants are recruited as representative samples of the respective groups with respect to age, gender, US region and education.

We attach the full survey as a separate document of this pre-registration. The centerpiece of the study is a pair of threshold questions, which elicit people's willingness to support or oppose affirmative action conditional on other participants' behavior. These questions are incentivized through donations we make on behalf of the participants. Eighty percent of participants, those in treatment Public, have their email address posted on a public website if they choose to determine themselves to which organization the donation is made (i.e., if they change it from a default, which we vary exogenously). The remaining twenty percent of participants are assigned to treatment Private, where all choices remain anonymous. Further, we randomly distribute participants in equal shares across the following three treatments: Gender, Race, and Similar. In all treatments, the participants first condition their donation choice on a representative sample of US residents; a second question then lets them condition their donation choice on a group that is more similar to them. In treatment Gender, the other group members have the same gender as the participant (i.e., a representative sample of US residents within gender). In treatment Race, all group members have the same race/ethnicity as the participant. In treatment Similar, the other group members are US residents who are similar to the participant with respect to gender, race/ethnicity, age group, education, and residence area.
Randomization Method
The computer randomizes the participants into the different treatments.
Randomization Unit
Was the treatment clustered?

Experiment Characteristics

Sample size: planned number of clusters
4,000 individuals in the U.S. The sample is stratified: 500 observations for each of the 8 groups (male, female) × (Asian, Black, Hispanic, White).
Sample size: planned number of observations
Sample size (or number of clusters) by treatment arms
500 individuals in each of the eight groups: Male Asian; Female Asian; Male Black; Female Black; Male Hispanic; Female Hispanic; Male White; Female White.
Individuals in each group are representative of the corresponding U.S. population on the following dimensions: age, U.S. region, and education. 50% of participants are in the DefaultPro treatment and the other 50% in the DefaultAnti treatment; 80% are in the Public treatment and 20% in the Private treatment; one third of participants is in each of the three reference-group treatments.
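The stratification and cross-cutting treatment shares above can be sketched as a small simulation. This is a hypothetical illustration only, not the registry's or survey company's actual assignment code; the function and variable names are placeholders, while the treatment labels are taken from this registration. Shares for the 80/20 and one-third splits hold in expectation rather than exactly.

```python
import random

# Strata described in the registration: 2 genders x 4 race/ethnicity
# groups, 500 participants per stratum (4,000 total).
GENDERS = ["Male", "Female"]
RACES = ["Asian", "Black", "Hispanic", "White"]
PER_STRATUM = 500

def assign_treatments(seed=0):
    """Sketch of the cross-cutting randomization: each participant is
    independently assigned a default (50/50), a visibility treatment
    (80/20 Public/Private), and a reference-group treatment (1/3 each)."""
    rng = random.Random(seed)  # fixed seed for reproducibility of the sketch
    participants = []
    for gender in GENDERS:
        for race in RACES:
            for _ in range(PER_STRATUM):
                participants.append({
                    "gender": gender,
                    "race": race,
                    "default": rng.choice(["DefaultPro", "DefaultAnti"]),
                    "visibility": "Public" if rng.random() < 0.8 else "Private",
                    "reference_group": rng.choice(["Gender", "Race", "Similar"]),
                })
    return participants

sample = assign_treatments()
print(len(sample))  # 4,000 participants in total
```

The three treatment dimensions are drawn independently within each stratum, matching the registration's description of orthogonal, computer-randomized assignment.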
Minimum detectable effect size for main outcomes (accounting for sample design and clustering)
Supporting Documents and Materials

There is information in this trial unavailable to the public.

Institutional Review Boards (IRBs)

IRB Name
NYUAD Institutional Review Board
IRB Approval Date
IRB Approval Number
Analysis Plan

There is information in this trial unavailable to the public.


Post Trial Information

Study Withdrawal

There is information in this trial unavailable to the public.


Is the intervention completed?
Data Collection Complete
Data Publication

Data Publication

Is public data available?

Program Files

Program Files
Reports, Papers & Other Materials

Relevant Paper(s)

Reports & Other Materials