Understanding Employee Sorting between Startups
Last registered on August 30, 2019


Trial Information
General Information
Understanding Employee Sorting between Startups
Initial registration date
August 24, 2019
Last updated
August 30, 2019 10:25 AM EDT
Primary Investigator
Other Primary Investigator(s)
PI Affiliation
Rotman School of Management
PI Affiliation
Rotman School of Management
Additional Trial Information
Ongoing
Start date
End date
Secondary IDs
This project examines why workers sort between different startups, focusing on the role of expert information about different aspects of firm quality. Unlike for established firms, where prospective workers have various sources of information (e.g., best-places-to-work rankings or online reviews), workers may have relatively little information about individual startups. In addition, workers may have a hard time evaluating a startup's technology or business model, either because they lack highly specialized technical training or because of firm secrecy. In this project, we examine how expert opinions about the business and science quality of startups affect worker demand for working at particular startups. Beyond the overall treatment effects of expert information, we are interested in the mechanisms behind those effects. Our partner is a North American science-based entrepreneurship program (SEP).
External Link(s)
Registration Citation
Bryan, Kevin, Mitchell Hoffman and Amir Sariri. 2019. "Understanding Employee Sorting between Startups." AEA RCT Registry. August 30. https://doi.org/10.1257/rct.4242-1.0.
Experimental Details
Intervention Start Date
Intervention End Date
Primary Outcomes
Primary Outcomes (end points)
Our main outcome is interest in working for particular firms. This will be measured on a 1-5 scale of interest (unincentivized) and by ranking companies by level of interest (incentivized).
Primary Outcomes (explanation)

Secondary Outcomes
Secondary Outcomes (end points)
We will have a number of secondary outcomes. First, we will examine perceptions of the science and business quality of the startups, each rated on a 1-5 scale. These are unincentivized. Second, we will examine perceived probabilities of firm success, including the probability of raising money at a $1 million valuation and the probability of achieving either an initial public offering (IPO) or acquisition at a price of more than $50 million. For the MBA RCT, we will also obtain and examine the perceived probability of a firm being selected to graduate from the science-based entrepreneurship program (SEP). The believed firm success probabilities are incentivized using a quadratic scoring rule.
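The registration does not specify the exact payout parameters, but a quadratic (Brier-style) scoring rule for a binary event generally takes the form below; the `max_payout` scaling is a hypothetical illustration, not the trial's actual incentive scheme. The key property is that a respondent maximizes expected payout by reporting their true subjective probability.

```python
def quadratic_score(p, outcome, max_payout=1.0):
    """Payout for a reported probability p of a binary event (e.g., IPO/acquisition).

    Quadratic scoring rule: payout = max_payout * (1 - (y - p)^2), where
    y is 1 if the event occurs and 0 otherwise. Expected payout is
    maximized when p equals the respondent's true belief, making the
    elicitation incentive-compatible.
    """
    if not 0.0 <= p <= 1.0:
        raise ValueError("p must be a probability in [0, 1]")
    y = 1.0 if outcome else 0.0
    return max_payout * (1.0 - (y - p) ** 2)
```

For example, reporting p = 0.5 guarantees a payout of 0.75 regardless of the outcome, while reporting p = 1.0 pays the full amount only if the event occurs.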
Secondary Outcomes (explanation)
Experimental Design
Experimental Design
The main idea of the experimental design is to take workers who are potentially interested in working for startups and then shock them with information about the science quality and/or business quality of firms. The experiment is being carried out by a leading science-based entrepreneurship program (SEP) that focuses on scalable pre-seed startups. The experiment uses a 2x2 design, where the arms are expert information about science quality and expert opinion about business quality. Business and science experts rate science-based startups in terms of business quality or science quality. Simple information from their ratings (e.g., above average or not) is then communicated to job candidates, depending on the treatment group. That is, workers are assigned to one of four groups: Control, Business Quality Only, Science Quality Only, or Both.

Experimental Design Details
The first round of the RCT takes MBA students who are applying to an entrepreneurship course. The second round of the RCT takes business program alumni. Alumni are invited to participate in a job board focused on business positions in startups. Some alumni observe expert opinion information about the different startups, whereas others do not.
Randomization Method
Randomization done in office by a computer.
Randomization Unit
The unit of randomization is an individual worker.
Was the treatment clustered?
Experiment Characteristics
Sample size: planned number of clusters
We expect that around 500 individual workers will participate. The sample size is limited by the size of the populations of MBAs and business program alumni, and by who decides to participate.
Sample size: planned number of observations
Note that the 500 workers will evaluate multiple firms.
Sample size (or number of clusters) by treatment arms
Each of the four cells of the 2x2 design will contain about one quarter of the total sample.
Minimum detectable effect size for main outcomes (accounting for sample design and clustering)
IRB Name
University of Toronto
IRB Approval Date
IRB Approval Number
IRB Name
University of Toronto
IRB Approval Date
IRB Approval Number
Analysis Plan

There are documents in this trial that are unavailable to the public; access may be requested through the registry.
Post Trial Information
Study Withdrawal
Is the intervention completed?
Is data collection complete?
Data Publication
Is public data available?
Program Files
Reports, Papers & Other Materials
Relevant Paper(s)