
Blinded by the Person: Field Experimental Evidence from Idea Evaluation in a Multinational Company
Last registered on February 07, 2020

Pre-Trial

Trial Information
General Information
Title
Blinded by the Person: Field Experimental Evidence from Idea Evaluation in a Multinational Company
RCT ID
AEARCTR-0005439
Initial registration date
February 07, 2020
Last updated
February 07, 2020 1:34 PM EST
Location(s)
Primary Investigator
Affiliation
ESMT Berlin
Other Primary Investigator(s)
PI Affiliation
ESMT Berlin
PI Affiliation
Chalmers University of Technology
PI Affiliation
Stockholm School of Economics
Additional Trial Information
Status
Ongoing
Start date
2019-12-18
End date
2020-02-22
Secondary IDs
Abstract
To study idea evaluation and selection within firms, we conduct a field experiment in cooperation with a large multinational company in the information and communication technology (ICT) sector. The field experiment aims to provide causal evidence on how the relationship between idea proposers and idea evaluators affects evaluation and ultimately innovation outcomes. To do so, we manipulate the idea evaluation process where innovation managers rate ideas proposed by their colleagues. Specifically, we experimentally vary the amount of information that innovation managers receive on the idea proposer. There are two conditions: (1) non-blind evaluation, where innovation managers receive information on the name, organizational unit, and location of the idea proposer, and (2) blind evaluation, where innovation managers receive no information on the idea proposer. We intend to analyze how blind evaluation changes evaluation scores (H1 - Rating) and how the effect depends on order (H2 - Order). Moreover, we want to test how blind evaluation changes evaluation scores when the idea proposer is female (H3 - Gender), when the idea proposer comes from the same organizational unit as the evaluator (H4 - Subunit), and when the idea proposer is located in the same country as the evaluator (H5 - Location).
External Link(s)
Registration Citation
Citation
Dahlander, Linus et al. 2020. "Blinded by the Person: Field Experimental Evidence from Idea Evaluation in a Multinational Company." AEA RCT Registry. February 07. https://doi.org/10.1257/rct.5439-1.0.
Experimental Details
Interventions
Intervention(s)
Intervention Start Date
2019-12-19
Intervention End Date
2020-02-21
Primary Outcomes
Primary Outcomes (end points)
Dependent variables. We will employ Evaluation score as our main dependent variable to test all hypotheses. For each idea, innovation managers are asked to rate the overall quality of an idea on a 7-point scale (1=very low to 7=very high; “On a scale of 1 to 7 (1 lowest to 7 highest), please assess the overall quality of the idea.”).
In addition, we ask innovation managers to rate the ideas along three specific dimensions, which are regularly used at our partner firm:
1. Desirability (defined as “The degree to which a solution to a problem addresses someone’s needs.”),
2. Feasibility (defined as “The degree to which a solution is possible and suitable for [name of partner firm] to implement.”),
3. Viability (defined as “The degree to which an idea makes business sense for [name of partner firm].”).
Innovation managers will be asked to rate the ideas along those three dimensions on a 7-point scale (1=very low to 7=very high; “On a scale of 1 to 7 (1 lowest to 7 highest), please rate different aspects of this idea. You can scroll over the items to see a short definition.”).
Finally, we will ask innovation managers whether an idea should be promoted to the next round, which they can answer with yes or no (“Would you like to promote this idea to proceed to the next round?”). The answer to this question will be recorded in the dummy variable Next round.
We will use the variables Desirability, Feasibility, Viability and Next round as alternative dependent variables in stacked regressions, e.g. to investigate whether there are systematic differences on these dimensions with respect to H3 to H5 (see “Additional Analyses”).
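One way to prepare such stacked regressions is to reshape each evaluator-idea record from wide to long format, one row per outcome dimension. A minimal pure-Python sketch (the field names and data layout are illustrative assumptions, not the authors' code):

```python
# Reshape wide evaluator-idea records into long ("stacked") format:
# one row per (evaluation, outcome dimension), so a single regression
# can pool Desirability, Feasibility, Viability and Next round.
OUTCOMES = ("desirability", "feasibility", "viability", "next_round")

def stack(records):
    long_rows = []
    for rec in records:
        for outcome in OUTCOMES:
            long_rows.append({
                "evaluator_id": rec["evaluator_id"],
                "idea_id": rec["idea_id"],
                "blind": rec["blind"],       # treatment indicator (1 = blind)
                "outcome": outcome,          # which dimension this row scores
                "score": rec[outcome],
            })
    return long_rows

# Example: one evaluation becomes four stacked rows.
example = [{"evaluator_id": "e1", "idea_id": "i1", "blind": 1,
            "desirability": 5, "feasibility": 4, "viability": 6, "next_round": 1}]
stacked = stack(example)  # four rows, one per outcome dimension
```

In the stacked data, outcome-dimension dummies can then be interacted with the treatment indicator to test for systematic differences across dimensions.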
Primary Outcomes (explanation)
Secondary Outcomes
Secondary Outcomes (end points)
Secondary Outcomes (explanation)
Experimental Design
Experimental Design
In this field experiment, we focus on the evaluation of ideas within companies. Therefore, we recruited innovation managers in our partner company and asked them to evaluate real ideas that other employees of our partner firm had proposed. We designed an online evaluation interface with the survey tool Qualtrics, where we will also collect the data.
Experimental Design Details
Experimental Set-up

In this field experiment, we focus on the evaluation of ideas within companies. Therefore, we recruited innovation managers in our partner company and asked them to evaluate real ideas that other employees of our partner firm had proposed. We designed an online evaluation interface with the survey tool Qualtrics, where we will also collect the data. Qualtrics offers several ways to customize the functionality and appearance of a survey, which we used to adjust the survey flow to our needs and the visual appearance to the corporate design of our industrial partner. Innovation managers come from all over the world and can access the evaluation tool online through a personal link sent to their corporate email address. Each innovation manager is asked to evaluate 48 ideas.

The online evaluation consists of three parts: a short introductory welcome screen, the main part for idea evaluation (48 evaluation screens; one for each idea), and a short exit survey of 16 questions. The text of the welcome screen, an example of the evaluation screen, and the text of the exit questionnaire are given in the appendix. Ideas are retrieved from the company’s idea management system and their content (idea title and description) is not changed. Each idea is presented on an individual evaluation screen that contains the following:
- A short request to evaluate the idea (“Please evaluate the following idea.”)
- Information on the idea proposer, depending on the treatment condition (see “Treatment Conditions”)
- The idea title
- The idea description (between 40 and 999 characters)
- Five questions to rate the idea (see “Variables and Measurement”)
- A text field for open comments

The innovation managers who evaluate the ideas are not aware that they are experimental subjects. Moreover, idea proposers are not aware that the ideas they submitted to the company’s idea management system are used in our field experiment.
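The five rating questions and the comment field described above yield one observation per evaluator-idea pair. A minimal sketch of such a record with basic validation (all field names are illustrative placeholders, not the study's actual variable names):

```python
from dataclasses import dataclass
from typing import Optional

@dataclass
class Evaluation:
    """One evaluator-idea observation (field names are illustrative)."""
    evaluator_id: str
    idea_id: str
    condition: str            # "blind" or "non-blind"
    quality: int              # overall quality, 7-point scale
    desirability: int         # 7-point scale
    feasibility: int          # 7-point scale
    viability: int            # 7-point scale
    next_round: bool          # promote the idea to the next round?
    comment: Optional[str] = None

    def __post_init__(self):
        # All four ratings use the same 1-7 scale described above.
        for score in (self.quality, self.desirability,
                      self.feasibility, self.viability):
            if not 1 <= score <= 7:
                raise ValueError("ratings must be on the 7-point scale")
        if self.condition not in ("blind", "non-blind"):
            raise ValueError("unknown treatment condition")
```

With 40 evaluators and 48 ideas each, a list of such records would hold the planned 1,920 observations.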
Finally, our contact persons in the partner firm are also not aware of the experimental manipulation. In our communication, we took great care not to reveal our research questions (the effect of blind evaluation and the effect of the relationship between idea proposer and evaluator), our research approach (conducting a field experiment), or the experimental manipulation (blinding the idea proposer). Instead, we communicated that we were conducting a joint study (company and academic researchers) to unlock the intrapreneurial spirit and improve our industrial partner’s idea evaluation.

Treatment Conditions

Innovation managers evaluate ideas under two conditions:
(1) Blind evaluation: The innovation manager receives no information about the idea proposer. Before the idea title and the idea description, no information on the idea proposer is provided; instead, “N/A” is displayed.
(2) Non-blind evaluation: The innovation manager receives information about the idea proposer. Before the idea title and the idea description, the idea proposer’s name (first name and last name), organizational unit (abbreviation and full name), and location (city and country) are displayed.
We use a within-subject design, implying that each innovation manager evaluates ideas under both conditions (blind and non-blind). Ideas are randomly assigned to one of the conditions with the help of the built-in randomization function in Qualtrics.

Idea Selection and Randomization

Each innovation manager evaluates 48 ideas, which are randomly picked from a larger pool of 412 ideas. The ideas were submitted by employees through a dedicated idea management platform used by our industrial partner. In total, we retrieved 570 ideas that were submitted to the idea management system between February 6, 2019 and October 7, 2019. We eliminated 15 duplicate ideas and 5 ideas with blank idea descriptions.
For our experimental manipulation (blinding) to work credibly, we need early-stage ideas that have not yet been processed by our partner firm and ideally have not been seen by the innovation managers before. To arrive at an appropriate set of ideas, we took the following steps:
(1) We restricted the set to ideas from the past six months (from April 8, 2019 to October 7, 2019).
(2) We excluded ideas that had already progressed in the internal implementation process, e.g. by having a coach assigned to develop the idea further (6 ideas).
(3) We excluded ideas that contained links or other additional material, e.g. a text document or presentation slides (42 ideas).
(4) We excluded ideas with missing information on the proposer’s gender, organizational unit, or location.
(5) We excluded ideas that were proposed by one of the innovation managers who evaluate the ideas.

To ensure that each innovation manager evaluates ideas from proposers with diverse backgrounds, we rely on stratified random assignment of the ideas. Each idea is assigned to one of 20 strata based on the idea proposer’s gender (2 strata) and organizational unit (10 strata). Note that we have not stratified based on the idea proposer’s location. We then randomly pick ideas from each stratum with a built-in function in Qualtrics. The number of ideas picked from each stratum is roughly proportional to the stratum size, although we slightly oversample small strata. For example, we oversample ideas proposed by women from organizational units from which only few ideas have been submitted (because a majority of the ideas are proposed by male employees).
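The stratified draw and the within-subject treatment assignment described above can be sketched as follows. This is a simplified stand-in for Qualtrics' built-in randomization, not the study's actual procedure; the exact quota rule, the minimum-per-stratum floor, and the even 50/50 split between conditions are all assumptions:

```python
import random

def draw_ideas(strata, n_total=48, floor=1):
    """Pick ideas roughly proportionally to stratum size, oversampling
    small strata via a minimum quota per stratum (quota rule assumed).
    strata: dict mapping stratum key -> list of idea ids."""
    pool_size = sum(len(ideas) for ideas in strata.values())
    picked = []
    for key, ideas in strata.items():
        quota = max(floor, round(n_total * len(ideas) / pool_size))
        quota = min(quota, len(ideas))          # cannot pick more than exist
        picked.extend(random.sample(ideas, quota))
    random.shuffle(picked)
    return picked[:n_total]

def assign_conditions(ideas):
    """Within-subject design: split one evaluator's ideas between
    blind and non-blind (an even split is an assumption)."""
    shuffled = ideas[:]
    random.shuffle(shuffled)
    half = len(shuffled) // 2
    return {idea: ("blind" if i < half else "non-blind")
            for i, idea in enumerate(shuffled)}
```

Running `assign_conditions` separately for each evaluator's drawn ideas gives every innovation manager a mix of blind and non-blind evaluations, as the within-subject design requires.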
Randomization Method
Randomization done by Qualtrics
Randomization Unit
Individual-idea pair
Was the treatment clustered?
No
Experiment Characteristics
Sample size: planned number of clusters
Each innovation manager evaluates 48 ideas
Sample size: planned number of observations
40 innovation managers * 48 ideas = 1920 observations at the evaluator-idea level
Sample size (or number of clusters) by treatment arms
See the full pre-analysis plan
Minimum detectable effect size for main outcomes (accounting for sample design and clustering)
See the full pre-analysis plan
IRB
INSTITUTIONAL REVIEW BOARDS (IRBs)
IRB Name
IRB Approval Date
IRB Approval Number
Analysis Plan

There are documents in this trial unavailable to the public.
Post-Trial
Post Trial Information
Study Withdrawal
Intervention
Is the intervention completed?
No
Is data collection complete?
Data Publication
Data Publication
Is public data available?
No
Program Files
Program Files
Reports, Papers & Other Materials
Relevant Paper(s)
REPORTS & OTHER MATERIALS