
Understanding Collective Intelligence

Last registered on August 09, 2019

Pre-Trial

Trial Information

General Information

Title
Understanding Collective Intelligence
RCT ID
AEARCTR-0002896
Initial registration date
August 09, 2019


First published
August 09, 2019, 2:40 PM EDT


Locations

Region

Primary Investigator

Affiliation
Harvard University

Other Primary Investigator(s)

PI Affiliation
Harvard University

Additional Trial Information

Status
In development
Start date
2019-08-19
End date
2020-05-31
Secondary IDs
Abstract
There is widespread recognition in the economics literature that ‘non-cognitive’ skills, including social skills and leadership skills, are strongly associated with labor-market outcomes (e.g. Heckman & Kautz, 2012). This research is strengthened by evidence showing that employers explicitly demand these skills (NACE, 2015), and that the value of social skills is growing (Deming, 2015).

The importance of teamwork is underscored by two widely-cited laboratory studies (Engel, Woolley, Jing, Chabris, & Malone, 2014; Woolley, Chabris, Pentland, Hashmi, & Malone, 2010). This research established that groups have a measurable ‘collective intelligence’ that is only weakly associated with the individual ability of team members. In contrast, collective intelligence is strongly associated with the Reading the Mind in the Eyes Test (RMET), an instrument that assesses how adept people are at recognizing the emotions of others (Baron‐Cohen, Wheelwright, Hill, Raste, & Plumb, 2001). These associations suggest that how well a team performs may be a function of whether its members are ‘Teamplayers’, i.e. people who work well with others.

While these two strands of research both suggest that individuals’ interpersonal skills are important to labor-market outcomes, scholars have yet to establish a clear causal link between measures of interpersonal skill and team performance. Such a causal link is most easily demonstrated in a lab setting. Our experiment aims to build on the findings of Woolley et al. (2010) and Engel et al. (2014) by randomly assigning individuals to multiple groups. In doing so, we will answer two core questions. First, do some individuals have substantial impacts (positive or negative) on a group’s collective intelligence? Second, if so, are these impacts associated with existing measures of social skill, such as the RMET?
External Link(s)

Registration Citation

Citation
Deming, David and Ben Weidmann. 2019. "Understanding Collective Intelligence." AEA RCT Registry. August 09. https://doi.org/10.1257/rct.2896-1.0
Former Citation
Deming, David and Ben Weidmann. 2019. "Understanding Collective Intelligence." AEA RCT Registry. August 09. https://www.socialscienceregistry.org/trials/2896/history/51527
Experimental Details

Interventions

Intervention(s)
Our study explores the performance of teams. Why are some groups ‘more than the sum of their parts’ while others fail to live up to expectations? Is this predictable from the characteristics of the individuals in each group? We study these questions in the context of group problem solving, paying particular attention to the causal impact that individuals may have on groups. As such, we will have as many ‘treatments’ as we have individuals who complete the study.
Intervention Start Date
2019-08-19
Intervention End Date
2020-05-31

Primary Outcomes

Primary Outcomes (end points)
We define a new measure, the Teamplayer Index, which we estimate for each participant. For more details, see the attached Statistical Analysis Plan.
Primary Outcomes (explanation)
The Teamplayer Index is a novel measure of individual performance in the context of collective problem solving. There is a detailed description of how the index is conceptualized and calculated in the attached Statistical Analysis Plan. In brief: the Teamplayer Index for participant i is the average performance of the groups that i was allocated to, conditional on the individual skill of the members of each of those groups. Group performance is assessed on three tasks: a memory task, a numerical reasoning task, and a spatial reasoning task. These tasks are also described in the Statistical Analysis Plan.
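In schematic notation (ours, not the Statistical Analysis Plan's): let Y_g denote the performance of group g, S_g the individual skills of g's members, and G_i the set of groups to which participant i was allocated. The prose definition above then corresponds to

\[
\mathrm{TPI}_i \;=\; \frac{1}{\lvert G_i \rvert} \sum_{g \in G_i} \Big( Y_g - \widehat{E}\big[\, Y_g \mid S_g \,\big] \Big),
\]

i.e. the average performance of i's groups, net of what individual skill alone would predict. The exact estimator, including how the conditional expectation is fit, is specified in the attached Statistical Analysis Plan.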

Secondary Outcomes

Secondary Outcomes (end points)
We explore a group-level outcome: “group efficiency”. This can be conceptualized as the degree to which a group over- or under-performed, controlling for the individual skill level of its members. See the Statistical Analysis Plan for details.
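In the same schematic notation introduced under the primary outcome (again ours, not the Statistical Analysis Plan's), group efficiency is the residual

\[
\text{efficiency}_g \;=\; Y_g - \widehat{E}\big[\, Y_g \mid S_g \,\big],
\]

with positive values indicating that group g over-performed relative to its members' individual skill, and negative values the reverse.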
Secondary Outcomes (explanation)

Experimental Design

Experimental Design
Participants are asked to complete six batteries of tasks, labelled A, B1, B2, B3, B4, and C. These batteries are described in detail in the Statistical Analysis Plan; we summarize them briefly here.

First, participants work as individuals to complete a set of online tasks (Battery A). Battery A consists of:
- the Big Five inventory;
- Reading the Mind in the Eyes;
- Raven's Advanced Progressive Matrices;
- three short tests of short-term memory (images, words, stories);
- a numerical optimization task (developed for the purpose of this study).

After completing the individual assessments, participants come into the Harvard Decision Science Lab (HDSL) at least twice. During these visits, they are randomly allocated into groups of three people. Each participant will be a member of 4 groups. The first three groups complete batteries B1, B2, and B3; the final group completes B4 and C. Batteries B1, B2, B3, and B4 each consist of the following:
- a CFIT form (or a Raven's Advanced Progressive Matrices form);
- a Group Memory task (developed for this study);
- the Group Optimization task (developed for this study).

Finally, participants complete a validation task in their final group (Battery C, a cryptography task).

Overall, we anticipate that participants will spend 3-4 hours on our experiment.
Experimental Design Details
Randomization Method
Participants are asked to come to the lab in groups. To run a study ‘session’, there must be at least n=6 participants; ideally, there will be n=9 or n=12 participants at a time.

Allocation to groups is random, using a blocked design. The study managers (either Ben Weidmann or Research Assistants at the HDSL) will apply the following principles:
- Avoid allocations in which two people are grouped together more than once. In sessions where n=6 this is not possible; repeated pairings will be adjusted for in the analysis.
- Avoid allocations in which people work with someone they know.
- Attempt to balance groups so that the overall level (and variance) of skill is as similar as possible across groups.

To achieve these aims, we apply the following procedures (a schematic code sketch of the allocation appears after this list):
- The n participants in each session are divided into three equal blocks of size n/3, based on their overall score in Battery A [if n is not a multiple of three, excess participants are paid for their time and asked to return for another session]. Blocks can loosely be thought of as ‘high’, ‘medium’, and ‘low’ in terms of individual skill.
- If people arrive at the lab with friends, they are automatically placed in the same block.
- There are three bags, one for each block. Each bag has n/3 balls, each marked with a letter; letters run consecutively across bags. For example, in a session of 9 people, the ‘high’ bag will have balls marked A, B, C; the ‘medium’ bag will have D, E, F; and the ‘low’ bag will have G, H, I.
- Each participant draws a ball. This ball defines their two groups for that session. To return to the example of a 9-person session, allocations would be as follows: first set of groups {ADG, BEH, CFI}; second set of groups {AEI, BFG, CDH}.
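As a concrete illustration of the procedure above, the sketch below implements the ball-draw allocation in Python. It is illustrative only: the function and variable names (allocate_session, scores) are ours, it assumes n is a multiple of three, and it realizes the no-repeated-pairs requirement by rotating the ‘medium’ and ‘low’ bags by one and two positions in the second round, which reproduces the 9-person example above.

```python
import random
import string

def allocate_session(scores):
    """Blocked random allocation for one lab session (schematic sketch).

    scores: dict mapping participant id -> overall Battery A score.
    Returns (round1, round2): two lists of three-person groups.
    """
    n = len(scores)
    assert n % 3 == 0 and n >= 6, "sessions need >= 6 people, in multiples of 3"
    k = n // 3  # bag size = participants per skill block

    # Rank by Battery A score and split into 'high'/'medium'/'low' blocks.
    ranked = sorted(scores, key=scores.get, reverse=True)
    blocks = [ranked[b * k:(b + 1) * k] for b in range(3)]

    # Each block's bag holds k consecutive letters
    # (n=9: high = {A,B,C}, medium = {D,E,F}, low = {G,H,I}).
    bags = [list(string.ascii_uppercase[b * k:(b + 1) * k]) for b in range(3)]

    # Each participant draws a ball (letter) at random from their block's bag.
    label = {}
    for block, bag in zip(blocks, bags):
        label.update(zip(block, random.sample(bag, k)))
    person = {letter: p for p, letter in label.items()}

    # The letters fix both rounds of groups. Round 1 groups the j-th letter
    # of each bag ({A,D,G}, {B,E,H}, ...); round 2 rotates the medium and low
    # bags by 1 and 2 positions ({A,E,I}, {B,F,G}, ...), so no pair of
    # participants is repeated whenever k >= 3 (i.e. n >= 9).
    round1 = [[person[bags[b][j]] for b in range(3)] for j in range(k)]
    round2 = [[person[bags[b][(j + b) % k]] for b in range(3)] for j in range(k)]
    return round1, round2
```

For n=6 (k=2) the rotation necessarily repeats one pairing per group, consistent with the caveat above that repeated pairings in 6-person sessions are adjusted for in the analysis.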
Randomization Unit
We randomize individuals to groups.
Was the treatment clustered?
No

Experiment Characteristics

Sample size: planned number of clusters
Given the novelty of our design, coupled with the fact that we are exploring new measurement instruments, our power calculations are highly uncertain. Moreover, recruitment may be challenging given the demands of our design, in which people are required to come to the lab twice and sessions require a minimum of 6 people to proceed.

As such, we have the following stopping rules. We will proceed until we meet one of the following conditions:
- We reach 450 participants [and exhaust our planned budget]; OR
- We exceed our target minimum of 207 participants AND achieve a standard error for σ_β of 0.01 or less (see the Statistical Analysis Plan); OR
- We exhaust the HDSL participant pool and are no longer able to schedule new groups.
Sample size: planned number of observations
Our target is 207 individuals [which corresponds to 276 groups: each participant is a member of 4 groups of 3, and 207 × 4 / 3 = 276].
Sample size (or number of clusters) by treatment arms
Our target is 207 individuals
Minimum detectable effect size for main outcomes (accounting for sample design and clustering)
IRB

Institutional Review Boards (IRBs)

IRB Name
Harvard Human Research Protection
IRB Approval Date
2017-09-28
IRB Approval Number
IRB17-1294
Analysis Plan

There is information in this trial unavailable to the public.

Post-Trial

Post Trial Information

Study Withdrawal

There is information in this trial unavailable to the public.

Intervention

Is the intervention completed?
No
Data Collection Complete
Data Publication

Data Publication

Is public data available?
No

Program Files

Program Files
Reports, Papers & Other Materials

Relevant Paper(s)

Reports & Other Materials