Crowdsourcing Job Improvements from Employees

Last registered on August 22, 2022

Pre-Trial

Trial Information

General Information

Title
Crowdsourcing Job Improvements from Employees
RCT ID
AEARCTR-0009888
Initial registration date
August 10, 2022

First published
August 18, 2022, 2:32 PM EDT

Last updated
August 22, 2022, 10:40 PM EDT

Locations

Region

Primary Investigator

Affiliation
Harvard Business School

Other Primary Investigator(s)

PI Affiliation
PI Affiliation
PI Affiliation

Additional Trial Information

Status
In development
Start date
2022-05-19
End date
2023-12-07
Secondary IDs
Prior work
This trial does not extend or rely on any prior RCTs.
Abstract
Firms often solicit ideas from their employees through suggestion boxes, focus groups, surveys, and staff meetings. Famously, Toyota’s production system pulls ideas from its employees that are then implemented to improve quality. Despite the wide adoption of these programs and their publicized successes, fundamental questions remain about why firms adopt them and through what channels they improve production. One leading explanation is that soliciting ideas is a form of crowdsourcing that leverages the knowledge held by employees within the firm. Indeed, there are many anecdotes of simple improvements that were easily seen from the assembly line but unobservable from higher levels in the organization. Another leading explanation is that soliciting ideas increases engagement and morale; recent examples in other contexts show that engagement and productivity rise when firms give employees a voice. For these programs to be implemented successfully, it is important to know through which channels they work.

We are working with the management team of a pediatric allied health services business, where speech, physical, and occupational therapists go to patients' homes to deliver care. The project is designed to understand whether there are practice changes that can make these jobs better. It consists of crowdsourcing ideas for improving the providers’ jobs from the providers themselves and from the management team via an initial survey. In a second survey, we will have providers and managers rank the ideas that might be implemented, with the goal of comparing the rankings of therapists to those of management. We will test whether employee-generated ideas are more likely to be the top-ranked ones, whether the ability to submit ideas changes engagement and job satisfaction, and whether prompts to some employees that management is considering employee-generated ideas improve engagement and job satisfaction.
External Link(s)

Registration Citation

Citation
Sandvik, Jason et al. 2022. "Crowdsourcing Job Improvements from Employees." AEA RCT Registry. August 22. https://doi.org/10.1257/rct.9888-2.0
Experimental Details

Interventions

Intervention(s)
The intervention has two steps. In the first survey, all respondents were first asked about their sentiment toward the firm, their willingness to refer friends, and their job satisfaction. Half of the respondents were then asked to provide an idea that would improve their jobs. We collected these ideas, grouped them, and prepared a second survey to elicit feedback on them. In this second survey, we will ask both frontline employees and managers to rank the quality of the ideas. The second survey will randomize whether a respondent sees labels marking ideas as originating from frontline employees. The subset of labeled ideas will vary across treated respondents, allowing us to separate the effect of the labels from differences in idea quality. This design will allow us to test whether labeling ideas as employee-generated causes employees to feel valued, appreciated, or heard, which may alter sentiment toward the firm or perceptions of job or employer quality. We can also test whether the labels change the perception of idea quality, which may be captured by a change in the ranking of labeled ideas. Finally, for the unlabeled data, we can test whether employee-generated ideas (or the best employee-generated ideas) are perceived as superior to ideas that come from managers.
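As a rough illustration of how the label randomization in the second survey could work, here is a minimal sketch in Python. The idea pool, field names, and function below are hypothetical and not part of the registered design.

```python
import random

# Hypothetical pool of grouped ideas; the "source" field records whether an
# idea came from a frontline employee or from management and is never shown
# to respondents directly.
IDEAS = [
    {"id": 1, "text": "Simplify visit documentation", "source": "employee"},
    {"id": 2, "text": "Standardize scheduling windows", "source": "manager"},
    {"id": 3, "text": "Shorten weekly team meetings", "source": "employee"},
    {"id": 4, "text": "Batch visits by neighborhood", "source": "employee"},
]

def ideas_shown(respondent_id, treated, n_labeled=2, seed=42):
    """Return the ideas a respondent will rank. For treated respondents, a
    randomly chosen subset of the employee-generated ideas carries an
    'employee-generated' label; which ideas are labeled varies across
    respondents, separating the label effect from idea quality."""
    rng = random.Random(seed * 100_000 + respondent_id)
    shown = [dict(idea, labeled=False) for idea in IDEAS]
    if treated:
        employee_ideas = [i for i in shown if i["source"] == "employee"]
        for idea in rng.sample(employee_ideas, min(n_labeled, len(employee_ideas))):
            idea["labeled"] = True
    rng.shuffle(shown)  # also randomize display order
    return shown
```

Because the labeled subset changes from respondent to respondent, each idea's average rank when labeled can be compared with the same idea's rank when unlabeled.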
Intervention Start Date
2022-08-11
Intervention End Date
2022-08-31

Primary Outcomes

Primary Outcomes (end points)
Sentiment towards the firm; change in sentiment towards the firm; perceptions of being heard; idea quality; and productivity metrics.
Primary Outcomes (explanation)
Sentiment toward the firm is measured in two ways. The first is based on survey responses to the questions "You would likely encourage other therapists or employees to work for [X]" and "You enjoy working for [X]". The second is based on RA coding of free-text responses about whether and why respondents' answers differ from their projections of the choices management would make. Perceptions of being heard come from the survey questions "When therapists propose ideas or solutions that would improve their job, [X]'s corporate team is receptive" and "Most corporate team managers understand the day-to-day job of therapists." Idea quality is measured by indicators for whether respondents rank an idea as one of the top two in a list of 7 or 8 ideas, or as the worst or least helpful idea in the list. Productivity metrics are indicators for meeting or exceeding productivity goals in a given week; these goals take into account the expected number of home visits and on-time documentation. For each category of primary outcome, we will form an equally weighted index of the questions where respondents mark "Agree." For changes in sentiment, we will compute the index change for repeat survey respondents.

For analysis of text responses, we will have two research assistants independently code these measures. To get a text-based sentiment measure, we will ask each RA to code whether any part of the text response displays negative feelings, attitudes, or sentiment toward the firm/management, and whether any part displays positive feelings, attitudes, or sentiment toward the firm/management. Finally, each RA will code the net sentiment position as strongly positive, positive, neutral, negative, or strongly negative.
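As a concrete illustration of the equally weighted "Agree" index and the within-respondent change described above, here is a minimal sketch; the column names, wave coding, and data layout are hypothetical, not taken from the registration.

```python
import pandas as pd

# Hypothetical extract: one row per respondent per survey wave, one column per
# question, coded 1 if the respondent marked "Agree" and 0 otherwise.
SENTIMENT_QUESTIONS = ["encourage_others_to_work_here", "enjoy_working_here"]

def agree_index(df, questions):
    """Equally weighted index for a category: the mean of its 'Agree' indicators."""
    return df[questions].mean(axis=1)

def sentiment_change(df):
    """Change in the sentiment index between waves for repeat respondents."""
    df = df.assign(sentiment_index=agree_index(df, SENTIMENT_QUESTIONS))
    wide = df.pivot(index="respondent_id", columns="wave", values="sentiment_index")
    wide = wide.dropna()          # keep respondents who answered both waves
    return wide[2] - wide[1]      # wave 2 index minus wave 1 index
```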

Secondary Outcomes

Secondary Outcomes (end points)
Willingness to supply ideas in survey 1; retention.
Secondary Outcomes (explanation)
Willingness to supply ideas is measured by whether a respondent submits an idea when given the opportunity in survey 1. Retention is measured by whether an employee stays with the firm.

Experimental Design

Experimental Design
The experiment consists of two surveys with randomization into different treatments. In the first survey, treatment provides the ability to submit ideas for job improvement. In the second survey, respondents are asked to rank-order ideas for job improvement, to anticipate which ideas others in their job will rank highly, and to project which changes management will consider implementing. Treatment in the second survey consists of labeling the origin of some ideas as employee-generated, allowing us to test how this changes perceptions of the firm and of idea quality.
Experimental Design Details
Randomization Method
Done by computer in the survey, with sequential respondents allocated to treatment and control.
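A minimal sketch of one way such sequential allocation could work, assuming simple alternation by arrival order; the actual survey-software mechanism is not described in the registration.

```python
def allocate(arrival_order):
    """Alternate sequential respondents between treatment and control."""
    return "treatment" if arrival_order % 2 == 0 else "control"

# First four respondents to open the survey:
print([allocate(i) for i in range(4)])  # ['treatment', 'control', 'treatment', 'control']
```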
Randomization Unit
Individuals.
Was the treatment clustered?
No

Experiment Characteristics

Sample size: planned number of clusters
There are around 300 therapists at the firm. We expect about 180 to participate in the second survey.
Sample size: planned number of observations
Around 180 survey respondents. We will have secondary data linked to non-participants.
Sample size (or number of clusters) by treatment arms
About 90 participants per arm.
Minimum detectable effect size for main outcomes (accounting for sample design and clustering)
IRB

Institutional Review Boards (IRBs)

IRB Name
Harvard University
IRB Approval Date
2022-07-25
IRB Approval Number
22-063101
Analysis Plan

There is information in this trial unavailable to the public.

Post-Trial

Post Trial Information

Study Withdrawal

There is information in this trial unavailable to the public.

Intervention

Is the intervention completed?
No
Data Collection Complete
Data Publication

Is public data available?
No

Program Files

Reports, Papers & Other Materials

Relevant Paper(s)

Reports & Other Materials