The experimental design consists of four parts.
Part I: Workers are randomly assigned to the luck, routine work, or complex work treatment and are then randomly allocated to groups of five, with all workers in a group assigned to the same treatment. A randomly chosen worker (in the luck treatment) or the best performer (in the routine and complex work treatments) is allocated an initial bonus of $5. This amount is, however, only paid after the decisions in parts II and III.
Part II: Workers are provided with the payoff information for their group (i.e., which group member was allocated the $5). They then have the option to redistribute the $5 allocated to the top earner so that it is equally distributed across the other group members. There is a 50% chance that the decision of one of the five workers will be implemented; since that worker is chosen at random, each individual worker’s decision is implemented with a 10% chance.
Part III: Spectators each make three allocation decisions, one for each treatment. The order in which they make decisions across the three treatments is randomised. For the routine and complex work decisions, spectators are asked to participate in the respective task themselves for one minute without being informed of their own performance. This stage aims to give spectators a better sense of the complexity of each task and allows us to compare spectator and worker decisions while holding task experience constant. For each treatment, spectators are then provided with the payoff information for a group and have the option to redistribute the $5 allocated to the top earner so that it is equally distributed across the other group members. There are three spectators for each group and a 50% chance that the decision of one of the three spectators will be implemented. As that spectator is chosen at random, each individual spectator’s decision is implemented with an approximately 17% chance. Figure 1 illustrates a potential scenario spectators may face for a group in the routine work treatment condition. Spectators receive no information on the preferences expressed by the workers in part II.
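The implementation probabilities in parts II and III follow directly from the random-implementation design: one decision counts with probability 0.5, and the deciding individual is drawn uniformly from the group. A minimal sketch (the function name is ours, not from the design):

```python
# One decision is implemented with probability p_any_implemented, and the
# deciding individual is drawn uniformly at random from n_deciders.
def implementation_prob(p_any_implemented: float, n_deciders: int) -> float:
    """Chance that a specific individual's decision is the one implemented."""
    return p_any_implemented / n_deciders

worker_prob = implementation_prob(0.5, 5)     # Part II: 0.5 / 5 = 10%
spectator_prob = implementation_prob(0.5, 3)  # Part III: 0.5 / 3 ≈ 16.7%, reported as 17%
```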
Part IV: To determine the underlying mechanism for potential differences in redistributive choices across treatments, we elicit spectator as well as worker beliefs at the end of the experiment. Apart from demographic, distributive, and political questions, we specifically include three incentivised belief elicitations that allow us to directly test three competing mechanisms:
Perceived Cognitive Cost: Subjects are asked whether they would be willing to perform the task (again) within their treatment condition and, if so, the minimum payment they would require for their participation. To discipline subjects’ stated minimum payments, we inform them that the 10 subjects with the lowest stated amounts will be selected to actually complete the task at their proposed rate. Although this introduces a strategic element, it is held constant across treatments. A higher average required minimum payment in the complex work treatment than in the routine work treatment would suggest that the perceived cognitive cost of the complex problems task is higher. Conversely, a lower required minimum payment in the complex work treatment would suggest that the cognitive cost is perceived to be lower, for example because the intrinsic motivation to perform the complex problems task is perceived to be higher. This belief elicitation allows us to distinguish between these possibilities.
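The selection rule for the willingness-to-work elicitation can be sketched as follows; subject identifiers and amounts are hypothetical, and tie-breaking is left unspecified as in the design:

```python
# Illustrative sketch of the selection rule: the k subjects stating the
# lowest minimum payments are selected to complete the task at their
# proposed rate (k = 10 in the experiment).
def select_lowest_bidders(min_payments: dict, k: int = 10) -> list:
    """min_payments maps subject id -> stated minimum payment in dollars;
    returns the ids of the k subjects with the lowest stated amounts."""
    return sorted(min_payments, key=min_payments.get)[:k]

# Hypothetical example with k = 2 for brevity:
bids = {"s1": 4.00, "s2": 1.50, "s3": 2.25, "s4": 6.00}
selected = select_lowest_bidders(bids, k=2)  # ["s2", "s3"]
```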
Perceived Agency: Subjects (within treatment conditions) are asked to provide incentivised estimates of the average performance of workers in two previous studies that differed (only) in the size of the prize given to the best performer. If subjects believe the difference in performance across prize sizes to be larger for the slider task than for the complex problems task, this would suggest that they perceive workers to have more agency over their effort level in the slider task than in the complex problems task. While the absolute estimates will, of course, be affected by subjects’ own performance in the task, the difference between the two prize scenarios should still capture their perception of agency. If subjects are within +/-5 percentage points of the correct answer for each estimate, they receive an additional payment of 20 cents.
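The agency comparison reduces to a within-task gap between the two prize scenarios, compared across tasks. A hedged sketch, with hypothetical estimates and variable names of our own choosing:

```python
# Within each task, take the gap between estimated average performance
# under the large and the small prize; a larger gap indicates more
# perceived agency over effort in that task.
def prize_gap(est_large_prize: float, est_small_prize: float) -> float:
    return est_large_prize - est_small_prize

slider_gap = prize_gap(28.0, 22.0)   # hypothetical estimates for the slider task
complex_gap = prize_gap(9.0, 8.0)    # hypothetical estimates for the complex problems task
# A larger slider gap is consistent with more perceived agency in the slider task:
more_agency_in_slider = slider_gap > complex_gap
```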
Perceived Uniqueness of Skill: We ask subjects (within treatment conditions) to provide incentivised estimates of the number of workers, out of 100 randomly selected ones, who were able to perform the task above a certain performance threshold. If subjects are within +/-5 percentage points of the correct answer, they receive an additional payment of 20 cents. If subjects estimate that fewer workers are able to perform very well in the complex problems task than in the slider task, this would suggest that the skills needed for the complex problems task are perceived to be more unique than those needed to perform well in the slider task. The performance threshold is set based on worker performance in the pilot study and corresponds to the number of sliders/complex problems that only the top 20% of workers in the pilot were able to complete within 3 minutes.
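The accuracy bonus used in these belief elicitations is a simple threshold rule. A minimal sketch (the function name and defaults are ours; estimates and true values are both expressed out of 100 workers, so the +/-5 percentage point window corresponds to +/-5 workers):

```python
# 20 cents if an estimate is within +/-5 percentage points of the true
# value; nothing otherwise.
def accuracy_bonus(estimate: float, truth: float,
                   window: float = 5.0, bonus: float = 0.20) -> float:
    return bonus if abs(estimate - truth) <= window else 0.0
```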