Experimental Design
This study investigates employment decision-making when individuals choose between human workers and generative artificial intelligence (AI). We examine whether and how group identity shapes task allocation behavior in employment-like settings. Specifically, we test whether individuals exhibit in-group favoritism when allocating tasks to human workers, and whether such favoritism extends to AI workers that share the decision-maker's group identity.
Design Overview
We implement a within-subject design in which the first worker is always an in-group human, while the second worker varies along two dimensions: worker type (human versus AI) and group affiliation (in-group versus out-group).
Participants assume the role of employers and make a series of task allocation decisions between two workers who differ along these dimensions. Each participant completes four allocation sessions:
- Control: Allocation between two in-group human workers
- Treatment 1: Allocation between an in-group human worker and an out-group human worker
- Treatment 2: Allocation between an in-group human worker and an in-group AI
- Treatment 3: Allocation between an in-group human worker and an out-group AI
The order of the four sessions is randomized at the individual level.
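For concreteness, the session structure and per-participant randomization can be sketched as follows. This is a minimal illustration in Python, not our experimental software; the field names and the seeding convention are our own.

```python
import random

# The four allocation sessions. Worker 1 is always an in-group human;
# worker 2 varies by type (human vs. AI) and group (in- vs. out-group).
SESSIONS = [
    {"name": "Control",     "worker2_type": "human", "worker2_group": "in"},
    {"name": "Treatment 1", "worker2_type": "human", "worker2_group": "out"},
    {"name": "Treatment 2", "worker2_type": "AI",    "worker2_group": "in"},
    {"name": "Treatment 3", "worker2_type": "AI",    "worker2_group": "out"},
]

def session_order(rng: random.Random) -> list[dict]:
    """Draw an independent session order for one participant."""
    return rng.sample(SESSIONS, k=len(SESSIONS))

# Example: one participant's randomized order (seeded per participant).
print([s["name"] for s in session_order(random.Random(42))])
```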
Task Environment and Allocation Decisions
Within each of the four sessions, participants complete 22 decision rounds. In each round, the employer (participant) is given a fixed budget of tasks to allocate between two workers, along with worker-specific revenue rates that determine how many experimental tokens are earned per task performed by each worker. The implicit "price" of generating a token of revenue through a given worker is therefore inversely related to that worker's productivity. Across the 22 rounds per session, we systematically vary these relative prices to trace out participants' revealed preferences over worker types. The order of decision rounds within each session is fully randomized.
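To make the budget logic explicit, consider a single round in notation of our own choosing (the symbols below are illustrative, not taken from the instrument). Let \(B\) denote the task budget, \(x_i \ge 0\) the number of tasks assigned to worker \(i\), and \(r_i\) worker \(i\)'s revenue rate, so that
\[
x_1 + x_2 = B, \qquad \pi = r_1 x_1 + r_2 x_2,
\]
where \(\pi\) is the employer's token earnings. Writing \(y_i = r_i x_i\) for the tokens generated through worker \(i\), the same constraint becomes a standard budget line,
\[
\frac{y_1}{r_1} + \frac{y_2}{r_2} = B,
\]
with implicit price \(p_i = 1/r_i\) per token generated through worker \(i\): output from a less productive worker costs more of the task budget. Varying \(r_1/r_2\) across the 22 rounds rotates this budget line, which is what identifies revealed preferences over the two workers.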
Before making allocation decisions, participants complete comprehension checks to ensure understanding of the task structure and payoff rules.
Identity Manipulation
We conduct two separate studies using different methods to establish group identity.
Study 1: Political Identity.
Prior to the main task, participants complete a questionnaire measuring their partisan leanings toward the Democratic and Republican parties.
Worker identities are established through parallel procedures. We recruit two types of human workers from a separate U.S. sample on Prolific: "Democrat Human" workers who identify as Democrats or lean toward the Democratic Party, and "Republican Human" workers who identify as Republicans or lean toward the Republican Party.
For AI workers, we draw on a study by political scientists at Dartmouth College and Stanford University that systematically analyzed the political orientation of major generative AI models. We designate as "Democrat AI" the model rated as leaning most toward the Democratic Party, and as "Republican AI" the model rated as leaning most toward the Republican Party.
A participant who identifies as a Democrat, for example, would allocate tasks between: (i) two Democratic human workers, (ii) a Democratic human worker and a Republican human worker, (iii) a Democratic human worker and a Democratic AI, and (iv) a Democratic human worker and a Republican AI.
Study 2: Minimal Group Identity.
Following the minimal group paradigm of Tajfel et al. (1971), we induce group identity based on aesthetic preferences. Participants are assigned to groups according to their preferences for paintings by Klee versus Kandinsky.
Worker identities are established using the same aesthetic preference criterion. We recruit two types of human workers from a separate U.S. sample on Prolific: "Klee Human" workers who prefer paintings by Klee, and "Kandinsky Human" workers who prefer paintings by Kandinsky. Both worker types possess comparable skills and capabilities for completing the assigned tasks.
For AI workers, we elicit painting preferences from a set of major generative AI models using the same preference elicitation procedure. Based on their responses, we designate "Klee AI" as a model that prefers paintings by Klee and "Kandinsky AI" as a model that prefers paintings by Kandinsky.
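As a schematic illustration of this elicitation (not our production pipeline), each model can be shown forced-choice prompts over Klee/Kandinsky pairs and its choices tallied; `ask_model` below is a hypothetical wrapper around whatever chat-completion API a given model exposes, and the prompt wording is ours.

```python
from collections import Counter

def ask_model(prompt: str) -> str:
    """Hypothetical wrapper around a model's chat-completion API."""
    raise NotImplementedError  # substitute the real API call per model

def elicit_painting_preference(pairs: list[tuple[str, str]]) -> str:
    """Forced choice over (Klee, Kandinsky) item pairs; returns majority label.

    In practice, the A/B positions should be counterbalanced across
    trials to avoid position effects.
    """
    votes = Counter()
    for klee_item, kandinsky_item in pairs:
        prompt = (
            "Which painting do you prefer? Answer with 'A' or 'B' only.\n"
            f"A: {klee_item}\nB: {kandinsky_item}"
        )
        answer = ask_model(prompt).strip().upper()
        votes["Klee" if answer.startswith("A") else "Kandinsky"] += 1
    return votes.most_common(1)[0][0]
```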
A participant assigned to the Klee group, for example, would allocate tasks between: (i) two Klee human workers, (ii) a Klee human worker and a Kandinsky human worker, (iii) a Klee human worker and a Klee AI, and (iv) a Klee human worker and a Kandinsky AI.
Post-Experiment Measures
After completing the allocation tasks, participants respond to a survey measuring perceived closeness to each combination of worker type and group affiliation, along with demographic and attitudinal variables. These measures allow us to examine heterogeneity in decision-making and to explore mechanisms underlying observed allocation patterns.
Incentives
All participants receive a fixed participation fee plus a performance-based bonus. The bonus is determined by one randomly selected decision round, ensuring that each decision is potentially payoff-relevant and that the mechanism is incentive-compatible.
We implement the participant's allocation from the selected round: if the round involves human workers, we recruit actual workers with the corresponding group affiliation (e.g., from Prolific) to perform the assigned tasks. If it involves AI workers, we execute the tasks using the corresponding AI models via their official APIs, consuming the necessary API tokens to generate the output.
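A minimal sketch of this payment rule, assuming an illustrative data layout and token-to-dollar conversion rate (neither is our actual parameter):

```python
import random

def bonus_payment(decisions: list[dict], rng: random.Random,
                  rate: float = 0.01) -> float:
    """Random-lottery incentive: pay out one randomly drawn round.

    Each record holds the allocation and revenue rates of one round,
    e.g. {"x1": 6, "x2": 4, "r1": 3, "r2": 2}. `rate` is an assumed
    token-to-dollar conversion.
    """
    d = rng.choice(decisions)                        # draw one round
    tokens = d["x1"] * d["r1"] + d["x2"] * d["r2"]   # employer revenue
    return tokens * rate                             # convert to cash bonus

# Example: 88 rounds (4 sessions x 22 rounds), one drawn at payment time.
decisions = [{"x1": 6, "x2": 4, "r1": 3, "r2": 2}] * 88
print(f"bonus = ${bonus_payment(decisions, random.Random(7)):.2f}")
```

Because only one round is drawn and implemented, participants cannot hedge across rounds, so treating each round as if it were the only one is their best strategy.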