Gettin' 'Em Through the Door: A Conjoint Experiment on Bureaucratic Selection

Last registered on August 09, 2023

Pre-Trial

Trial Information

General Information

Title
Gettin' 'Em Through the Door: A Conjoint Experiment on Bureaucratic Selection
RCT ID
AEARCTR-0010060
Initial registration date
April 05, 2023


First published
April 14, 2023, 11:36 AM EDT


Last updated
August 09, 2023, 8:11 AM EDT


Locations

Region

Primary Investigator

Affiliation
Texas Tech University

Other Primary Investigator(s)

PI Affiliation
American University

Additional Trial Information

Status
Completed
Start date
2023-04-08
End date
2023-04-30
Secondary IDs
Prior work
This trial does not extend or rely on any prior RCTs.
Abstract
Whether it is a parent deciding which school district to send their child to, a daughter choosing among assisted living facilities for an elderly parent, or a Medicare patient selecting a primary care physician, citizens face myriad choices and tradeoffs in their interactions with bureaucracy. The once conventional depiction of citizens as passive participants in bureaucratic encounters is being replaced with one of citizens as active participants who express agency, preference, and choice. These recent theoretical and empirical developments emphasize that citizens are not monolithic: individual experiences, sociodemographic identities, and social constructions all shape how citizens interact with bureaucracy and whom they choose to interact with. This research explores how citizens, when they have a choice, choose bureaucrats, and the values, choices, and tradeoffs inherent in these decisions. Using a within-subjects conjoint survey experiment, this study examines how citizens weigh representation (e.g., race matching, gender matching) against efficiency (e.g., flexible modality, wait times) when selecting a prospective mental health provider.
External Link(s)

Registration Citation

Citation
Favero, Nathan and Austin McCrea. 2023. "Gettin' 'Em Through the Door: A Conjoint Experiment on Bureaucratic Selection." AEA RCT Registry. August 09. https://doi.org/10.1257/rct.10060-1.1
Experimental Details

Interventions

Intervention(s)
The intervention consists of a within-subjects conjoint experiment embedded in an electronic survey. The design is a paired profiles conjoint in which profiles for two therapists—A and B—are presented next to each other in a conjoint table.

The first column of the conjoint table lists a total of seven therapist attributes. The second and third columns list the therapist attribute values for therapists A and B, respectively. All therapist attribute values are assigned at random with equal probability for each attribute (with the restriction that Specialty 2 cannot take on the same value as Specialty 1, when specialty is shown, as explained below). The order of attributes is randomized by respondent (the order will not change from one conjoint to the next for the same respondent); the only constraint on order randomization is that when specialty is shown Specialty 2 always occurs directly after Specialty 1.

Each respondent is randomly assigned (with weights shown in parentheses) to one of three groups determining which set of attributes they will see: no specialty (50% probability), no session info (25% probability), or no professional info (25% probability).

For this study, responses from those selected into the "no session info" group will be dropped from the sample.

The exact text for the nine therapist attributes and their respective attribute values appears below. Attributes 1-3 are shown to all respondents, while attributes 4-9 are shown only to certain respondents.

Attribute 1: Overall Rating
Attribute values (2):
- 4 stars
- 4.5 stars

Attribute 2: Sex/Gender
Attribute values (3):
- Female
- Male
- Non-Binary

Attribute 3: Race/Ethnicity
Attribute values (4):
- Black/African American
- White
- Hispanic/Latino
- Asian

Attribute 4 (not displayed for respondents selected into "no specialty" group): Specialty 1
Attribute values (6):
- Women's Issues
- Racial Identity
- LGBTQ+
- Life Transitions
- Anxiety
- Depression

Attribute 5 (not displayed for respondents selected into "no specialty" group): Specialty 2
Attribute values (6; cannot take on the same value as Attribute 4):
- Women's Issues
- Racial Identity
- LGBTQ+
- Life Transitions
- Anxiety
- Depression


Attribute 6 (not displayed for respondents selected into "no session info" group): Next Available Session
Attribute values (2):
- One Week
- Four Weeks

Attribute 7 (not displayed for respondents selected into "no session info" group): Session Format
Attribute values (3):
- Secure Online Meetings
- In-Person Meetings
- Choice of Online or In-Person

Attribute 8 (not displayed for respondents selected into "no professional info" group): Years of Experience
Attribute values (2):
- 5 years
- 20 years

Attribute 9 (not displayed for respondents selected into "no professional info" group): Professional Degree
Attribute values (3):
- Licensed Professional Counselor (LPC)
- Master of Social Work (MSW)
- PhD, Psychology
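The randomization scheme described above (50/25/25 group weights, an independent uniform draw for each attribute, the restriction that Specialty 2 cannot equal Specialty 1, and a per-respondent attribute order with Specialty 2 pinned directly after Specialty 1) can be sketched in Python. This is an illustrative sketch only, not the Qualtrics code used in the study; all function names are hypothetical.

```python
import random

# Attribute values as listed above; Specialty 2 draws from Specialty 1's
# list, excluding whichever value Specialty 1 took.
ATTRIBUTES = {
    "Overall Rating": ["4 stars", "4.5 stars"],
    "Sex/Gender": ["Female", "Male", "Non-Binary"],
    "Race/Ethnicity": ["Black/African American", "White", "Hispanic/Latino", "Asian"],
    "Specialty 1": ["Women's Issues", "Racial Identity", "LGBTQ+",
                    "Life Transitions", "Anxiety", "Depression"],
    "Specialty 2": None,
    "Next Available Session": ["One Week", "Four Weeks"],
    "Session Format": ["Secure Online Meetings", "In-Person Meetings",
                       "Choice of Online or In-Person"],
    "Years of Experience": ["5 years", "20 years"],
    "Professional Degree": ["Licensed Professional Counselor (LPC)",
                            "Master of Social Work (MSW)", "PhD, Psychology"],
}

# Which attributes each group does NOT see
GROUP_DROPS = {
    "no specialty": {"Specialty 1", "Specialty 2"},
    "no session info": {"Next Available Session", "Session Format"},
    "no professional info": {"Years of Experience", "Professional Degree"},
}

def assign_group(rng):
    # 50% / 25% / 25% group weights
    return rng.choices(["no specialty", "no session info", "no professional info"],
                       weights=[0.50, 0.25, 0.25])[0]

def attribute_order(group, rng):
    # Randomize attribute order once per respondent, keeping Specialty 2
    # directly after Specialty 1 when specialty is shown.
    shown = [a for a in ATTRIBUTES if a not in GROUP_DROPS[group]]
    if "Specialty 1" in shown:
        shown.remove("Specialty 2")
        rng.shuffle(shown)
        shown.insert(shown.index("Specialty 1") + 1, "Specialty 2")
    else:
        rng.shuffle(shown)
    return shown

def draw_profile(order, rng):
    # Independent uniform draw per attribute, with Specialty 2 != Specialty 1
    profile = {}
    for attr in order:
        if attr == "Specialty 2":
            pool = [v for v in ATTRIBUTES["Specialty 1"] if v != profile["Specialty 1"]]
            profile[attr] = rng.choice(pool)
        else:
            profile[attr] = rng.choice(ATTRIBUTES[attr])
    return profile

rng = random.Random(0)
group = assign_group(rng)
order = attribute_order(group, rng)  # fixed for this respondent across all three conjoints
therapist_a, therapist_b = draw_profile(order, rng), draw_profile(order, rng)
```

Each group sees seven of the nine attributes, so every profile has seven attribute values.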
Intervention (Hidden)
Intervention Start Date
2023-04-08
Intervention End Date
2023-04-30

Primary Outcomes

Primary Outcomes (end points)
The outcome is the forced choice made by the respondent between the two profiles in each conjoint. Effects of attributes on this choice are estimated as the average marginal component effect (AMCE) (Hainmueller et al. 2014). AMCEs are estimated using OLS with standard errors clustered at the level of the survey respondent (with data from three sets of paired profiles conjoints for each respondent).

We test H1a by estimating the moderating effect of being a woman respondent on the counselor's gender attribute. Selection of a profile is estimated using the following predictors: gender of the respondent, the counselor's gender attribute values, a woman match (coded as 1 if the respondent is a woman AND the counselor's gender attribute is Female), and a non-woman match (coded as 1 if the respondent is not a woman AND their gender matches the counselor's gender attribute). The woman match variable is the variable of interest for H1a.

We test H1b by estimating the moderating effect of being a nonwhite respondent on the race attribute for the counselor, specifically focusing on a match with the respondent's own racial identity. Selection of a profile is estimated using the following predictors: race of the respondent, race attribute values for the counselor, a nonwhite race match (coded as 1 if the race attribute value of the counselor is not white AND the respondent self-identifies with the race of the counselor), and a white race match (coded as 1 if the race attribute value of the counselor is white AND the respondent self-identifies as white). Note: multiracial respondents can match with more than one race attribute; for example, a respondent who selects both Hispanic and Asian will have the nonwhite race match variable coded as 1 whenever the counselor race attribute value is either Hispanic or Asian. The nonwhite race match variable is the variable of interest for H1b.

We test H2a/H2b by considering how the key effects in H1a/H1b may be further moderated by the Next Available Session attribute. The following variables will be added as predictors to the models previously used to test H1a/H1b: the “Four Weeks” (vs. “One Week”) attribute value and an interaction of “Four Weeks” x the variable of interest for H1a/H1b.
• For H2a, the key variable of interest is “Four Weeks” x woman match.
• For H2b, the key variable of interest is “Four Weeks” x nonwhite race match.

Similarly, we test H3a/H3b and H4a/H4b by considering how the key effects in H1a/H1b may be further moderated by the Session Format attribute. The following variables will be added as predictors to the models previously used to test H1a/H1b: the Session Format attribute values (with “In-Person Meetings” serving as the omitted category) and interactions of the Session Format attribute values x the variable of interest for H1a/H1b.
• For H3a, the key variable of interest is “Secure Online Meetings” x woman match.
• For H3b, the key variable of interest is “Secure Online Meetings” x nonwhite race match.
• For H4a, the key variable of interest is “Choice of Online or In-Person” x woman match.
• For H4b, the key variable of interest is “Choice of Online or In-Person” x nonwhite race match.
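As an illustrative sketch (not the authors' estimation code), the AMCE-style OLS with respondent-clustered standard errors and the woman-match coding for H1a might look as follows in Python with statsmodels. The data here are simulated, the variable names are hypothetical, and the non-woman match and H2-H4 interaction terms are omitted for brevity.

```python
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

# Simulated data for illustration only (NOT study data)
rng = np.random.default_rng(0)
n_resp = 100                     # respondents
n = n_resp * 6                   # 2 profiles x 3 conjoints per respondent
df = pd.DataFrame({
    "respondent_id": np.repeat(np.arange(n_resp), 6),
    "chosen": rng.integers(0, 2, n),                        # forced-choice outcome
    "resp_woman": np.repeat(rng.integers(0, 2, n_resp), 6),
    "counselor_gender": rng.choice(["Female", "Male", "Non-Binary"], n),
})

# Match-variable coding for H1a: respondent is a woman AND the
# counselor's gender attribute is Female
df["woman_match"] = ((df["resp_woman"] == 1)
                     & (df["counselor_gender"] == "Female")).astype(int)

# OLS with standard errors clustered by respondent
model = smf.ols("chosen ~ resp_woman + C(counselor_gender) + woman_match",
                data=df).fit(cov_type="cluster",
                             cov_kwds={"groups": df["respondent_id"]})
print(model.params["woman_match"], model.bse["woman_match"])
```

The H2-H4 tests add the Session-attribute dummies and their interactions with `woman_match` (or `nonwhite_race_match`) to the same formula.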
Primary Outcomes (explanation)

Secondary Outcomes

Secondary Outcomes (end points)
Secondary Outcomes (explanation)

Experimental Design

Experimental Design
The conjoint survey experiment is carried out among a non-probability sample of US residents. Participants are recruited via Prolific, with enrollment limited to individuals with current residence in the US (using Prolific’s prescreening feature). In order to ensure a racially diverse sample of respondents, we will recruit 750 White respondents and 750 non-White respondents. This quota sampling approach is accomplished using Prolific’s prescreening feature. Specifically, two identical studies are created in Prolific, except that one study limits enrollment to only White participants (using Prolific’s demographic category “Ethnicity (Simplified)”) and the other study limits enrollment to non-White participants.

All survey respondents are tasked with selecting a prospective therapist. First, respondents are exposed to an introductory text describing the task and providing basic information about a new community health clinic offering a free initial consultation for mental health counseling. Next, respondents are exposed to a paired profiles conjoint in which two specific therapist profiles—A and B—are presented next to each other in a conjoint table.

The first column of the conjoint table lists seven therapist attributes. The second and third columns list the attribute values (for those seven therapist attributes) for therapists A and B, respectively. All therapist attribute values are assigned at random.

The exact therapist attributes and values appear under ‘intervention (public)’.

As our outcome measures, all respondents are asked to indicate their choice between the two therapist profiles (“Which of the two therapists would you personally prefer?”). Response options are “Therapist A” and “Therapist B.” They are then asked how closely “Therapist A” and “Therapist B” reflect their ideal counselor on a scale from 1-7 ("Definitely not ideal" to "Definitely ideal").

Respondents are presented with similar paired profiles conjoints (i.e., involving the same attributes and random assignment of new attribute values for both therapist profiles) two more times. Thus, each respondent will see a total of three pairs of counselors.

We measure respondents' gender and race through simple single-item survey measures.

We test the following hypotheses:

H1 Symbolic representation preference:
H1a: Women respondents are more likely to select a woman provider
H1b: Respondents of color are more likely to select a coethnic provider

H2 Workload interfering with representation:
H2a: Women's preference for gender congruence will be weaker when the provider has a high workload
H2b: The preference of respondents of color for coethnic congruence will be weaker when the provider has a high workload

H3/4 How interactive intensity and flexibility moderate representation:
H3a: Women's preference for gender congruence will be weaker when the provider is only available to meet virtually
H3b: The preference of respondents of color for coethnic congruence will be weaker when the provider is only available to meet virtually
H4a: Women's preference for gender congruence will be stronger when the provider offers both virtual and in-person meeting options
H4b: The preference of respondents of color for coethnic congruence will be stronger when the provider offers both virtual and in-person meeting options
Experimental Design Details
Randomization Method
Simple randomization carried out by computer, through code embedded in Qualtrics.
Randomization Unit
The individual survey respondent
Was the treatment clustered?
No

Experiment Characteristics

Sample size: planned number of clusters
0
Sample size: planned number of observations
6,750 observations at the respondent-profile-conjoint level: 1,125 planned survey respondents, each providing an outcome for two paired profiles in three separate conjoints (1,125 x 2 x 3 = 6,750). As noted above, survey collection will aim to enroll 750 White respondents and 750 non-White respondents, but each respondent will be assigned with 25% probability to a version of the survey (the "no session info" version) not used for this study.
Sample size (or number of clusters) by treatment arms
As noted above, respondents are first randomly assigned to one of three groups (which determines which therapist attributes they will see in the conjoint experiments): a "no specialty" group (n=750 respondents), a "no session info" group (n=375), and a "no professional info" group (n=375). Respondents selected into the "no session info" group are dropped from the sample for this study, leaving 1,125 respondents.

Providing an exact sample size estimate for all potential constellations of therapist attribute values across therapist attributes is not meaningful given our conjoint design and research focus.

Below is our expected sample size by treatment arms for each of our four key attributes of interest:

1): “Sex/Gender”
Attribute values: 3 with n = 2,250 in each.

2): “Race/Ethnicity”
Attribute values: 4 with n = 1,687.5 in each (in expectation).

3): “Next Available Session”
Attribute values: 2 with n = 3,375 in each.

4): “Session Format”
Attribute values: 3 with n = 2,250 in each.
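The expected cell sizes above follow directly from the planned enrollment; a quick arithmetic check (a sketch, not study code):

```python
# Expected observations per attribute value under the planned design
respondents = 1500 * 0.75           # 25% ("no session info") dropped -> 1125
observations = respondents * 2 * 3  # 2 profiles x 3 conjoints -> 6750

per_value = {
    "Sex/Gender": observations / 3,              # 3 values -> 2250 each
    "Race/Ethnicity": observations / 4,          # 4 values -> 1687.5 each (expected value)
    "Next Available Session": observations / 2,  # 2 values -> 3375 each
    "Session Format": observations / 3,          # 3 values -> 2250 each
}
```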
Minimum detectable effect size for main outcomes (accounting for sample design and clustering)
IRB

Institutional Review Boards (IRBs)

IRB Name
Texas Tech University Institutional Review Board
IRB Approval Date
2022-10-26
IRB Approval Number
IRB2022-879

Post-Trial

Post Trial Information

Study Withdrawal

There is information in this trial unavailable to the public.

Intervention

Is the intervention completed?
No
Data Collection Complete
Yes
Data Collection Completion Date
April 14, 2023, 12:00 +00:00
Final Sample Size: Number of Clusters (Unit of Randomization)
Was attrition correlated with treatment status?
Final Sample Size: Total Number of Observations
Final Sample Size (or Number of Clusters) by Treatment Arms
Data Publication

Data Publication

Is public data available?
No

Program Files

Program Files
Reports, Papers & Other Materials

Relevant Paper(s)

Reports & Other Materials