Experimental Design Details
I utilize a discrete choice experiment (i.e., choice-based conjoint analysis) in which 400 respondents complete an online survey in Qualtrics, choosing between three institutional options (plus a "none of the above" option) across 11 different menus (plus two attention-check menus). Each institutional alternative is described by five attributes: admissions selectivity, student diversity rating, financial aid, merit scholarship, and net cost of attendance. The sample size was calculated using a rule of thumb commonly used for discrete choice experiments (Johnson & Orme, 2003): N > 500c/(t × a), where N = sample size, t = number of choice tasks, a = number of alternatives per choice task, and c = the largest product of levels of any two attributes when considering two-way interactions. For my design, c = 5 × 5 = 25, so N > (500 × 5 × 5)/(11 × 3) = 378.8, meaning N ≥ 379. To account for participants who must be dropped from the results (e.g., for rushing through the survey), I recruit an additional 5% of respondents, so I need around 398, rounded up to 400, participants.
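As a check on the arithmetic, a minimal Python sketch of the Johnson & Orme rule as applied here (variable names are mine):

```python
import math

# Johnson & Orme (2003) rule of thumb: N > 500c / (t * a)
t = 11       # choice tasks per respondent (excluding the two attention checks)
a = 3        # alternatives per task (excluding "None of the Above")
c = 5 * 5    # largest product of levels of any two attributes (5 x 5 = 25)

n_min = math.ceil(500 * c / (t * a))  # 12500 / 33 = 378.8 -> 379
n_target = math.ceil(n_min * 1.05)    # +5% buffer for dropped respondents -> 398

print(n_min, n_target)  # 379 398 (recruited as a round 400)
```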
The survey company I am working with is College Pulse, a reputable online polling platform focused on surveying college students. College Pulse has built the largest database of college student opinion, with over 850,000 student panelists from 1,500 universities across all 50 states. The company handles all participant compensation; students receive variable incentives depending on survey length and drop-off rate.
Based on all possible levels for each attribute (minus a few prohibited combinations, excluded to keep the profiles realistic), there are 400 possible hypothetical institutions. For the survey, I create a multiple-choice question in Qualtrics with 400 answer options, where each option is a graphic describing a different hypothetical institution. A Qualtrics setting then displays only three (randomly selected) of those 400 answer choices to each participant.
I include 11 real (non-attention-check) menus with 3 alternatives per choice task (plus "None of the Above") and a total of 19 levels across all attributes: 5 levels for selectivity, 4 levels for student diversity rating, 5 levels for aid composition, and 5 levels for net cost. All possible levels are listed below, followed by a sketch of how the profiles and menus are constructed:
1. Admissions Selectivity: 10%, 30%, 50%, 70%, 90%
2. Student Diversity Rating: 1 star, 2 stars, 3 stars, 4 stars
3. Aid Offer Composition: No Financial Aid or Merit Scholarship, 100% Financial Aid + 0% Merit Scholarship, 90% Financial Aid + 10% Merit Scholarship, 10% Financial Aid + 90% Merit Scholarship, 0% Financial Aid + 100% Merit Scholarship
4. Net Cost of Attendance: $14,700/year, $34,400/year, $45,600/year, $57,100/year, $84,200/year
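For concreteness, a minimal Python sketch of how the full factorial of hypothetical institutions can be enumerated and how one menu's three random alternatives are drawn. The prohibited-combination rule below is a hypothetical stand-in; the actual restrictions are configured in Qualtrics.

```python
import itertools
import random

# Attribute levels as listed above
selectivity = ["10%", "30%", "50%", "70%", "90%"]
diversity = ["1 star", "2 stars", "3 stars", "4 stars"]
aid = [
    "No Financial Aid or Merit Scholarship",
    "100% Financial Aid + 0% Merit Scholarship",
    "90% Financial Aid + 10% Merit Scholarship",
    "10% Financial Aid + 90% Merit Scholarship",
    "0% Financial Aid + 100% Merit Scholarship",
]
net_cost = ["$14,700/year", "$34,400/year", "$45,600/year",
            "$57,100/year", "$84,200/year"]

# Full factorial: 5 x 4 x 5 x 5 = 500 candidate profiles
profiles = list(itertools.product(selectivity, diversity, aid, net_cost))

# Placeholder restriction standing in for the study's prohibited
# combinations (the real rules, applied in Qualtrics, leave 400 of
# the 500 profiles; this example rule is illustrative only)
def is_allowed(profile):
    _, _, aid_offer, cost = profile
    return not (aid_offer.startswith("No Financial Aid")
                and cost == "$14,700/year")

profiles = [p for p in profiles if is_allowed(p)]

# One menu: three randomly selected institutions, as in the Qualtrics setting
menu = random.sample(profiles, k=3)
for institution in menu:
    print(institution)
```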
I also include a way to check whether the order in which attributes are presented in a menu affects decisions in a meaningful way (e.g., participants prioritizing the admissions selectivity of the three options simply because that attribute is displayed first). I randomize the order of admissions selectivity and student diversity rating, but these two attributes always appear before the financial aid offer and merit scholarship offer, whose order is also randomized. Net cost of attendance is always listed as the last attribute. These randomizations yield four possible permutations, which make up the four "treatment" groups in the study; participants are randomly assigned to one of these groups at the beginning of the study (a sketch of this assignment follows the block list below). The sequence of blocks that respondents are presented with is as follows:
1. Informed Consent
2. Screening
3. College Admissions Portal Simulation
4. Learning Survey Interface
5. Attribute Descriptions
6. Attention Check
7. Choice-Based Conjoint
8. Demographics
9. Attitudes/Opinions Questions on a 7-point scale
10. Exit Survey
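As a minimal sketch of the treatment assignment described above (attribute labels are mine, not the exact Qualtrics wording):

```python
import random

# Selectivity/diversity order and financial aid/merit scholarship order
# are each randomized independently; net cost is always last. This
# yields 2 x 2 = 4 display permutations, i.e. the four treatment groups.
def assign_display_order(rng=random):
    first_pair = ["Admissions Selectivity", "Student Diversity Rating"]
    second_pair = ["Financial Aid Offer", "Merit Scholarship Offer"]
    rng.shuffle(first_pair)
    rng.shuffle(second_pair)
    return first_pair + second_pair + ["Net Cost of Attendance"]

# Example: one participant's randomly assigned attribute display order
print(assign_display_order())
```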