Online Survey to Test Whether Different Prompts Elicit Different Responses to Potential Job Applicants

Last registered on August 18, 2024

Trial Information

General Information

Title
Online Survey to Test Whether Different Prompts Elicit Different Responses to Potential Job Applicants
RCT ID
AEARCTR-0014133
Initial registration date
August 07, 2024


First published
August 14, 2024, 2:22 PM EDT


Last updated
August 18, 2024, 3:19 PM EDT


Locations

Not available

Primary Investigator

Affiliation
Cornell University

Other Primary Investigator(s)

PI Affiliation
Columbia University
PI Affiliation
Tufts University

Additional Trial Information

Status
In development
Start date
2024-08-12
End date
2025-08-11
Secondary IDs
Prior work
This trial does not extend or rely on any prior RCTs.
Abstract
In surveys, social desirability bias can affect how respondents answer questions. We intend to survey people about hypothetical scenarios, randomizing the language that describes the situation in which they are answering. We will use their responses to understand how different language and prompts about that situation may (or may not) change how people answer questions.
External Link(s)

Registration Citation

Citation
Agan, Amanda, Bo Cowgill and Laura Gee. 2024. "Online Survey to Test Whether Different Prompts Elicit Different Responses to Potential Job Applicants." AEA RCT Registry. August 18. https://doi.org/10.1257/rct.14133-1.1
Experimental Details

Interventions

Intervention(s)
Respondents will answer a hypothetical survey with vignette-style hiring scenarios. There are several interventions:

Anonymity intervention: Half of the respondents will be randomized to receive a treatment that reminds them of their anonymity and the confidentiality of their responses. The other half will not receive this message.

One-time job intervention: Half of the respondents, when answering the vignette survey, will be told that the company they are hypothetically hiring for has stated this is a one-time job, while the other half will be told that the company may hire them in the future for a similar task.

Academic study intervention: Half of the respondents, when answering the vignette survey, will be told the hypothetical company would review them based on their performance as a subject in an academic study. The other half will be told their review would be based on their performance as a hiring professional (even though they are participating in an academic study).

The set of candidates that respondents review will generally also be randomized, to understand the effect this may have on results, but the researchers do not consider this an "intervention."



Intervention Start Date
2024-08-12
Intervention End Date
2024-09-30

Primary Outcomes

Primary Outcomes (end points)
Each respondent is asked to choose between candidate A and candidate B in several vignettes. Candidate B typically has characteristics that are not usually discriminated against (e.g., a white male with no criminal record), while Candidate A may be female, Black, or have a criminal record.

For anonymity intervention 1, one primary outcome is whether the respondent chooses candidate A or B.

For anonymity intervention 2 and the one-time job and academic study interventions, the primary outcomes are: how comfortable the respondent would feel choosing candidate B (on a sliding scale from very uncomfortable to very comfortable) and how much effort they would put into choosing between candidates A and B (on a sliding scale from very little effort to a lot of effort).
Primary Outcomes (explanation)

Secondary Outcomes

Secondary Outcomes (end points)
Time spent making a decision (measured by the amount of time Qualtrics records the respondent spending on that page)
Secondary Outcomes (explanation)
This will be a direct measure of "effort."
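For concreteness, here is a minimal sketch of extracting this page-timing measure from a Qualtrics CSV export, assuming a standard Qualtrics Timing question was placed on the decision page (the file path and column name below are hypothetical, not from the registration):

```python
import pandas as pd

# Hypothetical export file and column name. Qualtrics Timing questions
# record First Click, Last Click, Page Submit, and Click Count per page.
responses = pd.read_csv("survey_export.csv")

# "Page Submit" is the number of seconds before the respondent advanced,
# i.e., the time-on-page measure described above.
responses["decision_seconds"] = pd.to_numeric(
    responses["vignette1_timer_Page Submit"], errors="coerce"
)

print(responses["decision_seconds"].describe())
```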

Experimental Design

Experimental Design
For each intervention, survey respondents will be (independently) randomly assigned to receive one of the treatments listed in the intervention explanation above.
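The assignment itself will be handled by Qualtrics (see Randomization Method below), but for concreteness, here is a minimal Python sketch of independent 50/50 assignment across the three interventions; the seed and variable names are ours for illustration, not from the registration:

```python
import numpy as np
import pandas as pd

rng = np.random.default_rng(seed=14133)  # arbitrary seed for a reproducible sketch
n = 500  # planned number of observations

# Each intervention is an independent 50/50 draw per respondent,
# yielding a 2 x 2 x 2 cross of treatment cells.
df = pd.DataFrame({
    "respondent_id": np.arange(1, n + 1),
    "anonymity_reminder": rng.integers(0, 2, size=n),
    "one_time_job": rng.integers(0, 2, size=n),
    "academic_study_framing": rng.integers(0, 2, size=n),
})

# Sanity check: each column's mean should be near 0.5,
# i.e., roughly 250 respondents per arm of each intervention.
print(df[["anonymity_reminder", "one_time_job", "academic_study_framing"]].mean())
```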
Experimental Design Details
Not available
Randomization Method
Randomization done by Qualtrics
Randomization Unit
Individual
Was the treatment clustered?
No

Experiment Characteristics

Sample size: planned number of clusters
No clusters
Sample size: planned number of observations
500
Sample size (or number of clusters) by treatment arms
250 receiving each intervention
Minimum detectable effect size for main outcomes (accounting for sample design and clustering)
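This field is left blank in the registration. Purely as an illustration, here is a sketch of a standard two-sample MDE calculation for the binary candidate-choice outcome, assuming 250 respondents per arm, a 50% baseline choice rate, a 5% two-sided test, and 80% power (only the arm size comes from the registration):

```python
from scipy.stats import norm

alpha, power = 0.05, 0.80  # assumed conventional values
n_per_arm = 250            # from the planned sample size
p = 0.5                    # assumed baseline rate (maximizes binary-outcome variance)

z_alpha = norm.ppf(1 - alpha / 2)
z_beta = norm.ppf(power)

# Difference in proportions, equal arms:
# MDE = (z_{1-alpha/2} + z_{1-beta}) * sqrt(2 * p * (1 - p) / n)
mde = (z_alpha + z_beta) * (2 * p * (1 - p) / n_per_arm) ** 0.5
print(f"MDE ~ {mde:.3f}")  # about 0.125, i.e., a 12.5 percentage-point difference
```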
IRB

Institutional Review Boards (IRBs)

IRB Name
Cornell Institutional Review Board for Human Participants
IRB Approval Date
2024-08-02
IRB Approval Number
IRB0148859
IRB Name
Tufts University Social, Behavioral, and Educational Research IRB
IRB Approval Date
2024-08-07
IRB Approval Number
STUDY00005261