Likert versus Slider

Last registered on September 14, 2020

Pre-Trial

Trial Information

General Information

Title
Likert versus Slider
RCT ID
AEARCTR-0006444
Initial registration date
September 14, 2020


First published
September 14, 2020, 1:46 PM EDT


Locations

Region

Primary Investigator

Affiliation
Johns Hopkins University

Other Primary Investigator(s)

PI Affiliation
Johns Hopkins University

Additional Trial Information

Status
In development
Start date
2020-09-14
End date
2020-12-31
Secondary IDs
Abstract
We randomize survey formats (Likert, slider) to understand how the format affects the estimation of continuous latent variables, such as attitudes and perceptions. One half of the survey participants will respond to a questionnaire using a Likert scale and the other half will respond using a slider. Using the responses from the two formats, we will estimate the nonparametric densities of the latent variables and discuss which survey method provides responses more aligned with the standard identification assumptions in the measurement error literature.
External Link(s)

Registration Citation

Citation
Deng, Huan and Yujung Hwang. 2020. "Likert versus Slider." AEA RCT Registry. September 14. https://doi.org/10.1257/rct.6444-1.0
Experimental Details

Interventions

Intervention(s)
We randomize survey formats between Likert and Slider.
Intervention Start Date
2020-09-14
Intervention End Date
2020-09-28

Primary Outcomes

Primary Outcomes (end points)
We have two primary sets of outcome variables.
The first set of outcome variables concerns anti-Asian sentiment. The second set concerns perceptions of racism in the US. We also include one list randomization question to assess the degree of social desirability bias.
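The list randomization (item-count) design mentioned above can be sketched as follows. This is a minimal simulation for illustration only; the number of baseline items and the prevalence of the sensitive item are hypothetical assumptions, not the study's actual instrument:

```python
import random
import statistics

def simulate_list_experiment(n=1000, base_items=3, sensitive_rate=0.3, seed=0):
    """Simulate a list (item-count) experiment.

    The control group reports how many of `base_items` innocuous
    statements apply to them; the treatment group sees the same list
    plus one sensitive item. (All parameters here are hypothetical.)
    """
    rng = random.Random(seed)
    control, treatment = [], []
    for _ in range(n):
        base_count = sum(rng.random() < 0.5 for _ in range(base_items))
        if rng.random() < 0.5:
            control.append(base_count)
        else:
            holds_sensitive = rng.random() < sensitive_rate
            treatment.append(base_count + int(holds_sensitive))
    return control, treatment

def list_experiment_estimate(control, treatment):
    """The difference in mean item counts estimates the prevalence of
    the sensitive item without any respondent admitting it directly."""
    return statistics.mean(treatment) - statistics.mean(control)

control, treatment = simulate_list_experiment()
est = list_experiment_estimate(control, treatment)
```

Because no individual response reveals which items were endorsed, the design mitigates social desirability bias while still identifying aggregate prevalence.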
Primary Outcomes (explanation)
We collect multiple measurements for each set of outcome variables, not to cherry-pick the most responsive survey items later, but to apply nonparametric techniques to estimate measurement errors; having multiple measurements is one of the essential identifying assumptions.
The first set of outcome variables, on anti-Asian sentiment, is a shortened version of the realistic threat and prejudice measures used in Stephan et al. (1999). The second set, on perceptions of racism in the US, is measured by asking how much respondents agree with three statements: (1) whether a person who publicly claims to be a racist can maintain good social relationships with most people, (2) whether typical American citizens will tolerate racial bias to some degree, as long as it does not evidently violate the law, and (3) whether a person who refers to the novel coronavirus as “China virus” will face severe criticism.

Reference
Stephan, Walter G., Oscar Ybarra, and Guy Bachman. "Prejudice toward immigrants." Journal of Applied Social Psychology 29.11 (1999): 2221–2237.
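Why multiple measurements matter for the measurement-error analysis can be illustrated with a classical-measurement-error sketch: if two survey items X1 = θ + e1 and X2 = θ + e2 measure the same latent variable θ with independent, mean-zero errors, then Cov(X1, X2) identifies Var(θ), and the residual variance identifies the error variance. The specific variances below are hypothetical simulation values, not claims about the study's data:

```python
import random
import statistics

def covariance(xs, ys):
    """Sample covariance of two equal-length sequences."""
    mx, my = statistics.mean(xs), statistics.mean(ys)
    return sum((x - mx) * (y - my) for x, y in zip(xs, ys)) / (len(xs) - 1)

rng = random.Random(1)
n = 20000
theta = [rng.gauss(0, 1.0) for _ in range(n)]    # latent attitude, Var = 1
x1 = [t + rng.gauss(0, 0.5) for t in theta]      # measurement 1, error Var = 0.25
x2 = [t + rng.gauss(0, 0.5) for t in theta]      # measurement 2, independent error

# Cov(X1, X2) = Var(theta) when the errors are independent of theta
# and of each other; Var(X1) - Cov(X1, X2) = Var(e1).
var_theta_hat = covariance(x1, x2)
var_e1_hat = statistics.variance(x1) - var_theta_hat
```

Fully nonparametric density estimation of θ and the errors (as described in the abstract) requires stronger machinery, such as characteristic-function deconvolution; the covariance identity above is only the simplest consequence of the multiple-measurement assumption.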

Secondary Outcomes

Secondary Outcomes (end points)
Secondary Outcomes (explanation)

Experimental Design

Experimental Design
To screen out survey participants who pay little attention to the survey, we insert an attention check question just before the sentiment and perception questions. The attention check asks respondents how they are currently feeling, but instructs them, in the middle of the question text, to select only the “None of the above” option. Participants who miss this instruction and select any other option are screened out.
Experimental Design Details
Same as the experimental design above.
Randomization Method
Randomization is done using the built-in randomizer in Qualtrics.
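The individual-level 50:50 assignment can be sketched as below. This is an illustrative analogue, not the actual Qualtrics mechanism (Qualtrics randomizes within the survey flow); the balanced shuffle mimics a randomizer set to present elements evenly:

```python
import random

def assign_format(participant_ids, seed=2020):
    """Assign each participant to the Likert or Slider arm, splitting
    the sample exactly in half (balanced 50:50 randomization).

    The seed and the balanced-shuffle scheme are illustrative
    assumptions, not the study's actual implementation.
    """
    rng = random.Random(seed)
    ids = list(participant_ids)
    rng.shuffle(ids)
    half = len(ids) // 2
    return {pid: ("Likert" if i < half else "Slider")
            for i, pid in enumerate(ids)}

arms = assign_format(range(1300))
```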
Randomization Unit
individual
Was the treatment clustered?
No

Experiment Characteristics

Sample size: planned number of clusters
1300 individuals
Sample size: planned number of observations
1300 individuals
Sample size (or number of clusters) by treatment arms
650 individuals per arm (50:50 randomization).
Minimum detectable effect size for main outcomes (accounting for sample design and clustering)
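The registry leaves the MDE field blank. As a rough illustration only, a standard two-sample normal-approximation formula with 650 individuals per arm and a hypothetical standardized outcome (SD = 1) would give an MDE of about 0.16 standard deviations; this is not the authors' calculation:

```python
import math

def mde_two_sample(n_per_arm, sd=1.0):
    """Minimum detectable effect for a two-sample comparison of means
    under a normal approximation, at 5% two-sided significance and
    80% power:

        MDE = (z_{0.975} + z_{0.80}) * sqrt(2 * sd^2 / n_per_arm)

    The sd = 1 default treats the outcome as standardized, which is a
    hypothetical assumption here.
    """
    z_alpha = 1.96  # z_{0.975}
    z_power = 0.84  # z_{0.80} (approx.)
    return (z_alpha + z_power) * math.sqrt(2 * sd ** 2 / n_per_arm)

mde = mde_two_sample(650)  # in SD units when sd = 1
```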
IRB

Institutional Review Boards (IRBs)

IRB Name
The Homewood Institutional Review Board (Johns Hopkins University)
IRB Approval Date
2020-08-20
IRB Approval Number
HIRB00011674

Post-Trial

Post Trial Information

Study Withdrawal


Intervention

Is the intervention completed?
No
Data Collection Complete
Data Publication

Data Publication

Is public data available?
No

Program Files

Program Files
Reports, Papers & Other Materials

Relevant Paper(s)

Reports & Other Materials