CSR and Job Selection II

Last registered on June 06, 2022

Pre-Trial

Trial Information

General Information

Title
CSR and Job Selection II
RCT ID
AEARCTR-0009495
Initial registration date
June 02, 2022

Initial registration date is when the trial was registered, i.e., when the registration was submitted to the Registry to be reviewed for publication.

First published
June 06, 2022, 5:52 AM EDT

First published corresponds to when the trial was first made public on the Registry after being reviewed.

Locations

Region

Primary Investigator

Affiliation
WHU - Otto Beisheim School of Management

Other Primary Investigator(s)

PI Affiliation
WHU - Otto Beisheim School of Management
PI Affiliation
WHU - Otto Beisheim School of Management

Additional Trial Information

Status
In development
Start date
2022-06-06
End date
2022-06-14
Secondary IDs
Prior work
This trial does not extend or rely on any prior RCTs.
Abstract
In our study, we focus on the job selection effect of corporate social responsibility (CSR) activities. We investigate whether employers that promote CSR attract more ethical or less ethical people. In a field experiment on Amazon Mechanical Turk (MTurk), we act as an employer looking for workers to complete a job. Our experiment consists of two parts. In the first part, we elicit each participant's general ethical predisposition by testing whether they show social preferences in a situation where they can lie without being caught. In the second part, we offer participants a job, varying whether or not they receive information about the employer's CSR activities, and observe whether they accept it. The results allow us to identify whether employers that promote CSR activities in their job descriptions attract not just rather honest or rather dishonest employees, but people who lie for prosocial reasons. A secondary outcome of interest is participants' behavior on the job. Because the design gives participants the opportunity to misbehave on the job by misreporting whether they completed the assigned task, we can analyze how rather honest and rather dishonest employees behave on the job and whether an employer's CSR activities influence that behavior.
External Link(s)

Registration Citation

Citation
Günther, Christina, Rainer Michael Rilke and Dana Suelberg. 2022. "CSR and Job Selection II." AEA RCT Registry. June 06. https://doi.org/10.1257/rct.9495
Experimental Details

Interventions

Intervention(s)
In the second part of the experiment, MTurk workers are randomly assigned to one of two job offers. One job offer contains information about the employer's CSR activities; the other does not reveal any information about CSR activities.
Intervention Start Date
2022-06-07
Intervention End Date
2022-06-14

Primary Outcomes

Primary Outcomes (end points)
Our primary outcome is the workers' reported die-roll outcome, which is elicited in the first part of the experiment, before workers are randomly assigned to the treatment and control groups.
Primary Outcomes (explanation)
The die-roll task follows the design of Fischbacher and Föllmi-Heusi (2013).

Secondary Outcomes

Secondary Outcomes (end points)
A secondary outcome is the workers' performance on the job.
Secondary Outcomes (explanation)
After workers accept a job, we analyze how they perform on it. The job is designed to allow workers to misreport (lie) about completing the task without risking a payment reduction.

Experimental Design

Experimental Design
The experiment consists of two parts and is conducted on Amazon's Mechanical Turk where we act as an employer who is looking for workers to complete a Human Intelligence Task (HIT).

In the first part of the experiment, workers receive information that our HIT is a short transcription task for which they will be paid $0.50, and that they can earn up to $1.00 more by completing an extra task before starting the transcription. The extra task is a one-shot die-roll game: workers can use a physical die at home or an online die, roll it once, and report the outcome. The payment for this extra task is set up in a prosocial fashion. For the die-roll game, each worker is randomly matched with another participant; the two individual payments are added and split equally between them. A worker's payment therefore depends on both their own reported die-roll outcome and the matched participant's outcome, and can range from $0.00 to $1.00 per person.
After the die roll, workers start the transcription task: each worker receives five scanned images of short text sections from German textbooks to transcribe. Afterwards, we ask workers to complete a short demographic questionnaire.
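The prosocial payoff split described above can be sketched as follows. This is a minimal illustration: the function name and the example dollar amounts are assumptions, since the registry specifies only that the two individual payments are added and divided equally, with each person receiving between $0.00 and $1.00.

```python
def split_payoff(own_payment: float, partner_payment: float) -> float:
    """Return one worker's payoff under the prosocial split:
    both matched participants' individual payments are pooled
    and divided equally between the two of them."""
    return (own_payment + partner_payment) / 2

# Hypothetical example: the worker's report is worth $1.00 and the
# matched participant's report is worth $0.40.
print(split_payoff(1.00, 0.40))  # each of the pair receives $0.70
```

Because the pooled amount is shared, a worker who over-reports raises not only their own payoff but also their partner's, which is what makes lying in this game prosocial rather than purely self-serving.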

The second part of the experiment is conducted as soon as data collection for the first part is complete. In this part, we ask workers on MTurk to complete a short survey for $0.10, explaining that they can earn up to $1.00 more depending on the decision they make in this HIT. After completing the short survey, workers receive a job offer for a transcription task that pays $1.00 in total. The payment structure is as follows: workers receive $0.50 just for accepting the HIT, regardless of whether they decide to take the job; the remaining $0.50 is paid to workers who accept the job offer. Workers who accept the job then receive ten images of short text sections from German textbooks; for each image, they first report whether the scanned image is readable and then transcribe it. The instructions state that some images may be blurry and unreadable, and that reporting them and thus skipping the transcription is legitimate and will not lower the payment.
Experimental Design Details
Randomization Method
Workers are randomly assigned to one of two job offers. The randomization is done by a computer.
Randomization Unit
Individual
Was the treatment clustered?
No
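The individual-level computer randomization can be sketched as a per-worker coin flip. This is an illustrative sketch only; the arm labels, seed, and function name are assumptions, not the study's actual implementation.

```python
import random

def assign_treatment(worker_ids, seed=None):
    """Randomly assign each worker, independently, to the job offer
    with CSR information ("CSR") or without it ("no-CSR")."""
    rng = random.Random(seed)
    return {wid: rng.choice(["CSR", "no-CSR"]) for wid in worker_ids}

# Usage: a fixed seed makes the assignment reproducible.
assignments = assign_treatment(range(10), seed=42)
print(assignments)
```

Because assignment is independent per individual and the treatment is not clustered, each worker's arm is a fair draw between the two job offers.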

Experiment Characteristics

Sample size: planned number of clusters
-
Sample size: planned number of observations
The experiment consists of two parts. We do not expect all participants from the first part to also participate in the second part; based on a previous experiment we conducted, we expect a dropout rate of about 40%. We will therefore work with 400 MTurk workers in the first part and expect about 240 of them to participate in the second part.
Sample size (or number of clusters) by treatment arms
Treatments occur only in the second part of the experiment, where we expect 240 participants, hence 240 / 2 = 120 participants per treatment arm.
Minimum detectable effect size for main outcomes (accounting for sample design and clustering)
We determined our sample size to be large enough to detect a small-to-medium effect size (d = 0.4) with a power of 80% (α = 0.05, two-tailed). An a priori power analysis yielded a target sample size of N = 104 per treatment in part 2 of our experiment. However, based on a previously conducted experiment, we anticipate that about 40% of participants will drop out between part 1 and part 2. We therefore decided to exceed the target sample size, recruiting N = 400 participants for part 1.
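The power calculation can be reproduced approximately with the standard normal-approximation formula for a two-sided two-sample comparison (a stdlib-only sketch; the registered N = 104 presumably comes from dedicated power software, so a small discrepancy with this approximation is expected):

```python
import math
from statistics import NormalDist

def n_per_group(d=0.4, alpha=0.05, power=0.80):
    """Approximate per-group n for a two-sided two-sample test via the
    normal approximation: n = 2 * ((z_{alpha/2} + z_power) / d) ** 2."""
    z_a = NormalDist().inv_cdf(1 - alpha / 2)
    z_p = NormalDist().inv_cdf(power)
    return math.ceil(2 * ((z_a + z_p) / d) ** 2)

n = n_per_group()  # normal approximation; the registered target of
                   # 104 per arm is slightly larger
recruited = math.ceil(2 * n / 0.6)  # inflate for ~40% expected dropout
print(n, recruited)
```

The final inflation step mirrors the registry's logic: recruiting enough part-1 workers that, after roughly 40% attrition, each of the two arms in part 2 still exceeds the target n.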
IRB

Institutional Review Boards (IRBs)

IRB Name
IRB Approval Date
IRB Approval Number

Post-Trial

Post Trial Information

Study Withdrawal

Some information in this trial is unavailable to the public.

Intervention

Is the intervention completed?
No
Data Collection Complete
Data Publication

Data Publication

Is public data available?
No

Program Files

Program Files
Reports, Papers & Other Materials

Relevant Paper(s)

Reports & Other Materials