CSR and job selection

Last registered on December 02, 2021


Trial Information

General Information

CSR and job selection
Initial registration date
November 29, 2021

First published
December 02, 2021, 4:58 PM EST


Primary Investigator

WHU - Otto Beisheim School of Management

Other Primary Investigator(s)

PI Affiliation
WHU - Otto Beisheim School of Management
PI Affiliation
WHU - Otto Beisheim School of Management

Additional Trial Information

In development
Start date
End date
Secondary IDs
Prior work
This trial does not extend or rely on any prior RCTs.
In our study, we focus on the job-selection effect of corporate social responsibility (CSR) activities. We investigate whether employers that promote CSR attract more ethical or less ethical people. In a field experiment on Amazon's Mechanical Turk (MTurk), we act as an employer looking for workers to complete a job. Our experiment consists of two parts. In the first part, we elicit each participant's general ethical predisposition by testing whether they have a tendency to lie for profit. In the second part, we offer the participants a job, vary whether or not they receive information about the employer's CSR activities, and observe whether they accept the job. The results allow us to identify whether employers that promote CSR activities in their job description attract more honest or more dishonest employees. A secondary outcome of interest is the participants' behavior on the job. Since the design of the experiment gives participants the opportunity to misbehave on the job by misreporting whether they completed the assigned task, we can analyze how more honest and less honest employees behave on the job and whether an employer's CSR activities influence that behavior.
External Link(s)

Registration Citation

Guenther, Christina, Rainer Michael Rilke and Dana Suelberg. 2021. "CSR and job selection." AEA RCT Registry. December 02. https://doi.org/10.1257/rct.8582
Experimental Details


In the second part of the experiment, MTurk workers are randomly assigned to one of two job offers. One job offer contains information about the employer's CSR activities; the other does not reveal any information about CSR activities.
Intervention Start Date
Intervention End Date

Primary Outcomes

Primary Outcomes (end points)
Our primary outcome is the reported die roll outcome of workers.
Primary Outcomes (explanation)
The die-roll task follows the design of Fischbacher and Föllmi-Heusi (2013).
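In the Fischbacher and Föllmi-Heusi paradigm, honest reporting implies a uniform distribution over the six die outcomes, so aggregate misreporting shows up as a deviation from uniformity in the reported outcomes. A minimal illustration (not the authors' analysis code) of a chi-square goodness-of-fit check against the honest-reporting benchmark, using only the standard library and the df = 5 critical value:

```python
from collections import Counter

def chi_square_uniform(reports):
    """Chi-square goodness-of-fit statistic of reported die rolls
    against the uniform distribution implied by honest reporting."""
    n = len(reports)
    expected = n / 6
    counts = Counter(reports)
    return sum((counts.get(face, 0) - expected) ** 2 / expected
               for face in range(1, 7))

CRITICAL_5_DF = 11.070  # chi-square critical value, df = 5, alpha = .05

# Toy data skewed toward high (high-payoff) reports, for illustration only.
reports = [6, 6, 5, 6, 4, 6, 5, 6, 3, 6, 5, 6]
stat = chi_square_uniform(reports)
print(stat, stat > CRITICAL_5_DF)  # 18.0 True
```

With perfectly uniform reports the statistic is 0; values above the critical value indicate that the reported distribution is unlikely under honest reporting.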

Secondary Outcomes

Secondary Outcomes (end points)
A secondary outcome is the worker's performance on the job.
Secondary Outcomes (explanation)
After workers accept a job (see primary outcome), we analyze how they perform on it. The job is designed so that workers can misreport (lie) about completing the task without risking a payment reduction.

Experimental Design

Experimental Design
The experiment consists of two parts and is conducted on Amazon's Mechanical Turk where we act as an employer who is looking for workers to complete a Human Intelligence Task (HIT).

In the first part of the experiment, workers receive information that our HIT is a short transcription task for which they will be paid $0.50, but that they also have the opportunity to earn up to $1.00 more if they complete an extra task before starting the transcription. The extra task is a one-shot die-roll game: workers can get a playing die at home or use an online version, roll once, and report the outcome. Based on the die-roll outcome they report, workers receive a bonus of up to $1.00. After the die roll, workers start the actual job (the transcription task). Each worker receives five scanned images of short text sections from German textbooks, which they have to transcribe. After the transcription task, we ask workers to fill in a questionnaire on demographic factors.

The second part of the experiment is conducted as soon as data collection for the first part is complete. In this part, we ask workers on MTurk to complete a short survey for $0.10 but explain that they also have the opportunity to earn up to $1.00 more based on the decision they make in this HIT. After they complete the short survey, workers receive a job offer for a transcription task that pays $1.00 in total. We explain that the payment structure is as follows: workers receive $0.50 just for accepting the HIT, regardless of whether they decide to work on the transcription task; the remaining $0.50 is paid to workers who accept the job offer. Workers who accept the job then receive ten images of short text sections from German textbooks; for each image they first have to report whether the scan is readable and then transcribe it. The instructions state that some images may be blurry and unreadable, and that reporting them and thus skipping the transcription is legitimate and will not lower the payment.
Experimental Design Details
Randomization Method
Workers are randomly assigned to one of two job offers. The randomization is done by a computer.
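The computerized assignment can be sketched as an independent random draw per worker between the two job offers. This is a hypothetical illustration (the labels `CSR` and `control` and the seeding are assumptions; the registry only states that a computer performs the randomization):

```python
import random

def assign_treatment(worker_ids, seed=None):
    """Independently assign each worker to one of the two job offers:
    'CSR' (job ad includes CSR information) or 'control' (no CSR info)."""
    rng = random.Random(seed)  # seeding makes the assignment reproducible
    return {w: rng.choice(["CSR", "control"]) for w in worker_ids}

# Example: assign the 210 expected part-2 participants.
assignment = assign_treatment([f"worker_{i}" for i in range(210)], seed=8582)
print(len(assignment))  # 210
```

An independent draw per worker yields roughly, but not exactly, 105 workers per arm; a shuffled balanced list would fix the split at exactly 105/105.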
Randomization Unit
Was the treatment clustered?

Experiment Characteristics

Sample size: planned number of clusters
Sample size: planned number of observations
The experiment consists of two parts. We expect that not all participants from the first part will also participate in the second part: based on a pilot we conducted, we expect a drop-out rate of about 30%. We will therefore work with 300 MTurk workers in the first part of the experiment and expect 210 of them to participate in the second part.
Sample size (or number of clusters) by treatment arms
Treatments occur only in the second part of the experiment, where we expect 210 participants, hence 210 / 2 = 105 participants per treatment.
Minimum detectable effect size for main outcomes (accounting for sample design and clustering)
We determined our sample size to be large enough to detect a small-to-medium effect size (d = 0.4) with a power of 80% (α = .05, two-tailed). An a priori power analysis yielded a target sample size of N = 104 per treatment in part 2 of our experiment. However, based on a previously conducted pilot study, we anticipate that about 30% of participants will drop out between part 1 and part 2. We therefore exceed the target sample size accordingly, resulting in N = 300 participants in part 1.
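This power calculation can be roughly reproduced with the standard normal-approximation formula for a two-sided, two-sample comparison of means, n per group ≈ 2·((z₁₋α/₂ + z_power) / d)². A minimal sketch using only the Python standard library; the approximation yields about 99 per group, slightly below the a priori figure of 104 stated above, which presumably comes from an exact calculation:

```python
import math
from statistics import NormalDist

def n_per_group(d, alpha=0.05, power=0.80):
    """Normal-approximation sample size per group for detecting
    effect size d (Cohen's d) in a two-sided two-sample test."""
    z = NormalDist()
    z_alpha = z.inv_cdf(1 - alpha / 2)  # approx. 1.96 for alpha = .05
    z_power = z.inv_cdf(power)          # approx. 0.84 for 80% power
    return math.ceil(2 * ((z_alpha + z_power) / d) ** 2)

n = n_per_group(d=0.4)                    # per-arm size by this approximation
expected_part2 = round(300 * (1 - 0.30))  # 210 expected completers of part 2
print(n, expected_part2, expected_part2 // 2)  # 99 210 105
```

Recruiting 300 workers in part 1 with 30% expected drop-out leaves about 210 in part 2, i.e. roughly 105 per arm, just above the 104-per-arm target.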

Institutional Review Boards (IRBs)

IRB Name
IRB Approval Date
IRB Approval Number


Post Trial Information

Study Withdrawal

There is information in this trial unavailable to the public.


Is the intervention completed?
Data Collection Complete
Data Publication

Data Publication

Is public data available?

Program Files

Program Files
Reports, Papers & Other Materials

Relevant Paper(s)

Reports & Other Materials