Social recognition in a click for charity online experiment

Last registered on April 21, 2020

Pre-Trial

Trial Information

General Information

Title
Social recognition in a click for charity online experiment
RCT ID
AEARCTR-0005737
Initial registration date
April 17, 2020

First published
April 21, 2020, 11:31 AM EDT

Locations

Region

Primary Investigator

Affiliation
UC Berkeley

Other Primary Investigator(s)

PI Affiliation
Boston University
PI Affiliation
Copenhagen Business School
PI Affiliation
UC Berkeley

Additional Trial Information

Status
In development
Start date
2020-04-18
End date
2020-04-22
Secondary IDs
Abstract
A growing body of empirical work shows that social recognition of individuals' behavior can meaningfully influence their choices. This paper studies whether social recognition is a socially efficient lever for influencing individuals' choices, relative to standard financial incentives. Because social recognition generates utility from esteem for some individuals but disutility from shame for others, it can be positive-sum, zero-sum, or negative-sum, depending on whether the social recognition utility function is convex, linear, or concave, respectively. We develop a new revealed-preferences methodology that allows us to investigate this question and to structurally estimate leading models of social signaling and their equilibrium implications. We deploy these methods in an online experiment using three different subject pools: (i) participants on Prolific, (ii) UC Berkeley undergraduates recruited from the school's Xlab, and (iii) Boston University undergraduates enrolled in the same economics course.
External Link(s)

Registration Citation

Citation
Butera, Luigi et al. 2020. "Social recognition in a click for charity online experiment." AEA RCT Registry. April 21. https://doi.org/10.1257/rct.5737-1.0
Experimental Details

Interventions

Intervention(s)
We will run an online experiment on three different subject pools: (i) participants on Prolific, (ii) UC Berkeley undergraduates recruited from the school's Xlab, and (iii) Boston University undergraduates enrolled in the undergraduate business courses MS222 and, potentially, MS221B.
Intervention Start Date
2020-04-18
Intervention End Date
2020-04-22

Primary Outcomes

Primary Outcomes (end points)
Performance in the button-clicking task and willingness to pay to be socially recognized conditional on a realized performance level.
Primary Outcomes (explanation)

Secondary Outcomes

Secondary Outcomes (end points)
Secondary Outcomes (explanation)

Experimental Design

Experimental Design
The design below is the same for all three pools, unless otherwise noted.

After obtaining consent, we first give participants instructions on the button-pressing task they are about to complete. They are to alternate pressing the “a” and then the “b” key on their computer for up to five minutes, with no minimum. They earn a point, and money for charity, every time they press the “a” and then the “b” key. Participants then practice the task for up to 30 seconds.
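For concreteness, the scoring rule can be sketched in a few lines of Python. This is a hypothetical illustration, not the study's actual code: the function name and the treatment of out-of-order presses are assumptions, while the point rule and donation rates are as stated in this registration.

```python
# A minimal sketch of the stated scoring rule; hypothetical, not the study's code.
def task_earnings(presses, rate_per_10_points=0.05):
    """Count completed "a"-then-"b" pairs and convert points to a donation.

    presses            -- sequence of keys pressed, e.g. ["a", "b", "a", "b"]
    rate_per_10_points -- 0.02 in the Prolific pool, 0.05 in the other pools
    """
    points = 0
    expect = "a"
    for key in presses:
        if key == expect:
            if key == "b":        # an "a" followed by a "b" completes one point
                points += 1
                expect = "a"
            else:
                expect = "b"
        # out-of-order presses neither score nor reset progress (an assumption)
    return points, points / 10 * rate_per_10_points

points, dollars = task_earnings(["a", "b"] * 230, rate_per_10_points=0.02)
print(points, round(dollars, 2))   # 230 points -> $0.46 for the Red Cross
```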

We then give participants an outline of the structure of the experiment. They are to complete the task in three rounds, the order of which is randomly chosen. Each round corresponds to a different incentive, as summarized in the sketch after the list below.

• Anonymous Effort Round: participants raise money for the Red Cross (2 cents per 10 points in the Prolific pool, 5 cents per 10 points in the other pools). Their performance remains anonymous.
• Anonymous and Paid Effort Round: participants raise money for the Red Cross (2 cents per 10 points in the Prolific pool, 5 cents per 10 points in the other pools). They also earn bonus compensation for themselves that is the same as their Red Cross contribution. Their performance remains anonymous.
• Publicly-Shared Effort Round: participants raise money for the Red Cross (2 cents per 10 points in the Prolific pool, 5 cents per 10 points in the other pools). Their performance in the task, along with their picture (and, for the non-Prolific pools, their name), is revealed to other participants after the conclusion of the study.

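The incentive map across the three rounds can be restated in a short sketch. This is a hypothetical summary of the bullets above (the round labels are invented), not code from the experiment.

```python
# Hypothetical summary of the three round incentives described above.
def round_payoffs(donation, round_name):
    """Return (red_cross_donation, participant_bonus, publicly_shared)."""
    if round_name == "anonymous_effort":
        return donation, 0.0, False
    if round_name == "anonymous_paid_effort":
        return donation, donation, False   # private bonus equals the donation
    if round_name == "publicly_shared_effort":
        return donation, 0.0, True         # performance revealed after the study
    raise ValueError(f"unknown round: {round_name}")
```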
Participants are told there is a 10% chance they will be given the Choose Your Visibility option (described below). Otherwise, one of the three rounds above will be randomly chosen to count, and participants will receive the corresponding incentive.

We next describe how their effort will be publicly-shared with others. We inform them that after all participants have completed the study, they will receive a link to view the pictures and contributions raised for the Red Cross of all participants assigned to have their effort publicly shared with others. We inform them that we will also include their score in the button-pressing task and their rank relative to other participants whose effort is publicly-shared. In the Berkeley and BU pools, we inform participants we will also include their name.

We then have participants take a picture using their webcam, with the option to upload a picture instead, for use if they are assigned to be publicly-recognized. We inform them that we will reject their submission if their uploaded picture does not match the one from the webcam, or if their picture includes anything inappropriate. For the Berkeley and BU pools, we also elicit their name.

We then inform participants that we will elicit their willingness to pay for receiving (or avoiding) having their effort publicly-shared with others, which we do using a combination of the strategy method and the Becker-DeGroot-Marschak (BDM) elicitation method. The strategy-method component contains 18 questions asking participants to state whether they want their effort publicly-shared with others for different intervals of contributions raised for the Red Cross, and then eliciting, for each question, how much they are willing to pay (between $0 and $10 for Prolific, and between $0 and $25 for the other pools) to guarantee that their choice is implemented (the BDM component). The categories of possible contributions in the Berkeley and BU pools are: $0.00-$0.50, $0.50-$1.00, …, $8.50 or more. For the Prolific pool, we multiply these cutoffs by 0.4 and use the intervals $0.00-$0.20, $0.20-$0.40, …, $3.40 or more.
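The interval schedule is mechanical enough to reconstruct. The sketch below (hypothetical helper name) builds the 18 categories for the Berkeley/BU pools and, by scaling the cutoffs by 0.4, for the Prolific pool.

```python
# Sketch of the 18 strategy-method contribution intervals described above.
def contribution_intervals(scale=1.0, step=0.50, top=8.50):
    """Berkeley/BU: scale=1.0 -> $0.00-$0.50, ..., $8.50 or more.
    Prolific:      scale=0.4 -> $0.00-$0.20, ..., $3.40 or more."""
    cutoffs = [round(i * step * scale, 2) for i in range(int(top / step) + 1)]
    intervals = list(zip(cutoffs, cutoffs[1:]))    # 17 bounded intervals
    intervals.append((cutoffs[-1], None))          # the open-ended "or more" category
    return intervals

assert len(contribution_intervals()) == 18
assert contribution_intervals(scale=0.4)[-1] == (3.4, None)
```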

Each question has the following structure: “If you raise ${N} for the Red Cross, do you want your effort publicly shared with others?” Participants are then asked how much of their budget they are willing to use to have their choice implemented: “How much of your $25 [$10 for Prolific] budget would you be willing to use to have your effort be anonymous if you raise ${N}?”, where “be anonymous” is replaced by “be publicly-shared with others” when that matches the participant's stated preference.

We explain this to participants by first telling them it is in their best interest to answer truthfully; the details are then explained in simple, plain language. To minimize any negative inferences that could be drawn about those not in the social recognition group, we guarantee that the BDM responses will be used to determine assignment with only a 10% chance. We tell participants that with 10% chance they will have the Choose Your Visibility option, in which their choices determine whether their effort will be publicly-shared or not. If this option is chosen, we inform participants that we will randomly choose one of the three rounds and use it to determine their Red Cross donation.

We then explain to participants that at the end of the study, a computer will randomly choose one of the three rounds, check how much money they raised for the Red Cross in that round, and match it with their answers. With 50% chance, they receive a $25 bonus and their preferred choice of having their effort publicly-shared or kept anonymous. Otherwise, with 50% chance, a computer randomly chooses a number between $0 and $25. If the computer chooses a value less than or equal to the amount they were willing to use from their $25 budget, their preferred choice is implemented and they receive a bonus of $25 minus the number chosen by the computer. If the computer chooses a value greater than that amount, their preferred choice is NOT implemented and they receive the full $25 bonus. For the Prolific pool, we use a $10 budget instead.
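As a concrete illustration, the resolution rule just described can be written out as a small simulation. This is a sketch under the stated parameters ($25 budget, or $10 on Prolific); the function and variable names are hypothetical.

```python
import random

# Hypothetical simulation of the resolution rule described above.
def resolve_visibility(bid, prefers_public, budget=25.0):
    """Return (publicly_shared, bonus) given a stated preference and a BDM bid."""
    if random.random() < 0.5:
        # 50% branch: preferred choice implemented for free, full bonus paid
        return prefers_public, budget
    draw = random.uniform(0.0, budget)
    if draw <= bid:
        # the computer's "price" is at or below the bid: choice implemented,
        # and the participant pays the draw, not the bid (second-price logic)
        return prefers_public, budget - draw
    # draw exceeds the bid: the opposite of the stated choice is implemented
    return not prefers_public, budget
```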

From the subjects' perspective, this procedure is equivalent to a second-price sealed-bid auction against an unknown bidder. Note that the unconditional 50% chance of having the stated choice implemented is necessary: without it, participants whose random draw exceeds their bid would never have their stated choice implemented, so they would have an incentive to misrepresent their true preferences in the first, “yes/no” part of the elicitation and then simply bid zero.
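This claim can be checked with a short expected-payoff calculation. The sketch below is hypothetical: it assumes a participant who values receiving their preferred visibility outcome at v dollars, and `p_free` denotes the probability of the unconditional implementation branch (0.5 in the design, 0 in the counterfactual without it).

```python
# Hypothetical expected-payoff check of the incentive argument above.
def expected_payoff(bid, v, truthful, B=25.0, p_free=0.5):
    """Expected bonus plus visibility value for a given bid and reporting strategy."""
    b = min(max(bid, 0.0), B)
    if truthful:
        free = B + v                              # stated choice = true preference
        bdm = B + (b * v - b**2 / 2) / B          # pay the draw when draw <= bid
    else:
        free = B                                  # stated choice = the opposite
        bdm = B + ((B - b) * v - b**2 / 2) / B    # draw > bid yields true preference
    return p_free * free + (1 - p_free) * bdm

v = 8.0
print(round(expected_payoff(v, v, truthful=True), 2))                 # 29.64: truth wins
print(round(expected_payoff(0.0, v, truthful=False), 2))              # 29.0
print(round(expected_payoff(v, v, truthful=True, p_free=0.0), 2))     # 26.28: but without
print(round(expected_payoff(0.0, v, truthful=False, p_free=0.0), 2))  # 33.0: the 50% branch,
                                                                      # misreporting dominates
```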

After explaining the Choose Your Visibility option, we ask an attention-check question, in which participants are instructed to leave the question blank and advance to the next screen. We plan to exclude those who fail the attention check from the main analysis.

Because others' behavior plays a crucial role in social recognition payoffs, we then inform participants about how well participants performed in the Publicly-Shared Effort round in a pilot study. For the Berkeley and BU pools, we use information from a pilot on Berkeley undergraduates; for the Prolific pool, we use information from a pilot on Prolific. We inform participants of the average performance, the inter-quartile range, and the number of participants in the pilot study. We also provide them a link to view a CDF of past performances.

For the Berkeley and Prolific pools, we also tell participants that we will divide them into groups, and we tell them how many participants are in their group and thus the number of people whose effort will be publicly-shared. Groups are formed randomly after the study concludes. For the Berkeley pool, we inform participants that the group size is approximately 75, and that approximately 25 will have their effort publicly-shared with others. For the Prolific pool, we randomize group size, as discussed in the “Experiment Characteristics” section. For the BU pool, we inform them that their class (MS222 or MS221B) will be one group, and we do not give them information on the number of participants or the number whose effort will be publicly-shared with others.

After answering the willingness-to-pay questions, participants complete the three rounds in a randomly chosen order. In each round, we remind them of the incentive. In the Publicly-Shared Effort round, we show them an image of what might be shown to other participants.
We then ask participants demographic questions, and randomly assign them to have their effort publicly-shared or not, their bonus compensation, and which round counts for their Red Cross contribution. After the end of the study, we send all participants a link to view the performance information of all participants assigned to have their effort publicly-shared with others.
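Putting the pieces together, the end-of-study assignment can be sketched as follows. The 10% probability and the round structure are as stated above; everything else (names, return format) is hypothetical.

```python
import random

# Hypothetical sketch of the end-of-study assignment described above.
def final_assignment(donations_by_round):
    """donations_by_round maps round name -> Red Cross money raised in that round."""
    counted = random.choice(list(donations_by_round))   # the round that counts
    if random.random() < 0.10:
        # Choose Your Visibility: the WTP answers determine visibility,
        # with the counted round fixing the Red Cross donation
        return counted, donations_by_round[counted], "choose_your_visibility"
    # otherwise the counted round's own incentive (and visibility rule) applies
    return counted, donations_by_round[counted], "round_incentive"
```
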
Experimental Design Details
Randomization Method
Randomization will be done through Qualtrics
Randomization Unit
Individual
Was the treatment clustered?
No

Experiment Characteristics

Sample size: planned number of clusters
We will aim to recruit 1005 participants for the Prolific pool, 300-400 participants for the Berkeley pool, and 300-400 participants from MS222 and potentially MS221B.

We will only recruit participants who have a working webcam on the computer they use to take the study. Participants must take the study on a laptop or personal computer using either the Chrome or Firefox browser.

For the Prolific pool, we will only recruit participants with a 95%+ approval rating and at least 15 completed past studies.
Sample size: planned number of observations
Same as above
Sample size (or number of clusters) by treatment arms
This is a within-subject design, and thus each participant experiences all of the treatments described in the "Experimental Design" section.

For the Prolific pool, we will randomize the size of the group into which the participants will be split, and thus the number of participants whose effort will be publicly-shared; a sketch of this assignment follows the list below.
• Approximately 300 participants will be told that approximately 100 of 300 participants will have their effort be publicly shared with others (i.e., there will be one group with approximately 300 participants).
• Approximately 450 participants will be told that approximately 25 of 75 participants will have their effort be publicly shared with others (i.e., there will be six groups with approximately 75 participants each).
• Approximately 255 participants will be told that approximately 5 of 15 participants will have their effort be publicly shared with others (i.e., there will be 17 groups of approximately 15 students each).
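A sketch of this group-size randomization (hypothetical helper; it assumes exactly 1005 recruited Prolific participants and that roughly one third of each group is publicly shared, matching the counts above):

```python
import random

# Hypothetical sketch of the Prolific group-size randomization described above.
def assign_groups(participant_ids):
    """Split 1005 participants into 1 group of 300, 6 of 75, and 17 of 15."""
    ids = list(participant_ids)
    random.shuffle(ids)
    arms = [(300, 1), (75, 6), (15, 17)]   # (group size, number of groups); 1005 total
    groups, start = [], 0
    for size, n_groups in arms:
        for _ in range(n_groups):
            members = ids[start:start + size]
            shared = set(random.sample(members, k=size // 3))  # 100, 25, or 5 shared
            groups.append({"size": size, "members": members, "shared": shared})
            start += size
    return groups
```
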
Minimum detectable effect size for main outcomes (accounting for sample design and clustering)
IRB

Institutional Review Boards (IRBs)

IRB Name
Boston University
IRB Approval Date
2020-03-19
IRB Approval Number
5473X
IRB Name
UC Berkeley
IRB Approval Date
2020-03-26
IRB Approval Number
2020-01-1288
Analysis Plan

There is information in this trial unavailable to the public.

Post-Trial

Post Trial Information

Study Withdrawal

There is information in this trial unavailable to the public.

Intervention

Is the intervention completed?
No
Data Collection Complete
Data Publication

Data Publication

Is public data available?
No

Program Files

Program Files
Reports, Papers & Other Materials

Relevant Paper(s)

Reports & Other Materials