Experimental Design
The design below is the same for all three pools, unless otherwise noted.
After obtaining consent, we first give participants instructions on the button-pressing task they are about to complete. They are to alternate pressing the “a” key and then the “b” key on their computer for up to five minutes, with no minimum. They earn a point, and money for charity, every time they press the “a” key and then the “b” key. Participants then practice the task for up to 30 seconds.
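For concreteness, a minimal sketch of one plausible scoring rule is below (in Python); how stray or out-of-order key presses are handled is our assumption, not a detail specified in the instructions.

```python
def count_points(keypresses):
    """Hypothetical scoring sketch: one point for each completed
    'a'-then-'b' press; other keys and out-of-order presses are ignored."""
    points = 0
    expecting = "a"
    for key in keypresses:
        if key == expecting:
            if key == "b":
                points += 1
            expecting = "b" if key == "a" else "a"
    return points

print(count_points(list("ababxab")))  # 3 points under this assumed rule
```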
We then give participants an outline of the structure of the experiment. They are to complete the task in three rounds, the order of which is randomly chosen. Each round corresponds to a different incentive.
• Anonymous Effort Round: participants raise money for the Red Cross (2 cents per 10 points in the Prolific pool, 5 cents per 10 points in the other pools). Their performance remains anonymous.
• Anonymous and Paid Effort Round: participants raise money for the Red Cross (2 cents per 10 points in the Prolific pool, 5 cents per 10 points in the other pools). They also earn bonus compensation for themselves that is the same as their Red Cross contribution. Their performance remains anonymous.
• Publicly-Shared Effort Round: participants raise money for the Red Cross (2 cents per 10 points in the Prolific pool, 5 cents per 10 points in the other pools). Their performance in the task, along with their picture (and, for the non-Prolific pools, their name), is revealed to other participants after the conclusion of the study. (A sketch of the payout arithmetic follows this list.)
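The rates above imply the following payout arithmetic per round. This is an illustrative sketch rather than the study's code; in particular, paying partial blocks of 10 points pro rata is our assumption.

```python
# Dollars per 10 points, as stated above.
RATE_PER_10_POINTS = {"Prolific": 0.02, "Berkeley": 0.05, "BU": 0.05}

def round_payout(points, pool, round_name):
    """Return (Red Cross donation, own bonus) in dollars for one round.
    Pro-rata payment for partial blocks of 10 points is an assumption."""
    donation = points / 10 * RATE_PER_10_POINTS[pool]
    # Only the Anonymous and Paid Effort round pays the participant a bonus
    # equal to their Red Cross contribution.
    bonus = donation if round_name == "Anonymous and Paid Effort" else 0.0
    return donation, bonus

# Example: 400 points on Prolific in the Anonymous and Paid Effort round.
print(round_payout(400, "Prolific", "Anonymous and Paid Effort"))  # (0.8, 0.8)
```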
Participants are told there is a 10% chance they will be given the Choose Your Visibility option (described below). Otherwise, one of the three rounds above will be randomly chosen to count, and participants will receive the corresponding incentive.
We next describe how their effort will be publicly shared with others. We inform them that after all participants have completed the study, they will receive a link to view the pictures and Red Cross contributions of all participants assigned to have their effort publicly shared with others. We inform them that we will also include their score in the button-pressing task and their rank relative to other participants whose effort is publicly shared. In the Berkeley and BU pools, we inform participants that we will also include their name.
We then have participants take a picture using their webcam, and give them the option to instead upload a picture to be used if they are assigned to be publicly recognized. We inform them that we will reject their submission if the uploaded picture does not match the one taken on the webcam, or if their picture includes anything inappropriate. For the Berkeley and BU pools, we also elicit their name.
We then inform participants that we will elicit their willingness to pay for receiving (or avoiding) having their effort publicly shared with others, which we do using a combination of the strategy method and the Becker-DeGroot-Marschak elicitation method (BDM). The strategy-method component contains 18 questions asking participants to state whether they want their effort publicly shared with others for different intervals of contributions raised for the Red Cross; for each question, we then elicit how much they are willing to pay (between $0 and $10 for Prolific, and between $0 and $25 for the other pools) to guarantee that their choice is implemented (the BDM component). The categories of possible contributions are the following in the Berkeley and BU pools: $0.00-$0.50, $0.50-$1.00, …, $8.50 or more. For the Prolific pool, we multiply these amounts by 0.4 and use the intervals $0.00-$0.20, $0.20-$0.40, …, $3.40 or more.
Each question has the following structure: “If you raise ${N} for the Red Cross, do you want your effort publicly shared with others?” Participants are then asked how much of their budget they are willing to use to have their choice implemented: “How much of your $25 [$10 for Prolific] budget would you be willing to use to have your effort be anonymous if you raise ${N}?”, where “be anonymous” is replaced with “be publicly shared with others” when that matches their stated preference.
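To illustrate how the 18 contribution categories and the question wording fit together, here is a short sketch; the function names and the programmatic substitution of the interval for ${N} are ours, not the study's materials.

```python
def contribution_intervals(pool):
    """Build the 18 contribution categories described above:
    17 bounded intervals plus an open-ended top category."""
    step = 0.20 if pool == "Prolific" else 0.50  # Prolific amounts are 0.4x the others
    labels = [f"${i * step:.2f}-${(i + 1) * step:.2f}" for i in range(17)]
    labels.append(f"${17 * step:.2f} or more")
    return labels

def visibility_question(label):
    # Mirrors the question structure quoted above, with the interval in place of ${N}.
    return f"If you raise {label} for the Red Cross, do you want your effort publicly shared with others?"

labels = contribution_intervals("Berkeley")
print(len(labels))                 # 18
print(labels[0], labels[-1])       # $0.00-$0.50 $8.50 or more
print(visibility_question(labels[0]))
```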
We explain this elicitation to participants by first telling them that it is in their best interest to answer truthfully. The details are then explained in simple and plain language. To minimize any negative inferences that could be drawn about those not in the social recognition group, we guarantee that the BDM responses are used to determine assignment with only a 10% chance. We tell participants that with 10% chance they will have the Choose Your Visibility option, in which their choices determine whether their effort will be publicly shared or not. If this option is chosen, we inform participants that we will randomly choose one of the three rounds and use it to determine their Red Cross donation.
We then explain to participants that at the end of the study, a computer will randomly choose one of the three rounds, check how much money they raised for the Red Cross in that round, and then match it with their answers. With 50% chance, they will receive a $25 bonus and their preferred choice of having their effort publicly shared or kept anonymous. Otherwise, with 50% chance, a computer will randomly choose a number between $0 and $25. If the computer chooses a value less than or equal to the amount they were willing to use from their $25 budget, their preferred choice is implemented and they receive a bonus of $25 minus the number chosen by the computer. If the computer chooses a value greater than the amount they were willing to use, their preferred choice is NOT implemented and they receive a $25 bonus. Again, for the Prolific pool we use a $10 budget instead.
From subjects' perspective, this procedure is equivalent to a second-price sealed-bid auction against an unknown bidder. Note that providing a 50-50 chance of receiving the desired option even when the random draw is above the participant's bid is necessary: otherwise, participants would have an incentive to misrepresent their true preferences in the first, “yes/no” part of the elicitation and then simply bid zero.
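To make this incentive argument concrete, the following rough simulation adds an assumption that is not part of the design: a participant who values obtaining their preferred visibility outcome at v dollars. It re-implements the resolution rule described above and compares truthful reporting against misreporting the yes/no answer and bidding zero, with and without the 50-50 arm.

```python
import random

BUDGET = 25.0  # $10 for the Prolific pool

def expected_payoff(value, report_truthfully, bid, with_fifty_fifty_arm=True, n=200_000):
    """Approximate expected payoff (cash bonus plus `value` if the participant
    ends up with their truly preferred visibility outcome). `value` is an
    assumed dollar valuation; it is not elicited directly in the design."""
    total = 0.0
    for _ in range(n):
        if with_fifty_fifty_arm and random.random() < 0.5:
            # Stated choice implemented for free, full bonus.
            got_preferred, bonus = report_truthfully, BUDGET
        else:
            price = random.uniform(0.0, BUDGET)
            if price <= bid:
                got_preferred, bonus = report_truthfully, BUDGET - price
            else:
                got_preferred, bonus = not report_truthfully, BUDGET
        total += bonus + (value if got_preferred else 0.0)
    return total / n

v = 5.0
print(expected_payoff(v, True, v))            # truthful report, bid = v:  ~27.75
print(expected_payoff(v, False, 0.0))         # misreport, bid zero:       ~27.50
print(expected_payoff(v, True, v, False))     # no 50-50 arm, truthful:    ~25.50
print(expected_payoff(v, False, 0.0, False))  # no 50-50 arm, misreport:   ~30.00
```

Under this stylized valuation, truthful reporting does at least as well as misreporting-and-bidding-zero when the 50-50 arm is present, while without it the misreporting strategy would pay strictly more, which is the rationale stated above.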
After explaining the Choose Your Visibility option, we ask an attention-check question in which participants are instructed to leave the question blank and advance to the next screen. We plan to exclude those who fail the attention check from the main analysis.
Because others' behavior plays a crucial role in social recognition payoffs, we then inform participants how well subjects performed in the Publicly-Shared Effort round of a pilot study. For the Berkeley and BU pools, we use information from a pilot with Berkeley undergraduates; for the Prolific pool, we use information from a pilot on Prolific. We inform participants of the average performance, the inter-quartile range, and the number of participants in the pilot study. We also provide them a link to view a CDF of past performances.
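A small sketch of how the reported pilot statistics could be computed from a vector of pilot scores; the use of numpy and the function names are ours.

```python
import numpy as np

def pilot_summary(pilot_scores):
    """Summary statistics shown to participants: average performance,
    inter-quartile range, and number of pilot participants."""
    scores = np.asarray(pilot_scores, dtype=float)
    q25, q75 = np.percentile(scores, [25, 75])
    return {
        "mean": scores.mean(),
        "interquartile_range": (q25, q75),
        "n_participants": scores.size,
    }

def empirical_cdf(pilot_scores):
    """Points of the empirical CDF of past performances (for the linked plot)."""
    scores = np.sort(np.asarray(pilot_scores, dtype=float))
    return scores, np.arange(1, scores.size + 1) / scores.size
```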
For the Berkeley and Prolific pools, we also tell participants that we will divide them into groups and tell them how many participants are in their group, and thus how many people's effort will be publicly shared. The groups are formed randomly after the study. For the Berkeley pool, we inform participants that the group size is approximately 75, and that approximately 25 will have their effort publicly shared with others. For the Prolific pool, we randomize group size, as discussed in the “Experiment Characteristics” section. For the BU pool, we inform them that their class (MS222 or MS221B) will form one group, and do not give them information on the number of participants or the number whose effort will be publicly shared with others.
After answering the willingness-to-pay questions, participants complete the three rounds in a randomly chosen order. In each round, we remind them of the incentive. In the Publicly-Shared Effort round, we show them an example of the image that might be shared with other participants.
We then ask participants demographic questions, and randomly determine whether their effort is publicly shared, their bonus compensation, and which round counts for their Red Cross contribution. After the end of the study, we send all participants a link to view the performance information of all participants assigned to have their effort publicly shared with others.