Red or Blue Pill? A Positive Welfare Analysis

Last registered on August 17, 2023


Trial Information

General Information

Red or Blue Pill? A Positive Welfare Analysis
Initial registration date
July 27, 2023


First published
August 09, 2023, 3:27 PM EDT


Last updated
August 17, 2023, 7:17 PM EDT




Primary Investigator

Carnegie Mellon University - Social and Decision Sciences

Other Primary Investigator(s)

PI Affiliation
Stanford - Economics

Additional Trial Information

In development
Start date
End date
Secondary IDs
Prior work
This trial does not extend or rely on any prior RCTs.
Modern welfare economics defers to revealed preference when assessing an individual's welfare, i.e., option A is deemed better than option B if, given the choice, the individual would choose A over B. However, in some cases, such choice data is difficult or even impossible to obtain. Consider, for example, the welfare of a taxpayer whose tax dollars may either go to what he considers a good or bad cause. Supposing there is no way for the taxpayer to learn where the tax dollars went, does the way the government spends them matter for the individual's welfare? Note that we cannot ask the taxpayer in an incentive-compatible way which cause he prefers while keeping him ignorant of which cause is ultimately chosen.

In this experiment, we use a choosing-for-others framework to study the welfare consequences of satisfying someone's (call him Alex) preferences in a paradigm similar to the example above. We study how much of Alex's surprise bonus participants are willing to give up for his preference to be satisfied (i.e., a willingness-to-pay measure denominated in Alex's bonus). We do so for two cases: when Alex will learn whether his preference is satisfied and when he will not. We also vary Alex's expectation, i.e., whether he believes it is likely (or not) that his preference is satisfied, and we tell participants about this expectation. Assuming that participants choose for others as they would choose for themselves, this design sheds light on the welfare consequences of choice problems in which preference satisfaction and beliefs do not move in tandem (i.e., in which preferences can be satisfied while beliefs remain fixed).
External Link(s)

Registration Citation

Arrieta, Gonzalo R. and Lukas Bolte. 2023. "Red or Blue Pill? A Positive Welfare Analysis." AEA RCT Registry. August 17.
Experimental Details


Intervention Start Date
Intervention End Date

Primary Outcomes

Primary Outcomes (end points)
WTP (in terms of reducing Alex's surprise bonus) for Alex's preference to be satisfied, both for the case in which Alex learns whether it is satisfied and for the case in which he does not.
Primary Outcomes (explanation)

Secondary Outcomes

Secondary Outcomes (end points)
Three (unincentivized) questions that also capture welfare consequences of similar situations (1. Nozick's Experience Machine, 2. Unknown tax-funded relief effort, 3. Andy Warhol drawings) and correlations of these questions with the primary outcomes.
Secondary Outcomes (explanation)

Experimental Design

Experimental Design
We recruit participants on Prolific. We provide experimental instructions and verify understanding through comprehension questions. We assign participants to treatments that vary Alex's beliefs about whether he gets Object A instead of Object B. For each treatment, we elicit participants' WTP for both cases: when Alex learns which object he got and when he does not.
Experimental Design Details
In our experiment, participants make decisions for another participant, whom we call Alex. There are two objects over which Alex has preferences. Object A: "Discovering Prices" (Paul Milgrom) and "Who Gets What and Why?" (Al Roth), both with original handwritten notes from the authors addressed to Alex. Object B: the same two books with fake notes. Alex strictly prefers Object A to Object B, and this is known to participants. Alex will receive Object A or B with certainty, and we may or may not tell him which object he got. Once he has received it, it is impossible for him, or anyone else, to tell which object it is. Moreover, Alex receives a surprise bonus. All of this is known to the participants.

We elicit how much participants are willing to reduce Alex's surprise bonus (henceforth, WTP) for him to get Object A instead of Object B in two cases: when he knows which object he gets and when he does not. We elicit this WTP by asking participants the following twelve questions:

Which books do you prefer Alex to receive?
...the ones with the original notes and $1 or with the fake notes?
...the ones with the original notes or with the fake notes?
...the ones with the original notes or the ones with the fake notes and $1?
...the ones with the original notes or the ones with the fake notes and $2?
...the ones with the original notes or the ones with the fake notes and $3?
...the ones with the original notes or the ones with the fake notes and $4?
...the ones with the original notes or the ones with the fake notes and $5?
...the ones with the original notes or the ones with the fake notes and $7?
...the ones with the original notes or the ones with the fake notes and $10?
...the ones with the original notes or the ones with the fake notes and $15?
...the ones with the original notes or the ones with the fake notes and $25?
...the ones with the original notes or the ones with the fake notes and $50?
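The price list above pins down each participant's WTP as an interval between the last bonus amount at which they still choose the original-notes books and the first amount at which they switch. A minimal sketch of that inference, assuming choices are coded as "A" (original notes) or "B" (fake notes) in question order, with the first question treated as a $1 bonus on the A side (the function name and coding are our own illustration, not the study's analysis code):

```python
# Hypothetical sketch: infer a participant's WTP (in $ of Alex's bonus)
# from the twelve binary price-list choices. Bonus attached to option B,
# by question order:
#   Q1:  A + $1 vs B            -> coded as -1 (bonus on the A side)
#   Q2:  A vs B                 -> 0
#   Q3-Q12: A vs B + $x for x in 1, 2, 3, 4, 5, 7, 10, 15, 25, 50
BONUS_LADDER = [-1, 0, 1, 2, 3, 4, 5, 7, 10, 15, 25, 50]

def wtp_interval(choices):
    """Return (lo, hi), the interval containing the switch point.

    Assumes monotone responses (at most one switch from "A" to "B");
    returns None for non-monotone response patterns.
    """
    if len(choices) != len(BONUS_LADDER):
        raise ValueError("expected one choice per price-list question")
    try:
        switch = choices.index("B")  # first question where B is chosen
    except ValueError:
        return (BONUS_LADDER[-1], float("inf"))   # always prefers A
    if "A" in choices[switch:]:
        return None                                # multiple switch points
    if switch == 0:
        return (float("-inf"), BONUS_LADDER[0])    # always prefers B
    return (BONUS_LADDER[switch - 1], BONUS_LADDER[switch])

# Example: prefers A until B comes with $7 or more -> interval (5, 7).
wtp_interval(["A"] * 7 + ["B"] * 5)
```

Censored responses (never switching, or switching before the first question) are returned as half-open intervals rather than point estimates.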

We also elicit the WTP for the case when Alex will not know which object he gets, varying Alex's beliefs about the likelihood that he gets Object A (these beliefs are known to the participants).

We also include unincentivized, open-ended elicitations of the reasons for participants' decisions. Human coders classify participants by whether they understood the experimental paradigm, and we analyze the data by this classification.

We correlate elicited WTPs with participants' responses to three (unincentivized) questions that also capture welfare consequences of similar situations.

1. Nozick's Experience Machine

Suppose there was an experience machine that would give you any experience you desired (eating good food, having a successful career, making meaningful connections, etc.). While in the machine, you would not know that you are in it; you would think that what you are experiencing is actually happening.

Would you go into the machine?

Answer options: "Yes" and "No"

2. Unknown tax-funded relief effort

A small town in Arkansas experiences massive flooding, leaving many families homeless. To provide financial relief to the impacted families, the government raises taxes, including a $100 levy on John. In general, John dislikes paying taxes, but he would gladly contribute $100 to the relief effort if he knew about the flood. However, he never learns about the flooding or the relief effort.

Does the government raising taxes to provide financial relief make John better or worse off?

Answer options: "Better off" and "Worse off"

3. Andy Warhol drawings

An art collective purchased an original Andy Warhol drawing, worth $20k, and copied it 999 times. The copies were carefully created so that not even their creators can tell them apart from the original drawing. The collective then mixed the original together with the copies and sold the 1,000 drawings for $250 each.

Someone got the original Andy Warhol drawing without knowing about it. Is this person better off by getting the original one instead of a copy?

Answer options: "Yes" and "No"
Randomization Method
All randomizations are done using a computer.
Randomization Unit
Was the treatment clustered?

Experiment Characteristics

Sample size: planned number of clusters
The experiment is not clustered.
Sample size: planned number of observations
1,500 participants.
Sample size (or number of clusters) by treatment arms
500 participants in the treatment in which we do not induce a particular belief for Alex.
1,000 participants in the treatments that vary Alex's belief.
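Since randomization is computerized and the arm sizes are fixed (500 in the no-induced-belief arm, 1,000 across the belief-variation arms), the assignment can be sketched as a quota-respecting shuffle. The arm labels, the two-way split of the belief arms, and the seed are our own illustrative assumptions, not details from the registration:

```python
import random

# Hypothetical arm names and sizes; the registration specifies only
# 500 no-induced-belief participants and 1,000 belief-variation
# participants, so the split below is an assumption for illustration.
ARMS = {
    "no_induced_belief": 500,
    "belief_low": 500,
    "belief_high": 500,
}

def assign_treatments(participant_ids, seed=0):
    """Randomly assign each participant to an arm, respecting arm quotas."""
    pool = [arm for arm, n in ARMS.items() for _ in range(n)]
    rng = random.Random(seed)       # fixed seed for a reproducible sketch
    rng.shuffle(pool)
    return dict(zip(participant_ids, pool))

assignment = assign_treatments(range(1500))
```

Shuffling a pool with exactly the quota of each label guarantees the planned arm sizes, unlike independent per-participant draws.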
Minimum detectable effect size for main outcomes (accounting for sample design and clustering)

Institutional Review Boards (IRBs)

IRB Name
Stanford University
IRB Approval Date
IRB Approval Number
IRB Name
Carnegie Mellon University
IRB Approval Date
IRB Approval Number


Post Trial Information

Study Withdrawal

There is information in this trial unavailable to the public.


Is the intervention completed?
Data Collection Complete
Data Publication

Data Publication

Is public data available?

Program Files

Program Files
Reports, Papers & Other Materials

Relevant Paper(s)

Reports & Other Materials