Donating for Life-saving Medical Treatments: Need-Based or Merit-Based

Last registered on August 10, 2023

Pre-Trial

Trial Information

General Information

Title
Donating for Life-saving Medical Treatments: Need-Based or Merit-Based
RCT ID
AEARCTR-0011874
Initial registration date
August 01, 2023

First published
August 10, 2023, 12:52 PM EDT

Locations

Region

Primary Investigator

Affiliation
Boston University

Other Primary Investigator(s)

Additional Trial Information

Status
In development
Start date
2023-08-03
End date
2023-09-15
Secondary IDs
Prior work
This trial does not extend or rely on any prior RCTs.
Abstract
This paper studies how donors respond to merit and need when giving to families facing unaffordable medical expenses. Using data from a leading Chinese crowdfunding platform, I find that campaigns receive more donations if fundraisers report attending higher-ranked colleges. By controlling for textual characteristics and exploiting within-donor variation at the transaction level, I show that donors favor patients who attend better colleges, and moderately favor ill family members of a college-attending fundraiser. Consistent with donors' limited information about college rank, the residual rank effect is more pronounced for top institutions and muted for out-of-province ones.

I design an online RCT to elicit respondents' willingness to donate to 16 synthetic fundraising stories. The college tier of the patients and the requested donation amounts will be independently randomized. Incentives a la Kessler et al. (2019) will be provided to ensure truthful reporting. I expect that respondents with better knowledge of college rankings will be more responsive to patients' college attendance.
External Link(s)

Registration Citation

Citation
Chen, Junhao. 2023. "Donating for Life-saving Medical Treatments: Need-Based or Merit-Based." AEA RCT Registry. August 10. https://doi.org/10.1257/rct.11874-1.0
Experimental Details

Interventions

Intervention(s)
In each fundraising vignette shown to respondents, I randomize story components while keeping the body of the vignette constant. The randomized components are 1) the college tier and 2) the amount of medical expenses incurred or required. Treatments are randomized at the question-respondent level.
Intervention (Hidden)
Intervention Start Date
2023-08-04
Intervention End Date
2023-09-14

Primary Outcomes

Primary Outcomes (end points)
Self-reported willingness to donate to each synthetic fundraising vignette with randomized components (college tier and expenses incurred)
Primary Outcomes (explanation)

Secondary Outcomes

Secondary Outcomes (end points)
Secondary Outcomes (explanation)

Experimental Design

Experimental Design
This RCT aims to identify whether people prefer helping those with merit and/or those in need. I formulate 16 synthetic vignettes based on actual fundraising stories. Within each vignette, I randomize two story components: the college tier and the amount of medical expenses incurred or required. Treatments are randomized at the question-respondent level. I then elicit each respondent's willingness to donate to the patient in the vignette by asking directly how much they would like to donate. Respondents report a willingness to donate between CNY 0 and 20 by moving a slider. An incentive scheme a la incentivized resume rating (Kessler et al. 2019) is designed to ensure attention and truthful reporting.
Experimental Design Details
I construct the vignettes in the following steps:
- Randomly select a subset of candidate stories from a pool of 4,000 scraped stories.
- Exclude cases that are too long or too complicated, cases that are not simultaneously compatible with both a top-ranked and a low-ranked university, and cases with obvious factual mistakes.
- Summarize each story with ChatGPT to 300-400 Chinese characters (180-250 words); this improves objectivity and saves labor.
- Manually modify and proofread each ChatGPT summary to anonymize it, improve realism and writing coherence, and correct factual mistakes and grammar errors.
- Add a description of (expected or realized) expenses/debt if it is not in the ChatGPT output; replace the university name and the expense amount with the randomized components.
- Manually pick 16 vignettes from the 30 candidate vignettes to maximize variation in graduation/work status, gender, disease, and other aspects, both to keep respondents attentive and to allow for heterogeneity analysis.

Each respondent answers the 16 questions in a random sequence. A question is one vignette filled in with two components that are independently randomized at the participant level. For each question there are 9 possible content combinations, and each respondent sees one of them, as sketched below.
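
A minimal sketch of this assignment structure (Python, with hypothetical names; the actual draw is performed by the survey platform):

import itertools
import random

COLLEGE_TIERS = ["top_50", "rank_100_300", "rank_500_800"]            # merit treatment
EXPENSE_LEVELS = ["CNY 150K-300K", "CNY 350K-500K", "CNY 600K-800K"]  # need treatment

# The 3 x 3 = 9 possible content pairs for any one vignette.
CELLS = list(itertools.product(COLLEGE_TIERS, EXPENSE_LEVELS))

def assign_respondent(n_questions=16, rng=random):
    # One independent uniform draw per question, i.e. question-respondent level.
    return [rng.choice(CELLS) for _ in range(n_questions)]

# assign_respondent() -> a list of 16 (college tier, expense level) pairs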

The MERIT TREATMENT consists of 3 tiers of 4-year colleges: 1) ranked in the top 50, 2) ranked 100-300, and 3) ranked 500-800. To satisfy the information-invariance criterion and enhance consistency within a vignette, I impose restrictions when randomizing the school name/tier: within each vignette, the 3 candidate colleges are from the same province and have the same university type. Whenever possible, I assign a college triplet with the same province and type as the original story on which the vignette is based. A college appears only once in the survey. (Less developed provinces with few prominent universities are thus essentially excluded.)

University types mentioned above include 1) comprehensive universities, 2) science, technology, and engineering universities, and 3) non-science-and-engineering universities, which include normal universities and universities that focus on finance & business or the liberal arts.

To prevent conflicts between college background and the work experiences described in the stories, and to prevent respondents from inferring major, job, or income from college names, I exclude medical schools; institutions focused on agriculture, forestry, sports, police, or aviation; and institutions partnering with a foreign university. I also exclude schools in the least developed remote provinces (Tibet, Qinghai, Xinjiang). A sketch of the resulting triplet constraints follows.
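
The sketch below illustrates the triplet constraints, assuming a hypothetical pool of college records with name, rank, province, and type fields (the actual college list is not part of this registration):

TIER_BOUNDS = {"top": (1, 50), "mid": (100, 300), "low": (500, 800)}
EXCLUDED_PROVINCES = {"Tibet", "Qinghai", "Xinjiang"}

def build_triplet(pool, province, ctype, used):
    # Pick one eligible college per tier, all sharing the vignette's
    # province and university type, and never reuse a college that
    # already appears elsewhere in the survey.
    triplet = {}
    for tier, (lo, hi) in TIER_BOUNDS.items():
        candidates = [name for (name, rank, prov, typ) in pool
                      if lo <= rank <= hi
                      and prov == province and prov not in EXCLUDED_PROVINCES
                      and typ == ctype and name not in used]
        if not candidates:
            return None  # the province lacks a college in this tier
        triplet[tier] = candidates[0]
        used.add(candidates[0])
    return triplet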

The NEED TREATMENT consists of 3 categories of expenses needed or spent: CNY 150K-300K / 350K-500K / 600K-800K (USD 30K-110K). The three categories roughly correspond to 0.6, 1, and 1.5 times the average amount that is mentioned in the actual fundraising stories.

I take the average amount for each disease into account when formulating the expense triplets: for example, if the disease is costly, I fill in larger amounts within the category intervals. I also vary the wording and the specific amounts across vignettes for realism (see the sketch below). This treatment is designed to provide a benchmark for the size of the effect the RCT captures and to gauge how responsive readers are to the content in general.
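
A sketch of how one expense triplet could be filled in, assuming the 0.6 / 1 / 1.5 multipliers above and a hypothetical disease-specific average (the exact rounding and wording vary across vignettes):

CATEGORY_BOUNDS = [(150_000, 300_000), (350_000, 500_000), (600_000, 800_000)]
MULTIPLIERS = [0.6, 1.0, 1.5]

def expense_triplet(disease_avg_cny, round_to=10_000):
    # Scale the disease-specific average by each multiplier, clamp the
    # result into its category interval, and round for realism.
    triplet = []
    for m, (lo, hi) in zip(MULTIPLIERS, CATEGORY_BOUNDS):
        amount = min(max(m * disease_avg_cny, lo), hi)
        triplet.append(int(round(amount / round_to)) * round_to)
    return triplet

# e.g. a costly disease averaging CNY 450K:
# expense_triplet(450_000) -> [270000, 450000, 680000]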

At the end of the survey, I collect demographics, including gender, age, income, job, and past and current province of residence. I ask respondents what drives the variation in their reported willingness to donate, and how much they value the following aspects: educational background, college attended, perceived credibility, perceived deservedness, medical condition, perceived financial status, etc. Finally, I measure their perception/knowledge of college rankings. Respondents are expected to be responsive to college rank only if they have enough knowledge of it in the first place.

The survey is administered and distributed on Credamo.com, a professional survey platform similar to Prolific in the US that is widely used in academic and market research. Various measures are taken to guard against bots and inattentive respondents, ensuring response quality. Recruitment is also restricted as follows: one response per IP address, one response allowed within any 5 km radius, and respondents with low platform credit scores are excluded.

Respondents are recruited from Credamo's pool of 3 million people. A fee of CNY 8 is paid to each respondent upon finishing the survey and passing the attention checks.
Randomization Method
Randomization is done by the survey platform Credamo. For each question, the platform assigns each respondent 1 content pair out of the 9 possibilities with equal probability. It does not enforce exact sample-size balance across arms, but on the whole a roughly equal number of treatments is assigned when pooling all questions, as the simulation below illustrates.
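
A simulation of this mechanism (my own sketch, not Credamo's implementation) showing why pooling across questions yields rough balance even without per-arm quotas:

import random
from collections import Counter

random.seed(0)
N_RESPONDENTS, N_QUESTIONS, N_CELLS = 500, 16, 9

counts = Counter()
for _ in range(N_RESPONDENTS):
    for q in range(N_QUESTIONS):
        cell = random.randrange(N_CELLS)  # uniform draw among 9 content pairs
        counts[(q, cell)] += 1

# Per question: each cell gets 500/9 ~ 55.6 respondents in expectation,
# with binomial noise (standard deviation of about 7) around that.
print([counts[(0, c)] for c in range(N_CELLS)])

# Pooled over all 16 questions: the shares are close to 1/9 ~ 0.111 each.
pooled = Counter()
for (q, cell), n in counts.items():
    pooled[cell] += n
print({c: round(n / (N_RESPONDENTS * N_QUESTIONS), 3) for c, n in pooled.items()})
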
Randomization Unit
There is only one level of randomization, realized at the question-participant level.
Was the treatment clustered?
No

Experiment Characteristics

Sample size: planned number of clusters
Wave 1: 16 vignettes of medical fundraising for a young patient with college attendance (half current students, the other half recent graduates).
Wave 2: 16 vignettes in which a young adult with college attendance fundraises for an ill senior family member (half of the young adults are current students, the other half recent graduates).
Sample size: planned number of observations
500 respondents in each wave. In each wave, a total of 500 × 16 = 8,000 questions will be answered.
Sample size (or number of clusters) by treatment arms
The main body of each question (the vignette) stays constant for every survey participant, while two treatments (story components) are independently randomized at the participant level. The first is a merit treatment with three tiers of college: 1) top-50 colleges, 2) colleges ranked 100-300, and 3) 4-year colleges ranked 500-800. The second is a need treatment with three categories of medical expense (spent or required) above, around, or below the average amount. There are 9 combinations of treatments; thus, for each question, roughly 55 of the 500 participants (500 / 9 ≈ 55.6) are allocated to each arm.
Minimum detectable effect size for main outcomes (accounting for sample design and clustering)
Given the sample size, the minimum detectable effect size is estimated to be a treatment fixed effect of CNY 0.2-0.3 relative to the base group, with the outcome variable ranging from CNY 0 to 20. A rough check appears below.
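
A back-of-the-envelope check (my own sketch): assume a two-sided 5% test with 80% power and an outcome standard deviation of about CNY 3 (an assumption; the SD is not stated above). Pooling one wave's 8,000 question-level observations, each tier of one treatment receives about 2,667 of them. The simple two-sample formula below ignores the within-respondent correlation that the stated MDE accounts for, so it is only indicative:

from scipy.stats import norm

alpha, power = 0.05, 0.80
sd = 3.0                 # assumed SD of willingness to donate (CNY 0-20 scale)
n_per_arm = 8000 / 3     # pooled question-level observations per arm, one wave

z = norm.ppf(1 - alpha / 2) + norm.ppf(power)   # ~2.80
mde = z * sd * (2 / n_per_arm) ** 0.5
print(round(mde, 2))     # ~0.23 CNY, consistent with the stated 0.2-0.3 range
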
IRB

Institutional Review Boards (IRBs)

IRB Name
Charles River Campus Institutional Review Board, Boston University
IRB Approval Date
2023-07-21
IRB Approval Number
7066X

Post-Trial

Post Trial Information

Study Withdrawal

There is information in this trial unavailable to the public.

Intervention

Is the intervention completed?
No
Data Collection Complete
Data Publication

Data Publication

Is public data available?
No

Program Files

Program Files
Reports, Papers & Other Materials

Relevant Paper(s)

Reports & Other Materials