Testing a condensed methodology to estimate distributional preferences à la Fisman et al. (2007) – Follow-up Study

Last registered on July 01, 2021

Pre-Trial

Trial Information

General Information

Title
Testing a condensed methodology to estimate distributional preferences à la Fisman et al. (2007) – Follow-up Study
RCT ID
AEARCTR-0007217
Initial registration date
February 16, 2021
Last updated
July 01, 2021, 12:43 PM EDT

Locations

Primary Investigator

Affiliation

Other Primary Investigator(s)

PI Affiliation
University of Nottingham
PI Affiliation
University of Nottingham
PI Affiliation
University of Nottingham

Additional Trial Information

Status
Completed
Start date
2021-01-13
End date
2021-04-02
Secondary IDs
Abstract
Estimating individual distributional preferences has become a widely used technique in behavioural economics research. A frequently used methodology was put forward by Fisman, Kariv and Markovits (2007): subjects make distributional choices in 50 modified dictator games, which are then used to estimate two distinct utility parameters. One captures the relative weight on the own payoff (α) and the other describes an efficiency-equity trade-off (ρ). In this study, we use the original data and newly collected data to explore whether we can reduce the number of distributional choices made by subjects whilst still accurately estimating the preference parameters. Our simulation results show high accuracy in estimating both parameters when using as few as 20 dictator games. To confirm our simulations experimentally, we collected a first wave (Wave 1) of data, which indicated good performance of our proposed methodology. In this follow-up study, we extend the analysis with additional data to explore some aspects of the Wave 1 results.
External Link(s)

Registration Citation

Citation
Baader, Malte et al. 2021. "Testing a condensed methodology to estimate distributional preferences à la Fisman et al. (2007) – Follow-up Study." AEA RCT Registry. July 01. https://doi.org/10.1257/rct.7217-1.1
Sponsors & Partners

There are documents in this trial unavailable to the public.
Experimental Details

Interventions

Intervention(s)
Intervention Start Date
2021-02-22
Intervention End Date
2021-04-02

Primary Outcomes

Primary Outcomes (end points)
Distributions of preference parameters across distinct methodologies
Primary Outcomes (explanation)
We will estimate two distinct preference parameters, alpha and rho, based on a CES utility function using Tobit maximum likelihood estimation. These will then be compared across treatments.
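Concretely, the CES specification in Fisman et al. (2007) writes utility over the own payoff and the other's payoff as U = (α·π_self^ρ + (1−α)·π_other^ρ)^(1/ρ). A minimal sketch of this function (the Tobit likelihood routine itself is not shown):

```python
def ces_utility(pi_self, pi_other, alpha, rho):
    """CES utility over the own payoff (pi_self) and the other's
    payoff (pi_other), as in Fisman et al. (2007):
        U = (alpha * pi_self**rho + (1 - alpha) * pi_other**rho)**(1 / rho)
    alpha is the relative weight on the own payoff; rho governs the
    efficiency-equity trade-off (rho -> 1: pure efficiency concern,
    rho -> -inf: Rawlsian maximin)."""
    return (alpha * pi_self**rho + (1 - alpha) * pi_other**rho) ** (1 / rho)

# With rho = 1, utility is linear in the two payoffs:
u = ces_utility(10, 0, alpha=0.9, rho=1.0)
```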

Secondary Outcomes

Secondary Outcomes (end points)
Secondary Outcomes (explanation)

Experimental Design

Experimental Design
Our proposed methodology uses only 20 modified dictator games, rather than the 50 in Fisman et al. (2007). Moreover, the price ratios of the games are not drawn uniformly at random; instead, the draw follows a bimodal distribution.
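For illustration only — the registry does not state the exact bimodal distribution — such a draw could be sketched as a two-component lognormal mixture over price ratios; the mode locations and spread below are assumptions, not the study's values:

```python
import math
import random

def draw_price_ratios(n=20, modes=(0.25, 4.0), sigma=0.35, seed=1):
    """Draw n price ratios from a two-component lognormal mixture,
    which yields a bimodal distribution. The mode locations and the
    spread are illustrative assumptions, not the study's parameters."""
    rng = random.Random(seed)
    draws = []
    for _ in range(n):
        centre = rng.choice(modes)  # pick one of the two modes
        draws.append(math.exp(rng.gauss(math.log(centre), sigma)))
    return draws

ratios = draw_price_ratios()  # 20 games, as in the condensed design
```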
In Wave 1 of data collection we compared the original (Fisman et al., 2007) with our proposed methodology across the following three treatments:

1. Control – Original design using 50 modified dictator games.
2. Treatment 20 + 30 – First 20 dictator games according to our proposed methodology, last 30 follow the original design.
3. Treatment 20 & 20 – Two independent sets of our proposed methodology spaced out by a filler task.

If our proposed methodology is as effective as the original one, we should observe comparable distributions of the alpha and rho parameters across treatments. Our simulation exercise predicted very consistent estimates of alpha and somewhat noisier estimates of rho. Overall, Wave 1 confirmed these predictions: the estimated parameters are indistinguishable across methodologies, except for the alpha estimates in ‘Treatment 20 + 30’. This appears to be an anomaly, as we also fail to obtain identical distributions of alpha when estimating the parameter from all 50 dictator games, i.e. following the original methodology.
As data for the different treatments were collected on two different weekdays, we hypothesise that the inconsistent alpha estimates are likely the result of sampling error and imperfect randomisation.
To address this concern, in this follow-up study we collect additional data for ‘Control’ and ‘Treatment 20 + 30’, this time randomising subjects to conditions within a session. Through this randomisation strategy, we hope to diagnose the anomalous results from Wave 1.

Reference
Fisman, R., Kariv, S., & Markovits, D. (2007). Individual preferences for giving. American Economic Review, 97(5), 1858-1876.
Experimental Design Details
Randomization Method
We will randomise subjects within the same session using a random number generator.
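A minimal sketch of such within-session randomisation (the helper function and condition labels are our own illustrative choices, not the study's implementation):

```python
import random

def assign_within_session(subject_ids, seed=None):
    """Shuffle the subjects of one session and split them evenly
    between the two conditions (labels are illustrative)."""
    rng = random.Random(seed)
    ids = list(subject_ids)
    rng.shuffle(ids)
    half = len(ids) // 2
    return {sid: ("Control" if i < half else "Treatment 20 + 30")
            for i, sid in enumerate(ids)}

assignment = assign_within_session(range(10), seed=42)
```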
Randomization Unit
Individuals
Was the treatment clustered?
No

Experiment Characteristics

Sample size: planned number of clusters
300 MTurk subjects
Sample size: planned number of observations
300 MTurk subjects
Sample size (or number of clusters) by treatment arms
150 MTurk subjects
Minimum detectable effect size for main outcomes (accounting for sample design and clustering)
The study will be conducted on Amazon MTurk, and all sample sizes were determined by a power analysis using the effect size (Cohen’s δ = 0.44) found in Wave 1. The planned sample sizes stated above provide 90% power for a Kolmogorov-Smirnov test and 95% power for a Wilcoxon-Mann-Whitney test. Thus, if the inconsistency in alpha is persistent, we have a high probability of detecting it.
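As a rough cross-check of these figures — a sketch, not the study's own power analysis — a normal-approximation power calculation for a two-sample comparison can be written as:

```python
import math

def normal_power(d, n_per_arm, are=1.0):
    """Approximate power of a two-sided 5% two-sample test for a
    standardised effect size d with n_per_arm subjects per arm,
    using the normal approximation. `are` is the asymptotic relative
    efficiency of the chosen test versus the t-test (3/pi for the
    Wilcoxon-Mann-Whitney test under normality)."""
    z_alpha = 1.959964  # two-sided 5% critical value
    ncp = d * math.sqrt(are * n_per_arm / 2)
    return 0.5 * (1 + math.erf((ncp - z_alpha) / math.sqrt(2)))

# For d = 0.44 and 150 subjects per arm this gives roughly 0.96
# power for the Wilcoxon-Mann-Whitney test, in the same region as
# the registry's stated 95%:
p_mwu = normal_power(0.44, 150, are=3 / math.pi)
```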
IRB

Institutional Review Boards (IRBs)

IRB Name
Nottingham School of Economics Research Ethics Committee
IRB Approval Date
2020-12-15
IRB Approval Number
N/A

Post-Trial

Post Trial Information

Study Withdrawal

There are documents in this trial unavailable to the public.

Intervention

Is the intervention completed?
No
Data Collection Complete
Data Publication

Data Publication

Is public data available?
No

Program Files

Program Files
Reports, Papers & Other Materials

Relevant Paper(s)

Reports & Other Materials