Complexity and the demand and supply of narratives

Last registered on May 13, 2024

Pre-Trial

Trial Information

General Information

Title
Complexity and the demand and supply of narratives
RCT ID
AEARCTR-0013350
Initial registration date
April 19, 2024

First published
May 13, 2024, 12:48 PM EDT


Locations

Region

Primary Investigator

Affiliation
WZB & DIW Berlin

Other Primary Investigator(s)

PI Affiliation
University of Lausanne
PI Affiliation
Brown University

Additional Trial Information

Status
In development
Start date
2024-04-20
End date
2024-06-30
Secondary IDs
Prior work
This trial does not extend or rely on any prior RCTs.
Abstract
This project studies the effect of complexity on the selection of models in a market environment. We simulate a market involving three types of agents – buyers, non-expert sellers, and expert sellers who have access to better statistics about the data – interacting over multiple rounds. In this "market for models", all agents have access to the same data but must choose among different models that explain these data. Buyers are incentivized to select the most accurate model, while sellers also have an incentive to match the preferences of the buyer. Our study will first assess how increased complexity affects the dynamics of model selection from both the sellers' and the buyers' perspectives. Subsequently, we will explore market design interventions that could alter these dynamics. These include educating buyers by informing them about the accuracy of their chosen model, and introducing a reputation system that allows buyers to evaluate sellers based on their past offers.
External Link(s)

Registration Citation

Citation
Hakimov, Rustamdjan, Tiziano Rotesi and Renke Schmacker. 2024. "Complexity and the demand and supply of narratives." AEA RCT Registry. May 13. https://doi.org/10.1257/rct.13350-1.0
Experimental Details

Interventions

Intervention(s)
2x3 treatment design

Univariate&Bivariate vs. Univariate&Interacted: In the first treatment arm, we vary the complexity of the data-generating process (with univariate being the simplest and interacted being the most complex model).

Baseline vs. Feedback vs. Reputation: In the second treatment arm, we introduce two market design interventions and compare them to the baseline treatment. In the feedback treatment, buyers receive feedback on the accuracy of their selected model. In the reputation treatment, buyers see a history of the models offered by each seller.
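
As a minimal sketch (not the registered randomization code), the 2x3 crossing of these two arms could be implemented as follows; the arm labels and the weighting toward the Baseline arm are illustrative assumptions based on the planned group counts reported further below.

    import random

    # Illustrative labels for the two crossed treatment arms (sketch only).
    COMPLEXITY_ARMS = ["Univariate&Bivariate", "Univariate&Interacted"]
    MARKET_ARMS = ["Baseline", "Feedback", "Reputation"]
    # Rough weights mirroring the planned group counts (100-120 vs. 70-80 groups); an assumption.
    MARKET_WEIGHTS = [100, 70, 70]

    def assign_treatment(rng: random.Random) -> tuple[str, str]:
        """Draw one cell of the 2x3 design for a given experimental unit."""
        complexity = rng.choice(COMPLEXITY_ARMS)
        market = rng.choices(MARKET_ARMS, weights=MARKET_WEIGHTS, k=1)[0]
        return complexity, market

    print(assign_treatment(random.Random(42)))
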
Intervention Start Date
2024-04-20
Intervention End Date
2024-06-30

Primary Outcomes

Primary Outcomes (end points)
(i) Share of correct models offered (in general and by experts)
(ii) Share of overly simplistic models (in general and by experts)
(iii) Share of overly complex models (in general and by experts)
(iv) Share of buyers who select the correct model
Primary Outcomes (explanation)

Secondary Outcomes

Secondary Outcomes (end points)
(i) Payouts
(ii) Share of experts selected
Secondary Outcomes (explanation)

Experimental Design

Experimental Design
See below
Experimental Design Details
Participants are matched into groups of three (two sellers and one buyer). All subjects are shown graphs depicting the relationship between two independent variables (square and circle) and an outcome variable (money). Each seller offers a model to the buyer and is paid according to its accuracy if and only if the buyer selects that model. Buyers are incentivized to select the model that they believe most accurately describes how square and circle affect money, once before and once after seeing the models offered by the sellers. One realization is drawn at random for payment.

There are two types of sellers: experts and non-experts. Experts know the accuracy of the models and non-experts do not.

The choice situation is repeated for seven rounds; in each round, new samples are drawn from the data-generating process.
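
For concreteness, the kind of data-generating process and candidate models described above could look like the following Python sketch. The functional form, coefficients, noise level, and the use of mean squared error as an accuracy measure are illustrative assumptions; the registration does not specify these details.

    import numpy as np

    rng = np.random.default_rng(0)
    n = 100

    # Hypothetical data-generating process: "money" depends on "square" and "circle".
    square = rng.normal(size=n)
    circle = rng.normal(size=n)
    money = 1.0 * square + 0.8 * circle + 0.6 * square * circle + rng.normal(scale=0.5, size=n)

    # Candidate models of increasing complexity, expressed as design matrices.
    candidates = {
        "univariate": np.column_stack([np.ones(n), square]),
        "bivariate": np.column_stack([np.ones(n), square, circle]),
        "interacted": np.column_stack([np.ones(n), square, circle, square * circle]),
    }

    # "Accuracy" here is in-sample mean squared error from an OLS fit (an assumption).
    for name, X in candidates.items():
        beta, *_ = np.linalg.lstsq(X, money, rcond=None)
        mse = np.mean((money - X @ beta) ** 2)
        print(f"{name}: MSE = {mse:.3f}")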

We test the following hypotheses:

H1: The higher the complexity, the lower the share of offered and purchased models that are correct.
H2: The higher the complexity, the lower the share of experts that are chosen in the first round.
H3: In settings where a high share of buyers fails to select the correct model, the share of correct models offered by experts decreases over time.
H4: When complexity is low, experts will be more likely to offer overly complex models in later rounds compared to the first round.
H5: When complexity is high, experts will be more likely to offer overly simple models in later rounds compared to the first round.
H6: In the last rounds of the market design treatments, (i) the share of correct models offered by experts is higher than in the baseline and (ii) the share of correct models chosen by buyers is higher than in the baseline.

We will test for heterogeneous treatment effects by the buyers' performance within a group, education, political affiliation, and tendency to believe in conspiracy theories.
Randomization Method
Randomization by a computer
Randomization Unit
Individual
Was the treatment clustered?
No

Experiment Characteristics

Sample size: planned number of clusters
Target sample: 1440 to 1680 subjects in total

(The final sample depends on the budget and on how many subjects can be matched into groups of three. Those who cannot be matched have to be paid the participation fee, and it is a priori not clear how many there will be.)
Sample size: planned number of observations
Target sample size: 1440 to 1680 subjects in total. (The final sample depends on the budget and on how many subjects can be matched into groups of three. Those who cannot be matched have to be paid the participation fee, and it is a priori not clear how many there will be.)
Sample size (or number of clusters) by treatment arms
Target sample size:
Baseline: 300-360 univ/biv + 300-360 univ/inter (100-120 groups)
Feedback: 210-240 univ/biv + 210-240 univ/inter (70-80 groups)
Reputation: 210-240 univ/biv + 210-240 univ/inter (70-80 groups)
Minimum detectable effect size for main outcomes (accounting for sample design and clustering)
H1: Assuming a correct share of 80% in the simple (univariate) treatment, with 100 groups per treatment arm we can detect a decrease of 18 ppts with 80% power at the p=0.05 level.
H2: Assuming that in the first round of the simple (univariate) treatment the expert is chosen 71% of the time, with 100 groups per treatment arm we can detect a decrease of 19 ppts with 80% power at the p=0.05 level.
H3-H5: Assuming that 86% of experts offer the correct model in the first round, with 100 groups we can detect a change of 10 ppts when comparing the final round to the first round.
H6: Assuming that in the final round of the Baseline treatment experts offer the correct model in 60% of cases, with 100 groups in the baseline treatment and 70 groups in the market design treatment we can detect an increase of 20 ppts with 80% power at the p=0.05 level.
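
A minimal sketch of the kind of two-sample proportion power calculation behind such MDEs, using the H1 inputs (baseline share of 80%, 100 groups per arm, 80% power, p=0.05). The sketch is purely illustrative and is not expected to reproduce the registered figures, which may additionally reflect clustering within groups and other design effects.

    import numpy as np
    from statsmodels.stats.power import NormalIndPower
    from statsmodels.stats.proportion import proportion_effectsize

    # Illustrative inputs loosely based on H1 (an assumption, not the registered code).
    p_baseline = 0.80
    n_per_arm = 100
    analysis = NormalIndPower()

    # Search for the smallest detectable decrease at 80% power and alpha = 0.05 (two-sided).
    for drop in np.arange(0.01, 0.40, 0.01):
        effect = proportion_effectsize(p_baseline, p_baseline - drop)
        power = analysis.power(effect_size=effect, nobs1=n_per_arm, ratio=1.0, alpha=0.05)
        if power >= 0.80:
            print(f"Detectable decrease: about {drop:.2f} at power {power:.2f}")
            break
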
IRB

Institutional Review Boards (IRBs)

IRB Name
WZB Research Ethics Review
IRB Approval Date
2023-12-04
IRB Approval Number
2023/11/224

Post-Trial

Post Trial Information

Study Withdrawal

Intervention

Is the intervention completed?
No
Data Collection Complete
Data Publication

Data Publication

Is public data available?
No

Program Files

Program Files
Reports, Papers & Other Materials

Relevant Paper(s)

Reports & Other Materials