
Narratives and Valuations (Field)

Last registered on September 28, 2022


Trial Information

General Information

Narratives and Valuations
Initial registration date
August 04, 2022

Initial registration date is when the trial was registered, i.e., when the registration was submitted to the Registry to be reviewed for publication.

First published
August 09, 2022, 2:33 PM EDT

First published corresponds to when the trial was first made public on the Registry after being reviewed.

Last updated
September 28, 2022, 2:29 PM EDT

Last updated is the most recent time when changes to the trial's registration were published.


Primary Investigator

University of Pittsburgh

Other Primary Investigator(s)

PI Affiliation

Additional Trial Information

In development
Start date
End date
Secondary IDs
Prior work
This trial is based on or builds upon one or more prior RCTs.
While the significance of narrative thinking has become increasingly recognized by social scientists, very little empirical research has documented its consequences for economically significant outcomes. We address this gap in one important domain: valuations. In an online experiment, participants selected and uploaded pictures of an item they owned (a hat) without knowing why. They were then asked to either tell the story of their item, list its characteristics, or do neither (a counting-zeros filler task or no task at all). Participants were then given the opportunity to sell their items to us via an incentive-compatible procedure (a Multiple Price List): they decided whether to accept or reject a series of prices (between $1 and $300), knowing that one of their decisions might be randomly selected to be executed. Finally, participants answered a questionnaire to allow for mechanism analysis.
External Link(s)

Registration Citation

Morag, Dor and George Loewenstein. 2022. "Narratives and Valuations." AEA RCT Registry. September 28.
Experimental Details


Self-constructed stimuli - Participants were randomized to describe the item they chose in different ways or not at all.
Intervention Start Date
Intervention End Date

Primary Outcomes

Primary Outcomes (end points)
Willingness to Accept (WTA) in USD, and unwillingness to sell (refusing to sell at any offered price)
Primary Outcomes (explanation)
WTA is the switching price in the MPL
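As an illustration of how the switching price would be recovered from MPL choices, here is a minimal sketch. The registry does not publish the exact price grid or coding scheme, so the price list, the `wta_from_mpl` helper name, and the monotone-response assumption (reject low offers, accept high ones) are assumptions for this example.

```python
# Illustrative sketch only: the actual price grid and data coding are not
# in the registry. We assume offers between $1 and $300 and monotone
# responses within the list.

def wta_from_mpl(decisions):
    """Return the switching price (WTA) from MPL choices.

    `decisions` is a list of (price, accepted) pairs. WTA is the lowest
    price the participant accepted; None encodes unwillingness to sell
    (rejecting every offered price).
    """
    accepted_prices = [price for price, accepted in sorted(decisions) if accepted]
    return min(accepted_prices) if accepted_prices else None

# Accepts at $50 and above -> switching price (WTA) is 50
choices = [(1, False), (10, False), (50, True), (100, True), (300, True)]
print(wta_from_mpl(choices))  # 50

# Rejecting every price -> unwilling to sell at any offered price
print(wta_from_mpl([(p, False) for p in (1, 10, 50, 100, 300)]))  # None
```

Returning `None` for all-reject participants keeps the two primary outcomes, WTA and unwillingness to sell, distinct in the same data structure.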

Secondary Outcomes

Secondary Outcomes (end points)
Fourteen post-MPL questions
Secondary Outcomes (explanation)
After submitting their MPL choices, participants answer a set of questions regarding their item and the selling process.

Experimental Design

Experimental Design
Participants choose a hat they already own (not knowing why) and are then randomized to describe and think about it in different ways. They are then offered the opportunity to sell their item to the experimenter, and their willingness to accept (WTA) is elicited through an incentivized multiple price list (where one decision might be randomly selected to count). Finally, participants answer 14 questions regarding their hat.
Experimental Design Details
We manipulate the likelihood of narrative thinking by asking participants to describe their self-selected item with either a story (narrative thinking) or a list (analytical thinking). Thus, participants naturally create their own stimuli in the same way they would in real life. Note that when the item is chosen, participants are blind to the consequences of their choice (i.e., they do not yet know they will be offered the chance to sell it).
Randomization Method
Digital randomization for the treatment arms, and a public lottery for the binding decisions in the MPL.
Randomization Unit
The individual is the randomized unit for treatment and control, and the individual-decision tuple (e.g., subject 104, decision 32) is the randomized unit for the lottery.
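The two randomization layers described above can be sketched as follows. This is not the study's actual code: the arm labels, the helper names, and the assumed number of MPL rows (30) are placeholders for illustration.

```python
# Illustrative sketch of the two randomization layers: digital assignment
# of individuals to the four arms, and a lottery that picks one binding
# MPL decision per participant.
import random

ARMS = ["narrative", "list", "blank_baseline", "filler_baseline"]

def assign_arm(rng):
    """Randomize one participant (the unit of treatment) to an arm."""
    return rng.choice(ARMS)

def draw_binding_decision(rng, n_decisions):
    """Lottery over individual-decision pairs: pick which MPL row counts."""
    return rng.randrange(n_decisions)

rng = random.Random(2022)  # seeded only so the sketch is reproducible
arm = assign_arm(rng)
binding = draw_binding_decision(rng, 30)  # assuming ~30 MPL rows
```

Randomizing the binding row after choices are submitted is what makes the MPL incentive compatible: each accept/reject decision might be the one that is executed, so truthful responding is optimal row by row.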
Was the treatment clustered?

Experiment Characteristics

Sample size: planned number of clusters
1000 participants
Sample size: planned number of observations
1000 participants
Sample size (or number of clusters) by treatment arms
250 narrative, 250 list, 250 blank baseline, and 250 filler task baseline.
Minimum detectable effect size for main outcomes (accounting for sample design and clustering)

Institutional Review Boards (IRBs)

IRB Name
Human Judgment and Decision-Making
IRB Approval Date
IRB Approval Number
Analysis Plan

There is information in this trial unavailable to the public.


Post Trial Information

Study Withdrawal

There is information in this trial unavailable to the public.


Is the intervention completed?
Data Collection Complete
Data Publication

Data Publication

Is public data available?

Program Files

Program Files
Reports, Papers & Other Materials

Relevant Paper(s)

Reports & Other Materials