Retelling and Memory

Last registered on January 02, 2025

Pre-Trial

Trial Information

General Information

Title
Retelling and Memory
RCT ID
AEARCTR-0015038
Initial registration date
December 16, 2024

Initial registration date is when the registration was submitted to the Registry to be reviewed for publication.

First published
January 02, 2025, 9:40 AM EST

First published corresponds to when the trial was first made public on the Registry after being reviewed.

Locations

Region

Primary Investigator

Affiliation
University of Bonn

Other Primary Investigator(s)

Additional Trial Information

Status
Completed
Start date
2024-12-16
End date
2024-12-31
Secondary IDs
Prior work
This trial does not extend or rely on any prior RCTs.
Abstract
Online experiment studying whether biased retelling of an event also biases recall of the original event.
External Link(s)

Registration Citation

Citation
Laubel, Alexander. 2025. "Retelling and Memory." AEA RCT Registry. January 02. https://doi.org/10.1257/rct.15038-1.0
Experimental Details

Interventions

Intervention(s)
Online experiment studying whether biased retelling of an event also biases recall of the original event.

For more details, see the attached pre-analysis plan.
Intervention (Hidden)
Intervention Start Date
2024-12-16
Intervention End Date
2024-12-31

Primary Outcomes

Primary Outcomes (end points)
Main Experiment: Day 2 Rating, Day 2 Retelling (as assessed by human coders)
Primary Outcomes (explanation)
See pre-analysis plan.

Secondary Outcomes

Secondary Outcomes (end points)
Robustness Experiment: Rating Recall
Secondary Outcomes (explanation)

Experimental Design

Experimental Design
I. DATA COLLECTION
This study consists of two experiments: a main experiment and a robustness experiment. Both experiments will be conducted on Prolific in December 2024. Both pre-screen participants as follows: first language “English”, fluent languages “English”, sex: 50 % male, 50 % female.

Main Experiment
The main experiment will be conducted over two consecutive days. The timeline is as follows:
Day 1:
1.1 Subjects see descriptions of three hypothetical products and provide a subjective rating for each of them (incentives: match the average rating among all subjects).
1.2 Subjects provide an audio recording of themselves retelling each product description. Treatment variation happens on the subject level and concerns whether a subject is incentivized to retell accurately or overly optimistically.
Day 2:
2. The same subjects are invited again and asked to recall their rating from 1.1 and to provide another recording. This time, all subjects are incentivized for accuracy.

There are two sets of three distinct products each (set 1: DreamGlow, EchoScent, WonderWeave; set 2: VibeVest, ZenPod, BrainBoost) and it will be randomized on the subject level which product set a subject encounters.
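The two subject-level randomizations described above (retelling incentive and product set) can be sketched as follows. Note that the actual assignment is performed by Prolific; this is only an illustrative sketch with hypothetical variable names:

```python
import random

SET_1 = ("DreamGlow", "EchoScent", "WonderWeave")
SET_2 = ("VibeVest", "ZenPod", "BrainBoost")

def assign_subject(rng: random.Random) -> dict:
    """Illustrative subject-level assignment; the real randomization is done by Prolific."""
    return {
        # incentive for the day-1 retelling: accurate vs. overly optimistic
        "retelling_incentive": rng.choice(["accurate", "optimistic"]),
        # which of the two three-product sets the subject encounters
        "product_set": rng.choice([SET_1, SET_2]),
    }

rng = random.Random(2024)
assignments = [assign_subject(rng) for _ in range(550)]  # 550 planned day-1 completes
```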

Robustness Experiment
The robustness experiment will be conducted with an entirely different set of subjects.
The timeline is as follows:
1. Subjects see the description of just one of the hypothetical products and provide a subjective rating (incentives: match the average rating among all subjects).
2. Subjects provide an audio recording of themselves retelling the product description. Treatment variation concerns whether a subject is incentivized to retell accurately or overly optimistically.
3. Subjects are asked to recall the rating provided in 1.
The product a subject encounters is drawn from the same six products used in the main experiment.

Additional Data: Human Coding of Recordings
Subjects’ recordings from the main experiment will be coded by two human coders (research assistants at the Institute for Applied Microeconomics, University of Bonn).

For each recording, the coders assess:
1. Is the audio quality of the recording sufficiently good to understand it?
2. Is there sufficient information in the audio recording to answer 3. and 4.?
Only if the answers to 1. and 2. are “Yes”:
3. Coders assess whether the recording was overly optimistic or accurate.
4. Coders provide a (subjective) rating based on the audio recording.
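The coders' decision flow above can be summarized by a small gating function (a hypothetical helper for illustration, not the actual coding interface): the optimism assessment and rating in 3. and 4. are only recorded when both screening questions 1. and 2. are answered “Yes”.

```python
from typing import Optional

def code_recording(audio_ok: bool, enough_info: bool,
                   optimistic: Optional[bool] = None,
                   rating: Optional[int] = None) -> Optional[dict]:
    """Hypothetical sketch of the coding protocol: screen first, then assess.

    Returns None when the recording fails either screening question,
    i.e. no optimism assessment or rating is recorded for it.
    """
    if not (audio_ok and enough_info):
        return None
    return {"optimistic": optimistic, "rating": rating}
```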

II. TREATMENT CONDITIONS
There are two treatment conditions each in the main experiment and in the robustness experiment:
Treated: subjects are asked to record an overly optimistic retelling.
Control: subjects are asked to record an accurate retelling.
Note: For the main experiment, treatment variation only concerns the recording on day 1. On day 2, all subjects are asked to provide an accurate retelling.
Experimental Design Details
Randomization Method
Randomization done by survey provider (Prolific).
Randomization Unit
Subject
Was the treatment clustered?
Yes

Experiment Characteristics

Sample size: planned number of clusters
For the main experiment, “one observation” requires that a subject shows up on day 2. Based on pilots, this applies to approx. 80 % of all subjects who complete day 1. For procedural reasons, I pre-register the day-1 sample size for the main experiment.
I pre-register 550 complete day-1 responses for the main experiment and 420 complete responses for the robustness experiment.
Note: “complete” excludes only subjects who did not pass the attention checks. It includes subjects (a) who did not show up on day 2 (main experiment only), (b) who will be excluded from data analysis because they indicated that they used notes, or (c) whose audio recordings the human coders assess to have insufficient audio quality or too little information to infer a rating (applies only to “Hypothesis Retelling” in the main experiment).
Sample size: planned number of observations
For the main experiment, there are 3 observations per subject. For the robustness experiment, there is 1 observation per subject.
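A back-of-the-envelope calculation of the expected analysis sample, using the pre-registered numbers above (550 day-1 completes, approx. 80 % day-2 show-up, 3 observations per subject in the main experiment, 420 subjects with 1 observation each in the robustness experiment):

```python
day1_complete = 550          # pre-registered day-1 sample, main experiment
day2_show_up_rate = 0.80     # approximate, based on pilots
obs_per_subject = 3          # three products rated per subject

expected_day2_subjects = round(day1_complete * day2_show_up_rate)   # about 440
expected_main_obs = expected_day2_subjects * obs_per_subject        # about 1320

robustness_subjects = 420    # pre-registered, one observation each
robustness_obs = robustness_subjects * 1
```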
Sample size (or number of clusters) by treatment arms
50:50 by the survey provider during data collection (precise numbers depend on attention checks, attrition, etc.)
Minimum detectable effect size for main outcomes (accounting for sample design and clustering)
IRB

Institutional Review Boards (IRBs)

IRB Name
IRB Approval Date
IRB Approval Number
Analysis Plan

There is information in this trial unavailable to the public.

Post-Trial

Post Trial Information

Study Withdrawal

There is information in this trial unavailable to the public.

Intervention

Is the intervention completed?
No
Data Collection Complete
Data Publication

Data Publication

Is public data available?
No

Program Files

Program Files
Reports, Papers & Other Materials

Relevant Paper(s)

Reports & Other Materials