Limited Working Memory and Amount of Information Sources

Last registered on October 11, 2021

Pre-Trial

Trial Information

General Information

Title
Limited Working Memory and Amount of Information Sources
RCT ID
AEARCTR-0008333
Initial registration date
October 05, 2021

Initial registration date is when the trial was registered.

It corresponds to when the registration was submitted to the Registry to be reviewed for publication.

First published
October 07, 2021, 4:00 PM EDT

First published corresponds to when the trial was first made public on the Registry after being reviewed.

Last updated
October 11, 2021, 4:51 AM EDT

Last updated is the most recent time when changes to the trial's registration were published.

Locations

Region

Primary Investigator

Affiliation
University of Bonn

Other Primary Investigator(s)

Additional Trial Information

Status
In development
Start date
2021-10-06
End date
2021-12-31
Secondary IDs
Prior work
This trial does not extend or rely on any prior RCTs.
Abstract
Online experiments (MTurk and Qualtrics) to study the effect of the number of information sources on information selection and beliefs.
External Link(s)

Registration Citation

Citation
Amelio, Andrea. 2021. "Limited Working Memory and Amount of Information Sources." AEA RCT Registry. October 11. https://doi.org/10.1257/rct.8333-1.1
Experimental Details

Interventions

Intervention(s)
Online experiments (MTurk and Qualtrics) to study the effect of the number of information sources on information selection and beliefs.
Intervention Start Date
2021-10-06
Intervention End Date
2021-12-31

Primary Outcomes

Primary Outcomes (end points)
The study contains three main outcome variables (more details in the attachment):

1) Dichotomous variable, equal to 1 if the participant selected the source correctly according to the Source Selection Rule (the best source in Main, the first source to satisfy the requirement in Satisficing) and 0 otherwise.
2) The position of the selected source in the list.
3) One hundred minus the absolute difference between the probability assigned to state A in Part 2 and the correct updated probability following Bayes' rule (100 - |participant_guess - bayesian_guess|).
Primary Outcomes (explanation)

Secondary Outcomes

Secondary Outcomes (end points)
Secondary Outcomes (explanation)

Experimental Design

Experimental Design
We vary the number of information sources available to participants and the rule by which they must select a source, and measure their performance in selecting sources optimally and in belief updating tasks.
Experimental Design Details
Also, see attachment.

I. DATA COLLECTION:

The data are collected through an online experiment. Participants are recruited using Amazon Mechanical Turk (MTurk) and the experiment is run on Qualtrics.
The basic structure of the experimental setup is as follows. Participants are asked to select an information source from a list of sources. Information sources are presented in the form of 2x2 tables. These tables show the probability that the source ‘suggests’ state A or B, given that the true state is A or B. The lists can be of different lengths. After selecting a source, participants complete a belief updating task related to the source they picked.
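
For illustration only (these numbers are hypothetical and not taken from the experiment), a source table could look as follows, with each column giving the probability of the source's suggestion conditional on the true state:

              True state A   True state B
Suggests A        0.80           0.30
Suggests B        0.20           0.70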

The experiment consists of the following stages:

Part 1:
A participant is shown a list of L sources.
Each source in the list is visible to the participant only while she/he hovers over it with the mouse.
L can be equal to 10, 20, or 40.
The participant picks one of the sources.

Part 2:
A state of the world, A or B, is drawn with a probability (the prior) that varies in every single task.
The participant learns the prior and is shown a suggestion from the computer, which depends on the true state and on the probabilities specified in the table describing the source.
The participant is asked to state a guess about the probability of each of the two states (a worked example follows after Part 3).

Part 3: working memory assessment task and demographic questions.
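
As a purely illustrative example of the Part 2 updating task (all numbers hypothetical): suppose the prior on state A is 0.6 and the selected source suggests A with probability 0.8 when the state is A and with probability 0.3 when the state is B. If the computer's suggestion is A, Bayes' rule gives P(A | suggestion A) = (0.6 × 0.8) / (0.6 × 0.8 + 0.4 × 0.3) = 0.48 / 0.60 = 0.80, so the Bayesian guess for state A is 80 on the 0-100 scale.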

II. TREATMENTS:

Treatment Main

In treatment Main, participants are asked to select the best information source. They are told that sources with a higher probability of reporting state A (B) when the true state is A (B) are better. Importantly, longer lists contain at least one source that is strictly better than all sources in shorter lists.

Treatment Satisficing

The setup is exactly the same as in Main, except that participants are asked to apply a satisficing selection rule in Part 1. More specifically, participants are asked to select the first source in the list that has at least a certain probability of correctly reporting both states.

In both treatments, each source-selection choice has a level of complexity determined by the length of the list of sources.


III. OUTCOME VARIABLES:

The study contains three outcome variables:

1) Dichotomous variable, equal to 1 if the participant selected the source correctly according to the Source Selection Rule (the best source in Main, the first source to satisfy the requirement in Satisficing) and 0 otherwise.
2) The position of the selected source in the list.
3) One hundred minus the absolute difference between the probability assigned to state A in Part 2 and the correct updated probability following Bayes' rule (100 - |participant_guess - bayesian_guess|).
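
Purely as a sketch of how outcome (3) could be computed (function and variable names are our own, not from the registration):

def bayesian_guess(prior_a, p_sugg_a_given_a, p_sugg_a_given_b, suggestion):
    # Posterior probability of state A (in percent) after observing the suggestion.
    if suggestion == "A":
        num = prior_a * p_sugg_a_given_a
        den = num + (1 - prior_a) * p_sugg_a_given_b
    else:  # suggestion == "B"
        num = prior_a * (1 - p_sugg_a_given_a)
        den = num + (1 - prior_a) * (1 - p_sugg_a_given_b)
    return 100 * num / den

def outcome_three(participant_guess, prior_a, p_sugg_a_given_a, p_sugg_a_given_b, suggestion):
    # 100 minus the absolute difference between the participant's guess and the Bayesian guess.
    bayes = bayesian_guess(prior_a, p_sugg_a_given_a, p_sugg_a_given_b, suggestion)
    return 100 - abs(participant_guess - bayes)

# Example: prior 0.6, source suggests A with prob. 0.8 if the state is A and 0.3 if B,
# suggestion "A", participant guesses 70 -> Bayesian guess is 80, outcome is 90.
print(outcome_three(70, 0.6, 0.8, 0.3, "A"))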


IV. NATURE OF ANALYSES
We analyze the experimental data using OLS or probit regressions. The main dependent variables are outcomes (1) and (3). The main independent variables are the length of the source list and the position of the best source. Since multiple observations per subject are collected (9 each), standard errors are clustered at the subject level.
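
A minimal sketch of such a regression with subject-clustered standard errors, assuming a long-format dataset with hypothetical column and file names (not taken from the registration):

import pandas as pd
import statsmodels.formula.api as smf

# Hypothetical long-format data: one row per subject-task, with columns
#   accuracy      - outcome (3), i.e. 100 - |participant_guess - bayesian_guess|
#   list_length   - number of sources in the list (10, 20, or 40)
#   best_position - position of the best source in the list
#   subject_id    - subject identifier used for clustering
df = pd.read_csv("experiment_data.csv")  # placeholder file name

ols_fit = smf.ols("accuracy ~ list_length + best_position", data=df).fit(
    cov_type="cluster", cov_kwds={"groups": df["subject_id"]}
)
print(ols_fit.summary())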



Additional variables using mouse-tracking data:

Time spent hovering over each source.
Mouse movement patterns.
The options clicked on before the final decision, and in which order.

Specifically, using the latter, we run the same analysis as in Caplin and Dean (2011) to test for satisficing choice patterns, comparing the distribution of the Houtman-Maks (HM) index of the consistency of participants' choices with satisficing against a simulated distribution under random choice. For further details, see Caplin and Dean (2011), pp. 2905-2907.
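
As an illustration of the distributional comparison only (the HM indices themselves are computed following Caplin and Dean (2011) and are not shown here; the data below are placeholders):

import numpy as np
from scipy import stats

# Placeholder HM indices: one per participant, and a simulated distribution
# obtained from random choice sequences.
observed_hm = np.array([0.90, 0.85, 1.00, 0.70, 0.95])
simulated_hm = np.random.default_rng(0).uniform(0.4, 1.0, size=1000)

ks_stat, p_value = stats.ks_2samp(observed_hm, simulated_hm)
print(f"KS statistic = {ks_stat:.3f}, p-value = {p_value:.3f}")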

V. HYPOTHESES:

Non-rational source selection

Take (1) as the dependent variable. Run a probit regression with, as independent variables, the length of the source list and its interaction with a dummy for treatment Main, controlling for performance in the n-back task, the position of the optimal source, and the Main dummy. We hypothesize that in both cases the coefficient is significantly smaller than zero.
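
A minimal sketch of this probit specification with subject-clustered standard errors (column and file names are hypothetical, as in the earlier sketch):

import pandas as pd
import statsmodels.formula.api as smf

df = pd.read_csv("experiment_data.csv")  # placeholder file name
# correct_choice: outcome (1); main: dummy for treatment Main;
# nback: n-back performance; best_position: position of the optimal source.
probit_fit = smf.probit(
    "correct_choice ~ list_length * main + nback + best_position", data=df
).fit(cov_type="cluster", cov_kwds={"groups": df["subject_id"]})
print(probit_fit.summary())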

Selection rule switching (towards satisficing)

Restrict attention to treatment Main. Take (2) as the dependent variable. Regress on list length, controlling for performance in the n-back task and the position of the optimal source. We hypothesize that the OLS coefficient of list length is significantly smaller than zero. For the analysis using the HM index, we hypothesize that the distribution of participants' HM indices differs significantly from the simulated one (Kolmogorov-Smirnov test).

Information quality vs complexity trade-off

Take (3) as the dependent variable. Regress on list length, its square, and (1), controlling for performance in the n-back task. We hypothesize that the OLS coefficient of list length squared is smaller than zero. Also, when regressing without the squared term, we hypothesize that the coefficient of list length is significantly smaller than zero.

Source selection vs belief updating trade-off

Add (1) to the previous analysis, with (3) as the dependent variable. We hypothesize that the OLS coefficient of (1) is significantly smaller than zero.


VI. EXCLUSION CRITERIA
After subjects read the experimental instructions, they answer a series of comprehension questions. If a subject makes a mistake, they are excluded from the experiment. Before being excluded, subjects are given a second chance to re-read the instructions and answer the comprehension questions.
VII. RANDOMIZATION AND SAMPLE SIZE / POWER CALCULATION

Treatments are assigned with the following weights:
Treatment Main: 70%
Treatment Satisficing: 30%

The planned sample sizes (with a total sample size of 300) are:
Treatment Main: 210
Treatment Satisficing: 90
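
The registration states that assignment is done via a random draw implemented in JavaScript; purely as an illustration of the weighting, an equivalent draw could look like this (sketch, not the actual code):

import random

# Assign a participant to a treatment with the registered weights
# (70% Main, 30% Satisficing).
def assign_treatment(rng=random):
    return rng.choices(["Main", "Satisficing"], weights=[0.7, 0.3], k=1)[0]

print(assign_treatment())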

Randomization Method
Random draw using JavaScript code.
Randomization Unit
Individual
Was the treatment clustered?
No

Experiment Characteristics

Sample size: planned number of clusters
n/a
Sample size: planned number of observations
300
Sample size (or number of clusters) by treatment arms
Treatment Main: 210
Treatment Satisficing: 90
(Total sample size: 300)
Minimum detectable effect size for main outcomes (accounting for sample design and clustering)
Supporting Documents and Materials

There is information in this trial unavailable to the public.

IRB

Institutional Review Boards (IRBs)

IRB Name
IRB Approval Date
IRB Approval Number
Analysis Plan

There is information in this trial unavailable to the public.


Post-Trial

Post Trial Information

Study Withdrawal

There is information in this trial unavailable to the public.


Intervention

Is the intervention completed?
No
Data Collection Complete
Data Publication

Data Publication

Is public data available?
No

Program Files

Program Files
Reports, Papers & Other Materials

Relevant Paper(s)

Reports & Other Materials