
Scarcity or False Abundance? Testing the Impacts of Debt on Cognition
Last registered on August 03, 2020

Pre-Trial

Trial Information
General Information
Title
Scarcity or False Abundance? Testing the Impacts of Debt on Cognition
RCT ID
AEARCTR-0006235
Initial registration date
August 01, 2020
Last updated
August 03, 2020 12:31 PM EDT
Location(s)

This section is unavailable to the public.
Primary Investigator
Affiliation
University of Maryland - College Park
Other Primary Investigator(s)
PI Affiliation
University of Maryland - College Park
Additional Trial Information
Status
In development
Start date
2020-08-05
End date
2020-12-15
Secondary IDs
Abstract
Previous research has shown that a mindset of "scarcity" - having too little of a resource, be it time, money, or even chances in a game - can change cognition and decision-making. These changes can help explain behavior that may appear irrational. Our research looks at whether similar changes to cognition occur in response to debt, by creating a scenario in which study participants face debt in the context of a simple computer game. Study participants will be recruited and will play the game using Amazon Mechanical Turk (MTurk).
External Link(s)
Registration Citation
Citation
Brown, Julia and Erkut Ozbay. 2020. "Scarcity or False Abundance? Testing the Impacts of Debt on Cognition." AEA RCT Registry. August 03. https://doi.org/10.1257/rct.6235-1.0.
Experimental Details
Interventions
Intervention(s)
In one of the first experiments to investigate the impacts of scarcity on cognitive function, Shah et al. (2012) randomly vary “income” levels in a computer game similar to the popular game Angry Birds, where income is defined as the number of shots a player can take. They track differences in performance (as measured by points earned per shot) and focus (as measured by the amount of time spent aiming each shot) across different “income” groups. Although games of this kind abstract away from reality, they can shed light on how underlying structures – in this case access to resources – can impact behavior, while removing confounding factors that would be impossible to control for in a real-world setting. We propose to adapt the Shah et al. (2012) Angry Blueberries game to test our hypotheses.

The object of the game is to destroy stacks of waffles by firing blueberries at them with a slingshot. The player can control the aim and force of the slingshot using their mouse, and it is possible to destroy several waffles in one shot. There are seven waffles per round. Each player is given a certain number of blueberries (details below) which they can use in any round, and play continues until the player runs out of blueberries. Players earn one point for each waffle they destroy, plus three points for destroying all the waffles in a round, for a total of ten possible points per round. Blueberries left over at the end of the game are worth nothing. At all times during the game, players can see their score and the number of blueberries remaining.
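The round-scoring rule above can be sketched as follows (a minimal sketch; the function and constant names are illustrative and not taken from the actual game code):

```javascript
// One point per waffle destroyed, plus a 3-point bonus for clearing
// all 7 waffles in a round, for a maximum of 10 points per round.
const WAFFLES_PER_ROUND = 7;
const CLEAR_BONUS = 3;

function roundScore(wafflesDestroyed) {
  const bonus = wafflesDestroyed === WAFFLES_PER_ROUND ? CLEAR_BONUS : 0;
  return wafflesDestroyed + bonus;
}
```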

The interventions (detailed under Experimental Design) each vary the number of blueberries available to fire and introduce various "debts," in order to identify the impacts of debt on game performance.

Players will receive payment based on game performance, with each point equaling one cent. After the game, we will collect basic demographic data via a brief survey. Only players who finish the game and complete the survey will be eligible for payment. Payment will be delivered automatically via MTurk.
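The payment rule above (one cent per point, conditional on finishing the game and the survey) amounts to the following sketch (function name and signature are illustrative):

```javascript
// Convert a final game score to a dollar payment: 1 point = $0.01.
// Only participants who finish the game AND complete the survey are paid.
function paymentDollars(points, finishedGame, completedSurvey) {
  if (!finishedGame || !completedSurvey) return 0;
  return points * 0.01;
}
```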
Intervention Start Date
2020-08-05
Intervention End Date
2020-10-30
Primary Outcomes
Primary Outcomes (end points)
Game performance: how long participants spend aiming each shot (in log milliseconds) and how many points they score per shot fired.
Primary Outcomes (explanation)
Secondary Outcomes
Secondary Outcomes (end points)
Secondary Outcomes (explanation)
Experimental Design
Experimental Design
Experiment 1: Can debt lead to a false sense of abundance? Players will be randomly assigned to one of three groups:
1. The scarcity group (group S): Players will be given 30 blueberries; play will proceed as described above.
2. The scarcity with debt group (group SD): Players will be given 70 blueberries, but will be told that they start the game with a “debt” of 40 blueberries. Interest is zero, so these players have on net the same number of blueberries as players in the scarcity group. The game will stop automatically after the player has used 30 blueberries. Players will see the total (gross) number of blueberries they have available displayed on their screen, i.e., the number that includes the borrowed berries.
3. The abundance group (group A): Players will be given 70 blueberries; play will proceed as described above.
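The three Experiment 1 arms differ only in the displayed (gross) endowment, the number of usable shots, and the stated debt, which can be summarized as (a sketch with illustrative names):

```javascript
// Experiment 1 arm parameters (illustrative names, values from the design).
// `displayed` is the gross endowment shown on screen;
// `usable` is the number of shots before the game stops automatically.
const EXPERIMENT_1_ARMS = {
  S:  { displayed: 30, usable: 30, debt: 0 },  // scarcity
  SD: { displayed: 70, usable: 30, debt: 40 }, // scarcity with debt
  A:  { displayed: 70, usable: 70, debt: 0 },  // abundance
};
```

Note that SD's net endowment (displayed minus debt) equals S's, while its displayed endowment equals A's, which is exactly what lets the design separate a false-abundance effect from resource levels.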

The scarcity hypothesis predicts that players in group S will focus more (spend more time aiming each shot) and use their available resources better (score more points per shot) than players in group A. If debt creates a false sense of abundance, we expect players in group SD to behave more similarly to group A than to group S, which will negatively affect their overall score.

Experiment 2: Does debt stress impact performance? All players will receive 50 points ($0.50) just for completing the survey (a show-up fee). Players will then be randomly assigned to one of two groups:

1. The scarcity group (group S): Conditions identical to those in experiment 1 for group S.
2. The scarcity with debt stress group (group SDS): Players will start the game with a 40 point ($0.40) debt. If they do not win at least 40 points during play, they will lose that amount from their show-up fee. This will be explained to them prior to the start of the game.

In this experiment, a false sense of abundance cannot affect performance, since debt in the form of points cannot be used (and therefore cannot be wasted), unlike debt in the form of blueberries. This is intended to generate, on a small scale, the anxiety over the potential for not being able to make payments that might be present for people with actual loans. If debt stress imposes a cognitive cost, then we would expect to see diminished points earned per shot for people in the SDS group. Ideally, this experiment would be identical to Experiment 1 in that players would start from zero rather than 50 points; however, since we cannot make players pay the researchers, the “show-up fee” ensures that players cannot end with negative points/take-home pay, even if they fail to make the 40 point minimum.
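Under one reading of the Experiment 2 payment rule (a sketch only; the registration does not spell out the exact formula), an SDS player's take-home pay is the show-up fee plus points earned minus the 40-point debt, which is always non-negative because the show-up fee exceeds the debt:

```javascript
// Illustrative SDS payout, in points (1 point = $0.01).
// Assumes the 40-point debt is deducted from earnings plus the show-up fee.
const SHOW_UP_FEE = 50; // $0.50
const DEBT = 40;        // $0.40

function sdsPayoutPoints(pointsEarned) {
  // Always >= 10 points, since pointsEarned >= 0 and SHOW_UP_FEE > DEBT.
  return SHOW_UP_FEE + pointsEarned - DEBT;
}
```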

Experiment 3: Does mental accounting impact performance? Players will be randomly assigned to scarcity (S) or abundance (A) conditions, as in experiment 1, and then cross-randomized to one of two groups:

1. The mental accounting group (groups S-M and A-M): Players will be asked to remember something unrelated to the game, such as the date of a historical event. If they enter the wrong answer at the end of the game, they will be charged a 5 point penalty. This will be explained to them prior to the start of play.
2. The no mental accounting group (groups S and A): Conditions identical to those in experiment 1.

An alternative to the false abundance and debt stress hypotheses is the Ong et al. (2019) hypothesis that debt imposes a cognitive cost by requiring mental accounting. The requirement to remember an unrelated fact is intended to mimic the cognitive cost of having to remember that you owe 40 blueberries, without the potential confounding false abundance effect. We will test whether the difference in performance between the S and SD groups in experiment 1 is similar in magnitude to the difference between the S and S-M groups in this experiment. If this difference also exists between the A and A-M groups, that will provide additional evidence to support the hypothesis that debt operates primarily through a mental accounting effect and not through a false sense of abundance.
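The 2x2 cross-randomization in Experiment 3 yields four arms, which can be enumerated as follows (arm labels follow the text; the construction itself is illustrative):

```javascript
// Cross endowment level with the mental-accounting memory task
// to build the four Experiment 3 arm labels.
const endowments = ["S", "A"];
const memoryTask = [false, true];

const EXPERIMENT_3_ARMS = [];
for (const e of endowments) {
  for (const m of memoryTask) {
    EXPERIMENT_3_ARMS.push(m ? `${e}-M` : e);
  }
}
// EXPERIMENT_3_ARMS: ["S", "S-M", "A", "A-M"]
```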
Experimental Design Details
Not available
Randomization Method
Randomization will be conducted by a computer. When a participant agrees to play the game on MTurk, JavaScript code runs a pseudo-randomizer and allocates the participant to one of the treatment groups.
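A minimal sketch of the kind of client-side assignment described (the study's actual code is not public; the function name is illustrative):

```javascript
// Uniformly assign a participant to one of the treatment arms using
// JavaScript's built-in pseudo-random number generator.
function assignArm(arms) {
  const i = Math.floor(Math.random() * arms.length);
  return arms[i];
}

// e.g. for Experiment 1:
// const arm = assignArm(["S", "SD", "A"]);
```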
Randomization Unit
Randomization occurs at the level of the individual, for all experiments.
Was the treatment clustered?
No
Experiment Characteristics
Sample size: planned number of clusters
0
Sample size: planned number of observations
2450 total participants
Sample size (or number of clusters) by treatment arms
Experiment 1: 350 participants x 3 treatment arms = 1050
Experiment 2: 350 participants x 2 treatment arms = 700 total
Experiment 3: 350 participants in each of two cross-randomized treatments = 700 total (175 per study arm)
Minimum detectable effect size for main outcomes (accounting for sample design and clustering)
Outcome 1, time spent aiming each shot, in log milliseconds: with 350 participants per treatment arm and a standard deviation of 0.42, we can detect an effect of 0.09 (a 1% increase). Outcome 2, points earned per shot: with 350 participants per treatment arm and a standard deviation of 0.12, we can detect an effect of 0.12 (a 6% increase).
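For reference, a standard two-sample MDE calculation, assuming a two-sided test at α = 0.05 with 80% power (a sketch, not necessarily the authors' exact calculation), approximately reproduces the 0.09 figure for the first outcome:

```javascript
// Minimum detectable effect for a two-sample comparison of means:
// MDE = (z_{1-alpha/2} + z_{1-beta}) * sd * sqrt(2 / n per arm).
// Defaults assume two-sided alpha = 0.05 (z = 1.96) and 80% power (z = 0.84).
function mde(sd, nPerArm, zAlpha = 1.96, zBeta = 0.84) {
  return (zAlpha + zBeta) * sd * Math.sqrt(2 / nPerArm);
}
```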
IRB
INSTITUTIONAL REVIEW BOARDS (IRBs)
IRB Name
University of Maryland College Park (UMCP) IRB
IRB Approval Date
2020-06-22
IRB Approval Number
1615172-1
Analysis Plan

There are documents in this trial unavailable to the public.