Poverty and Cognitive Function

Last registered on July 27, 2015


Trial Information

General Information

Poverty and Cognitive Function
Initial registration date
July 27, 2015

Initial registration date is when the trial was registered.

It corresponds to when the registration was submitted to the Registry to be reviewed for publication.

First published
July 27, 2015, 1:50 PM EDT

First published corresponds to when the trial was first made public on the Registry after being reviewed.



Primary Investigator

UC San Diego

Other Primary Investigator(s)

PI Affiliation
Princeton University

Additional Trial Information

Ongoing
Start date
End date
Secondary IDs

This document describes the analysis plan for a randomized experiment examining the psychological effect of poverty on cognitive function. We will recruit 260 respondents from informal settlements in Nairobi, Kenya, and expose them to either a poverty prime or a control prime in the lab. We then measure cognitive performance using Raven’s Progressive Matrices and a spatial compatibility task. The design of the study is a close replication of Mani et al. (2013). This plan outlines the design of the study, the outcomes of interest, and the econometric approach.
External Link(s)

Registration Citation

Abraham, Justin and Johannes Haushofer. 2015. "Poverty and Cognitive Function." AEA RCT Registry. July 27. https://doi.org/10.1257/rct.790-1.0
Former Citation
Abraham, Justin and Johannes Haushofer. 2015. "Poverty and Cognitive Function." AEA RCT Registry. July 27. https://www.socialscienceregistry.org/trials/790/history/4865
Experimental Details


Our study utilizes the methodology developed by Mani et al. (2013), adapted to our Kenyan sample, to identify the psychological effect of poverty primes on cognitive function in the lab. We present three hypothetical scenarios to respondents, each of which describes a financial problem respondents might experience. The primes are described in detail in Appendix A. Respondents are given 5 minutes per scenario to contemplate how they might deal with these problems. The aim of exposure to these scenarios is to trigger thoughts of the respondents’ economic situation.
We identify the effect of interest by manipulating the financial stakes involved in each of the hypothetical scenarios. For example, where respondents incur an unexpected cost of KES 50 in an “easy” scenario, the cost is KES 500 in a “difficult” scenario.
Intervention Start Date
Intervention End Date

Primary Outcomes

Primary Outcomes (end points)
1. Poverty primes (randomly assigned treatment)
2. Cantril Self-Anchoring Ladder
(a) Current life
(b) Life 5 years from now
3. Financial Worry Questionnaire
4. Raven’s Progressive Matrices
(a) Comprehension
(b) Scored task
5. Spatial Incompatibility Task
(a) Comprehension
(b) Scored task
Primary Outcomes (explanation)

Secondary Outcomes

Secondary Outcomes (end points)
Secondary Outcomes (explanation)

Experimental Design

Experimental Design
We will conduct a series of laboratory sessions with an expected mean of 20 respondents per session, until we reach 260 respondents. At the beginning of each session, respondents will be randomly assigned to treatment and control groups, and will be administered the treatment or control prime according to treatment assignment. Immediately following the priming, respondents will complete the Cantril Self-Anchoring Scale and a questionnaire about financial worry, followed by Raven’s Progressive Matrices and the spatial compatibility task. The treatments, tasks, and questionnaires will be administered using touch-screen computers to enable illiterate and computer-illiterate respondents to participate. Project staff will read instructions to the respondents in English and Swahili to maximize comprehension. Respondents will receive a base compensation of KES 200 for participating in the experiment, plus any money earned as a result of paid tasks. The compensation and bonus will be transferred to the respondents via M-Pesa after the experiment.
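The session-level computer randomization described above could be sketched as follows. This is an illustrative sketch, not the study's actual assignment code; the function name `assign_session` and the even treatment/control split within each session are assumptions for illustration.

```python
import random

def assign_session(respondent_ids, seed=None):
    """Randomly split one lab session's respondents into a treatment
    (poverty prime) half and a control (neutral prime) half."""
    rng = random.Random(seed)  # seeded generator for reproducibility
    ids = list(respondent_ids)
    rng.shuffle(ids)
    half = len(ids) // 2
    return {"treatment": ids[:half], "control": ids[half:]}

# Example: one session with the planned mean of 20 respondents.
session = assign_session(range(1, 21), seed=7)
print(len(session["treatment"]), len(session["control"]))  # 10 10
```

Repeating this over 18 sessions of roughly 20 respondents yields the planned 130 treatment and 130 control observations.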
Experimental Design Details
Randomization Method
Computer randomization
Randomization Unit
Was the treatment clustered?

Experiment Characteristics

Sample size: planned number of clusters
18 lab sessions
Sample size: planned number of observations
Sample size (or number of clusters) by treatment arms
130 control, 130 treatment
Minimum detectable effect size for main outcomes (accounting for sample design and clustering)
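The registry leaves this field blank, but the standard clustered-design calculation could be sketched as follows. The intra-cluster correlation (ICC) value of 0.05 is an illustrative assumption, not a figure from the study; the sample sizes (130 per arm, 260 respondents over 18 sessions) are taken from the fields above.

```python
import math

def mde(n1, n2, m, icc, z_alpha=1.959964, z_beta=0.841621):
    """Minimum detectable effect in standard-deviation units for a
    two-arm comparison (5% two-sided test, 80% power by default),
    inflating the standard error by the design effect 1 + (m-1)*icc."""
    se = math.sqrt(1.0 / n1 + 1.0 / n2)       # SE of difference, sigma = 1
    deff = 1.0 + (m - 1.0) * icc              # clustering design effect
    return (z_alpha + z_beta) * se * math.sqrt(deff)

# 130 per arm, ~14.4 respondents per session cluster, illustrative ICC
print(round(mde(130, 130, 260 / 18, 0.05), 3))
```

With an ICC of zero the design effect drops to 1 and the formula reduces to the usual unclustered two-sample MDE.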

Institutional Review Boards (IRBs)

IRB Name
Princeton University Institutional Review Board
IRB Approval Date
IRB Approval Number


Post Trial Information

Study Withdrawal

There is information in this trial unavailable to the public.


Is the intervention completed?
Data Collection Complete
Data Publication

Data Publication

Is public data available?

Program Files

Program Files
Reports, Papers & Other Materials

Relevant Paper(s)

Reports & Other Materials