Knowing Thy Neighbor: What Information Neighbors Have and How Best to Elicit It
Last registered on March 07, 2016

Pre-Trial

Trial Information
General Information
Title
Knowing Thy Neighbor: What Information Neighbors Have and How Best to Elicit It
RCT ID
AEARCTR-0001109
Initial registration date
March 07, 2016
Last updated
March 07, 2016 7:56 PM EST
Location(s)
Region
Primary Investigator
Affiliation
MIT
Other Primary Investigator(s)
PI Affiliation
Yale
PI Affiliation
MIT
Additional Trial Information
Status
Ongoing
Start date
2016-01-01
End date
2017-01-01
Secondary IDs
Abstract
This project tests a novel method of assessing microenterprise potential by harnessing community information. We ask: can community information—knowledge that neighbors, customers, community leaders, family members, and friends hold about one another—help identify which would-be microentrepreneurs have the most growth potential? Previous studies have demonstrated that community members have information about one another’s assets. Here, we study whether community members can also predict who high-potential business owners are.
External Link(s)
Registration Citation
Citation
Hussam, Reshmaan, Natalia Rigol and Benjamin Roth. 2016. "Knowing Thy Neighbor: What Information Neighbors Have and How Best to Elicit It." AEA RCT Registry. March 07. https://doi.org/10.1257/rct.1109-2.0.
Former Citation
Hussam, Reshmaan et al. 2016. "Knowing Thy Neighbor: What Information Neighbors Have and How Best to Elicit It." AEA RCT Registry. March 07. https://www.socialscienceregistry.org/trials/1109/history/7163.
Experimental Details
Interventions
Intervention(s)
Intervention Start Date
2016-03-01
Intervention End Date
2016-04-30
Primary Outcomes
Primary Outcomes (end points)
Our outcomes of interest are whether community members can accurately predict the following characteristics of their peers (an illustrative scoring sketch follows this list):

1. Education

(a) We only ask for quintile rankings

2. Marginal returns to a Rs.6000 grant

(a) We ask for relative and quintile rankings

3. Average monthly income over the past year

(a) We ask for relative and quintile rankings

4. Projected monthly profits with a Rs.6000 grant

(a) We ask for relative and quintile rankings

5. Total value of household assets

(a) We ask for relative and quintile rankings

6. Number of hours worked by business owner

7. Medical expenses

8. Loan repayment trouble

9. Digit span

We also ask who deserves the grant (according to any criteria the respondent herself chooses).
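As an illustration only (this is not the registered analysis plan), one simple way to score the accuracy of these reports is a Spearman rank correlation between peers' quintile reports and the measured baseline value of the same outcome. The sketch below assumes placeholder variable names (`reported_quintile`, `measured_value`).

```python
# Minimal sketch (not the registered analysis plan): score how well community
# quintile reports line up with a measured baseline outcome for the same peers.
# All variable names are illustrative placeholders.
import numpy as np
from scipy.stats import spearmanr

def report_accuracy(reported_quintile, measured_value):
    """Spearman rank correlation between peers' reported quintiles (1-5)
    and the measured value of the outcome (e.g., monthly income)."""
    rho, pval = spearmanr(reported_quintile, measured_value)
    return rho, pval

# Toy usage: 10 peers, with reports roughly tracking the measured values
reported = np.array([1, 2, 2, 3, 3, 4, 4, 5, 5, 5])
measured = np.array([800, 950, 1200, 1100, 1500, 1700, 1600, 2100, 2500, 2300])
print(report_accuracy(reported, measured))
```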
Primary Outcomes (explanation)
Secondary Outcomes
Secondary Outcomes (end points)
Secondary Outcomes (explanation)
Experimental Design
Experimental Design
We cross-randomize three treatment arms (an illustrative assignment sketch follows this list):

1. Rankings Used for Allocation Decisions (R0 vs R1): A random subset of households will be informed that their reports will influence the probability that their peers receive cash grants.

2. Public Rankings (P0 vs P1): Respondents will be randomized to make reports in public (visible to the entire group) or in private (group members do not see reports).

3. Incentives (I0 vs I1): We will randomize whether or not respondents receive monetary incentives for the accuracy of their reports.
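For concreteness, here is a minimal sketch of how the three binary arms combine into a 2x2x2 factorial with eight cells, and how those cells could be assigned to ranking groups within a neighborhood cluster. The function and variable names are illustrative; the registration does not specify the assignment code.

```python
# Minimal sketch (illustrative, not the field implementation): cross the three
# binary treatments into 8 cells and assign them to ranking groups within one
# neighborhood cluster, cycling through a shuffled copy of the cells so that
# cells stay balanced.
import itertools
import random

ARMS = [("R0", "R1"), ("P0", "P1"), ("I0", "I1")]        # allocation, public, incentives
CELLS = ["-".join(c) for c in itertools.product(*ARMS)]  # the 8 treatment cells

def assign_within_cluster(group_ids, seed=0):
    """Map each ranking group in one neighborhood cluster to a treatment cell."""
    rng = random.Random(seed)
    cells = CELLS[:]
    rng.shuffle(cells)
    return {g: cells[i % len(cells)] for i, g in enumerate(sorted(group_ids))}

# Toy usage: a cluster with 8 ranking groups receives each cell exactly once.
print(assign_within_cluster([f"group_{k}" for k in range(8)], seed=42))
```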
Experimental Design Details
After all baseline data are collected in a neighborhood, groups will be invited to conduct the ranking exercises in a large community hall, one group at a time. Once in the hall, group members will be paired with a surveyor at a surveying station. Each station is fully private, with a section divider so that respondents cannot observe each other's rankings while sitting with the surveyor.

To minimize variation across surveyors in the implementation of the treatments, we have created animated videos to show to respondents. The videos explain what a quintile is and how to do a quintile ranking, as well as what marginal returns, profits, income, and assets are. The videos also explain the treatments.

For the incentive treatments, the videos explain how incentives are paid. The payment rule is difficult to explain even in a seminar, let alone to a person with low levels of literacy. Following Prelec (2004) and Rigol and Roth (2016), respondents are told that their payments will depend on their own rankings and the rankings of their peers; the message that is emphasized is that truthful answers are rewarded more highly than untruthful answers. We also explain what second-order beliefs are and collect these along with the first-order rankings. Incentives are paid at the end of each question (7 times in total).

In the public treatment, after each ranking, respondents are asked to gather in the center of the room while the surveyors process the data. They take their ranking sheets with them, and these sheets are visible to all group members. In the private treatment, respondents are assured that their individual rankings will never be observed by anyone other than the researchers, and they remain behind their privacy screens.

Lastly, in the revealed treatments, the videos explain how the rankings can affect who receives the grant. Respondents are told when they arrive in the hall that a lottery will be conducted at the end of the exercise to randomly select the grant winner(s), and each respondent is given 20 lottery tickets. In the revealed treatment, the video explains that extra lottery tickets will be awarded after each IAP round (Q4-Q7) to the person who was most highly ranked in that round. Once all rankings are completed, the lottery is conducted: group members put their tickets in a bucket and one or two winners are selected, depending on the randomization status of the group.
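The registration cites Prelec (2004) for the payment rule but does not reproduce the formula, so the sketch below is a generic Bayesian Truth Serum score written under that assumption; the smoothing constant, the alpha weight, and the toy data are placeholders, not the rule used in the field.

```python
# Generic Bayesian Truth Serum scores (Prelec 2004) - a hedged sketch, not the
# exact payment rule implemented with respondents.
import numpy as np

def bts_scores(answers, predictions, alpha=1.0):
    """answers:     (n,) int array, the option index each respondent endorsed
    predictions: (n, k) array of each respondent's predicted population
                 frequencies over the k options (rows sum to 1)"""
    n, k = predictions.shape
    eps = 1e-9                                   # smoothing to avoid log(0)
    x_bar = np.clip(np.bincount(answers, minlength=k) / n, eps, None)
    pred = np.clip(predictions, eps, None)
    y_bar = np.exp(np.log(pred).mean(axis=0))    # geometric mean of predictions
    # Information score: log(x_bar / y_bar) for the option each respondent chose
    info = np.log(x_bar[answers] / y_bar[answers])
    # Prediction score: alpha * sum_k x_bar_k * log(y_rk / x_bar_k)
    predict = alpha * (x_bar * np.log(pred / x_bar)).sum(axis=1)
    return info + predict

# Toy usage: 4 respondents answering a binary question (option 0 vs option 1)
ans = np.array([1, 1, 0, 1])
preds = np.array([[0.3, 0.7], [0.4, 0.6], [0.6, 0.4], [0.2, 0.8]])
print(bts_scores(ans, preds))
```

The key property, emphasized in the videos, is that truthful answers and truthful second-order beliefs earn higher expected scores than misreports.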
Randomization Method
Public lottery
Randomization Unit
The randomization unit is the ranking group. There are 8 treatment cells, so we created clusters of groups and randomized the 8 treatments within those clusters at the neighborhood level.
Was the treatment clustered?
Yes
Experiment Characteristics
Sample size: planned number of clusters
36
Sample size: planned number of observations
1500
Sample size (or number of clusters) by treatment arms
Approximately 187 respondents in each of the 8 treatment arms
Minimum detectable effect size for main outcomes (accounting for sample design and clustering)
We are powered to detect treatment effects on marginal returns to a grant on par with those found in De Mel, McKenzie, and Woodruff (2008).
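As a rough illustration of what a minimum detectable effect calculation for this clustered design might look like, the sketch below applies the standard design-effect formula; the intracluster correlation, cluster size, and outcome standard deviation are placeholder assumptions, not registered values.

```python
# Illustrative MDE sketch for a clustered two-arm comparison
# (placeholder inputs, not the registered power calculation).
from scipy.stats import norm

def clustered_mde(n_per_arm, cluster_size, icc, sd, alpha=0.05, power=0.80):
    """MDE = (z_{1-a/2} + z_{power}) * sd * sqrt(DEFF * 2 / n_per_arm),
    with design effect DEFF = 1 + (m - 1) * ICC for clusters of size m."""
    z = norm.ppf(1 - alpha / 2) + norm.ppf(power)
    deff = 1 + (cluster_size - 1) * icc
    return z * sd * (deff * 2 / n_per_arm) ** 0.5

# Toy usage: ~187 respondents per arm, ranking groups of ~10, an assumed ICC of
# 0.05, and an outcome SD of 1, so the MDE is expressed in SD units.
print(clustered_mde(n_per_arm=187, cluster_size=10, icc=0.05, sd=1.0))
```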
IRB
INSTITUTIONAL REVIEW BOARDS (IRBs)
IRB Name
MIT IRB
IRB Approval Date
2014-03-31
IRB Approval Number
1403006218
Analysis Plan

There are documents in this trial that are unavailable to the public.
Post-Trial
Post Trial Information
Study Withdrawal
Intervention
Is the intervention completed?
No
Is data collection complete?
Data Publication
Data Publication
Is public data available?
No
Program Files
Program Files
Reports and Papers
Preliminary Reports
Relevant Papers