Housing Expectations and Market Behavior
Last registered on July 19, 2019


Trial Information
General Information
Housing Expectations and Market Behavior
Initial registration date
July 17, 2019
Last updated
July 19, 2019 11:58 AM EDT

This section is unavailable to the public.
Primary Investigator
Other Primary Investigator(s)
PI Affiliation
Additional Trial Information
In development
Start date
End date
Secondary IDs
We designed a field experiment to measure how housing price expectations affect real estate transactions in the United States. Our motivation stems from the fact that home price expectations play a prominent role in many accounts of the housing boom of the mid-2000s. We will launch a large-scale, high-stakes information experiment with tens of thousands of households who have recently listed their houses for sale.
External Link(s)
Registration Citation
Bottan, Nicolas and Ricardo Perez-Truglia. 2019. "Housing Expectations and Market Behavior." AEA RCT Registry. July 19. https://doi.org/10.1257/rct.3663-1.0
Former Citation
Bottan, Nicolas and Ricardo Perez-Truglia. 2019. "Housing Expectations and Market Behavior." AEA RCT Registry. July 19. https://www.socialscienceregistry.org/trials/3663/history/50384
Experimental Details
We will send letters to tens of thousands of individuals who have listed their houses on the market.
Intervention Start Date
Intervention End Date
Primary Outcomes
Primary Outcomes (end points)
The primary outcomes correspond to the market behavior of the subjects: the transaction dates and prices.
Primary Outcomes (explanation)
The main outcomes are: i. the time elapsed from the day of letter delivery until the property is sold; ii. the price at which the house is sold.
The variable "time until sale" will be right-censored at the time we collect the data (e.g., if a house has not been sold 3 months after the intervention, we do not know whether it will sell 4, 6, or more months after the intervention). We can use standard methods to account for the censoring. A second (minor) caveat is that some houses may be taken off the market. We do not expect this behavior to be common, but if it is, we may need to include it as a separate outcome.
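To illustrate the censoring structure of "time until sale", here is a minimal Kaplan-Meier survival estimator, one of the standard methods alluded to above. The data layout (durations in days from letter delivery, with an indicator for whether the sale was observed) is a hypothetical sketch, not the registered analysis code.

```python
import numpy as np

def kaplan_meier(durations, observed):
    """Kaplan-Meier survival estimate for right-censored durations.

    durations: days from letter delivery to sale (or to data collection,
               if the house is still unsold at that point).
    observed:  1 if the sale was observed, 0 if right-censored.
    Returns a list of (time, survival probability) pairs.
    """
    durations = np.asarray(durations, dtype=float)
    observed = np.asarray(observed, dtype=int)
    event_times = np.unique(durations[observed == 1])
    survival = 1.0
    curve = []
    for t in event_times:
        at_risk = np.sum(durations >= t)                      # still on the market at t
        events = np.sum((durations == t) & (observed == 1))   # sold exactly at t
        survival *= 1.0 - events / at_risk
        curve.append((t, survival))
    return curve

# Example: three observed sales, two houses censored at data collection.
curve = kaplan_meier([10, 30, 30, 90, 90], [1, 1, 1, 0, 0])
```

Censored houses contribute to the risk set up to their censoring time without being counted as events, which is exactly how the censoring described above is handled.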

The variable "price sold" will also be censored for some subjects, because the price is observed only if the home is sold. We will use standard methods to deal with this censoring. Additionally, we will have an "intermediate" variable related to the sales price: the listing price. We will use publicly available data to track changes in the listing price and use it as a separate outcome. The advantages of this alternative outcome are: i. it is not subject to censoring; ii. the listing price may reflect the price that the seller "hopes" to sell the house for, and thus it may be more elastic to housing expectations. However, ex ante, we do not know whether changes in the listing price are frequent enough to give us sufficient statistical power when used as the outcome variable.
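As a sketch of how the listing-price outcome could be constructed from public listing histories, the helper below counts price revisions and the net revision for one listing. The input format (a chronological list of observed listing prices) is an assumption for illustration, not the registered data pipeline.

```python
def list_price_changes(price_history):
    """Summarize listing-price revisions for one property.

    price_history: chronological list of observed listing prices for a
    single listing (hypothetical input format).
    Returns (number_of_changes, net_change). Both are defined even when
    the sale price is censored because the home never sells.
    """
    revisions = [later - earlier
                 for earlier, later in zip(price_history, price_history[1:])
                 if later != earlier]
    return len(revisions), sum(revisions)

# Example: two downward revisions, for a net change of -15,000.
summary = list_price_changes([300_000, 290_000, 290_000, 285_000])
```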
Secondary Outcomes
Secondary Outcomes (end points)
Survey measures of the housing price expectations of the subjects.
Secondary Outcomes (explanation)
We will collect survey data in two ways. First, the letter sent to the subjects will also include a URL to complete an online survey. A sample of this online survey is attached to this registration. This survey includes questions about the expected future median price in the ZIP code of the respondent (1 year ahead and 5 years ahead).

There is a high likelihood that these survey data will not be useful. First, based on similar surveys, the response rate to the online survey is likely to be very low. Also, it is possible that the treatments affect the response rate, thus challenging the internal validity of the analysis of the survey data. For these reasons, we will conduct a complementary survey experiment with an auxiliary sample (Amazon Mechanical Turk workers). In that auxiliary survey, we will not be able to measure the primary outcomes (i.e., market transactions), but we will be able to measure the secondary outcomes (i.e., survey expectations). We will deploy the auxiliary survey at around the same time as we send the letters, and we will also attempt to conduct a follow-up survey with this auxiliary sample one month after the baseline survey. Please find attached screenshots of the baseline and follow-up auxiliary surveys.
Experimental Design
Experimental Design
This is an information provision experiment. Subjects will be randomly assigned to different treatment arms, which result in different information being included in the letter.
Experimental Design Details
Not available
Randomization Method
Randomization done in office by a computer.
Randomization Unit
Was the treatment clustered?
Experiment Characteristics
Sample size: planned number of clusters
60,000 individuals
Sample size: planned number of observations
We will send 60,000 letters. It is important to note that the letters will take over a week to be delivered. As a result, we anticipate that a significant fraction of these 60,000 individuals will have sold their properties by the time the letters are delivered. We will only analyze the effects on the subjects who have not sold their houses by our estimated delivery date (because it is impossible for our letter to affect a transaction that has already happened). We can use the rest of the sample for a falsification test in an event-study fashion.
Sample size (or number of clusters) by treatment arms
Subjects will be randomized into the following letter types: 20% to the "present" type, 30% to the "past" type, and 50% to the "future" type.

Within the "past" letter type, subjects will be randomized with equal probability to 1-year or 2-year sub-treatments. Within the "future" letter type, subjects will be randomized with equal probability to one of the three possible forecast sub-treatments.
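The two-stage assignment above can be sketched as follows. The sub-treatment labels ("1-year", "2-year", and the three forecast variants) are placeholders for illustration; the registered design only specifies the assignment probabilities.

```python
import numpy as np

def assign_arms(n, rng):
    """Randomly assign letter types and sub-treatments.

    Letter types use the registered probabilities (20/30/50). Within
    "past" and "future", sub-treatments are drawn with equal probability;
    the sub-treatment names here are hypothetical labels.
    """
    arms = rng.choice(["present", "past", "future"], size=n, p=[0.2, 0.3, 0.5])
    sub = np.empty(n, dtype=object)
    past = arms == "past"
    future = arms == "future"
    sub[past] = rng.choice(["1-year", "2-year"], size=past.sum())
    sub[future] = rng.choice(["forecast-1", "forecast-2", "forecast-3"],
                             size=future.sum())
    sub[arms == "present"] = None  # no sub-treatment for the "present" type
    return arms, sub

# Example: assign the full planned sample with a fixed seed for reproducibility.
rng = np.random.default_rng(0)
arms, sub = assign_arms(60_000, rng)
```

With 60,000 draws, the realized shares land within a fraction of a percentage point of the target probabilities.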
Minimum detectable effect size for main outcomes (accounting for sample design and clustering)
Supporting Documents and Materials

There are documents in this trial that are unavailable to the public.
IRB Name
Cornell University Institutional Review Board
IRB Approval Date
IRB Approval Number
IRB Name
Institutional Review Board at University of California Los Angeles
IRB Approval Date
IRB Approval Number