
Fields Changed

Registration

Field: Last Published
Before: November 12, 2021, 06:07 PM
After: December 23, 2021, 10:09 AM
Field: Experimental Design (Public)

Before:
To measure the impact of access to personalized, just-in-time information from a credit price comparison tool on consumer beliefs, search, and loan characteristics, we are conducting a randomized controlled trial (RCT) with the CMF (the Chilean financial regulator). The target sample is adults searching on Google for a consumer loan or mortgage. To recruit participants, we have identified a set of Google search keywords related to consumer loans and have designed a Google Ads campaign targeting Chilean residents searching for consumer loans online. Based on our pilot data, we expect 147,000 people to click our ads, of whom 26,500 participants (18%) will be assigned to a treatment arm. When loan searchers click our ads, they are directed to our online survey questionnaire. If they provide informed consent, they enter the initial part of the survey, which asks about sociodemographic characteristics, loan characteristics, and beliefs about the credit market. After completing this initial part, participants are randomized into one of the six treatment arms. After treatment, we again ask about beliefs about the credit market to measure whether they change. The total survey takes 9 minutes on average.

The three treatment arms in our RCT are described below:
1) Full price comparison tool: allows users to enter their income and the loan type (consumer or mortgage), maturity, amount, and municipality they would like to search. Using the CMF's administrative data on the universe of consumer and mortgage loans over the last six months, the interactive tool shows the distribution of APRs that other consumers with similar characteristics received from banks for similar loans. User characteristics are based on the income and municipality inputs; loan characteristics are based on the loan type, amount, and maturity inputs.
2) Simplified price comparison tool: the inputs are the same as in the full version of the tool. This simplified version shows users the benefits of search by returning a statement of how much they would save in monthly and total costs by searching at more banks, conditional on the loan and consumer inputs.
3) Control group video: explains basic financial concepts about credit and provides no information about searching for and comparing loans across banks.

We will also test whether asking people about search itself induces more search, by randomly omitting these questions for a fraction (25%) of our respondents (more on this in the power calculations section).

After:
Identical to the above, except that, based on our pilot data, we now expect 28,673 participants (19.5%) of the 147,000 ad clickers to be assigned to a treatment arm.
Field: Planned Number of Observations

Before: We need 147,043 people clicking on our ads to detect an effect on our main outcomes with 80% power. Based on our pilot data, this translates to 26,468 respondents (18%) being assigned to a treatment arm.

After: We need 147,043 people clicking on our ads to detect an effect on our main outcomes with 80% power. Based on our pilot data, this translates to 28,673 respondents (19.5%) being assigned to a treatment arm.
Field: Sample size (or number of clusters) by treatment arms

Before: As we have three treatment arms, the sample size per arm is the number of respondents assigned to a treatment arm (26,468) divided by 3, which is 8,823.

After: As we have three treatment arms, the sample size per arm is the number of respondents assigned to a treatment arm (28,673) divided by 3, which is 9,558.
Field: Power calculation: Minimum Detectable Effect Size for Main Outcomes

Before:
We will study several outcomes: initial survey outcomes, consumer and mortgage loan interest rates from administrative data, and follow-up survey outcomes. Since our primary outcome of interest is the consumer loan interest rate, the power calculations are built around it. When potential loan consumers click on our ads, they are directed to our survey. Not all of them consent or complete every survey module; they can drop out at any time. Based on our pilot data, 18% of people who click on our ads reach the module where they are assigned to a treatment arm, and 17% of those assigned to a treatment arm take out a consumer loan within six months after the survey. Given our budget constraint and following List et al. (2011), we need 147,043 loan searchers to click our ads to detect a minimum effect of 0.1 standard deviations (SD), or 91 basis points given a consumer interest rate SD of 8.9 within a six-month window, with 80% power at a 5% significance level. This means that 26,468 people would be assigned to a treatment arm and 4,500 would take out a consumer loan within six months after the survey. With this number of clicks, we could detect a minimum effect of 0.185 SD (12 basis points given a mortgage interest rate SD of 0.625 within a one-year window) on mortgage loan interest rates. The mortgage take-up rate one year after the survey among participants assigned to a treatment arm is 5.2%, hence the higher minimum detectable effect. Moreover, we could detect a minimum effect of 0.06 SD on our initial survey outcomes and 0.1 SD on our follow-up outcomes, assuming that at least 17% of participants assigned to a treatment arm answer our follow-up survey.

Finally, we will also test whether asking people about search induces more search. To do this, we randomly omit these questions for 25% of our respondents. This fraction was set so that we could detect a minimum effect similar to our main outcomes: 0.1 SD on our follow-up outcomes and consumer loan interest rates, equivalent to 86 basis points. We are aware that not asking search questions reduces our power for initial survey outcomes and have adjusted for this accordingly.

After:
Identical to the above, except for the updated pilot figures: 19.5% of ad clickers reach the treatment-assignment module, and 17.3% of those assigned to a treatment arm take out a consumer loan within six months. The minimum detectable effect of 0.1 SD on consumer loan interest rates now corresponds to 87 basis points, with 28,673 people assigned to a treatment arm and 4,961 of them taking out a consumer loan within six months. For mortgage interest rates, the minimum detectable effect is now 0.178 SD (11 basis points). The minimum detectable effect on initial survey outcomes is now 0.07 SD, and the search-question test can detect a minimum effect of 0.09 SD on follow-up outcomes and consumer loan interest rates, equivalent to 81 basis points.
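The updated power figures can be checked with the textbook two-arm normal approximation, MDE = (z_{1-α/2} + z_β) · sqrt(2/n), where n is the number of observations per arm. This is a sketch under that assumption (the exact List et al. (2011) setup may differ in detail), using the take-up rates and interest-rate SDs stated above:

```python
import math

def mde_sd(n_per_arm, z_alpha=1.960, z_power=0.842):
    """Minimum detectable effect in SD units for a two-sided test
    comparing two equal-sized arms: MDE = (z_{1-a/2} + z_b) * sqrt(2/n).
    Defaults: 5% significance (z_{0.975}=1.960), 80% power (z_{0.80}=0.842)."""
    return (z_alpha + z_power) * math.sqrt(2.0 / n_per_arm)

# Figures from the updated ("After") registration.
clicks = 147_043
assigned = round(clicks * 0.195)   # ~28,673 assigned to a treatment arm
per_arm = assigned // 3            # ~9,557 respondents per arm

# Consumer loans: 17.3% take-up within six months, rate SD = 8.9 pp.
consumer_n = per_arm * 0.173
consumer_mde_sd = mde_sd(consumer_n)           # ~0.10 SD
consumer_mde_bp = consumer_mde_sd * 8.9 * 100  # ~87 basis points

# Mortgages: 5.2% take-up within one year, rate SD = 0.625 pp.
mortgage_n = per_arm * 0.052
mortgage_mde_sd = mde_sd(mortgage_n)             # ~0.178 SD
mortgage_mde_bp = mortgage_mde_sd * 0.625 * 100  # ~11 basis points

print(f"consumer: {consumer_mde_sd:.3f} SD = {consumer_mde_bp:.0f} bp")
print(f"mortgage: {mortgage_mde_sd:.3f} SD = {mortgage_mde_bp:.0f} bp")
```

Under this approximation, roughly 1,653 consumer-loan takers per arm yield the stated 0.1 SD (≈87 bp) minimum detectable effect, and roughly 497 mortgage takers per arm yield 0.178 SD (≈11 bp), matching the registered figures.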