Intervention
Program: Replacing the VI-SPDAT
The VI-SPDAT has long been the most common tool communities use to decide who will receive scarce homelessness-response resources, but a frequent concern about VI-SPDAT scoring in housing assistance is its limited attention to equitable housing decisions across subpopulations. Research on the validity and reliability of the VI-SPDAT within homeless coordinated entry systems finds that the tool's content does not fully align with housing tenure indicators observed in studies of permanent supportive housing for individuals with mental illness or histories of chronic homelessness (Malone, 2009; Pearson et al., 2009; Pickett-Schenk et al., 2007; Tsemberis & Eisenberg, 2000; Wong et al., 2006). More recently, some agencies have replaced the VI-SPDAT, and even its creator, OrgCode Consulting, now recommends phasing out its use (OrgCode, 2022). For these reasons, SARAH decided to transition to a new assessment tool in partnership with LEO.
Sample and Setting:
Intake and use of the new scoring tool will take place at any of the HMIS access points in Bexar County. Clients may complete a call-in assessment or visit an access point for an in-person assessment. When an individual approaches one of these access points, they will be screened for eligibility before going through the one-time intake assessment with a case worker. The assessment will have clients complete the mandatory questions required by HUD along with additional questions for the data-driven tool and client preferences. After completing the assessment, case workers will be asked to enter a priority rating on a scale of 0 to 100, with 0 representing clients unlikely to return to homelessness and 100 representing the most vulnerable. The instructions to case workers for this scoring will clearly explain that rating all clients as 100 ("highest risk") will not improve those clients' chances of receiving housing services. For reference, each case worker will be able to view their current clients along with the corresponding subjective risk scores.
Since we randomly assign individuals to one of two prioritization systems, random assignment is only consequential when the two systems disagree. For many individuals, both systems will agree that the person is high priority or low priority, and random assignment will have no effect on the services they receive. We will therefore limit the analysis sample to individuals for whom the two rating systems disagree.
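The sample restriction described above can be sketched as follows. This is a minimal illustration, not the study's actual code: the record fields (`data_score`, `caseworker_score`) and the high-priority cutoff are hypothetical, standing in for whatever classification rule the two systems use.

```python
# Sketch of the analysis-sample restriction: keep only individuals whose
# two priority ratings imply different prioritization decisions.
# Field names and the cutoff below are hypothetical.

HIGH_PRIORITY_CUTOFF = 60  # assumed cutoff on the 0-100 scale


def is_high_priority(score, cutoff=HIGH_PRIORITY_CUTOFF):
    return score >= cutoff


def analysis_sample(clients):
    """Keep clients for whom the data-driven tool and the case worker
    disagree about high- vs. low-priority status."""
    return [
        c for c in clients
        if is_high_priority(c["data_score"]) != is_high_priority(c["caseworker_score"])
    ]


clients = [
    {"id": 1, "data_score": 85, "caseworker_score": 90},  # both high: excluded
    {"id": 2, "data_score": 75, "caseworker_score": 40},  # disagree: included
    {"id": 3, "data_score": 20, "caseworker_score": 95},  # disagree: included
]
print([c["id"] for c in analysis_sample(clients)])  # → [2, 3]
```

Only clients 2 and 3 enter the analysis sample, since for client 1 both systems would reach the same prioritization decision regardless of assignment.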
Research Design:
We will estimate treatment effects by OLS using the following regression:
Yi = α + Tiβ0 + Xiγ0 + εi
Yi is the outcome. Ti is an intent-to-treat dummy indicating the random assignment of person i to data-based versus case-worker prioritization. In the case of non-compliance, Ti retains the value of the original random assignment. The vector Xi includes a set of person-level characteristics collected at baseline, and εi is an error term. The coefficient on the treatment dummy, β0, gives the covariate-adjusted difference in means between the treatment and comparison groups: the estimated impact of the program.
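The ITT regression above can be estimated with any OLS routine. The sketch below uses NumPy's least-squares solver on simulated data purely to illustrate the mechanics; the sample size, covariates, and "true" treatment effect of -0.5 are invented for the example.

```python
import numpy as np

# Illustration of estimating Yi = a + Ti*b0 + Xi*g0 + ei by OLS.
# All data below are simulated; the true effect is set to b0 = -0.5.
rng = np.random.default_rng(0)
n = 500
T = rng.integers(0, 2, size=n)        # intent-to-treat dummy (random assignment)
X = rng.normal(size=(n, 2))           # baseline covariates
eps = rng.normal(size=n)              # error term
Y = 1.0 - 0.5 * T + X @ np.array([0.3, -0.2]) + eps

# Design matrix: intercept, treatment dummy, baseline covariates
D = np.column_stack([np.ones(n), T, X])
coef, *_ = np.linalg.lstsq(D, Y, rcond=None)
b0_hat = coef[1]  # estimated treatment effect
print(round(float(b0_hat), 2))
```

With these simulated data the estimate should land close to the true value of -0.5; in the actual study one would also report standard errors, which a package such as statsmodels provides directly.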
Measures:
To measure the effectiveness of the two prioritization systems, we will use data collected in HMIS as well as several administrative data sources. Our primary outcome is returns to homelessness, defined as a new program enrollment in HMIS after random assignment. Additionally, we plan to link to administrative data sources to measure secondary outcomes related to creditworthiness and use of credit, criminal justice, employment, and income.
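The primary outcome definition above can be made concrete with a small sketch. The function and record layout here are hypothetical, assuming only that each client has a random-assignment date and a list of HMIS enrollment start dates.

```python
from datetime import date

# Sketch of the primary outcome: an indicator equal to 1 if the client has
# any new HMIS program enrollment after their random-assignment date.
# The record layout is hypothetical.


def returned_to_homelessness(assignment_date, enrollment_dates):
    """Return 1 if any HMIS enrollment begins after random assignment, else 0."""
    return int(any(d > assignment_date for d in enrollment_dates))


# Enrollment before assignment does not count; one after assignment does.
print(returned_to_homelessness(
    date(2023, 1, 15),
    [date(2022, 11, 1), date(2023, 6, 3)],
))  # → 1
```

An analogous indicator could be built for each secondary outcome once the administrative linkages are in place.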