A recent paper in Procedia Computer Science examines how artificial intelligence can change the way homeless people are matched with services. Researchers from the University at Albany applied machine learning techniques to data from over 38,000 people receiving homelessness assistance in New York’s Capital Region between 2005 and 2019.
Determining a person’s eligibility for housing and matching them to programs is labor-intensive and often ineffective. The researchers turned to machine learning to build a solution, comparing three classification techniques (sketched in code after the list):
- K-Nearest Neighbors: This technique finds the k most similar past cases and predicts a label for a new input from how those neighbors were classified, typically by majority vote.
- Random Forest: This technique involves constructing many decision trees and aggregating their predictions.
- Multiclass AdaBoost: This is a boosting algorithm that combines several “weak” classifiers to create a more powerful one.
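As a rough illustration of how these three techniques line up, all of them are available in scikit-learn. This is not the paper's actual pipeline: the synthetic dataset below merely stands in for assessment features and an assigned-program label, and the hyperparameters are placeholders.

```python
# Rough comparison harness for the three techniques, using scikit-learn.
# The synthetic dataset is a stand-in for assessment/household features
# and an assigned-program label; it is NOT the study's data.
from sklearn.datasets import make_classification
from sklearn.ensemble import AdaBoostClassifier, RandomForestClassifier
from sklearn.model_selection import train_test_split
from sklearn.neighbors import KNeighborsClassifier

X, y = make_classification(
    n_samples=2000, n_features=20, n_informative=10,
    n_classes=4, n_clusters_per_class=1, random_state=0,
)
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)

models = {
    # Predicts from the labels of the k most similar past cases.
    "k-nearest neighbors": KNeighborsClassifier(n_neighbors=15),
    # Aggregates the votes of many decision trees.
    "random forest": RandomForestClassifier(n_estimators=200, random_state=0),
    # Boosts many weak learners (depth-1 trees by default) into a stronger one.
    "multiclass AdaBoost": AdaBoostClassifier(n_estimators=200, random_state=0),
}

for name, model in models.items():
    model.fit(X_train, y_train)
    print(f"{name}: held-out accuracy = {model.score(X_test, y_test):.3f}")
```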
A key innovation in the study was a two-stage classifier approach:
First, predict the general type of program most appropriate for the person (such as emergency shelter, transitional housing, or rapid re-housing). Then, recommend a specific program within that category. This approach allows each stage to use a more specialized classifier and mitigates the class imbalance in the data.
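The paper does not publish its code, but the general shape of such a pipeline can be sketched as below. The class name `TwoStageRecommender`, the label names, and the choice of Random Forest for the first stage are assumptions for illustration; AdaBoost is used for the second stage since that is where the study reports it performing best.

```python
# Sketch of a two-stage recommender: stage 1 predicts the broad program
# type; stage 2 uses one specialized classifier per type, trained only on
# records assigned to that type (which also reduces class imbalance).
# Class name, labels, and classifier choices are illustrative assumptions.
import numpy as np
from sklearn.ensemble import AdaBoostClassifier, RandomForestClassifier


class TwoStageRecommender:
    def __init__(self):
        # Stage 1: program type, e.g. emergency shelter, transitional
        # housing, or rapid re-housing.
        self.type_clf = RandomForestClassifier(n_estimators=200, random_state=0)
        # Stage 2: one classifier per program type, filled in by fit().
        self.program_clfs = {}

    def fit(self, X, program_type, program):
        X = np.asarray(X)
        program_type = np.asarray(program_type)
        program = np.asarray(program)
        self.type_clf.fit(X, program_type)
        for t in np.unique(program_type):
            # Train a specialized classifier only on this type's records.
            mask = program_type == t
            clf = AdaBoostClassifier(n_estimators=200, random_state=0)
            clf.fit(X[mask], program[mask])
            self.program_clfs[t] = clf
        return self

    def predict(self, X):
        X = np.asarray(X)
        types = self.type_clf.predict(X)
        programs = np.empty(len(X), dtype=object)
        for t in np.unique(types):
            # Route each record to the classifier for its predicted type.
            mask = types == t
            programs[mask] = self.program_clfs[t].predict(X[mask])
        return types, programs
```

In use, `fit` would take the assessment features, the program-type label, and the specific program each person was placed in; `predict` then returns a (type, program) pair for a new case.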
The best performance, 81.5%, came from Multiclass AdaBoost recommending a specific program after the program type had already been identified, a substantial improvement over making recommendations without first narrowing by program type.
Results like these lend credence to the hope that AI can connect homeless individuals with appropriate services more effectively, making better use of limited resources.
The researchers point out that, while the results are encouraging for AI applied to social services, caution is still needed: no such system should be deployed without considering the risk of perpetuating past biases or amplifying existing inequalities. The study argues for human-interpretable recommendations and careful assessment of the features the model uses, since implicit human biases can still be present in the model and its applications.