Disparate Impact in Machine Learning

June 8th, 2020 in Federal Legislation, State Legislation

Machine learning, or “AI,” is a novel approach to solving problems with software. Computer programs are designed to take large amounts of data as inputs and recognize patterns that are difficult for humans to see. These patterns are so complex that it is sometimes unclear what parts of the data the machine is relying on to make a decision. This type of black box often produces useful predictions, but it is unclear whether those predictions rest on forbidden inferences. It is possible that machine learning algorithms used in banking, real estate, and employment rely on impermissible data related to protected classes, and therefore violate fair credit, housing, and labor laws.

Machine learning, like all programmed software, follows certain rules. The software generally compares images or numbers with descriptions. For instance, facial recognition software may take as inputs pictures of faces labeled with names, and then identify patterns in each face to associate with the corresponding name. Once the software is trained, it can be fed a picture of a face without a name, and it will use the patterns it has learned to determine whose face it is. If the provided data is accurately prepared and labeled, the software usually works. If there are errors in the data, the software will learn the wrong things and be unreliable. This principle is known as “garbage in, garbage out”: machine learning is only ever as reliable as the data it works with.
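
For readers who want a concrete picture of that train-then-predict loop, here is a minimal sketch using the scikit-learn library. The names and numeric “features” are invented stand-ins for real face images; this is an illustration of the general idea, not any particular vendor’s system.

```python
# A minimal sketch of supervised "train, then predict" learning, using
# scikit-learn and made-up numeric features in place of face images.
from sklearn.neighbors import KNeighborsClassifier

# Training data: each row is a simplified feature vector for one face,
# and each label is the name attached to that face.
features = [[0.2, 0.9], [0.3, 0.8], [0.9, 0.1], [0.8, 0.2]]
names = ["Alice", "Alice", "Bob", "Bob"]

model = KNeighborsClassifier(n_neighbors=1)
model.fit(features, names)            # learn patterns from labeled examples

# Prediction: an unlabeled face is matched to the closest learned pattern.
print(model.predict([[0.25, 0.85]]))  # -> ['Alice']

# "Garbage in, garbage out": if the training labels were wrong, this same
# code would confidently return wrong names.
```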

This causes problems when machine learning is applied to data with racial bias baked in. Machine learning has been used in the criminal justice system to purportedly predict, based on a lengthy questionnaire, whether a perpetrator is likely to reoffend. The software was, after controlling for arrest type and history, 77% more likely to wrongly predict that black defendants would reoffend than white defendants. Algorithms used by mortgage lenders charge higher interest rates to black and latino borrowers. A tenant screening company was found to rely in part on “arrest records, disability, race, and national origin” in its algorithms. When biases are already present in society, machine learning spots those patterns, uses them, and reinforces them.

There are two ways of interpreting the racial effect of machine learning under discrimination law: disparate treatment and disparate impact. Disparate treatment is fairly cut-and-dried: it prohibits treating members of a protected class differently from members of an unprotected class. For instance, giving a loan to a white person, but denying the same loan to a similarly situated black person, is disparate treatment. Disparate impact is using a neutral factor to make a decision that affects a protected class more than an unprotected class. Giving a loan to a person living in a majority-white zip code, while refusing the same loan to a similarly situated person living in a majority-black zip code, creates a disparate impact.
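
To see how a facially neutral factor can produce a disparate impact in practice, here is a hypothetical sketch that compares approval rates across two zip codes and applies the “four-fifths” (80%) screen that regulators often use as a rough threshold in the employment context. All of the counts are invented for illustration.

```python
# Hypothetical loan decisions keyed by zip code; all data are invented.
approvals = {
    "zip_majority_white": {"approved": 80, "denied": 20},
    "zip_majority_black": {"approved": 50, "denied": 50},
}

def approval_rate(counts):
    return counts["approved"] / (counts["approved"] + counts["denied"])

rate_white = approval_rate(approvals["zip_majority_white"])  # 0.80
rate_black = approval_rate(approvals["zip_majority_black"])  # 0.50

# The "four-fifths" screen: a selection rate below 80% of the most favored
# group's rate is commonly treated as evidence of disparate impact.
ratio = rate_black / rate_white                              # 0.625
verdict = "potential disparate impact" if ratio < 0.8 else "passes screen"
print(f"approval ratio = {ratio:.3f} -> {verdict}")
```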

Whether machine learning creates disparate treatment, disparate impact, or neither depends on the lens through which the system is viewed. If the software makes a decision based on a pattern that is the machine equivalent of protected class status, then it is committing disparate treatment discrimination. If a lender uses the software as a factor in deciding whether to grant a loan, and the software is more likely to approve loans for unprotected class members than for similarly situated protected class members, then the lender is committing disparate impact discrimination. And if a lender uses software that relies on permitted factors, such as education level, yet nevertheless winds up approving more loans for unprotected class members than for protected class members, the process could be discriminatory but not actionable under either disparate treatment or disparate impact theory.
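
One way an auditor might probe whether software has learned a “machine equivalent” of protected class status is to test how well its supposedly neutral inputs can reconstruct the protected attribute itself; if they can, those inputs are acting as a proxy. The sketch below uses invented data and the scikit-learn library, and is only one possible audit technique, not a method endorsed by any court or agency.

```python
# Hypothetical audit: can the "neutral" inputs (e.g., a zip-code feature and
# education level) reconstruct protected class membership? Data are invented.
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import cross_val_score

rng = np.random.default_rng(0)
n = 1000
protected = rng.integers(0, 2, n)                # protected-class membership
zip_feature = protected + rng.normal(0, 0.3, n)  # strongly correlated "neutral" input
education = rng.normal(0, 1, n)                  # genuinely unrelated input
X = np.column_stack([zip_feature, education])

# If accuracy is far above chance (0.5), the inputs are a proxy for protected
# class status, and decisions based on them inherit that link.
proxy_accuracy = cross_val_score(LogisticRegression(), X, protected, cv=5).mean()
print(f"protected class predictable from inputs with accuracy {proxy_accuracy:.2f}")
```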

The problem thus depends in part on understanding how the machine interprets the data, which is exactly the information we often lack when machine learning is particularly complex. Thus, instead of relying on either disparate impact or disparate treatment theory, perhaps legal analysis of discrimination in machine learning should be entirely outcomes-driven. If an algorithm wrongly predicts the likelihood of an event occurring, and its predictions are less accurate for protected class members than for unprotected class members, the algorithm should be considered prima facie discriminatory. Such a solution is viable for examining recidivism, interest rates, and loan repayment, but it may be insufficient to cover problems like housing denials or employment. If an algorithm denies protected class members access to housing in the first place, it is difficult to test whether the algorithm’s decision was wrong, because nonpayment or other issues never have a chance to arise.
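
The outcomes-driven test proposed above could be operationalized as a simple audit: compare the algorithm’s error rates between protected and unprotected class members, for instance the rate at which a recidivism tool wrongly flags people who never reoffend. The counts below are invented for illustration and are not the figures from any actual study.

```python
# Hypothetical audit of a recidivism predictor: how often does it wrongly
# label people who did NOT reoffend as "high risk," by group?
# All counts are invented for illustration.
outcomes = {
    "protected":   {"false_positives": 45, "true_negatives": 55},
    "unprotected": {"false_positives": 23, "true_negatives": 77},
}

def false_positive_rate(c):
    # Share of non-reoffenders who were wrongly flagged as high risk.
    return c["false_positives"] / (c["false_positives"] + c["true_negatives"])

fpr_protected = false_positive_rate(outcomes["protected"])      # 0.45
fpr_unprotected = false_positive_rate(outcomes["unprotected"])  # 0.23

# Under the proposed outcomes-driven rule, a tool this much less accurate for
# protected class members would be prima facie discriminatory.
print(f"wrongful 'high risk' rate: protected {fpr_protected:.0%}, "
      f"unprotected {fpr_unprotected:.0%}")
```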

Machine learning offers great promise to escape the racial biases of our past if it is programmed to rely only on neutral factors. Unfortunately, it is currently impossible to determine which factors used by machines are truly neutral. Improperly configured machine learning has already led to higher incarceration rates for black inmates and higher loan interest rates for black and latino homeowners. Such abuses of machine learning technology must be rejected by the law.

Schecharya Flatté anticipates graduating from Boston University School of Law in May 2021.