We Are Hard-Coding Injustices for Generations to Come
It is a grim truism of modern life that burdens ranging from civil rights violations and health crises to environmental degradation and educational barriers fall disproportionately on the people least financially and socially equipped to deal with them. Black Americans are incarcerated at five times the rate of white Americans. Of the 28 million nonelderly Americans lacking health insurance, over half are people of color. Rich Americans have far greater access to healthy food and good schools than poor Americans.
The same is true of computational systems, and on this front a bitter fight is emerging. At issue is whether a society now inextricably dependent on computer technology and its underlying programming can ensure that its vast benefits and inevitable burdens will be distributed equally across social and economic classes. If some version of this egalitarian principle, which I call “computational justice,” does not soon become commonplace, we run the risk of hard-coding all manner of injustice for generations to come.
In 2016, mathematician and former investment banker Cathy O’Neil explored the idea that algorithms in the economic, financial, and educational realms reinforce the structural effects that maintain divisions in wealth. She argued convincingly that algorithms’ tendency to keep the rich rich and the poor poor is no accident; it is evidence of a set of biases that algorithms inherit from their creators.
O’Neil’s “weapons of math destruction,” disembodied models that swallow people and spit out biased recommendations, are an illuminating first step, but they are only one example of computational injustice. Class affects how an algorithm applies to different people, but it also affects which algorithms apply to them in the first place. Being wealthy means algorithms will find you vacation homes on Airbnb; being homeless means robots will move you along if you sleep too close to buildings. Algorithms find work for the well-educated while taking it away from those without education. And AI is less and less disembodied, increasingly embedded in robots that exist in and interact with the physical world. Without oversight, the opportunities for injustice are abundant.
Understood like this, computational justice is the newest version of an old principle. Starting in the 19th century, through the fight for labor laws and civil rights especially, social justice became commonplace. In the 1980s, people began to talk about environmental justice, understanding that segregation often meant more than Jim Crow; in particular, that “pollution is segregated, too.” It’s now well documented that low-income areas fare worse than wealthy ones when natural disasters strike, as when Hurricane Harvey hit Texas, Katrina hit New Orleans, or Maria hit Puerto Rico. Most recently, “health justice” has entered our vernacular, with advocates opposing the unequal quality of care received by the wealthy, white, and able on one hand and the poor, brown, and disabled on the other. Like the lenses of social, environmental, and health justice, the lens of computational justice shows us that discriminatory structural effects do, in fact, exist, that they affect outcomes, and that they result from conscious choices.
AI’s unique talent for finding patterns has, for example, only perpetuated our legal system’s history of discrimination. Since people of color are more likely to be stopped by police, more likely to be convicted by juries, and more likely to receive long sentences from human judges, the shared features these systems identify are often race or proxies for race. Here, computational injustice codifies social injustice: human and cultural biases lead us to think people of color are more dangerous, so we disproportionately arrest and convict people of color, thus confirming our biases.
The lower classes also have more interactions with these systems through no fault of their own.
After three consecutive years of more than 300 annual homicides, the Baltimore Police Department has implemented algorithms to preemptively identify “the where, when, and who” of crime, a high-tech version of hot spot policing that critics say disproportionately targets minorities and lower-class communities. It isn’t clear that “predictive policing” is effective, but even if it is, increased police targeting breeds distrust and has negative, long-lasting psychological effects on residents. That’s not to say these systems don’t have the potential to produce good, but their burdens are simply irrelevant to the privileged.
Computational injustices also last longer for the less socially mobile. Consider the 2017 hack of the credit reporting company Equifax, which affected 145 million Americans. These kinds of thefts often lead to fraudulent credit activity and lower credit scores. Lower scores mean algorithms assign higher car insurance rates and higher interest rates on loans. The easiest way to improve a credit score, experts say, is to use credit and make the subsequent payments on time. That’s not easy for those in the poorest income bracket in the U.S., who have an average of only $367 of disposable income per year.
Computational injustice resembles its cousins in countless ways, with each inflicting deeper wounds on the lower class than on the upper class. Among a vanguard of others, the ACLU already fights for social justice, iterations of the EPA for environmental justice, and the WHO for health justice. Framing computational justice alongside these ongoing efforts is necessary to move the fight from nascence to activism.
This fight hasn’t broken out yet, though it is picking up. The ACLU has recently helped pass algorithmic discrimination legislation in New York City, and the Electronic Frontier Foundation has testified on behalf of algorithmic justice, but neither organization is primarily focused on this issue, and both maintain the narrower view of “algorithmic justice.” Groups that do spend the bulk of their time on computational justice, like the AI Now Institute at New York University, are research institutes less directly involved in activism.
Collaboration between these groups is promising, but at this late stage of technological integration, we need to recognize computational justice as a virtue and create standalone structures dedicated to actively defending it.
Nick Thieme is a research fellow at the University of California Hastings Institute for Innovation Law and freelance writer with work appearing in Slate Magazine, BuzzFeed News, and Significance Magazine. His research focuses on AI regulation, cybersecurity, and pharmaceutical patent trolling.
Archived comments are below.
To say that a credit system that takes income and payment history into account is discriminatory is disingenuous, since that was its purpose. Any predictive system discriminates the instant its variables are determined. Predictive policing is another example of the author restating the obvious as a conspiracy, when all it does is put assets where crime has occurred in the past.
I don’t know what you could do regarding credit, since the willingness to lend is a function of the expected recovery rate. As to policing, the problem is not the distribution of those assets but their training. The “poor” are those most negatively affected by crime. They welcome protection. It’s the harassment that results from poor personnel selection and training of police that they object to.
If you object to the use of predictive algorithms, you’re foolish. Don’t you ever watch the weather report? I can sum up why this article is stupid with the example of home ownership. Those owning in the most dangerous (to value) locations are bimodal. The rich build on the seashore, sloping ground, and other dangerous locations. Those choices are subsidized by the government through flood insurance and disaster relief. The poor live in dangerous locations as well. The primary danger to the poor is environmental costs. These could be mitigated, but society is in the process of ending environmental regulation since it reduces ROI by an estimated .005. It’s not the math that discriminates; it’s people.