Algorithms Are Biased. That Might Help Regulators End Discrimination, a New Paper Argues.

Algorithmic prejudices can create inequalities. But in doing so, they might help lawyers pinpoint discrimination where they could not in the past.

Do the multitude of algorithms engineered to govern our livelihoods reflect the biases and prejudices of those who create them? Absolutely—but, according to a new paper, that may not ultimately be such a bad thing.

Over the past decade, a growing body of research has found that algorithms themselves can contain biases: For example, a ProPublica analysis in 2016 found that an algorithm used by many states to inform sentencing and bail decisions overestimates the likelihood of recidivism for black defendants and underestimates it for white ones; a 2013 paper found that, when someone searches a name on Google, the search engine is 25 percent more likely to serve advertisements for websites suggesting that person has an arrest record if the searched name is black-identifying, like DeShawn or Jermaine.

A National Bureau of Economic Research working paper, authored by renowned legal scholar Cass Sunstein along with Jon Kleinberg, Jens Ludwig, and Sendhil Mullainathan, posits that such algorithmic biases can actually help expose the intangible and often unconscious biases of their creators in measurable, and therefore legally actionable, terms. If Adam Smith’s “invisible hand” guides free markets unseen, algorithms can help make the complexities of individual decision-making, and the discrimination embedded in it, visible to the state.

Disparities in outcomes and impact are easy to spot throughout society, but the law has had difficulty adjudicating the specific acts of discrimination and prejudice that produce such disparities. While decades of data exist detailing racial inequalities across civil society, from income and wealth accumulation to educational achievement and health-care outcomes, Sunstein and his co-authors note that this evidence is primarily statistical; the data doesn’t necessarily provide insights into discrimination on an individual level. This poses tremendous challenges in uniformly applying the United States’ fundamental legal prohibitions against discrimination on a case-by-case basis.

The paper’s authors offer an example: African Americans make up 15 percent of college-aged young people but only 6 percent of students at top American institutions. There is clearly an inequality, but pinpointing the discrimination that produces it is a difficult task for legal bodies. Is the primary factor the implicit bias of a college admissions board? The implicit bias of a wealthy donor at a pre-application-season meet-and-greet? Pre-existing disparities in the education system that limit the achievement of non-white students? And if it’s all of the above, which of these can the existing legal infrastructure recognize and resolve?

“Without some kind of formal discrimination (‘no women need apply’) or a ‘smoking gun’ document, the only other direct way to tell whether someone discriminated in a specific case may be to ask them,” the authors note. “Even setting aside the risk they lie … they honestly might not even know themselves.”

“When algorithms are involved, proving discrimination will be easier—or at least it should be, and can be made to be,” the authors add. “The law forbids discrimination by algorithm, and that prohibition can be implemented by regulating the process through which algorithms are designed.”

The researchers argue that, once algorithmic inequalities are detected, they can be addressed by engineering algorithmic solutions: algorithms built with human biases in mind can, in theory, correct for those biases before they shape a decision.

What does that algorithmic solution to discrimination look like? The answer, according to the working paper, is to introduce algorithmic screening tools into unstructured decision-making processes, like hiring. “Human beings can introduce biases in their choice of objectives and data; importantly, they might use data that are themselves a product of discrimination,” the authors note. “But conditional on getting objectives and data right, the algorithm at least removes the human bias of an unstructured decision process. The algorithm, unlike the human being, has no intrinsic preference for discrimination, and no ulterior motives.”
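To make the idea concrete, here is a minimal sketch, in Python, of what a structured screening step and a basic selection-rate audit might look like. The scoring formula, column names, 0.5 threshold, and the “four-fifths” check are illustrative assumptions for this example, not details drawn from the working paper.

# Illustrative sketch only: a toy structured screening step and a simple
# selection-rate audit. The scoring formula, column names, threshold, and
# the four-fifths check are assumptions made for this example.
import pandas as pd

def screen_candidates(df, threshold=0.5):
    # Rank candidates on a fixed, documented score instead of ad hoc judgment.
    # Only pre-declared, job-related inputs enter the formula; protected
    # attributes are deliberately excluded.
    df = df.copy()
    df["score"] = 0.6 * df["skills_test"] + 0.4 * df["work_sample"]
    df["advance"] = df["score"] >= threshold
    return df

def selection_rates(df, group_col="group"):
    # Selection rate per group: a tangible, checkable audit output.
    return df.groupby(group_col)["advance"].mean()

candidates = pd.DataFrame({
    "group":       ["A", "A", "B", "B", "B"],
    "skills_test": [0.9, 0.4, 0.8, 0.7, 0.3],
    "work_sample": [0.8, 0.5, 0.9, 0.4, 0.6],
})
rates = selection_rates(screen_candidates(candidates))
# Flag the process if any group's rate falls below 80 percent of the highest
# rate (the EEOC "four-fifths" rule of thumb, used here only as an example).
print(rates)
print("passes four-fifths check:", rates.min() >= 0.8 * rates.max())

The point of the sketch is the paper’s point: every choice, from the score formula to the threshold to the data, is written down and can be inspected, which is exactly what an unstructured human decision does not allow.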

The paper’s authors propose a level of uniform algorithmic regulation that “requires us to be able to identify and interrogate human choices in the construction” of a core algorithm. In the case of hiring algorithms, for example, regulations could specify what factors may be fed into a hiring model and what procedures may be used to build a screening algorithm in the first place. The authors note that “an important requirement for a new legal framework to prevent discrimination through the use of algorithms is transparency,” the kind of transparency that only algorithmic examinations of large data sets can provide.

“The inclusion of an algorithm in the decision loop now provides the opportunity for more feasible and productive transparency,” they write. “Effective transparency is easier to imagine once an algorithm is involved, because each of the things we are now asking the firm to hand over—the choice of outcome, candidate predictors, training sample—is a tangible object.”
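As a rough illustration of those “tangible objects,” here is one way, again in Python, that a firm might bundle the chosen outcome, the candidate predictors, and the training sample for an outside auditor. The field names, the list of protected attributes, and the checks themselves are hypothetical, not a framework proposed in the paper.

# Illustrative sketch only: bundling the disclosed outcome, predictors, and
# training sample so an auditor can run basic checks. Field names and the
# list of protected attributes are hypothetical.
from dataclasses import dataclass, field
from typing import List
import pandas as pd

@dataclass
class AlgorithmDisclosure:
    outcome: str                   # what the model is trained to predict
    predictors: List[str]          # the inputs the firm chose to use
    training_sample: pd.DataFrame  # the data the model was fit on
    protected: List[str] = field(default_factory=lambda: ["race", "gender"])

    def audit(self) -> List[str]:
        # Return human-readable findings a regulator could act on.
        findings = []
        leaked = [p for p in self.predictors if p in self.protected]
        if leaked:
            findings.append(f"protected attributes used as predictors: {leaked}")
        missing = [c for c in self.predictors + [self.outcome]
                   if c not in self.training_sample.columns]
        if missing:
            findings.append(f"disclosed fields missing from training sample: {missing}")
        return findings or ["no issues found by these basic checks"]

disclosure = AlgorithmDisclosure(
    outcome="hired_and_retained_12_months",
    predictors=["skills_test", "work_sample", "years_experience"],
    training_sample=pd.DataFrame(columns=[
        "skills_test", "work_sample", "years_experience",
        "hired_and_retained_12_months",
    ]),
)
print(disclosure.audit())

Each attribute in the bundle corresponds to one of the things the authors say a firm can hand over and a regulator can interrogate.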

Whether any of this is politically feasible is a question the paper’s authors largely set aside; their core argument is an epistemic one: that more data can help bridge the gap between face-to-face moments of discrimination and larger macroeconomic trends. Algorithms will never be perfect, for the simple reason that they are designed by humans, but under the right legal framework and with the right set of core standards, our existing biases can be eradicated from the institutions that govern our daily lives.
