If there were any confusion over why Facebook has so vociferously defended its policy of requiring users to display their real, legal names, the company may have finally laid it to rest with a quiet patent application. Earlier this month, the social giant filed to protect a tool ostensibly designed to track how users are networked together—a tool that could be used by lenders to accept or reject a loan application based on the credit ratings of one’s social network.
In short: You could be denied a loan simply because your friends have defaulted on theirs. It’s the kind of digital redlining that critics of “big data” collection have been warning of for years. It could make Facebook a lot of money, and it could make the Web even less safe for poor people. And it could be just the beginning.
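The check the patent describes is strikingly mechanical: in its lending example, the lender averages the credit ratings of the members of an applicant’s social network who are connected through “authorized nodes,” and if that average falls short of a minimum score, the application is rejected before the applicant’s own record is ever considered. A minimal sketch of that gating step in Python follows; the function name, the threshold value, and the handling of applicants with no network data are illustrative assumptions, not details from the filing.

```python
# Illustrative sketch of the screening step described in the patent: average
# the credit scores of an applicant's social connections and reject the
# application outright if that average falls below a cutoff. The names, the
# cutoff, and the empty-network policy are assumptions made for this sketch.
from statistics import mean

MIN_AVERAGE_SCORE = 600  # hypothetical cutoff, not a figure from the patent

def passes_network_screen(friend_scores: list[int]) -> bool:
    """Return True if the application proceeds to ordinary underwriting."""
    if not friend_scores:
        # No network data: this sketch fails closed, one of many possible policies.
        return False
    return mean(friend_scores) >= MIN_AVERAGE_SCORE

# Note that the applicant's own credit history never enters the check.
# An applicant with a spotless record is still screened out if their
# friends have defaulted:
print(passes_network_screen([540, 580, 610]))  # False: rejected outright
print(passes_network_screen([700, 720, 690]))  # True: proceeds to underwriting
```

Nothing about the applicant appears in that function’s inputs, and that is the whole objection: the decision is made entirely on the network, before underwriting even begins.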
The United States has a long and sordid history of discriminatory lending. Bank redlining practices denied many people of color the opportunity to apply for mortgages and buy homes, and enforced a racial segregation in cities across the country that persists to this day. Federal laws passed in the 1970s, including the Fair Credit Reporting Act and the Equal Credit Opportunity Act, made these practices illegal and further protected the poor from discriminatory credit reporting and lending. But those laws define lenders and creditors narrowly, in ways that don’t apply so neatly in an era of big data.
Depending on which factors are considered and which aren’t, predictive modeling based on one’s own history and behavior can be wildly inaccurate. With more and more data to choose from, the outcome could be good or bad news for consumers, depending on the algorithm.
“There are a lot of companies out there now that are offering different types of risk control products using big data,” says Lauren Leimbach, executive director of the non-profit Community Financial Resources. “On one level it’s rational behavior on their part to try to avoid problems ahead of time. It’s just a lot of institutions don’t dig into it. They don’t look at what’s really going on. On the other hand, the credit industry has figured out that there’s a lot of untapped market out there—people that they could be extending credit to that would be a good credit risk and pay back whatever loan, but they’re not showing up in regular credit models.”
Despite Facebook’s self-assured patent application and the company’s apparent confidence in its “authorized nodes,” modeling based on one’s social network only presents more opportunities for discriminatory and inaccurate conclusions.
Research consistently shows we’re more likely to seek out friends who are like ourselves, and we’re even more likely to be genetically similar to them than to strangers. If our friends are likely to default on a loan, it may well be true that we are too. Depending on how that calculation is made, and on how data-collecting technology companies are regulated under the Fair Credit Reporting Act, it may or may not be illegal. A policy that judges an individual’s qualifications based on the qualifications of her social network would reinforce class distinctions and privilege, preventing opportunity and mobility and further marginalizing the poor and debt-ridden. It’s the financial-services equivalent of crabs in a bucket.
Facebook’s true value comes from the data it collects on us, which it can in turn use and sell to advertisers, lenders, and whomever else it chooses. The veracity of that data is paramount to the company’s business model, and this patent is Facebook doubling down on the supposed truth in its networks.
But a lot of that data is bad. Facebook isn’t real life. Our social networks are not our friends. The way we “like” online is not the way we like in real life. Our networks are clogged with exes, old co-workers, relatives permanently set to mute, strangers and catfish we’ve never met at all. We interact the most not with our best friends, but with our friends who use Facebook the most. This could lead not just to discriminatory lending decisions but to completely unpredictable ones. How will users have due process to determine why their loan applications were rejected, when a mosaic of proprietary information informed the ultimate decision? How will users know what any of that proprietary information says about them? How will anyone know if it’s accurate? And how could this change the way we interact on the Web entirely, when fraternizing with less fiscally responsible friends or family members could cost you your mortgage?
The availability of more consumer data is not inherently evil; the problem lies in how it’s used. While other credit services are looking at factors that could make previously “risky” consumers look responsible, such as timely rent payments instead of medical debt, Facebook is betting everything on its broken, privatized community. The potentially discriminatory applications of big data are scary enough. But returning to an era where the demographics of your community determined your creditworthiness should be illegal.
The Crooked Valley is an illustrated series exploring the systems of privilege and inequality that perpetuate tech’s culture of bad ideas.