Is Google’s algorithm responsible for the electorate’s inability to agree on even basic facts?
By Rick Paulas
(Photo: donkeyhotey/Flickr)
Following this election has been a horror show. But more than that, it's also been an unsettling exercise in observing reality itself.
Spend time on Twitter, Facebook, or the potpourri of independent online news sources (which: maybe don't?), and it's like glimpsing parallel universes side by side. The truly disconcerting part is that the two camps can't even agree on the same basic facts.
Blame can be placed far and wide for this dissonance. There’s the proliferation of misinformation campaigns, such as the pro-Donald Trump content mill based in Macedonia, that make a quick buck by making up news. There’s the fact that conspiracy theories about mainstream media have been legitimized by one of the presidential candidates. And there’s the social media effect, of everyone adding to the noise by sharing whatever story conforms to their opinion, whether or not it’s real.
But there’s another component of this confusion that hasn’t been much discussed: the Google Bubble Effect.
Last year, I wrote about how Google's perceived omniscience has given users too much trust in whatever the algorithm spits out. Because Google is great at delivering results for innocuous queries, such as the best pair of pliers or the quickest route home, users trust it to supply the most accurate information on everything else. Perhaps you can see the problem.
In 2015, Dr. Robert Epstein, a senior research psychologist at the American Institute for Behavioral Research and Technology, and his colleagues designed an experiment to test how this misplaced trust could alter an election. They built a fake search engine (dubbed "Kadoodle") and had volunteers use it to look up information about two candidates for prime minister of Australia. But it wasn't a normal search. Epstein and his crew tinkered with the results, shifting websites favorable to one candidate to the top and pushing those partial to the other to the bottom. Then, they asked people to vote.
When the researchers compared the votes to how the searches had been manipulated, they discovered that those who viewed biased results were 48 percent more likely than a control group to vote for the candidate the bias favored. A later experiment, using an electorate already familiar with the candidates (closer to the Hillary Clinton/Trump match-up, involving two people who have been on the public stage for decades), showed a 12 percent shift in undecided votes associated with search engine tinkering. That may seem modest, but it could ultimately decide the winner of a tight race.
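To make the mechanism concrete, consider a toy simulation. Everything in it is invented for illustration: the position weights, the trust model, and the voter behavior are assumptions of this sketch, not numbers from Epstein's study. The only idea it borrows is position bias: results near the top of the page get read, and therefore believed, far more often than results further down.

```python
import random

# Toy sketch of the search engine manipulation effect. All weights and
# probabilities here are invented for illustration; they are not taken
# from Epstein's experiments.

# Chance an undecided voter actually reads the result in each slot.
POSITION_WEIGHT = [0.30, 0.20, 0.15, 0.10]

def vote(ordering, rng):
    """One undecided voter skims a results page and picks a candidate."""
    leaning = 0.0
    for slot, favors in enumerate(ordering):
        if rng.random() < POSITION_WEIGHT[slot]:  # did they read this slot?
            leaning += 1 if favors == "A" else -1
    if leaning == 0:
        return rng.choice("AB")  # still undecided: coin flip
    return "A" if leaning > 0 else "B"

def share_for_a(ordering, voters=100_000, seed=1):
    rng = random.Random(seed)
    return sum(vote(ordering, rng) == "A" for _ in range(voters)) / voters

# The same four pages, two favoring each candidate; only the order changes.
pro_a_order = ["A", "A", "B", "B"]
pro_b_order = ["B", "B", "A", "A"]

print(f"pro-A ordering: {share_for_a(pro_a_order):.1%} vote for A")
print(f"pro-B ordering: {share_for_a(pro_b_order):.1%} vote for A")
```

Running this shows a swing of several points between the two orderings, even though every voter saw exactly the same four pages. That is the whole trick: nothing is censored, nothing is fabricated; the order alone does the work.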
“In the United States, half of our presidential elections have been won by margins under 7.6 percent, and the 2012 election was won by a margin of only 3.9 percent — well within Google’s control,” Epstein wrote in a piece for Politico.
Now, there are two different ways this effect could be exploited. Google — either the company itself, or some lone programmer with access to the algorithm — could manipulate the search results to help whomever they deem best for president. While this seems like an outrageous scenario, there is a quasi-precedent in the election of 1876, when Rutherford B. Hayes rode the tools of Western Union and the Associated Press into the White House. (The scenarios differ in that Hayes won thanks to last-minute tinkering, rather than a long-range manipulation carried out over months.)
But what’s more likely, and potentially more worrying, is the second scenario: that Google’s feedback-loop algorithm has been fracturing the electorate all along.
It's important to understand that, however much a Google search may seem like a static database wherein the results are the same for every user (like, say, a library card catalog), it is nothing of the sort. Google's goal is not to provide the user with information, but to provide the user with what they're looking for. It's a subtle, but dramatic, difference. If a user's search history shows they've been obsessing over how the moon landing was faked, Google is unlikely to bump a scientifically vetted piece debunking that theory to the top of their results.
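Google's actual ranker is proprietary, so the mechanics below are an assumption, but the feedback loop described here can be sketched in a few lines. In this toy model, every click raises the user's affinity for that result's topic, and that affinity is added back into the ranking score on the next search, so the results drift toward whatever the user already clicks on.

```python
from collections import defaultdict

# Toy sketch of a personalization feedback loop. Nothing here reflects
# Google's real (and proprietary) ranking system; the topics, scores,
# and update rule are all invented for illustration.

class PersonalizedSearch:
    def __init__(self):
        # Per-topic affinity for this user, learned from their clicks.
        self.affinity = defaultdict(float)

    def rank(self, results):
        # results: list of (title, topic, base_relevance) tuples.
        # Personalized score = general relevance + learned topic affinity.
        return sorted(results,
                      key=lambda r: r[2] + self.affinity[r[1]],
                      reverse=True)

    def click(self, result):
        # Each click deepens the affinity for that topic, which boosts
        # similar results in every future search: the feedback loop.
        self.affinity[result[1]] += 0.5

results = [
    ("NASA: How We Landed on the Moon", "science", 1.0),
    ("The Moon Landing Was Staged",     "conspiracy", 0.8),
]

user = PersonalizedSearch()
for _ in range(3):
    # Simulate a user who keeps clicking the conspiracy story.
    ranked = user.rank(results)
    clicked = next(r for r in ranked if r[1] == "conspiracy")
    user.click(clicked)

print([title for title, _, _ in user.rank(results)])
# After a few clicks, the staged-landing piece outranks the debunking
# one for this user, even though its general relevance never changed.
```

After three clicks, the conspiracy page's personalized score (0.8 base plus 1.5 of accumulated affinity) beats the debunking page's 1.0, so two users typing the same query now see opposite front pages. Scaled up to an electorate, that's the bubble.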
This is why it seems as though the two groups of voters aren’t having an argument, but are speaking two entirely different languages. If you’re a Trump supporter, it’s more likely Google will funnel your searches to right-wing websites like Breitbart or Drudge. If you’re a Clinton supporter, you’ll end up at The Huffington Post or Slate. The accuracy of the sites is irrelevant: Believability depends entirely on where Google has placed them in the search result order.
Perhaps the most obvious fix for next time is forcing Google and Bing to make their algorithms transparent, allowing users to see how information is being delivered. This is what German Chancellor Angela Merkel called for in a recent speech. But making the algorithms public could expose a company's trade secrets and make its rankings easier to game.
The big question in the days and weeks after the votes are tallied will be that of The Great Reconciliation, when the losing chunk of the electorate is forced to come to terms with how inaccurate its perceived reality has been.
Will they trust the results? Or will they continue to trust what the Google God has been telling them this whole time?