Is There a Good Way to Fact-Check Public Figures in Real Time?

A recent goof by the Associated Press indicates that real-time fact-checking still doesn’t exactly know what it’s doing.
[Photo: President Donald Trump’s address about the government shutdown is streamed on a television screen in the Press Briefing Room of the White House in Washington, D.C., on January 8th, 2019.]

It’s usually a bad sign when a fact-checker makes the news.

My own corner of the fact-checking world is that of magazines, where every claim in a draft that reaches a checker’s desk is supposed to be meticulously verified prior to publication—so if I were to make news, it would mean that some error had slipped through and my publication was suddenly under fire for misinformation or misquoting. In the more public-facing corner of the fact-checking world—where the claims of public figures are evaluated as true or false after they are made—making news usually means that a checker erred in judging the purpose and nature of the medium.

Last Tuesday, the Associated Press Fact Check ran a post evaluating claims from Speaker of the House Nancy Pelosi and Senate Minority Leader Chuck Schumer that President Donald Trump was responsible for the ongoing government shutdown. The AP determined their statements to be half-truths, as “it takes two sides to shut down the government.”

The AP’s post went viral. Critics wondered how a fact-checker could assess the assignment of blame, a decidedly non-objective criterion. Others contended that the checking itself was so devoid of context—about the nature of the budget dispute, about who had proposed solutions—as to fall short of basic factual analysis. The AP’s fact-check post quickly became a target for those who argue that the fact-checking industry is not equipped to handle the post-truth era, and, more significantly, that fact-checkers of this sort were even contributing to disinformation.

At the center of the debate around the utility of fact-checking are three important questions: What is fact-checking, actually? Does it work? And is the current style of checking public figures’ statements fair or useful as a format?

Peer-reviewed research into the efficacy of fact-checking is extensive but inconclusive, and—as all fact-checkers are wont to hedge—definitive answers are hard to come by. Nonetheless, one central critique is clear: Non-partisan, contrary to what some outlets believe, does not mean unbiased. The choice of whose statements and what topics to check, along with variable evidentiary standards depending on the statement under scrutiny, can all introduce bias—and the perception of bias—into an endeavor that’s meant to avoid it.

Simply checking conservatives and liberals in equal measure won’t solve the problem. To combat misinformation more faithfully, fact-checking outlets need to codify standards of what statements by whom get checked, and establish internally consistent guidelines about the types of proof that satisfy their assessments of lying or honesty. Otherwise, they’ll keep getting in the way of their own project.

Magazine Fact-Checking vs. Fact-Checking in Real Time

Part of the dispute about fact-checking efficacy and fairness turns on the fact that a single term stands for two very different sets of practices. Fact-checking first referred to the cottage industry that arose in magazines in the 1920s and ’30s, largely in response to complaints from readers about wildly inaccurate articles and, later, to the threat of libel. Reportedly, corrections like those Edna St. Vincent Millay’s mother sent to The New Yorker following a 1925 profile of the poet prompted the creation of a proto-fact-checking department at the magazine.

The type of fact-checking that emerged in the magazine world calls for a checker to re-report a story to verify its truth, at every degree of granularity. You ask: Are names, dates, and numbers correct? Does a comma make a sentence’s meaning incorrect? But also: Do the sources accord with the claims? Are the sources themselves credible? Are there sources the author did not use that would undermine the article’s argument? And so on.

Most magazine fact-checkers have a story about the ridiculous lengths they’ve gone to in pursuit of a trivial piece of information: In the name of truth itself, I once talked to the founder of Neuticles, a company that sells prosthetic (and cosmetic) testicles for neutered dogs, to confirm whether they sold their product by the unit or the pair—in order to assess a one-word claim in an article about pet spending.

The type of fact-checking that is more publicly visible—that of checking public figures’ statements—emerged in full in the early 2000s. There are currently 161 organizations providing this type of service, according to the Reporters’ Lab at Duke University, ranging from the paper of record to Gossip Cop (a celebrity gossip verifier) to the Daily Caller. This type of operation is decidedly not the same as standard magazine checking.

For the purposes of clarity throughout the rest of this article, let’s call this type of checking “real-time fact-checking,” and the type that magazines do prior to publication “magazine fact-checking.”

Does Fact-Checking Work?

Magazine fact-checking is effective in that, in theory, it captures errors from the minute to the fantastical, and from the good-faith to the bad: from a reporter mistakenly analyzing a study’s results to a reporter fabricating sources.

The research on real-time fact-checking’s efficacy offers mixed conclusions, and often traffics in small sample sizes. One 2015 study out of Arizona State University finds that fact-checks can change people’s perceptions of the accuracy of negative political advertisements. Other studies find a “backfire effect,” whereby people presented with evidence that their beliefs are wrong actually double down on those beliefs. (Notably, attempts to replicate those studies have failed to find the same effect.)

Independent of whether fact-checking can change readers’ minds, a 2018 paper out of Stanford University posits that being fact-checked makes politicians more truthful in their speeches in the future. Donald Trump’s repeated lies, despite repeated fact-checks, do put this theory to the test. The Stanford study finds that even Trump made fewer false claims per speech on the campaign trail in 2016 after being fact-checked, but the process hardly came close to eradicating his propensity for lying.

Morgan Marietta, an associate professor of political science at the University of Massachusetts–Lowell and co-author of the forthcoming One Nation, Two Realities: Dueling Facts in American Democracy, believes the efficacy of real-time fact-checking is limited by three factors: First, people “project their preferred values onto perceived facts” rather than letting facts inform their values. Second, even before considering the substance of a fact-check, many people already distrust various organizations offering those checks based on perceived biases. Third, people reject fact-checkers based on legitimate and perceived flaws in outlets’ past work.

How Fact-Checking Itself Becomes Biased

The question of whether real-time fact-checking works is, in some ways, beside the point. Even if fact-checking convinces no one in the public to change her mind at the polls, and no politician to change her relationship with the truth when speaking to the public, holding false statements to account is an honorable goal in itself. The central question of the value of real-time fact-checking outlets, therefore, is not whether they are effective at convincing, but whether they truly serve to hold false statements to account.

Answering that question involves two distinct factors: whether the format of fact-checking, before any consideration of which statements are being checked, serves to fairly broker truth; and whether outlets are checking the right statements to begin with.

The format of real-time fact-checking seems simple: A politician makes a claim, and outlets determine whether it is true or false—or, as the Washington Post’s Fact Checker column puts it, they “truth squad” it. What sounds like a simple proposition, though, is a more nuanced one in practice. Checkers don’t seek to determine whether a statement is “true or false”; they seek to determine whether a statement is true, or they seek to determine whether a statement is false. And which specific answer they seek can swing their analysis of a statement. Consider the following example:

Dick takes half of the cookies from a cookie jar before dinner. Jane takes the other half of the cookies from the jar before dinner. Later, mom asks who took the cookies from the jar. Dick says, “Jane took cookies from the jar.” Jane says, “Dick took cookies from the jar.”

If a real-time fact-checker asks “Is there reason to believe Dick’s claim is true?” the answer is, undeniably, yes. Jane did take cookies from the jar.

If a real-time fact-checker asks, “Is there reason to believe Dick’s claim is false?” the answer can also be yes. Dick has cherry-picked data, which is a form of falsification.

(The standard that magazine checkers tend to use is: Is this the fullest attainable version of the truth?)

This isn’t just an abstract example. One can easily see the AP determining that Schumer and Pelosi were telling the truth in blaming Trump for the shutdown if the checker’s standard was, “Is there reason to believe their statements are true?” instead of, “Is there reason to believe their statements are false?”

There’s nothing wrong, inherently, with either of these questions, or either of these answers. A problem arises, however, if the same organization alternates between those frameworks. Suppose the same outlet asked of Dick’s statement, “Is there reason to believe it’s true?” and of Jane’s statement, “Is there reason to believe it’s false?” A checker could, through individually reasonable steps, conclude that Dick is telling the truth and Jane is lying, passing off as factual reality what is actually a difference of evidentiary standards.
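To make the inconsistency concrete, here is a minimal sketch in Python. Everything in it is a hypothetical construction built around the cookie-jar example, not any outlet’s actual methodology; it simply shows that identical facts produce opposite verdicts once the standard changes.

```python
# Toy model of two evidentiary standards applied to the cookie-jar case.
# Purely illustrative: the facts, functions, and labels are invented.

# The facts of the case: both children took cookies.
FACTS = {"jane took cookies": True, "dick took cookies": True}

def reason_to_believe_true(claim: str) -> bool:
    """Standard 1: is there reason to believe the claim is true?"""
    return FACTS.get(claim, False)

def reason_to_believe_false(claim: str) -> bool:
    """Standard 2: is there reason to believe the claim is false?
    Here, omitting half the story (cherry-picking) counts as falsehood."""
    omits_own_role = True  # each child leaves out their own cookie-taking
    return omits_own_role

def verdict(claim: str, standard: str) -> str:
    if standard == "seek_truth":
        return "HONEST" if reason_to_believe_true(claim) else "LYING"
    if standard == "seek_falsehood":
        return "LYING" if reason_to_believe_false(claim) else "HONEST"
    raise ValueError(f"unknown standard: {standard}")

# Identical facts, different standards, opposite verdicts:
print(verdict("jane took cookies", "seek_truth"))      # HONEST (Dick's claim)
print(verdict("dick took cookies", "seek_falsehood"))  # LYING  (Jane's claim)
```

Each branch is defensible on its own; the trouble is applying different branches to different speakers.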

Marietta identifies this very issue of switching between standards as a fundamental problem with the format of real-time fact-checking.

“You will see [fact-checkers] ask and then demonstrate that something could be considered to be false; hence it is lying. Or you will see them ask and demonstrate if something could be considered to be true; hence it is honest,” Marietta says. “The problem is they could have reversed the standards and then reversed the conclusions. They have no consistency about the standards of proof applied. Which approach they take seems to be dictated by their initial perception of whether something seems to be suspect from the outset [to them].”

Even if uniform evidentiary standards are set in place, organizations need to apply those standards to the right subset of statements if they wish to avoid bias.

One of the primary concerns magazine checkers have is making sure that the reporter hasn’t ignored sourcing that casts doubt on the reporting, even if a story accords with its sources. Similarly, one type of misstatement that real-time fact-checkers investigate is the kind that correctly cites a source but has cherry-picked that source (like, say, Dick’s claim above). In other words, checkers of all types are on the lookout for selection bias: non-random choices about what information to privilege in an argument.

But selection bias, more than residing at the level of individual claims, also seeps into the very form of real-time fact-checking: There are a lot of politicians, making a lot of claims, on a lot of topics, and an organization has to decide what warrants checking and what doesn’t. Most checking organizations have rules dictating, and explaining, what they choose to check, typically involving some determination of “newsworthiness” and reader demand. The problem is that newsworthiness is a nebulous concept, and the very act of selecting claims to check can impose precisely the sort of biases that all types of fact-checking aim to avoid—independent of whether the substance of the fact-checking post is correct or not. (On that last point, some research suggests there’s sufficient disagreement between different organizations’ checks of the same statements that the veracity of their analysis itself comes into question.)

There have been numerous analyses of potential biases in fact-checking operations. These studies typically focus on the political affiliation of politicians who are checked, and, beyond that, the level of falsehood they find members of each party to display. (Many organizations have sliding scales of accuracy, like Politifact’s “Pants on Fire!”-to-“True” scale, and the Washington Post’s “Pinocchio” rating system.) Such studies find three key types of selection bias: those concerning the party affiliation of politicians who are checked, those concerning the ideological extremity of the politicians who are checked, and those concerning the topic areas of politicians’ statements that are checked.

One analysis finds that, while organizations check roughly the same number of statements across parties, some prominent outlets, like Politifact, consistently rate the statements they check by Republicans as more false than the ones they check by Democrats. It could simply be the case that Republicans lie more than Democrats do and deserve harsher checking, but if so, devoting an equal number of checks to those left of center reflects a bias of its own.
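The arithmetic behind that last point can be made explicit. The sketch below uses deliberately invented numbers (the party labels and rates are assumptions for illustration, not findings from any study): when falsehoods are distributed unevenly but checks are distributed evenly, the published record of checks implies a parity that does not exist.

```python
# Hypothetical example: suppose Party A makes 40 false claims per week and
# Party B makes 10, but an outlet checks 15 statements from each party.
# The invented rates below are purely illustrative.

false_claims_per_week = {"Party A": 40, "Party B": 10}  # assumed ground truth
checks_per_week = {"Party A": 15, "Party B": 15}        # equal-coverage policy

total_false = sum(false_claims_per_week.values())
total_checks = sum(checks_per_week.values())

for party in false_claims_per_week:
    actual_share = false_claims_per_week[party] / total_false
    checked_share = checks_per_week[party] / total_checks
    print(f"{party}: {actual_share:.0%} of false claims, "
          f"{checked_share:.0%} of published checks")

# Output:
# Party A: 80% of false claims, 50% of published checks
# Party B: 20% of false claims, 50% of published checks
```

A reader tallying the checks alone would conclude that the two parties lie at similar rates, even though, by assumption, one lies four times as often.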

All of these potential selection biases bring us back, once again, to the AP’s fact-check of Pelosi and Schumer over who is to blame for the government shutdown. Part of the outrage over the checking of Pelosi’s and Schumer’s claims stems from the fact that, as concerns the shutdown, these two have made misstatements far less often than Trump has. The choice to devote bandwidth to covering their reported misstatements, at the expense of Trump’s, is itself a form of misinformation, given the disparity in how often each side tells a mistruth.

How Can Real-Time Fact-Checking Improve Itself?

There are potential fixes that can save real-time fact-checking as an intellectually rigorous and civically virtuous endeavor. Organizations should employ internally uniform evidentiary standards when evaluating claims. And they should set clear guidelines—for internal use and external transparency—on what statements get checked. In establishing such guidelines, newsroom leaders should remember that “newsworthiness” is a subjective standard, prey to both bias and the perception of bias. Checking claims exclusively by those in certain offices, or who are running for certain positions, might be a limited exercise, but it’s the type of framework that can offer a check against cherry-picking.

Marietta, however, says he’s unconvinced that any changes, incremental or sweeping, can save the exercise of fact-checking politicians in real-time. “My sense,” he says, “is that the current real-time fact-checking framework is flawed at a deep epistemological level, with no known fix.”
