An epic data nerd battle started last Wednesday over Twitter as critics lambasted Public Policy Polling (PPP) for withholding a weekend poll of voters in Pueblo County, Colorado.
The poll asked Pueblo County voters if they supported the legislative recall of State Senator Angela Giron, a moderate Democrat who supported stronger gun control legislation in her state. Giron’s Tuesday recall election and that of State Senate President John Morse, of Colorado Springs, marked the first legislative recalls of state lawmakers in Colorado’s 137-year history.
PPP found 54 percent of Pueblo’s registered voters overall favored Giron’s recall, while just 42 percent opposed it. Stranger yet, a whopping 33 percent of Democratic respondents said that they supported the recall. Consider that 47 percent of Pueblo’s registered voters are Democrats; just 23 percent are Republicans.
“In a district that Barack Obama won by almost 20 points I figured there was no way that could be right,” a post from PPP Director Tom Jensen explained on the firm’s website. Not only that, voters in Pueblo County support expanded background checks for gun buyers—68 to 27 percent—and are split on ammunition limits.
PPP suspected a design flaw. “This was the first legislative recall election in Colorado history. There’s been a lot of voter confusion,” Jensen noted. “That finding made me think that respondents may not have understood what they were being asked.” With no time to test a new survey design, they waited for the election results.
Tuesday’s election results matched the unpublished poll results closely: 56 percent for the recall, 44 percent against. On Wednesday morning, PPP posted the weekend poll with an explanation for the delay and analysis. Poll trolling commenced.
PPP is not a public agency or a non-profit, and it is under no obligation to anyone but its clients to release its polls. The group has nevertheless made a policy of being responsive to public interest and shown a commitment to transparency. That’s laudable, not lamentable.
But Nate Silver couldn’t help himself: the much-feted statistician behind FiveThirtyEight threw a punch at the pollsters over Twitter early Wednesday afternoon, announcing, “VERY bad and unscientific practice for @ppppolls to suppress a polling result they didn’t believe/didn’t like.” He linked to the PPP post explaining their delay in releasing the Giron poll results. In his next-day follow-up, Silver accused PPP of an “approach to polling [that] is extremely ad hoc.” He added, “that ad-hockery stems from a lack of appreciation/understanding for the statistical fundamentals behind polling.”
Taking the cue from golden-boy Silver, other commentators and analysts have piled on. Mark Blumenthal, founding editor of Pollster.com and senior polling editor for the Huffington Post, rallied behind Silver by early afternoon Wednesday. He launched into his criticisms of PPP with a quick disclaimer: “I’m late to this #NerdFight.” Thursday brought a particularly pointed piece by Nate Cohn, The New Republic’s self-professed amateur polling analyst. Blumenthal rehashed Cohn’s arguments in a post for the Huffington Post.
It’s worth noting that, with the exception of Blumenthal, few of PPP’s critics have polling chops; journalists and practitioners with experience rallied behind PPP.
Silver criticizing a polling outfit for being reckless and unscientific is like Lindsay Lohan accusing Meryl Streep of lacking work ethic. In November 2011, Silver declared that President Obama had just a 17 percent chance of re-election. I wrote about the many problems with Silver’s piece, statistical and ethical, for the Huffington Post: Silver conceded he used just 17 elections (the post-1944 presidential cycles), discounted the reliability of historical precedent, admitted the inconsistency of economic forecasts, and, through his own tables and rundowns, showed prevailing methods for predicting electoral outcomes to be no better than hit-or-miss.
In one corner, we have a hitherto respected polling outfit that releases data regularly; in the other, a statistician who strung together a series of untrustworthy variables to land the cover of the New York Times Magazine, justifying the exercise by listing the many problems with his claim in the body of his piece.
Gawker’s Max Read noted, “nate silver is a funny guy to be knocking an organization for lack of transparency!” The Plum Line’s Greg Sargent asked Nate Silver, “are you alleging they ‘suppressed’ the result because they ‘didn’t like’ it?” That “[PPP-Nate Silver] dispute turns out to be pretty narrow,” Brian Beutler of Salon commented. I found myself drawn into the Twitter war on the topic: “If you mixed blue and yellow and got purple, would you announce it before re-examining your experiment?”
It’s refreshing to find that there are still pollsters who self-regulate. Too many pollsters operating in the political sphere take an Ivory Tower attitude, disavowing responsibility for the consequences of their polls and analyses. Their reasoning, as I understand it, is that, so long as they note the margin of error and any other qualifications in fine print, they’re on the up-and-up. On that, they’re wrong.
Pre-election polling can affect election results—even Nate Silver admits as much. We’ve got multiple theories to choose from, each of which carries some weight. Voters may want to choose the winner or vote with their friends, for example. Low-information voters may take a marketplace approach, relying on public opinion to champion the worthiest candidate.
When a pollster puts out a poll with a huge margin of error or a skewed sample frame, most coverage won’t note the fine print, even if the pollster does. The irresponsible pollster publishes the poll anyway, content to slap on a sensational headline or else to criticize the media for doing so. The responsible pollster recognizes the dangers and acts to ensure the data isn’t misappropriated.
PPP identified a problematic result and held it until they could explain what happened, even though releasing the poll before the election would have brought them publicity. They published it after the election knowing they might face sustained criticism for having held it back. That’s a commitment to science and public responsibility that we should welcome.