How did the polls get it wrong?
IT WAS a big polling miss in the worst possible race.
On the eve of America’s presidential election, national surveys gave Hillary Clinton a lead of around four percentage points, which betting markets and statistical models translated into a probability of victory ranging from 70% to 99%. Nationally, the miss was modest: according to the forecast from the New York Times’s Upshot, Mrs Clinton is still likely to win the popular vote by more than a full percentage point. But at the state level, the errors were extreme. The polling average in Wisconsin gave her a lead of more than five points; she is expected to lose the state by two and a half. It gave Mr Trump a relatively narrow two-point edge in Ohio; he ran away with the state by more than eight. He trailed in Michigan and Pennsylvania by four, and looks likely to take both by about a point. How did it all go wrong?
Every survey result is made up of a combination of two variables: the demographic composition of the electorate, and how each group is expected to vote. Because some groups—say, young Hispanic men—are far less likely to respond than others (old white women, for example), pollsters typically weight the answers they receive to match their projections of what the electorate will look like. Polling errors can stem either from getting an unrepresentative sample of respondents within each group, or from incorrectly predicting how many of each type of voter will show up.
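The weighting step described above can be sketched in a few lines. This is a minimal illustration of demographic re-weighting (post-stratification) with entirely made-up group names and numbers, not actual 2016 polling data:

```python
# Hypothetical raw poll: (respondent count, share supporting candidate A)
# per demographic group. All figures are illustrative.
sample = {
    "college_white":     (500, 0.55),
    "non_college_white": (300, 0.40),
    "nonwhite":          (200, 0.70),
}

# The pollster's projection of each group's share of the electorate.
electorate = {
    "college_white":     0.30,
    "non_college_white": 0.45,
    "nonwhite":          0.25,
}

def weighted_support(sample, electorate):
    """Re-weight each group's support to match the projected electorate."""
    return sum(electorate[g] * support for g, (_, support) in sample.items())

# Unweighted average of raw responses, for comparison.
raw = sum(n * s for n, s in sample.values()) / sum(n for n, _ in sample.values())
weighted = weighted_support(sample, electorate)
print(f"raw: {raw:.3f}, weighted: {weighted:.3f}")  # raw: 0.535, weighted: 0.520
```

Both failure modes in the paragraph above map onto this sketch: a biased sample corrupts the per-group support numbers, while a bad turnout model corrupts the electorate shares.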
The electoral map leaves no doubt as to how Mr Trump won. In states where white voters tend to be well-educated, such as Colorado and Virginia, the polls pegged the final results perfectly. Conversely, in northern states that have lots of whites without a college degree, Mr Trump blew his polls away—including ones he is still expected to lose, but by a far smaller margin than expected, such as Minnesota. The simplest explanation for this would be that these voters preferred him by an even larger margin than pollsters foresaw—the so-called “shy Trump” phenomenon, in which people might be wary of admitting they supported him. Pre-election polls gave little evidence for this phenomenon: they showed him with a massive 30-point lead among this group. But remarkably, even that figure wound up understating Mr Trump’s appeal to them: the national exit poll put him 39 points ahead. Given that such voters make up 58% of the eligible population in Wisconsin, Michigan, Ohio and Pennsylvania—though a smaller share of those who actually turn out—this nine-point miss among them accounts for a large chunk of the overall error. It is also likely that less-educated whites, who historically have had a low propensity to vote, turned out in greater numbers than pollsters predicted.
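The back-of-the-envelope arithmetic behind that claim is simple: a miss in one group's margin contributes to the overall margin error roughly in proportion to the group's share of the electorate. Treating the 58% eligible-population share as the turnout share is a simplifying assumption (the text notes the true turnout share is smaller):

```python
# How much of the statewide error a nine-point miss among one group
# can explain, assuming the group's eligible-population share equals
# its share of actual voters (an overstatement, per the text).
group_share = 0.58   # non-college whites' share of eligible voters
margin_miss = 39 - 30  # exit-poll lead minus pre-election polled lead
contribution = group_share * margin_miss
print(f"{contribution:.1f} points of statewide margin error")  # 5.2 points
```

Even discounted for lower turnout, that is most of the roughly seven-point swing seen in a state like Wisconsin.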
Of all the consequences of Mr Trump’s stunning victory, its impact on the public-opinion-research industry is surely among the least important. Nonetheless, it should inspire pollsters to redouble their efforts to better forecast turnout, beyond merely relying on the census and applying simple likely-voter screens. For the layman, it serves as a devastating reminder of the uncertainty in polling, and a warning about being overconfident even when the weight of surveying evidence seems overwhelming. As the physicist Niels Bohr famously quipped, “prediction is difficult—especially about the future.”