The man who saw tomorrow
Nate Silver's predictions for the US election have provoked a debate between gut-feel punditry and data
Barack Obama's victory in the US presidential election was not just a win for those who believe that the federal government has a role to play in creating equality of opportunity, for the right of women to exercise control over their own bodies, or for gay rights. It was also a win for math, on a night when political punditry met its match in the so-called quants: statisticians whose models forecast a high probability of an Obama victory, in contrast to the dominant narrative that the race was too close to call. Over the last few weeks of the campaign, Nate Silver, who runs the FiveThirtyEight blog for The New York Times, emerged as the focal point of a debate over whether statistical models could predict voter behaviour better than the "gut instincts" of seasoned political analysts, many of them veterans of several presidential elections.
Silver (and others, such as Sam Wang at the Princeton Election Consortium, Simon Jackman at Stanford and Josh Putnam at Davidson College) built models that forecast the election result from a weighted average of state polls. FiveThirtyEight is simply the best known, and it became something of a touchstone for liberals as Obama's lead in the national polls narrowed and then vanished altogether. Silver also had credibility to draw on: his model accurately predicted the results of the Indiana and North Carolina Democratic primaries in 2008, and correctly called the outcome in 49 of the 50 states on Election Day that year.
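To make the idea of a weighted poll average concrete, here is a deliberately toy sketch. The poll figures, the weighting scheme (by sample size and recency), and the decay factor are all hypothetical illustrations of the general approach; they are not Silver's actual model, which adjusts for many more factors such as pollster house effects.

```python
def weighted_poll_average(polls):
    """Combine several state polls into one estimate, weighting each poll
    by its sample size and discounting it by how old it is.
    Each poll is a tuple: (candidate share %, sample size, days old)."""
    weighted_sum = 0.0
    total_weight = 0.0
    for share, sample_size, days_old in polls:
        # Larger samples count more; older polls are discounted exponentially.
        # The 0.9 decay factor per day is an arbitrary illustrative choice.
        weight = sample_size * (0.9 ** days_old)
        weighted_sum += share * weight
        total_weight += weight
    return weighted_sum / total_weight

# Hypothetical Ohio polls: (Obama share %, sample size, days before election)
polls = [(50.0, 1000, 1), (52.0, 600, 3), (48.0, 800, 7)]
print(round(weighted_poll_average(polls), 2))
```

The point is simply that the aggregate is more stable than any one poll: the recent large sample dominates, while the older outlier is discounted rather than ignored.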
Nevertheless, in the closing days of the campaign, Silver in particular, and forecasting based on aggregated polling data in general, came under sustained attack: from Republicans who claimed that the polls were skewed, and from pundits who could not reconcile their gut feelings with the numbers. Their argument was that polls cannot account for the many variables that influence an election, and that there will always be variables that cannot be guessed at, let alone adjusted for. That is a fair caution as far as it goes, but the pundits' refusal to accept the legitimacy of any aggregate polling data that did not point to a favoured result calls their numeracy into question. It is striking that so many of those who had problems with Silver's analysis mistakenly believed he was somehow skewing his model for partisan reasons; David Brooks even went so far as to dismiss election forecasting based on quantitative evidence as wizardry.