Hardly anyone can have missed the now-famous statistician and New York Times blogger Nate Silver. By accurately predicting the outcome in 49 states in the 2008 American presidential election and in all 50 states in 2012, he is now considered a forecasting virtuoso with almost wizard-like status. He has outperformed all the major traditional pundits despite only having made political predictions since late 2007. So how does he do it? What is his crystal ball made of, and what does he see when gazing into it?
Different types of professionals sometimes use different methods to predict the same kinds of events. During election season, traditional pundits like to build a narrative around a candidate, because it makes for a good story. Often they rely on little more than gut feeling and intuition. Except in rare cases, however, this is far from the best way of getting at the truth.
Silver’s new book, The Signal and the Noise, offers many clues on how to tackle the enigma that is political prediction. In it, he describes three governing principles behind his own forecasting model: thinking probabilistically, updating the forecast as new information arrives, and looking for consensus. Traditional pundits tend to ignore all three.
Thinking probabilistically means not just saying whether a candidate will win, but putting a number on your confidence that the candidate will win. This better represents the real-world uncertainty that traditional pundits rarely talk about. Some might claim that assigning less than 100% probability to a specific event is just a way to safeguard yourself in case you are wrong. But that is not the point. You shouldn’t be too surprised if an event forecast at 95% probability fails to occur, because this is expected to happen in 5% of cases (as long as the forecast is well calibrated).
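As a rough illustration (my own sketch, not an example from Silver’s book), a short simulation shows why a well-calibrated 95% forecast should fail about one time in twenty:

```python
import random

random.seed(0)
trials = 100_000

# Simulate 100,000 independent events, each forecast at 95% probability.
# An event "misses" when the 5%-likely outcome happens anyway.
misses = sum(1 for _ in range(trials) if random.random() > 0.95)
miss_rate = misses / trials
print(f"miss rate: {miss_rate:.3f}")  # close to 0.05
```

A forecaster whose 95% calls miss far more (or far less) often than 5% of the time is poorly calibrated, regardless of how confident the individual calls sound.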
Updating the forecast matters because the facts are not always consistent over time. If the facts change, the forecast should change too. This is a principle most traditional pundits don’t adhere to; instead, they see changing one’s mind as a sign of weakness. Rather than using new information to improve their forecasts, they treat politics as intrinsically knowable: just as a planet’s movement can be forecast from a set of physical laws, they think an election should follow a predictable orbit.
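Silver’s updating principle is essentially Bayesian: fold each new piece of evidence into the previous forecast. As a hedged sketch (the prior and likelihoods below are made-up illustrative numbers, not taken from any real model), a single update via Bayes’ rule looks like this:

```python
# Prior: win probability before a new poll comes in (hypothetical).
prior = 0.60

# Likelihoods: how probable this poll result would be if the candidate
# ultimately wins, versus if the candidate loses (hypothetical).
p_poll_given_win = 0.80
p_poll_given_loss = 0.40

# Bayes' rule: P(win | poll) = P(poll | win) * P(win) / P(poll)
evidence = p_poll_given_win * prior + p_poll_given_loss * (1 - prior)
posterior = p_poll_given_win * prior / evidence
print(f"updated win probability: {posterior:.2f}")  # 0.60 -> 0.75
```

Each new poll simply becomes the next update’s prior, so the forecast drifts with the evidence rather than being locked in on day one.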
Finally, looking for consensus means taking multiple viewpoints into account. Unlike a traditional pundit, who might dream of making a bold prediction that turns out to be true, Silver gets anxious if his predictions stray too far from the consensus. Evidence shows that group forecasts are more accurate than individual ones, and while a radical prediction can turn out to be true (and does so every now and then), it is not the best way to maximize your performance as a predictor over time.
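A minimal sketch of consensus-seeking, again with hypothetical numbers: average the available forecasts, then check how far your own sits from that average.

```python
# Hypothetical win probabilities from several independent forecasters.
forecasts = [0.78, 0.85, 0.91, 0.80]
consensus = sum(forecasts) / len(forecasts)

# A forecast far from the consensus isn't necessarily wrong,
# but a large deviation is a signal to re-examine your model.
my_forecast = 0.55
deviation = abs(my_forecast - consensus)
print(f"consensus: {consensus:.3f}, my deviation: {deviation:.3f}")
```

Averaging works because individual forecasters’ errors partly cancel out, which is exactly why the group tends to beat any one member over time.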
And Silver’s model has legs. Given his track record from the last two U.S. presidential elections, he likely knows what he is talking about. But Silver wasn’t the only one performing outstandingly well during last year’s presidential race. Drew Linzer, Sam Wang and Nate Silver can each, by some measure, be considered the most accurate election pundit of 2012. All of them outperformed traditional pundits by large margins.
One major difference between these sage scientists and the poorer-performing traditional pundits is that the former let statistics take center stage in their models, while the latter tend to rely more on qualitative knowledge such as a candidate’s personality. This doesn’t mean that less quantifiable knowledge is useless. Consider the Cook Political Report team: they work on predicting the outcome of U.S. elections, and have so far been very successful. When making their predictions they use polls as well as data on, among other things, the demographics of a district. But they also conduct interviews with candidates.
The point of these interviews is mostly to look for red flags that could lower the probability of a candidate winning. Instead of building a “mythical” narrative around a candidate from the interviews (as a traditional pundit might do), the Cook Political Report team combines their findings with other information: if a candidate is unlikely to win in a district to begin with, winning probably remains unlikely even if he or she is a very charismatic speaker. What makes these predictions so successful is that they carefully weigh all the available evidence, rather than relying on a single piece of fragile intuition.
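One way to picture this weighing of evidence (a purely hypothetical sketch; the Cook team has not published a formula like this) is to start from the baseline odds implied by district data and let interview findings nudge those odds rather than overturn them:

```python
# Baseline win probability from polls and demographics (hypothetical).
baseline = 0.10

# Assumed odds multiplier for a charismatic interview, for illustration only.
charisma_factor = 2.0

# Work in odds so the adjustment scales the baseline instead of replacing it.
odds = baseline / (1 - baseline)
adjusted_odds = odds * charisma_factor
adjusted = adjusted_odds / (1 + adjusted_odds)
print(f"adjusted win probability: {adjusted:.2f}")  # still a long shot
```

Even doubling the odds leaves a 10% candidate well under 20%: charisma moves the needle, but the district fundamentals still dominate.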
So, clearly, not all predictions are created equal. The exceptional performance of Silver and others has shown that the traditional media prognosis serves as little more than entertainment. And while traditional pundits probably won’t be replaced by forecasters like Silver anytime soon, it is comforting to know that there are now forecasters using proven methods that actually work. Time will tell whether the rigorous models of Silver and co. will make political uncertainty a thing of the past.