The Effect Of Predictive Modeling On Reporting The 2012 Election
The starting premise for this idea is that there are two schools of thought on election coverage. On the one hand, there is traditional campaign reporting, which is often driven by sensation. Journalists on the ground will follow a candidate’s campaign bus for a while, try to get a sense of his or her connection with voters, and ultimately form a thesis about a candidate’s particular “momentum” in a state or around the nation.
On the other hand, data-oriented political forecasting places a heavy emphasis on opinion polls and how they are conducted. While campaign reporters write in terms of general candidate approval or disapproval, regional history, and demographic information in their analyses, forecasters are able to quantify great masses of individual data points and use them to create a concrete model. Forecasting departs from the big-picture approach to elections, instead aggregating the minutiae of personal opinions into a new picture of the future.
For now, concrete forecasting and reliance on data have won out and captured the imagination of the political blogosphere. The differences between polling operations dominated the last couple of weeks of discussion of the 2012 presidential race. Audiences seem to like hard numbers more than ever. In this journalistic climate, then, what will the role of traditional campaign reporters become?
This is not to say that affect and emotion failed to play a role in pundits’ analyses. After the Republican primaries ended, especially over the summer before the convention, there was a thread of commentary that Romney was “robotic” and couldn’t connect with voters. The national consciousness would seize on particular campaign failures, memes, or gaffes. This, pundits claimed, made the Obama campaign’s job easier and ensured he would be more likely to win. Ultimately, election commentators are tasked with synthesizing polling data and real-time campaign coverage.
Toward the end of the campaign season, commentary shifted focus away from the campaigns themselves and toward which campaign was winning. Enter renowned political statistician Nate Silver of the New York Times blog FiveThirtyEight. His models aggregate many polls to produce the probability that a given candidate will win the election, a prediction of the popular vote, and a prediction of the Electoral College breakdown. Because of his accuracy in predicting the 2008 presidential and Senate contests, the consensus at the outset of the 2012 season was that Silver’s forecasts were respectable and worth watching. Note that Silver himself does not claim a party affiliation.
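Silver’s actual model is far more elaborate (it adjusts for pollster “house effects,” recency, and correlations between states, among other things), but the basic idea of aggregation can be sketched in a few lines. The poll figures below are invented purely for illustration:

```python
# A naive poll aggregator: a sample-size-weighted average of recent polls.
# All numbers are hypothetical; a real forecasting model like Silver's
# layers many adjustments on top of this kind of simple average.

polls = [
    # (candidate_a_pct, candidate_b_pct, sample_size)
    (49.0, 47.0, 1200),
    (48.0, 48.5, 900),
    (50.0, 46.0, 1500),
]

total_n = sum(n for _, _, n in polls)
a_avg = sum(a * n for a, _, n in polls) / total_n
b_avg = sum(b * n for _, b, n in polls) / total_n

print(f"Candidate A: {a_avg:.1f}%, Candidate B: {b_avg:.1f}%")
```

The point of aggregation is that any single poll is noisy, but averaging many of them, with larger samples counting for more, washes much of that noise out.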
At the end of September, however, Unskewed Polls arrived on the data analysis scene. Founder Dean Chambers claimed his polling and poll-averaging methods would correct liberal bias by re-weighting the importance of party identification when calculating the average preferences of voters. For instance, when Unskewed Polls came online on September 20, Silver was predicting Obama would take the popular vote by four percentage points. Unskewed Polls’ earliest results had Romney beating Obama by 11 points, a more extreme prediction of a Romney win than any produced by a major polling outfit.
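The mechanics of “unskewing” come down to simple arithmetic: hold the within-party support numbers fixed and change the assumed party-ID mix of the electorate, and the topline moves. The figures below are invented for illustration, not Chambers’s actual weights:

```python
# "Unskewing" illustrated: the same within-party support figures yield
# different toplines depending on the assumed party-ID composition of
# the electorate. All numbers here are hypothetical.

# Support for the Democratic candidate within each party-ID group (pct).
support = {"dem": 92.0, "rep": 6.0, "ind": 48.0}

def topline(weights):
    """Topline support given an assumed electorate composition."""
    return sum(weights[g] * support[g] for g in support)

# A pollster's sampled composition: a D+6 electorate.
sampled = {"dem": 0.38, "rep": 0.32, "ind": 0.30}
# A "re-weighted" composition assuming party-ID parity.
reweighted = {"dem": 0.35, "rep": 0.35, "ind": 0.30}

print(f"Sampled mix:     {topline(sampled):.1f}%")
print(f"Re-weighted mix: {topline(reweighted):.1f}%")
```

Shifting a few points of assumed party identification swings the headline number by a couple of points, which is exactly the lever Unskewed Polls pulled, on the premise that pollsters were sampling too many Democrats.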
The fact that conservatives could not accept largely objective mathematical models and poll results, and had to “unskew” the numbers, speaks to the importance of data-driven political analysis during this election season. There seemed to be a sense that the most credible prediction would materialize into a self-fulfilling prophecy come November 6. Of course, the reverse is true: people’s opinions were dictating the numbers the whole time, not the other way around.
The structural journalistic goal of producing “fair and balanced” coverage also explains the impetus for Unskewed Polls. Though framed as a quest to eliminate supposed partisan bias, the project ends up making the conservative agenda look more nationally popular than it actually is. When objective data produced a clear Democratic winner early on, conservatives created controversy where there should have been none. The predictions risked shattering the careful illusion that the Republican Party was popular and that the race was close.
But the illusion of a close race isn’t just a partisan necessity—it is also a feature of political reporting in the modern information-based economy. The election season is a more exciting story when it is discussed like a high-stakes horse race with close odds. More dramatic topics will always garner more page views, and thus more corporate advertising money for news outlets.
The problem with horse-race-style political reporting is that it encourages readers, who are also voters, to see politics as an external reality show. This is especially true for national races: direct communication between politician and constituent is already minimal, so the connection to the candidate is more distant. The voter-as-audience-member is more likely to make a partisan choice much the way he or she would choose an athletic team to root for. Horse-race reporting distracts from the actual issues at stake in an election.
In the brave new world of accurate political forecasting and a glut of opinion data, campaign reporters and political pundits should spend less time trying to interpret the accuracy of polls and more effort investigating why a prediction is what it is. What are the most important things people need to know about each candidate on fiscal issues, military policy, or the state’s role in citizens’ healthcare, and how has that affected the data? Prediction models eclipsed issue-based reporting that could have helped undecided voters in the final weeks leading up to November 6. Maybe next time, models’ definitive answers will free up pundits to perform this important task.