The LongRoom Polling Analysis uses the latest voting data from each state's Secretary of State or Election Division. The voting data is kept current by incorporating the latest updates from each state as they become available. This means that the LongRoom Polling Analysis accurately reflects the actual voting demographics, precinct by precinct, county by county, and state by state.
Because the LongRoom Polling Analysis is based exclusively on data, it can demonstrate, from the crosstabs of an individual poll, whether that poll leans left or right.
The analysis of each polling organization's polls and their associated bias is illustrated in a line chart. The most recent poll results are displayed separately, along with a graphic representation of how far the poll leans left or right.
The graphs below cover the last three presidential elections and show the LongRoom Polling Analysis of polls for those elections. In all cases, the LongRoom Analysis was accurate to within +/- 0.3%.
References for the voting data from each state are included below in the list of sources.
We know the polls are biased because the statisticians who produce them say so, both explicitly and implicitly, and this is widely reported in the media. Let's look at two recent examples. The Reuters/Ipsos poll of last week, July 29th, used "forcing" to assign surveyed respondents to a candidate, even when the respondent expressed no preference. Reuters/Ipsos applied this "technique" not only to their most recent poll, but also went back through all their previous polls and redid them, assigning those with no preference to a candidate of the pollster's choice. This innovative approach to polling was not universally popular with other pollsters, as Pat Caddell, a pollster with decades of experience, explained in this article: "Pat Caddell on ‘Cooked’ Reuters Poll: ‘Never in My Life Have I Seen a News Organization Do Something So Dishonest’".

Another example is the CNN poll from July 30th, where the crosstabs for Question P1 show that 97% of Democrats have committed to a candidate three months before the election. In the history of elections, it is difficult to find an example where 97% of a demographic had made up their minds on whom to vote for even on election day, let alone in the middle of summer before an election in November.
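As a rough illustration of what "forcing" amounts to: undecided respondents are reassigned to a candidate according to a split the pollster chooses. This is only a sketch of the general idea, not Reuters/Ipsos's actual methodology; the sample counts and the `lean` parameter below are invented for demonstration.

```python
import random

random.seed(0)  # fixed seed so the illustration is repeatable

# A hypothetical raw sample of 100 responses.
responses = ["A"] * 40 + ["B"] * 38 + ["no preference"] * 22

def force(responses, lean=0.5):
    """Reassign 'no preference' respondents to a candidate.

    `lean` is the pollster-chosen probability of assigning an
    undecided respondent to candidate A -- the point where a
    bias can quietly enter the published numbers.
    """
    return [r if r != "no preference" else ("A" if random.random() < lean else "B")
            for r in responses]

forced = force(responses, lean=0.7)  # a lean of 0.7 favors candidate A
print(forced.count("A"), forced.count("B"))  # the undecideds have vanished
```

Note that the published topline now depends directly on `lean`, a number the reader of the poll never sees.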
For a rather extensive list of the biases a statistician may introduce into a poll, Nate Silver has written an excellent article here in which he discusses the biases he uses in creating his analysis, and why he thinks his biases are good.
Statisticians also use "weighting" to produce the poll results that are published in the media. The weighting is simply how many voters from each demographic the statistician believes will turn out, based on the detailed questions asked when the poll is taken. An example of how this affects polls can be seen in the polls released this past week, ending July 31st. Some polls show changes of 10% or more in presidential preference, while others show a change of only a few percent. Clearly, these results cannot all be correct.
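To see how weighting alone can move a topline, consider this minimal sketch. The demographic groups, support shares, and turnout weights are all invented for illustration; the point is only that two pollsters can take the same raw crosstabs and publish different toplines by assuming different turnout.

```python
# Candidate A's support within each demographic group, from the raw sample.
# (All figures are hypothetical.)
support_a = {"young": 0.60, "middle": 0.50, "senior": 0.40}

def topline(weights):
    """Weighted average of Candidate A's support under assumed turnout weights."""
    total = sum(weights.values())
    return sum(weights[g] * support_a[g] for g in support_a) / total

# Pollster 1 assumes heavy youth turnout; Pollster 2 assumes heavy senior turnout.
pollster_1 = {"young": 0.40, "middle": 0.35, "senior": 0.25}
pollster_2 = {"young": 0.20, "middle": 0.35, "senior": 0.45}

print(round(topline(pollster_1), 3))  # Candidate A looks several points stronger...
print(round(topline(pollster_2), 3))  # ...than under the second set of weights
```

Same crosstabs, different turnout assumptions, a four-point gap in the published result.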
So, like opinions, every statistician has their own biases, but none of them wants to see the other guy's. Here at LongRoom we leave out the biases and let the data speak for itself.
As we discussed above, each poll reflects the biases of the statisticians who prepare the poll. Since each statistician has their own specific biases that they introduce into their poll, it is extremely difficult to compare one poll to another. At LongRoom we use the actual state voter registration data from the Secretary of State or Election Division of each state. We add no "expert" adjustments to the data. This means that all the polls are rationalized one to another based on actual data.
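In the spirit of the approach described above (the exact LongRoom method is not published here, so all figures below are invented), rationalizing a poll against registration data amounts to re-weighting the poll's own crosstabs by the actual party composition from the Secretary of State's records:

```python
# Party-ID shares in a hypothetical poll sample vs. actual registration data.
poll_sample  = {"dem": 0.40, "rep": 0.30, "ind": 0.30}
registration = {"dem": 0.34, "rep": 0.33, "ind": 0.33}

# Candidate A's support within each party group, from the poll's crosstabs.
support_a = {"dem": 0.85, "rep": 0.08, "ind": 0.45}

# Topline as published (using the sample's composition) vs. rationalized
# (using the state's actual registration composition).
as_published = sum(poll_sample[g] * support_a[g] for g in support_a)
rationalized = sum(registration[g] * support_a[g] for g in support_a)

print(round(as_published, 3), round(rationalized, 3))
```

In this invented example, the poll's Democrat-heavy sample overstates Candidate A by about three and a half points relative to the registration-weighted figure; applying the same registration weights to every poll is what puts them on a common footing.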
As the election approaches, the statisticians who produce the various polls will begin to back out their biases. In the final few weeks before the election, you will start to notice a convergence of all of the polls. This occurs because the statisticians will be using essentially the same data that LongRoom is using now, with their own biases removed. So, you might be thinking at this point: are you really saying that all of the polls will eventually match LongRoom? Yes, we are. It is a mathematical certainty that, as the election approaches, all of the polls will begin to match the polls here on LongRoom. This may be difficult for some to believe; however, there is an excellent archive at RCP that shows the poll results for the 2012 presidential election and this typical convergence of polls as the statisticians' biases are backed out.
The day after the election. This may sound humorous, but it is the truth: there is no reliable predictor of who will win a democratic vote. An example is the March 14th, 2004 Spanish General Election, which we covered and analyzed. On March 10th, 2004, the Conservative Party was leading in the polls and, as the incumbents, looked likely to win the election. However, on March 11th, the Madrid train bombings occurred. The Conservative government quickly blamed ETA, the Basque separatist group. As more information was uncovered, it became obvious that the bombing was the work of the Islamist group al-Qaeda, yet the Conservative government continued to blame ETA in spite of the mounting evidence. The electorate rapidly came to believe that the government was trying to cover up the Islamist involvement, and gave the liberal opposition party a 5 point margin of victory. So, in a matter of only three days, there was an 8 point swing in voter preference.
For more information about the 2004 Spanish General Election and the impact the bombings had on it, Wikipedia has a write-up here.
To make this example more relevant to our current presidential election, imagine that, three days before the election, there is a terrorist incident here in America, and Mr. Obama and Mrs. Clinton place the blame on right-wing Christian extremists, while Mr. Trump blames radical Islamic terrorists. As the hours tick by, it becomes obvious that the incident is the work of radical Islamic terrorists; however, Mr. Obama and Mrs. Clinton continue to deny the Islamic involvement. Just as in Spain, it is game, set, and match, and Mr. Trump is the next president of the United States.
So, if anyone pretends they can predict the election, just keep in mind: Life Happens.
We have developed our analytical model in APL, the programming language that we and other actuaries have used for the last 30 years.