INTRODUCTION TO POLLING © Polling the Nations
Section 1. The Importance of the Sample
"It is a riddle wrapped in a mystery inside an enigma." Winston Churchill was describing Russia, but many people probably think his observation applies equally well to public opinion polls. It seems to defy common sense that anyone could discover what 250 million Americans are thinking by interviewing just 1,500 people. And yet, somehow, public opinion polls seem to do just that.
This discussion is intended for those who still find opinion polls mysterious, enigmatic, or puzzling violations of common sense. The objective is to explain how polls work and why, so that the reader will be a more informed and discerning consumer of polling data. The focus of the explanations is on how to understand opinion polls, not on how to conduct them.
Public opinion polls, as we know them today, had to earn their spurs. Polling organizations needed to prove that they could accurately determine public sentiment using a relatively small sample of the population. The first convincing demonstration came in the 1936 presidential election.
1936 - Literary Digest
When it comes to survey sample size, more is not always better. That lesson was learned the hard way by the Literary Digest in 1936. Over the years, the Literary Digest had developed a sizable mailing list. In 1928 the magazine decided to use its list to conduct a poll for that year's presidential election. People were sent mock ballots which they were asked to mark with their preference and return to the magazine. The poll's final outcome was within four percentage points of Hoover's actual victory margin. Four years later the magazine's prediction was within two points of Roosevelt's winning percentage.
Buoyed by its previous successes, the magazine launched its largest survey ever in 1936. Going beyond its own mailing list, the Literary Digest added names from auto registration lists and telephone directories to send out a total of more than 10 million ballots. Considerable time and expense had to be devoted to tabulating the flood of 2.4 million ballots returned. When they were all in, the magazine predicted that Alf Landon would carry 32 states and defeat Roosevelt by 57% to 43%.
Needless to say, the Digest was sorely embarrassed by the final outcome, a 61% to 37% Roosevelt landslide that left Landon with only two states and eight electoral votes compared to Roosevelt's 523. Not long after, the magazine went out of business.
What makes the Literary Digest experience noteworthy is not that the magazine had two good years and one bad call, or that the prediction was so egregiously wrong. The failure of the Digest poll marked the end of one era of polling and the beginning of another.
George Gallup criticized the Digest's methodology. Even a sample as large as two and a half million could not get an accurate picture of national sentiment, Gallup contended, if the sample was not properly selected. Using a sample of only 5,000, Gallup predicted that Roosevelt would carry at least 40 states and win the popular vote by a 56% to 44% margin. Subsequent Gallup Polls would use a smaller sample size and come much closer to the final vote total, but the essential point had been established. A scientifically selected sample of the population was not only much cheaper and easier to deal with, it also produced more accurate results.
Why would so small a sample be so much better than the enormous Literary Digest sample? The key is in mathematics worked out in the 18th and 19th centuries, and applied to polling only in the second quarter of the 20th century. Gallup and other polling pioneers like Elmo Roper and Archibald Crossley based their work on sampling and probability theory.
Reduced to their essence, the mathematical formulations demonstrate that a randomly drawn sample will have certain predictable properties. It does not matter whether the sample is selected from all the people in the United States, or from a bunch of colored balls in a barrel; the same mathematical rules apply. But those rules will apply only if every person in the country or every ball in the barrel has an equal chance of being selected every time a choice is made for the sample. In the case of the barrel, that would probably mean turning it regularly to ensure that balls at the bottom come to the top and increase the chance they will be selected. In the case of a national sample, it means that a person in California is just as likely to be chosen as a person in Delaware.
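The predictable behavior of a properly random sample is easy to see in simulation. The sketch below, with invented numbers (a notional population in which 56% favor one candidate, and a sample of 1,500), shows a simple random draw landing within a few percentage points of the true figure; it is an illustration of the principle, not a model of any real poll.

```python
import random

random.seed(42)

# Assumed for illustration: 56% of a very large population favors candidate A.
true_share = 0.56
sample_size = 1500

# Simple random sampling from a huge population is equivalent to independent
# draws in which each respondent favors A with probability true_share.
sample = [random.random() < true_share for _ in range(sample_size)]
estimate = sum(sample) / sample_size

print(f"True share: {true_share:.1%}, sample estimate: {estimate:.1%}")
```

With 1,500 respondents the sampling error of a proportion near 50% is roughly 1.3 points per standard error, which is why national polls of this size are typically quoted with a margin of error around three percentage points.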
The Literary Digest ran afoul of the requirements of random selection in two important ways. First, the readers of the magazine were not a representative cross-section of the American public. The magazine's mailing list did not include many kinds of voters. This deficiency was then magnified when the magazine supplemented its mailing list with names from auto registrations and telephone directories. It was 1936, the middle of the Depression. People out of work, worrying about how they would feed their families, might well consider a telephone an unaffordable luxury. Households with an automobile would have been even less likely to be suffering the worst the bad economic times had to offer. The Digest sample was heavily biased toward the wealthy.
It was bad enough that the Digest used so unrepresentative a collection of Americans, but it further violated the requirements of random selection by allowing the respondents to be self-selected. Roughly three-quarters of the people receiving the Digest ballots did not bother to return them. How those people compared with those who took the time to mark and mail the ballot can only be guessed. Later studies have indicated that individuals who respond to such requests for their opinion are more highly motivated and interested than those who do not. In other words, the people who returned the ballots to the Literary Digest were anything but representative of the nation as a whole. They were an unusual subset of an atypical sample of the American public.
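Both failures can be sketched in a toy simulation. All the numbers below are invented for illustration: a skewed mailing list that over-samples wealthier households, an assumed Landon lean among those households, and a higher assumed return rate among Landon supporters. The point is only that the two biases compound, so even a huge number of returned ballots lands far from the electorate's true preference.

```python
import random

random.seed(1936)

def digest_style_poll(n_ballots: int) -> float:
    """Return Landon's share among returned ballots in a biased mock poll."""
    landon = roosevelt = 0
    for _ in range(n_ballots):
        # Assumption: the mailing list over-samples wealthier households.
        wealthy = random.random() < 0.7
        # Assumption: the wealthy lean Landon, everyone else leans Roosevelt.
        favors_landon = random.random() < (0.55 if wealthy else 0.25)
        # Self-selection: assume Landon supporters return ballots more often.
        returns_ballot = random.random() < (0.35 if favors_landon else 0.20)
        if returns_ballot:
            if favors_landon:
                landon += 1
            else:
                roosevelt += 1
    return landon / (landon + roosevelt)

share = digest_style_poll(100_000)
print(f"Landon's share among returned ballots: {share:.1%}")
```

Under these assumptions only about 46% of the people mailed a ballot actually favor Landon, yet he wins roughly 60% of the ballots that come back; no amount of additional mailing fixes a sample that is both unrepresentative and self-selected.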
Gallup was right: the size of the Digest sample would not be enough to compensate for the mathematical laws it violated. A statistical Titanic, the Digest poll crashed headlong into the iceberg of probability theory and quickly sank.
It was not all smooth sailing after that, however. In 1948, Gallup and others predicted that President Harry Truman would lose the presidential election. In fact, Gallup got it almost exactly backward. His American Institute of Public Opinion had Dewey winning by 49.5% to 44.5%. The actual popular vote went to Truman 49.6% to 45.1%.
What had happened? Why had the random samples failed? The fault, argued Gallup, was not in the underlying sampling theory, but rather in the decisions made by the poll directors. In the case of the Gallup Poll, the interviewing stopped 10 days before the election. Also, in an election that seemed to excite little voter interest, Gallup assumed that respondents who said they were undecided would not vote.
Those two decisions proved to be the undoing of the Gallup prediction. Subsequent data showed that there was a late swing of support for Truman, and that nearly everyone who was undecided when Gallup stopped polling eventually voted for Truman.
One problem could be solved easily: do not stop polling until the last possible moment. The other difficulty continues to vex pollsters. What should be done with the undecideds? Assuming that none of them will vote can be risky, as Gallup found in 1948. Dividing them in the same proportion as the decided respondents would have made the Gallup percentages even further off in 1948. Splitting the undecideds evenly between the candidates reduces the chance of compounding a mistake, but may well understate support for one of the candidates.
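The three allocation options can be made concrete with arithmetic. The sketch below uses the 1948 Gallup figures quoted above (Dewey 49.5%, Truman 44.5%), treating the remaining 6% as undecided; the strategy names are informal labels, not standard terminology.

```python
# Gallup's 1948 figures, with the remainder treated as undecided.
dewey, truman, undecided = 49.5, 44.5, 6.0
decided = dewey + truman

# 1. Assume the undecideds will not vote; renormalize the decided vote.
drop = (100 * dewey / decided, 100 * truman / decided)

# 2. Allocate the undecideds in proportion to decided support.
prop = (dewey + undecided * dewey / decided,
        truman + undecided * truman / decided)

# 3. Split the undecideds evenly between the two candidates.
even = (dewey + undecided / 2, truman + undecided / 2)

for name, (d, t) in [("drop", drop), ("proportional", prop), ("even split", even)]:
    print(f"{name:>12}: Dewey {d:.1f}%, Truman {t:.1f}%")
```

Note that all three strategies still show Dewey ahead, which is exactly the point: once the late swing to Truman was missed, no mechanical treatment of the undecideds could have rescued the forecast.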
In any case, the 1948 presidential contest demonstrated quite clearly that drawing a sample improperly is not the only way to generate inaccurate polling results.
The 1936 and 1948 elections were important for opinion polling because they showed some polling techniques to be wrong. When the Literary Digest predicted a Landon win in 1936, and Gallup had Dewey favored over Truman in 1948, they were brought up short by the hard numbers of election returns. The elections served either to validate or to call into question the polling method employed.
When Gallup was right in 1936, his methods gained new credibility. This inevitably meant that the findings of his and similar polls would be more widely accepted, even when the results could not be directly checked as they could be with an election.
The 1948 setback was not fatal for random sample polling because Gallup's analysis was correct; the deficiency lay in factors beyond the theory and technique of random sampling. The 1948 presidential election notwithstanding, Gallup's American Institute of Public Opinion and other polling organizations were able to establish a sufficiently good track record to stay in business. For the seven national elections between 1936 and 1948, the average deviation for the Gallup Poll was four percentage points. The average deviation after 1950 dropped to 1.5 percentage points.
The performance of polls at election time has been important because there are so few ways to test the accuracy of polling results. For example, in 1936 the Literary Digest could easily have declared that its poll found blue to be the favorite color of 57% of the American public. Who could argue with the finding? Gallup might conduct a poll asking about favorite colors and find blue to be favored by only 23% of his sample. There would be no way to know which result, if either, to accept.
While the general approach of random-sample polling has been refined and generally accepted, the problem of choosing between varying findings of different polls remains. If Gallup and Harris find roughly the same proportion of Americans saying blue is their favorite color, each poll lends credibility to the results of the other. Conversely, if the findings of the two are far apart they raise serious questions about both polls.
When a number of different polls are examining opinions about a particular topic there may be sufficient overlap to permit comparisons of their results and determine the public's views. However, many issues are not investigated by several polling organizations, eliminating the possibility of comparing poll results. Moreover, the exact wording of the question is so crucial that even if two or more polls do ask about the same issue, variations in wording can produce very different responses.
 Polling the Nations is a compilation of more than 14,000 surveys conducted by more than 700 polling organizations in the United States and more than 80 other countries from 1986 to the present. Each of the nearly 350,000 records reports a question asked and the responses given. Also included in each record is the polling organization responsible for the work, the date the information was released, the sample size, and universe, i.e., the groups or areas included in the interview, such as parents with children in public schools, Great Britain, the United States, or California.
All the surveys reported here were conducted using scientifically selected random samples. Callers to 900 numbers, readers who clip surveys from newspapers or magazines and send them in, or other similarly self-selected respondents are excluded because it is not possible to generalize from the findings of such polls.
Polling the Nations is the most comprehensive collection of public opinion, with information not only from the United States but also from more than 90 countries around the world. The database includes the full text of the questions and responses covering a broad range of issues.
Polling the Nations began in 1981 as a database of American public opinion published in book format (American Public Opinion Index). Over the years the database expanded into an electronic format and grew to include surveys from more than 90 other countries in Europe, Canada, Mexico, Africa, and Asia. First as a CD-ROM and now as an on-line version, Polling the Nations is a powerful reference tool that provides a tremendous storehouse of information with quick access and easy searching capability for locating survey information. Polling the Nations has collected more than 500,000 questions from an amazing array of sources.