Franklin & Marshall College

Roses & Polls

February 28, 2002

by Dr. G. Terry Madonna and Dr. Michael Young

According to Gertrude Stein, a rose is a rose is a rose. So can we also say a poll is a poll is a poll? Are all polls more or less the same - more or less all roses? Or are there some thorns among the roses?

The question has some immediacy. A recent media-sponsored poll released around the state shows Ed Rendell ahead of Bob Casey in the Democratic gubernatorial primary. The poll in question was the automated type that calls people and then uses a computer to ask questions of those who stay on the line. Since Casey had consistently led the race until now, the poll raised some eyebrows across the state. It also raised two pointed questions:

  • Could the poll be accurate, meaning could Rendell be leading Casey?

  • And should the poll be trusted, or put another way, was it a good poll?

We will let the raised eyebrows descend by themselves, but both of the questions deserve to be answered.

The short answer to the first question: Could Rendell actually be leading the race? MAYBE YES, MAYBE NO. The even shorter answer to the second: Can the poll be trusted? ABSOLUTELY NOT. We will elaborate presently, but some perspective is necessary here.

In the larger scheme of things, does it really matter if a single poll published three months before primary day gets it right or not? What's the big deal, and why should anyone care?

We think it does matter and it is a big deal. Here's why. These polls can and often do affect the outcome of an election. Polls taken weeks or even months before Election Day do have influence, and may determine who wins and who loses.

We want to be very clear. There is no evidence that polls have any "bandwagon effect" on voters. That is, reading or hearing about polls has no net direct effect on the electorate at large.  Few voters cast their vote on the basis of the polls and how a candidate is doing in them.

But there is a powerful indirect effect that comes into play because of the nature of the political community, and here we refer to journalists, fundraisers, politicians, candidates, and their political supporters. The political community does pay close attention to the polls, and is influenced by them. It is their business to pay attention to them.

And because polls are taken seriously in the political community, they have a significant influence on fundraising, press coverage, and other campaign resources. Pollster George Gallup referred to these as the three M's: Media, Money, and Morale.

For a given candidate, strong polls tend to bring better fundraising, more press coverage, and increased grassroots support. Conversely, weak poll results tend to reduce fundraising, limit press coverage, and inhibit grassroots efforts.

These effects of polls on the political community are sometimes referred to as "indirect bandwagon effects." The electorate ends up being influenced by polls, albeit through the intermediation of gatekeepers and others in the political community who are themselves influenced by the polls.

So, getting polls right is necessary and important. The stakes are high; polls play a major role in the outcome of elections. In a democracy, few things matter more.

Now back to the first of the original questions: could Rendell be ahead, and is the poll accurate? The best answer here is still, MAYBE YES, MAYBE NO!

It is not possible to say more, given the nature of this particular computer poll. Although such polls purport to be scientific, they suffer from serious methodological problems. Their flaws make any conclusions drawn from them problematic, and they are best described as distant cousins of straw polls. Sometimes straw polls get it right by accident. Like folk medicine and even voodoo, they sometimes seem to work.

The ill-famed Literary Digest poll, for example, probably the most famous straw poll, called five straight presidential elections correctly before disastrously calling the 1936 election for Alfred Landon, even after collecting some two million straw ballots. Other straw polls have had similar winning streaks before they got it wrong.

So Rendell could be ahead, or he might not be. That's the problem with straw polls. You just don't know for sure. And that's also the difference between straw polls and random sample-based scientific polls. With the latter, a far higher degree of certainty exists, within a mathematically precise range of sampling error. True, other types of error can foul up the accuracy of a poll. But with a straw poll, the dice are always being rolled.
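To make that "mathematically precise range of sampling error" concrete, here is a minimal sketch of the standard 95 percent margin-of-error calculation for a simple random sample. The sample size of 600 is purely illustrative and is not taken from any poll discussed here.

```python
import math

def margin_of_error(n, p=0.5, z=1.96):
    # Approximate 95% sampling error for a simple random sample of size n.
    # p = 0.5 is the conservative (worst-case) proportion; z = 1.96 is the
    # critical value for a 95% confidence level.
    return z * math.sqrt(p * (1 - p) / n)

# Hypothetical example: a random sample of 600 likely voters.
print(f"+/- {margin_of_error(600):.1%}")  # prints "+/- 4.0%"
```

No comparable calculation exists for a straw poll or an automated call-in poll, which is precisely the point: its error cannot even be estimated.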

Now to the second of the original questions: can the poll be trusted? The answer is, unequivocally, ABSOLUTELY NOT! It cannot be trusted because it is not possible to know if it is right or wrong. There is no way to know. You might as well flip a coin.

Let's be practical here. Not all straw polls are to be condemned. Some kinds of straw polls are probably harmless, and may even have entertainment value. In addition, they are easy to design and inexpensive to conduct. 

And certainly some of the non-political "lifestyle" straw polls have a role, such as those that appear in magazines or are syndicated in newspapers. Nobody should make major decisions on the basis of these sorts of polls, and probably nobody does.

But election polls are different. They are not mere entertainment. They do have consequences. And it is important to get them right. Straw polls have their place, but that place isn't during an election campaign. The stakes are simply too high.

In the meantime there are some things to look for that help separate scientific polls from the rest. Here are some questions to ask the pollster: 

  • Ask about "response rate," which is the proportion of people actually interviewed from the original sample. Many polls that claim to use random methods end up with non-random samples because of low response. (A rough numerical sketch of this calculation follows the list.)

  • Ask how many days the interviews took and when the calling was done. Polls conducted in one or two days limit those interviewed to people who happen to be at home or available, reducing their reliability. Good polls usually take days or weeks, and calls are made at various hours throughout the day and evening.

  • Ask how the interviewing was done. Polls that rely on automated, recorded messages tend to have low response rates.
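As a rough illustration of the response-rate arithmetic raised in the first question above, here is a minimal sketch; the numbers are hypothetical and are not drawn from any poll discussed here.

```python
def response_rate(completed_interviews, original_sample):
    # Share of the originally drawn sample that was actually interviewed.
    return completed_interviews / original_sample

# Hypothetical figures: 1,000 numbers drawn at random, 220 interviews completed.
rate = response_rate(220, 1000)
print(f"{rate:.0%}")  # prints "22%"
```

The lower that figure, the weaker the claim that the people who actually answered still resemble the random sample that was originally drawn.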

Finally, look for evidence that the polltaker is willing to make full disclosure of the material facts relating to the conduct of the poll. Such items include: how the sample was created, how respondents were selected, what interviewing methods were employed, when the interviews were conducted, and what questions were asked. Disclosure is fundamental. Any pollster who is unwilling to provide this information should not be taken seriously.

------------------
Politically Uncorrected™ is published twice monthly. Dr. G. Terry Madonna is a Professor of Public Affairs at Franklin & Marshall College, and Dr. Michael Young is a former Professor of Politics and Public Affairs at Penn State University and Managing Partner at Michael Young Strategic Research. The opinions expressed in this article are solely those of the authors and do not necessarily reflect the opinions of any institution or organization with which they are affiliated. This article may be used in whole or part only with appropriate attribution. Copyright © 2002 Terry Madonna and Michael Young.