First, go read what Dr. Limerick has to say about margin of error. Next, consider the following excerpt from this Wilson Quarterly article about polls:
Although the public displays no overt hostility to polls, fewer Americans are bothering to respond these days to the pollsters who phone them. Rob Daves, of the Minnesota Poll, says that “nearly all researchers who have been in the profession longer than a decade or so agree that no matter what the measure, response rates to telephone surveys have been declining.” Harry O’Neill, a principal at Roper Starch Worldwide, calls the response-rate problem the “dirty little secret” of the business. Industry-sponsored studies from the 1980s reported refusal rates (defined as the proportion of people whom surveyors reached on the phone but who declined either to participate at all or to complete an interview) as ranging between 38 and 46 percent. Two studies done by the market research arm of Roper Starch Worldwide, in 1995 and 1997, each put the refusal rate at 58 percent. A 1997 study by the Pew Research Center for the People & the Press found statistically significant differences on five of 85 questions between those who participated in a five-day survey and those who responded in a more rigorous survey, conducted over eight weeks, that was designed to coax reluctant individuals into participating.
Much more research needs to be done on the seriousness of the response-rate problem, but it does seem to pose a major challenge to the business and might help to usher in new ways of polling. (Internet polling, for example, could be the wave of the future–if truly representative samples can be constructed.) Polling error may derive from other sources, too, including the construction of samples, the wording of questions, the order in which questions are asked, and interviewer and data-processing mistakes.
I’ve seen poll numbers all over the place for various candidates. Right here, we’ve got polls showing Ron Kirk and John Cornyn in a tight race and polls showing Cornyn with a ten-point lead. I look at the number of people surveyed, and while I know it’s large enough to be a representative sample, I have to ask: What assumptions are the pollsters making about turnout? Are they taking into consideration extra efforts in the candidates’ hometowns? Is there an axe being ground somewhere?
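To put rough numbers on that sample-size point, here’s a quick back-of-the-envelope sketch (a hypothetical margin_of_error helper, assuming simple random sampling in a two-way race, which is cruder than what real likely-voter polls actually do):

```python
import math

def margin_of_error(n, p=0.5, z=1.96):
    # Approximate margin of error for a simple random sample of size n,
    # at 95% confidence (z = 1.96), using the worst-case proportion p = 0.5.
    return z * math.sqrt(p * (1 - p) / n)

# A typical statewide sample of around 800 likely voters:
print(f"{margin_of_error(800) * 100:.1f}")  # roughly 3.5 points either way
```

So two perfectly clean polls of that size could land six or seven points apart before you even get to turnout models, question wording, or who refused to answer the phone.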
Fortunately, I have MyDD to tell me about the demographics of the DMN poll as well as the biases of various national polling companies. And it’s not just liberals who have been complaining. Conservatives have made many of the same points about sampling error, nonresponse, and pollster bias.
The only poll that really matters is the one taken on Election Day. Early voting has begun. You know what to do.
And don’t forget, the questions asked in the poll can make a big difference too. A straight-up “If the election were held today, who would you vote for?” question is more reliable than “Ron Kirk has been described by some as a no-good, rotten so-and-so; knowing that, would you be more likely to vote for Kirk or Cornyn?” (a.k.a. “push polling”).
So that may explain the differences in the numbers you’ve been seeing. Also, some people believe in the “primacy” effect, which holds that whichever candidate you mention first will tend to get the more positive response.
And yeah, the only poll that does count is the one on Election Day (unless you’re a fundraiser).