What are the polls really measuring?
I have mentioned PollingReport.com previously as a single source for polling data from many different polling companies. In their September 13th, 2004 issue, they carried an article by Emory University Political Science professor Alan I. Abramowitz that spelled out in detail how different polling agencies generate their data. I found it fascinating and, thanks to kind permission from The Polling Report's editor, Tom Silver, you'll have a chance to examine it for yourself.
With the first, and most important, of the three Presidential debates just three days away, the election is likely to be front and center this week. If the markets did jig because of President Bush's rebound in the polls, then that jig will either get jiggy-er or disappear depending on the "winner" on Thursday night. I expect polling data will help Wall Street figure that out, so this article is especially apropos.
From the Sept. 13, 2004, edition of THE POLLING REPORT (www.pollingreport.com). Reprinted with permission.
Why You Shouldn't Always Believe the Polls
by Alan I. Abramowitz
Alan I. Abramowitz is Professor of Political Science at Emory University.
Public opinion polls have a major impact on media coverage of presidential campaigns and therefore on public perceptions of those campaigns. The Gallup Poll, in particular, because of its visibility and prestige, can dramatically affect the tone of media coverage of the presidential campaign.
When Gallup released its first poll after the Republican National Convention showing that George Bush had moved out to a 7 point lead in a head-to-head race with John Kerry, following earlier polls by Time and Newsweek showing double-digit Bush leads, the effect was immediate and obvious. The airwaves were quickly filled with talking heads speculating about the reasons for the Bush bounce, why Kerry's campaign was floundering, and whether Kerry could reverse the damage before Nov. 2. This was despite the fact that other polls released in the past two weeks showed the race to be much closer. A Zogby poll showed Bush with only a 2 point lead in a head-to-head race with Kerry, a FOX News/Opinion Dynamics poll also showed Bush with a 2 point lead in a head-to-head race, and a Democracy Corps poll showed Bush with a 3 point lead. But these polls have gotten much less coverage in the media because they came out later and because they lacked the stature of the Gallup Poll.
Registered Voters vs. Likely Voters
Almost completely lost in the post-convention commentary was another statistic reported in the post-convention Gallup Poll: George Bush's lead over John Kerry among all registered voters was only 1 percentage point. Although Gallup did not try to hide this discrepancy, few media commentators paid any attention to it. It was the likely voter result that dominated the news coverage. But should it have?
This is not the first time that the Gallup Poll has reported a large gap between its results for registered and likely voters. Several of its recent state polls, including one in the key battleground state of Ohio, also showed a much larger Bush lead among likely voters than among registered voters. And during the 2000 presidential campaign, Gallup's tracking poll often showed big discrepancies between registered and likely voters, with the likely voter results fluctuating much more dramatically over time than the registered voter results: between its October 2-4 poll and its October 5-7 poll, George Bush moved from an 11 point deficit to an 8 point lead among likely voters, a 19 point swing in just 3 days. Two weeks before the election, Gallup showed Bush with an 11 point lead over Al Gore among likely voters. By the time of its final pre-election poll, Bush's lead had shrunk to only 2 points. Even then, Gallup's sample of registered voters yielded a more accurate prediction of the election results than its sample of likely voters.
Which Number Should We Believe?
So what should we make of the discrepancy between the Gallup results for registered and likely voters and which number should we believe? There are good reasons for skepticism about Gallup's likely voter results, especially when they are reported several weeks or months before the presidential election. Gallup's method of identifying likely voters includes questions measuring attention to the campaign along with knowledge of the location of one's polling place. But attention to the campaign can fluctuate dramatically with changing events and whether voters know the location of their polling place several weeks or months before an election is probably a poor predictor of whether they will show up at that polling place on Election Day.
The likely voter screen used by Gallup sometimes produces a larger gap between the candidate preferences of registered and likely voters than most other public polls. Depending on the news background at the time that a poll is conducted, Democrats or Republicans may be paying more attention to campaign news and may therefore be overrepresented in Gallup's likely voter sample. This is a major problem because there is a very strong correlation between party identification and candidate preference. In the post-Republican Convention Gallup Poll, for example, 90% of registered Democratic identifiers backed Kerry, while 90% of registered Republican identifiers backed Bush.
In the post-Republican Convention Gallup Poll, not surprisingly, it was Republicans who were overrepresented among likely voters. Among Gallup's sample of registered voters, there was already a small Republican advantage in party identification. But Gallup's likely voter screen projected that 89% of Bush supporters would turn out in November, compared with only 79% of Kerry supporters, a 10 point Bush turnout advantage. As a result, the Gallup likely voter sample apparently had about a 10 point Republican advantage in party identification. (Gallup does not provide enough information to determine the percentages exactly, but this is a reasonable estimate.)
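The arithmetic behind this effect is easy to reproduce. The sketch below assumes a 49%-48% Bush lead among registered voters (a hypothetical split chosen to be consistent with the 1 point registered-voter lead the article reports; Gallup does not publish these exact figures); the 89% and 79% turnout projections are from the article.

```python
# Minimal sketch: how a differential likely-voter screen turns a narrow
# registered-voter lead into a much larger likely-voter lead.
# The 49/48 registered split is an assumption consistent with the 1 point
# registered-voter lead reported in the article; 0.89 and 0.79 are the
# turnout rates Gallup projected for Bush and Kerry supporters.

registered_bush, registered_kerry = 0.49, 0.48
turnout_bush, turnout_kerry = 0.89, 0.79

# Respondents who survive the likely-voter screen
likely_bush = registered_bush * turnout_bush
likely_kerry = registered_kerry * turnout_kerry

# Renormalize to shares of the likely-voter sample
total = likely_bush + likely_kerry
bush_share = 100 * likely_bush / total
kerry_share = 100 * likely_kerry / total

print(f"Bush {bush_share:.1f}% - Kerry {kerry_share:.1f}%")
print(f"Lead: {bush_share - kerry_share:.1f} points")
```

Under these assumptions, a 1 point registered-voter lead becomes roughly a 7 point likely-voter lead purely from the 10 point turnout differential, which matches the gap between Gallup's two headline numbers.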
These projections are not realistic. On average, in the past three presidential elections there has been a small Democratic advantage in party identification of about 2-3 percentage points according to national exit polls. In 2000, the Democratic advantage was 4 percentage points. Weighting the Gallup results based on the proportions of Democrats, Republicans, and independents in the 2000 national exit poll (39% Democrats, 35% Republicans, 26% independents) produces an estimate of a 4 point lead for John Kerry (50%-46%) instead of a 7 point lead for George Bush.
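The reweighting described above can be sketched directly. The 90% loyalty rates and the 39/35/26 party shares are from the article; the article does not report how independents split, so the 44%/41% figures below are an assumption chosen to be consistent with the 50%-46% Kerry result it cites.

```python
# Sketch of party-identification weighting: candidate support within each
# party group is combined using the 2000 exit-poll party shares.
# The loyalty rates (90% of each party's identifiers backing its nominee)
# and the 39/35/26 shares are from the article; the independents'
# 44/41 split is an assumption, not a figure the article reports.

party_weights = {"dem": 0.39, "rep": 0.35, "ind": 0.26}  # 2000 exit poll
kerry_support = {"dem": 0.90, "rep": 0.10, "ind": 0.44}  # ind. split assumed
bush_support  = {"dem": 0.10, "rep": 0.90, "ind": 0.41}  # ind. split assumed

def weighted_share(support, weights):
    """Combine within-group candidate support using fixed party-ID weights."""
    return 100 * sum(weights[g] * support[g] for g in weights)

kerry = weighted_share(kerry_support, party_weights)
bush = weighted_share(bush_support, party_weights)
print(f"Kerry {kerry:.0f}% - Bush {bush:.0f}%")  # roughly 50 - 46
```

The same one-line computation underlies any demographic weighting: fix the group proportions, then combine the within-group preferences.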
While the problems with the Gallup Poll seem to be mainly due to its likely voter screen, some other polls conducted since the Republican Convention appear to overrepresent Republican identifiers for different reasons. The most recent Newsweek Poll, for example, appears to have a disproportionately Republican sample of registered voters. It is not clear why this is the case, although it has been suggested that during and immediately after the GOP Convention, Republican identifiers were more likely to be at home and/or more enthusiastic about participating in public opinion polls than Democratic identifiers.
Weighting for Party Identification
Whatever the reasons for the overrepresentation of one group of partisans or another in a poll, is there anything that can be done about this problem? Some pollsters have attempted to solve this problem by weighting their samples to reflect the anticipated proportions of Democrats, Republicans, and independents in the electorate in the same way that they weight their samples to reflect the anticipated proportions of men and women or blacks and whites. John Zogby is perhaps the best known proponent of this methodology and the Zogby Poll has produced the most accurate predictions of the past two presidential elections of any major national poll. Not coincidentally, Zogby's recent national poll shows the 2004 presidential race to be closer than other polls that do not use party identification weighting.
Many pollsters, including Gallup, object to using party identification weighting for two reasons. First, the distribution of party identification in the population cannot be known in the same way that the distribution of gender, race, or age can be known from Census data. Second, the distributions of demographic variables like gender, race, and age do not change much over time, but the proportions of Democrats, independents, and Republicans in the electorate can change from election to election.
While these points have some validity, in reality the partisan composition of the electorate, like its age, gender, and racial composition, tends to change very slowly. The partisan composition of the 2000 electorate was very similar to the partisan composition of the 1996 electorate and the partisan composition of the 2004 electorate is likely to be very similar to the partisan composition of the 2000 electorate.
So what can conscientious pollsters do to make their results more accurate? One intriguing idea has been proposed by Charles Cook, a highly respected political analyst for the National Journal. Cook suggests that polls use a methodology he calls "dynamic party identification weighting": weighting their samples based on the unweighted proportions of Democrats, Republicans, and independents in their samples from recent surveys conducted over several weeks or months. This would allow them to capture any real change in the partisan composition of the electorate, while eliminating short-term fluctuations due to sampling variation.
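Cook's idea can be sketched as a rolling average: the weighting targets come from the average unweighted party-ID shares of several recent surveys, not from any single night's sample. All survey numbers below are hypothetical.

```python
# Sketch of "dynamic party identification weighting": the weighting targets
# are the average unweighted party-ID shares across several recent surveys,
# so real partisan drift is captured while one-off sampling swings wash out.
# All survey numbers below are hypothetical illustrations.

def dynamic_targets(recent_surveys):
    """Average each party's unweighted share across recent surveys."""
    n = len(recent_surveys)
    return {p: sum(s[p] for s in recent_surveys) / n
            for p in recent_surveys[0]}

def respondent_weights(current_shares, targets):
    """Weight for each party group's respondents: target / observed share."""
    return {p: targets[p] / current_shares[p] for p in targets}

# Unweighted party-ID shares from the last four surveys (hypothetical)
recent = [
    {"dem": 0.38, "rep": 0.34, "ind": 0.28},
    {"dem": 0.37, "rep": 0.35, "ind": 0.28},
    {"dem": 0.39, "rep": 0.33, "ind": 0.28},
    {"dem": 0.38, "rep": 0.34, "ind": 0.28},
]
targets = dynamic_targets(recent)

# Tonight's sample came back unusually Republican (hypothetical)
today = {"dem": 0.33, "rep": 0.40, "ind": 0.27}
weights = respondent_weights(today, targets)
# Republican respondents get a weight below 1, Democrats above 1,
# pulling the sample back toward the recent-average party mix.
```

The design choice is the same one behind any moving average: a longer window is more stable but slower to register a genuine shift in party identification.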
Dynamic party identification weighting appears to offer a promising solution to the problems caused by overrepresentation of Republican or Democratic identifiers in some recent polls. Had the Gallup Poll employed this methodology, along with a more realistic likely voter screen, its post-Republican Convention poll would probably have shown the presidential race to be extremely close, and the tone of media coverage of the Kerry and Bush campaigns since the Republican Convention would probably have been very different.
© 2004 POLLING REPORT, INC.
Copyright 2011 Minyanville Media, Inc. All Rights Reserved.