For whom the Polls Toll

Article for Walkley Magazine (Journalists' Union), December 2004 – January 2005

(Unsubbed version)

How did the opinion polls perform in the Australian federal election? A better first question is: how well did the media report them?

In a perfect world, political opinion polls wouldn't get within cooee of the front page outside of election campaigns. They wouldn't make the morning news and Radio National's Peter Thompson wouldn't ask Michelle Grattan why the government “got a bounce” last fortnight.

Still in this sensible parallel universe, polling bosses would be seen and not heard (because they don't make insightful political commentators) and no-one would ever begin a poll report with “an election held last weekend would have …”

Quantitative opinion polls aren't that precise. But the process that pays for them pretends they are. They cost a bundle and so are given pride of place. Once they're there, everyone involved goes along with the charade.

So how should we treat them? Like this: they're the best tool we have to anticipate election outcomes, and they're pretty good. Importantly, it's the trend that's useful, not the individual data, although as we get closer to polling day each survey becomes a better “predictor”.

Take this year's Queensland state election. The published surveys all pointed to a huge win for Premier Peter Beattie. And a big win was what happened. We all expected it, and we all gave our reasons, but it was really only because the polls showed one side fifteen to twenty points in front.

Polls have been known to do star turns, like Newspoll in the 1999 Victorian state election. Throughout that campaign, The Australian's pollster, like all the others, had the Kennett government headed for comfortable re-election. But in its final survey, published on election day, Newspoll alone showed the ALP opposition slightly ahead. Which was how Victorians then voted. A class act.

Roy Morgan Research stole the limelight at the 2001 federal election, but for all the wrong reasons (getting the result horribly wrong).

And the polls did OK at the November US presidential election. They anticipated a close result, with Bush slightly ahead, and that's what happened.

The problems begin when we take polls too seriously. We all know about the three percent error margin. A sample size of about 1,000 that reports, say, 48 percent support for a party has a margin of error of about three percent, so the “true” result might be anywhere from 45 to 51 percent. But that's not the half of it. That margin comes with a 95 percent confidence interval, which means that, on average, one opinion poll in twenty falls outside the error margin; we just don't know which one.
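That three percent figure is just the standard sampling-error formula at work. Here is a minimal sketch, in Python, of the arithmetic behind the example above (a sample of 1,000 reporting 48 percent support; 1.96 is the standard normal multiplier for 95 percent confidence):

    # Sampling error for a simple random sample: the "plus or minus
    # three percent" described in the text.
    from math import sqrt

    n, p = 1000, 0.48
    moe = 1.96 * sqrt(p * (1 - p) / n)   # half-width of the 95% interval
    print(f"{p:.0%} +/- {moe:.1%}")      # 48% +/- 3.1%, i.e. roughly 45 to 51

Note this assumes a simple random sample; real polls use quota and weighting designs that usually widen the effective margin.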

Another obvious problem: when was the last time the Australian Electoral Commission surprised you with a phone call explaining there's an election on and asking who you'll vote for? The survey process is highly artificial, although it becomes less so as election day approaches.

And there are always “undecideds” who make up their mind in the polling booth.

So how did the polls, as opposed to those who interpret them, do in the October federal election? The answer has two parts.

Under the Australian preferential voting system, the two party preferred (2pp) vote matters more than the primary vote, because 2pp is what wins a party a seat, if not always an election.

Just looking at primary votes, Australians on October 9 gave the Coalition 46.8%, Labor 37.7% and the Greens 7.2%. If we take these as the “true” figures, then throughout the campaign every pollster overstated ALP primary support a little, most underestimated the Coalition's, and all trended in the right direction as election day neared. Their final surveys produced similar primary votes: Newspoll had the Coalition on 45 and Labor on 39; Morgan said 45.5 to 38.5; ACNielsen (Fairfax papers) 49 to 37; and Galaxy 46 to 39. All correctly had the Greens on 7 or 8 percent. None was a million miles from the “true” result, and Galaxy and Morgan vied for closest.

But after preferences they went everywhere. Still with Coalition support first, 2pps in those final polls were Newspoll 50:50, Morgan 49:51, Nielsen 54:46 and Galaxy 52:48, which covers everything from a narrow Labor win to a Coalition landslide.

The true 2pp on election day was 52.7 to 47.3 in the government's favour, so from being equal most accurate on primary votes, Morgan went to the back of the pack after preferences. Galaxy remained the best performer.

How could this be?

Once upon a time the major parties had high levels of primary support and it didn't much matter how pollsters treated minor party (and independent) preferences. Compared with the imprecision of polling itself, preference distribution wasn't important. But this has changed in recent years, particularly since the rise of the Greens (whose preferences overwhelmingly favour Labor ahead of the Coalition) in 2001.

The evolution of Newspoll's preference strategy is a case in point. They used to simply ignore preferences outside of election campaigns and report only primary support. So from November 2001 to early 2003, the hapless Labor leader Simon Crean had to watch while a Coalition primary vote lead of several percentage points was written up as the government comfortably ahead, when actually Crean's position after preferences would have been competitive, sometimes even in front. Newspoll being the most watched poll, these perceptions rippled through the commentariat. (Crean also presided over some truly dire poll numbers.)

In early 2003 Newspoll began calculating a “notional” two party preferred, based on the total preference flow of minor party votes at the most recent election. But because the make-up of those non-major parties had changed, this generally understated Labor's two party preferred support - by perhaps one percent. Then from January this year they did what they usually do only in election campaigns - asked those respondents who intended voting for minor parties and independents who would get their second preference, and extrapolated to get 2pp. For complicated reasons, this probably overstated Labor's two party preferred support, again by about a percent.

Of the other outfits, Nielsen and Morgan always ask respondents for full preferences, while the new kid on the block, Galaxy (published in News Ltd tabloids), uses previous-election preference flows as Newspoll used to, but calculates them per minor party rather than in aggregate. The sketch below makes the difference between those two last-election methods concrete.
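A minimal Python sketch of the two last-election-flow methods just described. The primary votes and preference-flow percentages are round hypothetical numbers for illustration only, not any pollster's actual parameters:

    # Two ways of turning primary votes into a "notional" two party
    # preferred (2pp) using preference flows from the previous election.

    def two_pp_aggregate(coalition, labor, others, flow_to_labor):
        # Aggregate method (Newspoll's 2003 approach): one overall flow
        # figure is applied to the whole non-major-party vote.
        return (coalition + others * (1 - flow_to_labor),
                labor + others * flow_to_labor)

    def two_pp_per_party(coalition, labor, minors):
        # Per-party method (roughly Galaxy's): each minor party's vote is
        # split using that party's own flow at the previous election.
        # minors maps party -> (primary vote, share of preferences to Labor).
        to_labor = sum(v * f for v, f in minors.values())
        to_coalition = sum(v * (1 - f) for v, f in minors.values())
        return coalition + to_coalition, labor + to_labor

    # Hypothetical primaries: Coalition 46, Labor 38, Greens 7, others 9.
    print(two_pp_aggregate(46.0, 38.0, 16.0, 0.60))   # -> (52.4, 47.6)
    print(two_pp_per_party(46.0, 38.0,
                           {"GRN": (7.0, 0.80),       # strong flow to Labor
                            "OTH": (9.0, 0.45)}))     # near-even residual
    # -> (52.35, 47.65)

If the composition of the minor-party vote shifts between elections - say, the Greens grow at the expense of parties whose preferences split more evenly - the aggregate method drifts away from reality faster than the per-party one, which is the distortion described above.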

Here's an irony: had Newspoll persisted with its 2003 method, its final poll published on October 9 would have shown, like Galaxy's, the Coalition ahead 52 to 48. That is, a first preference reading that was much too kind to Labor would have been almost offset by a preference distribution that worked the other way.

Nielsen's primary vote in its final poll overly favoured the Coalition, and so did its 2pp. But Morgan is the real mystery: excellent primary vote data but a woeful 2pp. How its preferences went that way is anyone's guess.

You could rank the pollsters by their ability to “predict” the outcome in this order: Galaxy, Nielsen, Newspoll and Morgan.

But the main lesson you should take is: by all means read the polls. You can only talk to so many taxi drivers, and anyway, they read the polls too. But keep the things in perspective.
