
Podcast: How the Lowy Institute Poll works - Alex Oliver and Sol Lebovic

Alex: Sol, thank you for joining me today to talk about the annual Lowy Institute Poll, which we are releasing today, 4 June 2014.

Sol: Not a problem Alex.

Alex: For those of you who are not familiar with his work, Sol Lebovic is one of Australia’s most distinguished pollsters. He founded Newspoll in 1985 and ran it for more than two decades, building one of Australia’s most reliable polling organisations. The Australian newspaper was of course one of Newspoll’s most important clients, and the Lowy Institute Poll has had the benefit of Sol’s expertise for the last six years of our polling. 2014 is an important year for us because it is the year of the 10th annual Lowy Institute Poll, so it seems like a good time to reflect a little on our polling history and on how the Lowy Institute Poll actually works. Sol, I would like to ask you how and why we arrived at our methodology for the poll, which is now well established. There are a few moving parts here, and perhaps we can start with the sample, which is the number of respondents who actually participate in the poll, usually about 1,000 in each poll. How did we arrive at this number?

Sol: Well, obviously the ideal would be to interview everyone in Australia, and no one can afford that except the government with the census, run by the Australian Bureau of Statistics. So what we need to do is take a sample, and whenever we take a sample we are going to get slight variation between any two different samples. The idea is to get a sample size that gives us a fair amount of reliability given that sampling variation. If we take a sample of 1,000, we expect the variation to be plus or minus three percentage points, and at that level most people are generally comfortable that they are getting a fair degree of accuracy without having to spend a fortune on huge sample sizes.
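
The ±3-point figure falls out of the standard margin-of-error formula for a proportion. A minimal sketch in Python, assuming a 95% confidence level and the worst-case 50/50 split (conventional assumptions, not stated explicitly in the conversation):

```python
import math

def margin_of_error(n, p=0.5, z=1.96):
    """Approximate margin of error for a proportion p estimated
    from a simple random sample of size n (z=1.96 gives ~95% confidence)."""
    return z * math.sqrt(p * (1 - p) / n)

# Worst case (p = 0.5) for a sample of 1,000:
print(f"±{margin_of_error(1000):.1%}")  # ±3.1%, Sol's 'plus or minus three points'
```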

Alex: Is that the sort of sample size you used to use at Newspoll?

Sol: Yes, we used to do about 1,100 to 1,200 voters in our regular political polling. Around election time we used to do more than that, but the reason wasn’t to get the national figure right; as we know, in elections there can be differences by state and by electorate, and if we want to break the sample down to various electorates then we need a larger overall sample to give us a sufficient sample in individual electorates. But generally 1,000 is regarded as quite a good sample nationally.

Alex: It’s a pretty standard sample size as far as I have seen. Other polling organisations in the foreign policy area, for example Pew Research and the Chicago Council, all use a sample of almost exactly 1,000 as well.

Sol: Yes, and the interesting thing, which a lot of people don’t understand, is that if you take the population of the US, which is a lot larger than ours, a sample of 1,000 is still sufficient, because sampling accuracy is essentially irrespective of population size. The statistics, the formulas that work all that out, are based on things other than population size. So 1,000 works in Australia, with a population of over 20 million people, and also in the US, with over 200 million people.
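
The formal reason population size barely matters is the finite population correction, which is negligible whenever the sample is a tiny fraction of the population. A quick illustration under the same conventional assumptions as above:

```python
import math

def moe_with_fpc(n, N, p=0.5, z=1.96):
    """Margin of error including the finite population correction,
    which only bites when n is a sizeable fraction of the population N."""
    fpc = math.sqrt((N - n) / (N - 1))
    return z * math.sqrt(p * (1 - p) / n) * fpc

for N in (20_000_000, 200_000_000):  # roughly Australia vs the US
    print(f"N={N:>11,}: ±{moe_with_fpc(1000, N):.3%}")
# Both lines print ±3.099%: at these population sizes the correction is invisible.
```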

Alex: I thought it was interesting that when you increase the sample size quite dramatically you only get a very small improvement in the error margin, that is, in the actual accuracy of the poll. For example, if you had a sample of 5,000 Australians, how much more accurate would that sample be in terms of the plus-or-minus error margin on each particular result?

Sol: Well, it’s very much the law of diminishing returns: if we double the sample, we don’t get double the accuracy. On a sample of 1,000 we get variation of plus or minus three percentage points; if we go to 5,000, that only falls by roughly half, to about 1.4 percentage points. So it’s not a lot more accurate unless we are really trying to distinguish between a figure of around 51% versus 49%; then we do need that much larger sample. But in general policy areas, if we know a figure to within two to three percentage points, that’s generally fine.
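
The diminishing returns Sol describes come from the square-root scaling of sampling error: quintupling the sample only halves the margin. A short illustration, same assumptions as the earlier sketch:

```python
import math

def moe(n, p=0.5, z=1.96):
    """95% margin of error for a proportion p on a sample of size n."""
    return z * math.sqrt(p * (1 - p) / n)

for n in (1000, 2000, 5000, 10000):
    print(f"n={n:>6,}: ±{moe(n):.1%}")
# n= 1,000: ±3.1%
# n= 2,000: ±2.2%
# n= 5,000: ±1.4%
# n=10,000: ±1.0%
# Error shrinks with sqrt(n): five times the interviews buys
# roughly half the margin of error, as Sol says.
```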

Alex: Tell me, this year we decided to include more 18-29 year-olds in our sample. This came about because we had some really intriguing and concerning results from our last two polls, in 2012 and 2013, about how this younger age group saw the value of democracy. In the 2012 poll we found that only 39% of young Australians in this age group said that democracy is preferable to any other kind of government; almost the same number said that in some circumstances a non-democratic government could be preferable; and nearly a quarter of that age group said that for someone like them, it doesn’t matter what kind of government we have. Not surprisingly there was a great deal of interest in these results, with politicians, journalists and academics very concerned about how this seemed to show that young Australians were quite disengaged from their civic life and the proud traditions of our democratic history. So this year we decided to find out what was behind this seeming ambivalence. There was a problem, wasn’t there, with using our usual base sample of 1,000 Australians, in terms of the representativeness of the sample, if we were trying to probe quite deeply into the views of 18-29 year-olds. What did we need to do to make these results more indicative and more reliable?

Sol: Well, the fact that we only did 1,000 last year and in previous years meant we had a relatively small number of people in that 18-29 age group, hence the sampling error was much larger and we couldn’t be as definitive about the findings with that kind of sample. So what we did was boost the sample of 18-29 year-olds so that we had a base of over 300 respondents. Then we had more reliability in the result and could be more confident that what we were finding was the actual result.
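
The arithmetic behind the boost: the margin of error for a subgroup depends on the subgroup’s own base, not on the total sample. A rough sketch (the ~18% population share for 18-29 year-olds is an illustrative figure, not from the transcript):

```python
import math

def moe(n, p=0.5, z=1.96):
    """95% margin of error for a proportion p on a sample of size n."""
    return z * math.sqrt(p * (1 - p) / n)

# If roughly 18% of adults are aged 18-29 (illustrative figure), a national
# sample of 1,000 yields only ~180 respondents in that age group.
print(f"~180 in the age group: ±{moe(180):.1%}")  # ~±7.3%
print(f"boosted to 300+:       ±{moe(300):.1%}")  # ~±5.7%
```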

Alex: And it was very nice to be able to report back that adding the extra 150 18-29 year-olds did in fact confirm the last two years of our polling on this very thorny issue of how they see democracy. That was a good result, and we can now be much more confident about the views of that age group. That leads me to my next point: the telephone interviewer actually asks a series of questions before they start on the polling questions proper, to try and make sure they are reaching all the quotas and targets so the sample is representative. What sort of questions do they ask upfront?

Sol: Well, even before we get to asking any questions, the interviewers have a quota of so many interviews in different geographical areas: so many in Sydney, so many through the rest of NSW, and so on through the rest of the country. That way we have a sufficient base of respondents in each area and can then weight the results to make them reliable across the country. Then the interviewer will ask age and sex, because we also know there can be biases in results depending on a person’s age or on whether they are male or female, so there are quotas set for age and sex as well. We construct the sample in a way that gives us a sufficient number of respondents in each age and sex cell, and also in each geographical area.
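
A hypothetical sketch of how fieldwork against quotas might look; the areas and targets below are invented for illustration, not the actual Lowy quota scheme (age and sex quotas would nest inside each area):

```python
# Invented quota targets by geographical area (illustrative only).
quotas = {"Sydney": 170, "Rest of NSW": 90, "Melbourne": 150, "Rest of VIC": 60}

# Interviews completed so far in the fieldwork.
completed = {"Sydney": 170, "Rest of NSW": 74, "Melbourne": 150, "Rest of VIC": 60}

# Interviewers keep dialling only in areas whose quota is not yet full.
still_open = [area for area, target in quotas.items()
              if completed.get(area, 0) < target]
print(still_open)  # ['Rest of NSW']
```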

Alex: Because it is important in Australia to make sure we get that mix of urban and rural population right, isn’t it?

Sol: Exactly, and also large states versus small states.

Alex: Of course South Australia, Tasmania, the Northern Territory and the ACT will always need to be covered as well. Tell me, we try to get our fieldwork done in the shortest possible time. What’s the reason behind that? Why can’t we string it out over six months?

Sol: Because the real world changes in six months. We could have a major event that is related to one of the questions we ask, and then we wouldn’t be sure of the finding, because some of the interviews would have been done before that event happened and some after. That tends to make for a muddy result; it makes it hard to interpret. I think it’s important to keep the interviewing to a short period to avoid the real world changing underneath us.

Alex: Finally Sol, can you explain what we do with the raw data once the interviews are all complete? I get this whopping set of spreadsheets, 500 or so pages of raw data, the actual responses from individual people. What happens to that data then?

Sol: Well, the most important thing to do then is to weight the data back to the population. We said earlier that we set quotas to make sure we have enough respondents in the smaller states and in each age and sex cell, and that means the raw sample is not representative of the Australian population. So we take the raw results and weight them back to the known statistics based on the census and updates of the census. For example, we did more interviews in WA than its proportion of the population would suggest, so we weight those back a bit; and in Sydney, where we didn’t do as many interviews as we should have pro rata to the population, we weight them up. The final results are then fully representative of the Australian population by geography, age and sex. We actually take it a step further and weight by education as well, to make sure that if there were any biases depending on the education level of respondents, we bring the sample back into line with national statistics.
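
In its simplest cell-based form, the weighting Sol describes assigns each respondent the ratio of their cell’s population share to its sample share. A minimal sketch with invented shares (not the actual Lowy Institute or census figures):

```python
# Illustrative shares only: census benchmark vs what the fieldwork delivered.
population_share = {"WA": 0.11, "Sydney": 0.21}
sample_share     = {"WA": 0.15, "Sydney": 0.17}

weights = {cell: population_share[cell] / sample_share[cell]
           for cell in population_share}
print(weights)  # {'WA': 0.733..., 'Sydney': 1.235...}
# WA was over-sampled, so its respondents are weighted down;
# Sydney was under-sampled, so its respondents are weighted up.
```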

Alex: Sol, thank you. Just one final question, though this could be shooting myself in the foot: is there anything that we could be doing better, or more of?

Sol: Well, I think there are a couple of things. The first is to do the poll more often. Opinions don’t necessarily change only once a year; they can change more often than that, and if the poll were done more often we would be able to pick up those trends more quickly when they actually happen. The other thing is a larger sample size. While 1,000 interviews, or the 1,150 we did this year, is sufficient at the total level to report results, sometimes the more fascinating findings are within certain demographics, and as we cut and dice the data into smaller groups you start to get into very small sample sizes. Sometimes it would be better to have larger samples, so we could be more confident that the differences we’re getting are statistically significant.
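
A standard two-proportion z-test makes Sol’s point about subgroup differences concrete. A sketch with invented numbers (a 45% vs 39% split, not results from the poll):

```python
import math

def two_prop_z(p1, n1, p2, n2):
    """z statistic for the difference between two independent proportions."""
    p = (p1 * n1 + p2 * n2) / (n1 + n2)               # pooled proportion
    se = math.sqrt(p * (1 - p) * (1 / n1 + 1 / n2))   # standard error of the gap
    return (p1 - p2) / se

# The same 6-point gap is significant with 2,000 per group...
print(two_prop_z(0.45, 2000, 0.39, 2000))  # ~3.84, well beyond 1.96
# ...but not once the data is 'cut and diced' down to 150 per group.
print(two_prop_z(0.45, 150, 0.39, 150))    # ~1.05, not significant at 95%
```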

Alex: Which is exactly why we added that 150-person boost sample for the 18-29 year-olds, so that we could review and report reliably on their attitudes. If only we had an endless supply of polling funding, we would love to do all of that and more. But thank you very much, Sol, for coming in today and describing so eloquently how the Lowy Institute Poll works.

Sol: You’re welcome Alex.   

 
