All of my research focuses on methods for collecting and analyzing panel survey data. One of the primary problems of panel survey projects is attrition, or drop-out: over the course of a panel survey, many respondents decide to stop participating.
Last July I attended the Panel Survey Methods Workshop in Melbourne, where we had extensive discussions about panel attrition: how to study it, what its consequences (bias) are for survey estimates, and how to prevent it from happening altogether.
There are many reasons why one would not want to use access panels for predicting electoral outcomes. These are well discussed in many places on- and offline. I'll briefly summarize them before adding some thoughts on why access panels do so badly at predicting election outcomes.
1. Access panels don’t draw random samples, but rely on self-selected samples. A slightly better way to recruit panel respondents is a quota sample, but even quota samples have problems, well discussed here, here and here, for example.
I was re-reading one of the papers I wrote as part of my dissertation on data quality in panel surveys. The paper deals with the effects of introducing an interviewing technique called dependent interviewing in the British Household Panel Survey: instead of asking every question from scratch in every wave, answers from the previous wave are fed back into the question wording. I wrote this paper together with Annette Jackle, and if you are interested after reading the next bit, you can download a working paper version of it here.
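To give a flavour of what this looks like in practice, here is a minimal sketch of the proactive variant of dependent interviewing, where the previous wave's answer is confirmed rather than re-asked. The function name and question wording are purely illustrative, not taken from the actual BHPS instrument.

```python
# A minimal sketch of proactive dependent interviewing: the script
# feeds a respondent's previous-wave answer back into the question
# wording. Names and wording are illustrative, not from the BHPS.

def employment_question(previous_answer=None):
    """Return the question text, dependent on the prior wave if available."""
    if previous_answer:
        return (f"Last time we spoke, you said your employer was "
                f"{previous_answer}. Is that still the case?")
    return "Who is your current employer?"

print(employment_question())         # independent question (wave 1)
print(employment_question("Tesco"))  # dependent question (later waves)
```

The point of the technique is to reduce spurious wave-to-wave changes: respondents only report a new employer when something actually changed.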
Before people start thinking I'm old-fashioned: I do think that Internet surveys, even Internet panel surveys, are the future of survey research. John Krosnick makes some good points in a video shot by the people from www.pollster.com.
1. Is it clear who ordered and financed the poll?
2. Is there a report documenting the poll’s procedures?
3. Is the target population clearly described?
4. Is the questionnaire available and has it been tested?
5. What were the sampling procedures?
* The sample should be drawn from the target population. If it only contains, for example, people with Internet access, be careful.
6. What is the number of respondents?
Many opinion pollsters do badly when it comes to predicting elections. This is mainly because they let respondents self-select into their polls. So what, who cares? The polls make for good entertainment and easily fill television talk shows. If everyone knows they cannot be trusted, why care?
We should care. In the Dutch electoral system, with proportional representation, every vote counts: with 150 parliamentary seats, one seat corresponds to roughly 0.67% of the vote. If only a small percentage of voters let their vote depend on the published polls, this can shift several seats in parliament.
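To make that concrete, here is a minimal sketch of how a small, poll-driven vote shift translates into seats. The party names and vote shares are hypothetical; the seat allocation uses the D'Hondt highest-averages method, which is roughly how the 150 Dutch seats are allocated.

```python
# A sketch of how a small, poll-driven vote shift can move several
# parliamentary seats under proportional representation. Parties and
# vote shares are hypothetical; allocation uses the D'Hondt method.

def dhondt(votes, seats=150):
    """Allocate seats proportionally using D'Hondt quotients."""
    allocation = {party: 0 for party in votes}
    for _ in range(seats):
        # Each seat goes to the party with the highest quotient
        # votes / (seats already won + 1).
        winner = max(votes, key=lambda p: votes[p] / (allocation[p] + 1))
        allocation[winner] += 1
    return allocation

before = {"A": 30.0, "B": 25.0, "C": 20.0, "D": 15.0, "E": 10.0}
# Suppose 2% of the electorate switches from B to A after seeing a poll.
after = dict(before, A=32.0, B=23.0)

print(dhondt(before))
print(dhondt(after))  # A gains roughly three seats at B's expense
```

A two-point swing between two parties moves about three of the 150 seats, which can easily decide which coalition is feasible.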
Opinion pollsters do a lousy job of predicting elections. For a good read, see for example the prediction of the New Hampshire primary in 2008, when all polls predicted Obama to win, but it was Clinton who won (albeit by a slim margin).
In the Dutch context, there are three main polling firms, which each do about equally well (or badly). Out of 150 parliamentary seats in the 2010 parliamentary election, peil.nl mispredicted 20, while TNS-NIPO and Synovate shared the honor of missing the target by only 16 seats.
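One plausible way such an error score can be computed (an assumption on my part; the firms may count differently) is the total absolute difference between predicted and actual seats, summed over parties. A sketch with hypothetical final-poll numbers against the actual 2010 results for four of the parties:

```python
# A hedged sketch of one common error measure for seat predictions:
# total absolute per-party seat error. Whether the figures quoted
# above were computed exactly this way is an assumption. The poll
# numbers below are made up; the results are the actual 2010 seats.

def total_seat_error(predicted, actual):
    """Sum of absolute per-party differences in seats."""
    parties = set(predicted) | set(actual)
    return sum(abs(predicted.get(p, 0) - actual.get(p, 0)) for p in parties)

poll   = {"VVD": 34, "PvdA": 29, "PVV": 20, "CDA": 23}  # hypothetical
result = {"VVD": 31, "PvdA": 30, "PVV": 24, "CDA": 21}  # actual 2010
print(total_seat_error(poll, result))  # 3 + 1 + 4 + 2 = 10
```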
Dear all,
With a new year come new year's resolutions. I have been working as a survey methodologist for about five years now. I teach and I do research. Teaching gives instant rewards, or at least instant feedback; I like that. Doing research, however, is a different matter. It is a slow and sometimes agonizing process of muddling through (for me).
Studies remain in review forever, or sometimes never make it into a publication at all, while some of my ideas and views never even make it onto paper.