As a survey methodologist, I get paid to develop survey methods that generally minimize survey errors, and to advise people on how to field surveys in a specific setting. A question that has been bugging me for a long time is which survey error we should worry about most. The Total Survey Error (TSE) framework is very helpful for thinking about which types of survey error may affect survey estimates.
But which error source is generally larger?
This is a follow-up on why I think panel surveys need to adapt their data collection strategies to target individual respondents. Let me first note that, apart from limiting nonresponse error, there are other reasons why we would want to do this. We can limit survey costs by using expensive survey resources only for the people who need them.
A focus on nonresponse alone can be too limited. For example, imagine we want to measure our respondents’ health.
Studies of the correlates of nonresponse often have to rely on socio-demographic variables to examine whether respondents and nonrespondents in surveys differ. Often, sampling frames contain no other information that researchers can use.
That is unfortunate, for two reasons. First, the variables we currently use to predict nonresponse usually explain a very limited amount of the variance in survey nonresponse. As a consequence, these variables are also not effective in correcting for nonresponse.
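To make this point concrete, here is a minimal simulation sketch in Python. The data and variable names (age, motivation, health) are made up purely for illustration: when the observed socio-demographic covariate explains little of the response process, inverse-propensity weights built from it barely move the estimate.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(1)
n = 100_000

# Hypothetical data: one observed socio-demographic covariate (age)
# and an unobserved driver of response (motivation).
age = rng.normal(size=n)
motivation = rng.normal(size=n)
health = 0.3 * age + 0.5 * motivation + rng.normal(size=n)  # survey variable

# Response propensity is driven mainly by the unobserved variable.
p_respond = 1 / (1 + np.exp(-(0.2 * age + 1.5 * motivation)))
responded = rng.random(n) < p_respond

# Fit a response propensity model on the observed covariate only,
# then form inverse-propensity weights for the respondents.
prop_model = LogisticRegression().fit(age.reshape(-1, 1), responded)
p_hat = prop_model.predict_proba(age.reshape(-1, 1))[:, 1]
w = 1 / p_hat[responded]

print("True population mean:      ", health.mean().round(3))
print("Unweighted respondent mean:", health[responded].mean().round(3))
print("Weighted respondent mean:  ",
      np.average(health[responded], weights=w).round(3))
# The weighted and unweighted respondent means stay close together and both
# remain above the true mean: age explains so little of the response process
# that the weights cannot remove the bias.
```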
I love watching videos of Richard Feynman on YouTube. Apart from being entertaining, in the video below Feynman explains quite subtly what constitutes a good scientific theory, and what doesn’t. He is right that good theories are precise theories.
Richard Feynman: fragment from a class on the philosophy of science (source: YouTube)
The video also makes me jealous of natural scientists. In the social sciences, almost all processes and causal relationships are contextual, unlike in the natural sciences.
I am continuing with the article and commentaries on weighting to correct for unit nonresponse by Michael Brick, published in the recent issue of the Journal of Official Statistics (here).
The article is by no means all about whether one should impute or weight; I am just picking out one issue that got me thinking. Michael Brick rightly says that in order to correct successfully for unit nonresponse using covariates, we want the covariates to do two things: predict who responds to the survey, and predict the survey variables we are interested in. The sketch below illustrates why both properties matter.
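As a rough illustration, here is a short Python sketch with simulated data; the covariate x and the survey variable y are hypothetical stand-ins, not anything from the article. When a covariate is related to both the response process and the survey variable, inverse-propensity weighting on that covariate removes most of the nonresponse bias, in contrast to the earlier sketch where the covariate was nearly unrelated to response.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(2)
n = 100_000

# Hypothetical covariate x related to BOTH the response process and the
# survey variable y -- the situation in which weighting can work.
x = rng.normal(size=n)
y = 0.8 * x + rng.normal(size=n)
p_respond = 1 / (1 + np.exp(-1.2 * x))
responded = rng.random(n) < p_respond

# Response propensity model and inverse-propensity weights.
p_hat = (LogisticRegression()
         .fit(x.reshape(-1, 1), responded)
         .predict_proba(x.reshape(-1, 1))[:, 1])
w = 1 / p_hat[responded]

print("True mean:             ", y.mean().round(3))
print("Unweighted respondents:", y[responded].mean().round(3))
print("Weighted respondents:  ", np.average(y[responded], weights=w).round(3))
# Here the weighted mean sits close to the true mean: x carries information
# about both who responds and what they would have answered.
```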
This week, I have been reading the most recent issue of the Journal of Official Statistics, a journal that has been open access since the 1980s. This issue contains a critical review article on weighting procedures, authored by Michael Brick, with commentaries by Olena Kaminska (here), Philipp Kott (here), Roderick Little (here), and Geert Loosveldt (here), and a rejoinder (here).
One of the greatest challenges in survey research is declining response rates. Around the globe, it appears to be getting harder and harder to convince people to participate in surveys. As to why response rates are declining, researchers are unsure. A general worsening of the ‘survey climate’, caused by increased time pressure on people in general and by direct marketing, is usually blamed.
This year’s Nonresponse workshop was held in London last week.
I am spending time at the Institute for Social and Economic Research in Colchester, UK where I will work on a research project that investigates whether there is a tradeoff between nonresponse and measurement errors in panel surveys.
Survey methodologists have long believed that multiple survey errors have a common cause. For example, when a respondent is less motivated, this may result in nonresponse (in a panel study, attrition) or in reduced cognitive effort during the interview, which in turn leads to measurement errors.
The AAPOR conference last week gave an overview of what survey methodologists worry about. There were relatively few people from Europe this year, and I found that the issues methodologists worry about sometimes differ between Europe and the USA. At the upcoming ESRA conference, for example, there are more than 10 sessions on the topic of mixing survey modes. At AAPOR, mixing modes was definitely not ‘hot’.
With 8 parallel sessions running at most times, I have only seen bits and pieces of everything that went on.
All of my research focuses on methods for collecting and analyzing panel survey data. One of the primary problems of panel survey projects is attrition, or drop-out: over the course of a panel survey, many respondents decide to no longer participate.
Last July I visited the panel survey methods workshop in Melbourne, at which we had extensive discussions about panel attrition: how to study it, what its consequences (bias) are for survey estimates, and how to prevent it from happening at all.