As a survey methodologist I get paid to develop survey methods that generally minimize survey errors, and to advise people on how to field surveys in a specific setting. A question that has been bugging me for a long time is which survey error we should worry about most. The Total Survey Error (TSE) framework is very helpful for thinking about which types of survey error may affect survey estimates.
But which error source is generally larger?
Why we should throw out most of what we know on how to visually design web surveys
In 2000, web surveys looked like postal surveys stuck onto a screen. Survey researchers needed time to get used to the idea that web surveys should perhaps look different from mail surveys. When I got into survey methodology in 2006, everyone was, for example, still figuring out whether to use drop-down menus (no) and how many questions to put on one screen (a few at most), let alone whether to use slider bars (they're not going to reduce breakoffs).
Back after a long pause. Panel surveys traditionally interview respondents at regular intervals, for example monthly or yearly. This interval is mostly chosen for practical reasons: interviewing people more frequently would increase respondent burden, as well as the burden of data processing and dissemination. For these practical reasons, panel surveys often space their interviews one year apart. Many of the changes we as researchers are interested in (e.g. changes in household composition) occur slowly, and annual interviews suffice to capture them.
This is a follow-up on why I think panel surveys need to adapt their data collection strategies to target individual respondents. Let me first note that apart from limiting nonresponse error, there are other reasons why we would want to do this. We can limit survey costs by using expensive survey resources only for people who need them.
A focus on nonresponse alone can be too limited. For example: imagine we want to measure our respondents’ health.
Last week, I wrote about the fact that respondents in panel surveys are now using tablets and smartphones to complete web surveys. We found that in the LISS panel, respondents who use tablets and smartphones are much more likely to switch devices over time and to not participate in some months.
The question we actually wanted to answer was a different one: do respondents who complete surveys on their smartphone or tablet give worse answers?
Vera Toepoel and I have been writing a few articles over the last two years about how survey respondents are taking up tablet computers and smartphones. We were interested in studying whether people in a probability-based web panel (the LISS panel) use different devices over time, and whether switches in devices for completing surveys are associated with more or less measurement error.
In order to answer this question, we have coded the User Agent Strings of the devices used by more than 6.
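For readers who wonder what "coding User Agent Strings" looks like in practice, here is a minimal sketch of the kind of rule-based classification involved. The keyword rules and the helper name classify_device are illustrative assumptions, not the coding scheme we actually used; in practice a maintained parser library and a much longer rule set do the work.

```python
# Minimal sketch (not our actual coding scheme): classify the device behind a
# survey completion from its User Agent String, using a few keyword rules.
import re

def classify_device(user_agent: str) -> str:
    """Return a coarse device class: 'tablet', 'smartphone', or 'desktop'."""
    ua = user_agent.lower()
    # Check tablets first: iPads and Android tablets would otherwise
    # also match the generic mobile keywords below.
    if "ipad" in ua or ("android" in ua and "mobile" not in ua):
        return "tablet"
    if re.search(r"iphone|ipod|windows phone|mobile", ua):
        return "smartphone"
    return "desktop"

# A hypothetical user agent recorded for one survey completion.
print(classify_device(
    "Mozilla/5.0 (iPad; CPU OS 9_3 like Mac OS X) AppleWebKit/601.1 Safari/601.1"
))  # -> 'tablet'
```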
A follow-up on last month’s post. Respondents do seem to be less compliant in the waves before they drop out from a panel survey. This may, however, not necessarily lead to worse data. So, what else do we see before attrition takes place? Let’s have a look at missing data:
First, we look at missing data in a sensitive question on income amounts. Earlier studies (here, here, here) have already found that item nonresponse on sensitive questions predicts later attrition.
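To make that concrete, here is a hedged sketch of the kind of model behind such findings: a logistic regression of next-wave attrition on an item-nonresponse flag for the income question. The data are simulated and the variable names (income_missing, attrited) are made up for illustration; the actual panel variables are of course named and constructed differently.

```python
# Sketch on simulated data: does refusing to report an income amount in
# wave t predict dropping out of the panel by wave t+1?
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

rng = np.random.default_rng(42)
n = 2000
income_missing = rng.binomial(1, 0.15, size=n)       # item-nonresponse flag in wave t
p_attrit = 0.10 + 0.10 * income_missing              # assumed higher attrition propensity
attrited = rng.binomial(1, p_attrit)                 # dropout indicator for wave t+1

df = pd.DataFrame({"income_missing": income_missing, "attrited": attrited})
model = smf.logit("attrited ~ income_missing", data=df).fit(disp=False)
print(model.summary())  # a positive coefficient on income_missing mirrors the published pattern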
I am working on a paper that aims to link measurement errors to attrition error in a panel survey. For this, I am using the British Household Panel Survey. In an earlier post I already argued that attrition can occur for many reasons, which I summarized in 5 categories.
1. Noncontact
2. Refusal
3. Inability (due to old age or infirmity) as judged by the interviewer, also called ‘other non-interview’.
4. Ineligibility (due to death, or a move into an institution or abroad).
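To illustrate how fielded interview outcomes map onto these broad categories, here is a small sketch. The outcome strings and the helper attrition_category are hypothetical; the BHPS records its own numeric outcome codes, which would need to be recoded along the same lines.

```python
# Hypothetical mapping from interviewer outcome descriptions to the broad
# attrition categories listed above.
OUTCOME_TO_CATEGORY = {
    "no contact made with household": "noncontact",
    "household refused interview": "refusal",
    "too ill / unable to be interviewed": "inability",
    "respondent died": "ineligibility",
    "moved abroad": "ineligibility",
    "moved into institution": "ineligibility",
}

def attrition_category(outcome: str) -> str:
    """Map a free-text interviewer outcome to one of the broad categories."""
    return OUTCOME_TO_CATEGORY.get(outcome.strip().lower(), "unknown")

print(attrition_category("Household refused interview"))  # -> 'refusal'
```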
Longitudinal surveys ask the same people the same questions over time, so questionnaires tend to become rather boring for respondents after a while. “Are you asking me this again? You asked that last year as well!” is what many respondents probably think during an interview. As methodologists who manage panel surveys, we know this process can be tedious, but in order to document change over time we simply have to ask respondents the same questions over and over.
I am spending time at the Institute for Social and Economic Research in Colchester, UK, where I will work on a research project that investigates whether there is a trade-off between nonresponse and measurement errors in panel surveys.
Survey methodologists have long believed that multiple survey errors have a common cause. For example, when a respondent is less motivated, this may result in nonresponse (in a panel study, attrition) or in reduced cognitive effort during the interview, which in turn leads to measurement error.
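A toy simulation helps to see what this common-cause idea implies. Everything below is assumed purely for illustration (the latent motivation score, the response model, the noise model); it is not taken from any of the studies discussed here.

```python
# Toy simulation: one latent "motivation" score drives both the chance of
# responding and the amount of measurement error among those who respond.
import numpy as np

rng = np.random.default_rng(0)
n = 10_000
motivation = rng.normal(0, 1, size=n)       # latent respondent motivation
true_value = rng.normal(50, 10, size=n)     # the quantity the survey tries to measure

# Less motivated people respond less often...
p_respond = 1 / (1 + np.exp(-(0.5 + 1.0 * motivation)))
responded = rng.binomial(1, p_respond).astype(bool)

# ...and, when they do respond, they answer with more noise.
noise_sd = np.exp(1.0 - 0.5 * motivation)
reported = true_value + rng.normal(0, noise_sd)

# Among respondents, absolute measurement error is correlated with the
# (unobserved) response propensity: the common cause links the two errors.
abs_error = np.abs(reported - true_value)[responded]
print(np.corrcoef(p_respond[responded], abs_error)[0, 1])  # negative correlation expected
```

In this setup, the respondents who were least likely to take part are also the ones giving the noisiest answers, which is exactly the pattern a trade-off between nonresponse and measurement error would produce.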