My breaks between posts are getting longer and longer. Sorry, my dear readers. Today, I am writing about research I did over a year ago with Vera Toepoel and Alerk Amin. Our study was about a group of respondents we can no longer ignore: mobile-only web survey respondents. These are people who no longer use a laptop or desktop PC, but instead use their smartphone for most or all of their Internet browsing.
Back after a long pause. Panel surveys traditionally interview respondents at regular intervals, for example monthly or yearly. This interval is mostly chosen for practical reasons: interviewing people more frequently would increase the burden on respondents, as well as the burden of data processing and dissemination. For these reasons, panel surveys often space their interviews one year apart. Many of the changes we as researchers are interested in (e.g. changes in household composition) occur slowly, and annual interviews suffice to capture them.
This is a follow-up on why I think panel surveys need to adapt their data collection strategies to target individual respondents. Let me first note that apart from limiting nonresponse error, there are other reasons why we would want to do this. We can limit survey costs by using expensive survey resources only for people who need them.
A focus on nonresponse alone can be too limited. For example: imagine we want to measure our respondents’ health.
Last week, I gave a talk at Statistics Netherlands (slides here) about panel attrition. Initial nonresponse and dropout from panel surveys have always been a problem. A famous study by Groves and Peytcheva (here) showed that in cross-sectional studies, nonresponse rates and nonresponse bias are only weakly correlated. In panel surveys, however, all the signs are there that dropout is often related to change.
Last week, I wrote about the fact that respondents in panel surveys are now using tablets and smartphones to complete web surveys. We found that in the LISS panel, respondents who use tablets and smartphones are much more likely to switch devices over time and not participate in some months.
The question we actually wanted to answer was a different one: do respondents who complete surveys on their smartphone or tablet give worse answers?
Vera Toepoel and I have been writing a few articles over the last two years about how survey respondents are taking up tablet computers and smartphones. We were interested in studying whether people in a probability-based web panel (the LISS panel) use different devices over time, and whether switches in devices for completing surveys are associated with more or less measurement error.
In order to answer this question, we have coded the User Agent Strings of the devices used by more than 6.
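For those curious what this coding looks like in practice, below is a minimal sketch in Python of how one might map a user agent string to a device category. The keyword rules and example strings are invented for illustration only; the actual coding scheme we applied was more detailed.

```python
def classify_device(user_agent: str) -> str:
    """Roughly classify the device behind a user agent string.

    A simple keyword-based sketch for illustration; a real coding
    scheme would use many more rules or a maintained parser.
    """
    ua = user_agent.lower()
    if "ipad" in ua or ("android" in ua and "mobile" not in ua):
        return "tablet"
    if "iphone" in ua or "mobile" in ua:
        return "smartphone"
    return "desktop/laptop"


# Invented example strings, not taken from the LISS data
examples = [
    "Mozilla/5.0 (iPad; CPU OS 7_0 like Mac OS X) AppleWebKit/537.51.1 Mobile/11A465",
    "Mozilla/5.0 (iPhone; CPU iPhone OS 7_0 like Mac OS X) AppleWebKit/537.51.1 Mobile/11A465",
    "Mozilla/5.0 (Windows NT 6.1; WOW64) AppleWebKit/537.36 (KHTML, like Gecko)",
]
for ua in examples:
    print(classify_device(ua))
```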
A follow-up on last month’s post. Respondents do seem to be less compliant in the waves before they drop out from a panel survey. This may, however, not necessarily lead to worse data. So, what else do we see before attrition takes place? Let’s have a look at missing data:
First, we look at missing data in a sensitive question on income amounts. Earlier studies (here, here, here) have already found that item nonresponse on sensitive questions predicts later attrition.
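To give a flavour of the kind of analysis behind such findings, here is a small sketch in Python on simulated data (the variable names and numbers are invented, not taken from our data or the studies cited above): a logistic regression of next-wave attrition on an indicator for leaving the income question blank.

```python
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

rng = np.random.default_rng(42)
n = 500

# Simulated respondents: did they leave the income amount blank this wave?
income_item_missing = rng.binomial(1, 0.2, size=n)
# Simulated attrition at the next wave, made more likely for item nonrespondents
p_dropout = 0.10 + 0.15 * income_item_missing
dropped_out_next_wave = rng.binomial(1, p_dropout)

df = pd.DataFrame({
    "income_item_missing": income_item_missing,
    "dropped_out_next_wave": dropped_out_next_wave,
})

# Logistic regression of next-wave dropout on current-wave item nonresponse
model = smf.logit("dropped_out_next_wave ~ income_item_missing", data=df).fit()
print(model.summary())
```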
I am working on a paper that aims to link measurement errors to attrition error in a panel survey. For this, I am using the British Household Panel Survey. In an earlier post I already argued that attrition can occur for many reasons, which I summarized in 5 categories.
1. Noncontact
2. Refusal
3. Inability (due to old age, infirmity) as judged by the interviewer, also called ‘other non-interview’.
4. Ineligibility (due to death, or a move into an institution or abroad).
Studies into the correlates of nonresponse often have to rely on socio-demographic variables to study whether respondents and nonrespondents in surveys differ. Often there is no other information available on sampling frames that researchers can use.
That is unfortunate, for two reasons. First, the variables we are currently using to predict nonresponse usually explain only a very limited amount of the variance in survey nonresponse. Therefore, these variables are also not effective correctors for nonresponse.
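To see why weak predictors make weak correctors, here is a small simulation sketch in Python (all numbers are invented): response depends mainly on an unobserved factor, and a weighting-class adjustment based on a weakly related auxiliary variable removes only a small part of the resulting bias.

```python
import numpy as np

rng = np.random.default_rng(1)
n = 100_000

# Simulated population: survey outcome y and a weak auxiliary covariate x
u = rng.normal(size=n)                    # unobserved factor driving both
y = 50 + 10 * u + rng.normal(scale=10, size=n)
x = 0.2 * u + rng.normal(size=n)          # only weakly related to u (and y)

# Response propensity depends mainly on u; x captures little of it
p_respond = 1 / (1 + np.exp(-(0.5 + 1.0 * u)))
respond = rng.binomial(1, p_respond).astype(bool)

# Weighting-class adjustment on x alone (quintiles of x)
cuts = np.quantile(x, [0.2, 0.4, 0.6, 0.8])
cls = np.digitize(x, cuts)

weights = np.zeros(n)
for c in range(5):
    in_class = cls == c
    # weight = class size / number of respondents in the class
    weights[in_class & respond] = in_class.sum() / (in_class & respond).sum()

print("population mean:         ", round(y.mean(), 2))
print("respondent mean:         ", round(y[respond].mean(), 2))
print("weighted respondent mean:", round(np.average(y[respond], weights=weights[respond]), 2))
# Because x explains little of the response propensity (and of y), the
# weighted estimate removes only a small part of the nonresponse bias.
```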
I am continuing with the recent article and commentaries on weighting to correct for unit nonresponse by Michael Brick, as published in the recent issue of the Journal of Official Statistics (here).
The article is by no means all about whether one should impute or weight. I am just picking out one issue that got me thinking. Michael Brick rightly says that in order to correct successfully for unit nonresponse using covariates, we want the covariates to do two things: