Passively-generated location data have the potential to augment mobility and transportation research, as a decade of studies has demonstrated. A common trait of these data is a high proportion of missingness. Naïve handling, including list-wise …
A follow-up on last month's post. Respondents do seem to be less compliant in the waves before they drop out of a panel survey. This may, however, not necessarily lead to worse data. So, what else do we see before attrition takes place? Let's have a look at missing data:
First, we look at missing data in a sensitive question on income amounts. Earlier studies (here, here, here) have already found that item nonresponse on sensitive questions predicts later attrition.
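To make that idea concrete, here is a minimal sketch in Python of how you could test it in your own panel data: regress an attrition indicator on an item-nonresponse indicator. Everything below is simulated, and all names (income_nonresponse, attrited) and coefficient values are my own assumptions for illustration, not taken from the studies linked above.

```python
import numpy as np
import pandas as pd
import statsmodels.api as sm

rng = np.random.default_rng(42)
n = 1000

# Simulated wave-1 data: 1 = respondent skipped the income-amount question
income_nonresponse = rng.binomial(1, 0.2, n)

# Hypothetical attrition process: item nonresponse raises the odds of dropping out
logit = -1.5 + 1.0 * income_nonresponse
attrited = rng.binomial(1, 1 / (1 + np.exp(-logit)))

# Logistic regression of later attrition on earlier item nonresponse
X = sm.add_constant(pd.DataFrame({"income_nonresponse": income_nonresponse}))
model = sm.Logit(attrited, X).fit(disp=0)
print(model.summary())  # a positive coefficient: item nonresponders attrite more often
```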
I am continuing with the recent article and commentaries on weighting to correct for unit nonresponse by Michael Brick, as published in the recent issue of the Journal of Official Statistics (here).
The article is by no means all about whether one should impute or weight. I am just picking out one issue that got me thinking. Michael Brick rightly says that in order to correct successfully for unit nonresponse using covariates, we want the covariates to do two things:
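In brief, the covariates should predict who responds, and they should also relate to the key survey variables themselves. As a rough sketch of the first step, here is what inverse-propensity weighting from a frame covariate could look like in Python. This is not Brick's own procedure; the covariate, the response mechanism, and all names are assumptions invented for the example.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(1)
n = 2000

# A covariate known for the full sample (e.g. age from the sampling frame)
age = rng.uniform(18, 80, n)

# Hypothetical response mechanism: older sample members respond more often
p_respond = 1 / (1 + np.exp(-(-1 + 0.03 * age)))
responded = rng.binomial(1, p_respond).astype(bool)

# Step 1: model response propensity from the covariate
prop_model = LogisticRegression().fit(age.reshape(-1, 1), responded)
phat = prop_model.predict_proba(age.reshape(-1, 1))[:, 1]

# Step 2: weight respondents by the inverse of their estimated propensity
weights = 1 / phat[responded]
print(f"weights range from {weights.min():.2f} to {weights.max():.2f}")

# The weights only remove bias if the covariate also relates to the
# survey outcome -- that is the second of Brick's two requirements.
```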
I recently gave a talk at an internal seminar on planned missingness for a group of developmental psychologists. The idea behind planned missingness is that you can shorten interview time or reduce costs if you decide, as a researcher, not to administer all your instruments to everyone in your sample. When you either randomly assign people to receive a particular instrument, or do so by design (e.g. only collect biomarkers in an at-risk group), your missing data will be either Missing Completely At Random (MCAR) or Missing at Random (MAR).
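Here is a minimal sketch of what the random-assignment variant could look like, loosely modeled on a three-form design; the block names, sample size, and one-third split are my own assumptions for illustration.

```python
import numpy as np
import pandas as pd

rng = np.random.default_rng(7)
n = 300

# Full data: three instrument blocks A, B, C (simulated scores)
data = pd.DataFrame(rng.normal(size=(n, 3)), columns=["A", "B", "C"])

# Planned-missingness design: each respondent is randomly assigned
# to skip exactly one block, so the resulting missingness is MCAR.
skipped = rng.integers(0, 3, n)
for i, block in enumerate(["A", "B", "C"]):
    data.loc[skipped == i, block] = np.nan

print(data.isna().mean())  # roughly one third missing per block, by design
```

Because the missingness here is induced by the researcher's random assignment, it is MCAR by construction, and standard tools such as multiple imputation or full-information maximum likelihood can handle it without bias.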