Data quality

COVID-19 behavioural effects: international datasets

This dataset stems from the project ‘Beprepared’ (https://be-prepared-consortium.nl/), which aims to provide in-depth analyses of mixed-method behavioural science data collected throughout the unprecedented COVID-19 pandemic and to inform preparedness …

New website

Dear all, Nine years ago I started blogging. I have been quiet the last few years when it comes to blogging. Perhaps I will pick this up again, perhaps not. What has changed quite a bit is how I work as a scientist. I am using R now as my default software for analysis, and have also started to use GitHub for version control, as the cool kids nowadays do. Anyways, my website was long overdue for an overhaul.

Which survey error source is larger: measurement or nonresponse?

As a survey methodologist I get paid to develop survey methods that generally minimize survey errors, and to advise people on how to field surveys in a specific setting. A question that has been bugging me for a long time is which survey error we should worry about most. The Total Survey Error (TSE) framework is very helpful for thinking about which types of survey error may impact survey estimates. But which error source is generally larger?
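
To make the question concrete, here is a minimal sketch in R of how the two error sources could be compared in a simulation. All of the numbers below (the response model, the size and direction of the measurement error) are invented assumptions, not estimates from any real survey.

```r
# Toy comparison of nonresponse bias versus measurement bias for a survey mean.
# Every quantity here is an assumption made for illustration only.
set.seed(42)

n_pop  <- 100000
income <- rlnorm(n_pop, meanlog = 10, sdlog = 0.5)   # true values in the population

# nonresponse: response propensity increases with the true value
p_resp   <- plogis(-0.5 + 0.000015 * income)
responds <- rbinom(n_pop, 1, p_resp) == 1

# measurement error: everyone responds, but under-reports with some noise
reported <- 0.95 * income + rnorm(n_pop, sd = 2000)

true_mean <- mean(income)
c(true_mean        = true_mean,
  nonresponse_bias = mean(income[responds]) - true_mean,
  measurement_bias = mean(reported) - true_mean)
```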

Retrospective reporting

Back after a long pause. Panel surveys traditionally interview respondents at regular intervals, for example monthly or yearly. This interval is mostly chosen for practical reasons: interviewing people more frequently would increase the burden on respondents, as well as the burden of data processing and dissemination. Panel surveys therefore often space their interviews one year apart. Many of the changes we as researchers are interested in (e.g. changes in household composition) occur slowly, and annual interviews suffice to capture them.

Satisficing in mobile web surveys. Device effect or selection effect?

Last week, I wrote about the fact that respondents in panel surveys are now using tablets and smartphones to complete web surveys. We found that in the LISS panel, respondents who use tablets and smartphones are much more likely to switch devices over time and to not participate in some months. The question we actually wanted to answer was a different one: do respondents who complete surveys on their smartphone or tablet give worse answers?
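
For what it is worth, a rough way to probe this distinction is to regress an indicator of satisficing on the device used, with and without respondent characteristics: if the apparent device effect shrinks once you control for who chooses to answer on a smartphone, selection is doing part of the work. The sketch below uses simulated data and hypothetical variable names (satisfice, device, age, education); it illustrates the idea and is not our actual analysis.

```r
# Simulated example: separating a device effect from a selection effect.
set.seed(1)
n   <- 2000
dat <- data.frame(
  age       = sample(18:80, n, replace = TRUE),
  education = sample(c("low", "mid", "high"), n, replace = TRUE)
)
# younger respondents are more likely to answer on a mobile device (selection)
dat$device <- ifelse(runif(n) < plogis(2 - 0.05 * dat$age), "mobile", "pc")
# satisficing depends on both the device and on age itself
dat$satisfice <- rbinom(n, 1, plogis(-2 + 0.4 * (dat$device == "mobile") - 0.01 * dat$age))

# naive comparison: mixes the device effect with the selection effect
m_naive <- glm(satisfice ~ device, family = binomial, data = dat)
# adjusted comparison: device effect net of who completes on a mobile device
m_adjusted <- glm(satisfice ~ device + age + education, family = binomial, data = dat)

cbind(naive    = coef(m_naive)["devicepc"],
      adjusted = coef(m_adjusted)["devicepc"])
```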

To weight or to impute for unit nonresponse?

This week, I have been reading the most recent issue of the Journal of Official Statistics, a journal that has been open access since the 1980s. In this issue is a critical review article of weighting procedures authored by Michael Brick, with commentaries by Olena Kaminska, Phillip Kott, Roderick Little, and Geert Loosveldt, and a rejoinder.
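
To fix ideas, here is a small simulated example in R of the two strategies the review contrasts for unit nonresponse: reweighting respondents by an estimated response propensity versus imputing the outcome for nonrespondents. The response and outcome models are made up purely to keep the sketch self-contained.

```r
# Toy contrast of weighting versus imputation for unit nonresponse.
set.seed(7)
n   <- 5000
age <- sample(18:90, n, replace = TRUE)
y   <- 50 + 0.5 * age + rnorm(n, sd = 10)           # outcome of interest
r   <- rbinom(n, 1, plogis(2 - 0.03 * age)) == 1    # older people respond less often

# 1) weighting: inverse of an estimated response propensity
p_hat        <- fitted(glm(r ~ age, family = binomial))
est_weighted <- weighted.mean(y[r], 1 / p_hat[r])

# 2) imputation: predict y for nonrespondents from a model fit on respondents
fit         <- lm(y ~ age, data = data.frame(y = y[r], age = age[r]))
y_imp       <- y
y_imp[!r]   <- predict(fit, newdata = data.frame(age = age[!r]))
est_imputed <- mean(y_imp)

c(full_sample      = mean(y),
  respondents_only = mean(y[r]),
  weighted         = est_weighted,
  imputed          = est_imputed)
```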

Publish your data

This morning, an official enquiry into the scientific conduct of professor Mart Bax concluded that he had committed large-scale scientific fraud over a period of 15 years. Mart Bax is a now-retired professor of political anthropology at the Free University Amsterdam. In 2012 a journalist first accused him of fraud, and this spring the Volkskrant, one of the big newspapers in the Netherlands, reported that they were not able to find any of the informants Mart Bax had used in his studies.

How to improve the social sciences

Social scientists (and psychologists in particular) have in recent years had something of a bad press, both inside and outside academia. To give some examples:
- There is a sense among some people that social science provides little societal or economic value.
- There is controversy over research findings within social science: for example the findings of Bem et al. about the existence of precognition, or the estimation of the number of casualties in the Iraq war (2003-2007).

Dependent Interviewing and the risk of correlated measurement errors

Longitudinal surveys ask the same people the same questions over time, so questionnaires tend to become rather boring for respondents after a while. “Are you asking me this again? You asked that last year as well!” is what many respondents probably think during an interview. As methodologists who manage panel surveys, we know this process may be tedious, but in order to document change over time, we simply need to ask respondents the same questions over and over.
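
As a toy illustration of the risk in the title: if part of the sample simply confirms the answer that is fed forward from the previous wave, measurement errors become correlated across waves and observed change is understated, while fresh errors in each wave tend to overstate change. The numbers below are invented for the example.

```r
# Independent versus dependent interviewing and the change we observe between waves.
set.seed(3)
n       <- 10000
true_w1 <- rnorm(n, mean = 30, sd = 8)             # e.g. hours worked, wave 1
true_w2 <- true_w1 + rnorm(n, mean = 0, sd = 3)    # true change between the waves

# independent interviewing: a fresh measurement error in each wave
obs_w1       <- true_w1 + rnorm(n, sd = 2)
obs_w2_indep <- true_w2 + rnorm(n, sd = 2)

# dependent interviewing: say 70% of respondents just confirm last wave's report
confirm    <- runif(n) < 0.7
obs_w2_dep <- ifelse(confirm, obs_w1, true_w2 + rnorm(n, sd = 2))

c(true_change        = mean(abs(true_w2 - true_w1)),
  independent_change = mean(abs(obs_w2_indep - obs_w1)),
  dependent_change   = mean(abs(obs_w2_dep - obs_w1)))
```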

Interested in a new mixed-mode research project?

Mixed-mode research is still a hot topic among survey methodologists; at least, it comes up at about every meeting I attend (some selection bias is likely here). Although we have learned a lot from experiments in the last decade, there is also a lot we don’t know. For example, what designs reduce total survey error most? What is the optimal mix of survey modes when data quality and survey costs are both important? And how can we compare mixed-mode studies across time, or across countries, when the proportions of mode assignments change over time or vary between countries?