Mixed-mode surveys: where will we be 5 years from now?
Some colleagues in the United Kingdom have started a half-year initiative to discuss the possibilities of conducting web surveys among the general population. Their website can be found here.
One aspect of their discussions focused on whether any web survey among the general population should be complemented with a second survey mode. This would, for example, enable those without Internet access to participate. Obviously, this means mixing survey modes.
Using two different survey modes to collect survey data risks introducing extra survey error. Methodologists (myself included) have worked hard on getting a grip on differences in measurement effects between modes. To study these properly, one should first make sure that the subsamples interviewed in the different survey modes do not differ simply because of selection effects between the two samples. I have written some earlier posts on this issue; see some of the labels in the word-cloud on the right.
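As a side note, here is a minimal simulation sketch of that confounding problem (my own illustration, not taken from any of the studies discussed; the age-driven selection into the web mode and the 0.10 measurement bias are assumed values):

```python
import numpy as np

# Hypothetical sketch: why selection effects must be dealt with before
# comparing modes. All numbers below are assumptions for illustration only.
rng = np.random.default_rng(1)
n = 100_000

age = rng.uniform(18, 80, size=n)                            # covariate driving selection
y_true = 3.0 + 0.02 * age + rng.normal(scale=1.0, size=n)    # true answer to a survey item

# Selection: younger people are more likely to end up in the web mode.
p_web = 1 / (1 + np.exp(0.05 * (age - 45)))
web = rng.random(n) < p_web

# Measurement: assume the web mode adds a small upward bias of 0.10.
y_obs = y_true + np.where(web, 0.10, 0.0)

naive_diff = y_obs[web].mean() - y_obs[~web].mean()
print(f"naive web-minus-other difference: {naive_diff:.2f}")
print("measurement effect actually simulated: 0.10")
# The naive difference mixes the selection effect (younger, hence different
# y_true, respondents in the web mode) with the measurement effect, so it
# does not recover the 0.10 that was built in.
```

The naive between-mode comparison mixes the two effects, which is exactly why selection differences have to be ruled out (or adjusted for) first.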
I have composed a short presentation on ways in which differences in measurement effects in mixed-mode surveys can be studied. The full presentation is here. Comments are very welcome.
In going over the literature, two things stood out that I had never realised:
1. There are few well-conducted studies on measurement effects in mixed-mode surveys. Those that exist show that there often are differences in means, and sometimes in variances, of survey statistics. Yet no one (and I’d love to be corrected here) has looked at the effect on covariances. That is, do relations between the key variables in a study change just because of the mode of data collection? There may be an analogy to nonresponse studies, where we often find bias in means and variances, but much smaller biases in covariances. In the picture, this is reflected by the relation between x1 and y1 in the two different survey modes. Is that relation different because of mode effects? Probably not, but we need more research on this (see the sketch after this list for what I mean).
2. What to do about mode effects? We are perhaps not ready to answer this question, given how little we know about exactly how measurement differences between modes affect survey statistics. But we should start thinking about it in general terms. Can we correct for differences between modes? Should we even want to do that? It would put a huge extra burden on survey researchers to study mode differences in every mixed-mode survey and to design correction methods for them. Could it be that in five years' time we will have concluded that it is probably best to keep mode effects as small as possible and not worry about the rest?
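To illustrate the covariance point from item 1, here is a minimal simulation sketch (again my own illustration, with an assumed constant shift and extra measurement noise in one mode): such a mode effect moves means and inflates variances, but it need not change the covariance between x1 and y1.

```python
import numpy as np

# Hypothetical illustration: a mode effect that shifts means and inflates
# variances, but leaves the covariance between x1 and y1 intact.
rng = np.random.default_rng(42)
n = 100_000

# "True" values of two key survey variables, correlated at the latent level.
x1 = rng.normal(loc=5.0, scale=1.0, size=n)
y1 = 0.6 * x1 + rng.normal(loc=0.0, scale=0.8, size=n)

# Mode A: observed without measurement error (reference mode).
x1_a, y1_a = x1, y1

# Mode B: a constant shift plus extra independent measurement noise on both
# items -- assumed values, for illustration only.
shift, extra_sd = 0.3, 0.5
x1_b = x1 + shift + rng.normal(scale=extra_sd, size=n)
y1_b = y1 + shift + rng.normal(scale=extra_sd, size=n)

for label, (x, y) in {"mode A": (x1_a, y1_a), "mode B": (x1_b, y1_b)}.items():
    print(f"{label}: mean(x1)={x.mean():.2f}, var(x1)={x.var():.2f}, "
          f"cov(x1, y1)={np.cov(x, y)[0, 1]:.2f}")
# Means and variances differ between the modes, but the covariance stays at
# roughly 0.6 in both, mirroring the conjecture that relations between key
# variables may be more robust to mode effects than means or variances.
```

Real mode effects may of course be more complicated than a constant shift plus extra noise, which is exactly why more research on covariances is needed.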