AAPOR 2013

The AAPOR conference last week gave a good overview of what survey methodologists worry about. There were relatively few people from Europe this year, and I found that the issues methodologists worry about sometimes differ between Europe and the USA. At the upcoming ESRA conference, for example, there are more than 10 sessions on the topic of mixing survey modes. At AAPOR, mixing modes was definitely not ‘hot’.

With 8 parallel sessions at most times, I only saw bits and pieces of all the things that went on. So the list below is just my take on what’s innovative and hot in survey research in 2013. RTI composed a summary of all conference tweets, for a different take on what mattered at AAPOR this year.

1. Probability-based surveys vs. non-probability surveys. AAPOR published a report on this topic during the conference, written by survey research heavyweights. It is recommended reading for everyone interested in polls. The conclusion that non-probability polls should not be used if one wants a relatively precise estimate for the general population is not surprising, but it cannot be re-iterated often enough. Other presentations on this topic featured John Krosnick showing empirically that only probability-based surveys give consistent estimates. See a summary of the report here. The toy simulation below illustrates the basic problem with self-selected samples.
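To make the selection-bias argument concrete, here is a minimal, purely illustrative Python simulation. The population size, opt-in rates, and support levels are all made up; the point is only that when the chance of ending up in the sample is correlated with the outcome, the non-probability estimate is off no matter how large the sample.

```python
import random

random.seed(42)

# Hypothetical population: support for some policy depends on age, and
# younger people are (in this toy setup) both more supportive and more
# likely to opt in to a non-probability panel.
population = []
for _ in range(100_000):
    young = random.random() < 0.4
    supports = random.random() < (0.7 if young else 0.4)
    population.append((young, supports))

true_support = sum(s for _, s in population) / len(population)

# Probability sample: every person has the same known inclusion chance.
prob_sample = random.sample(population, 1_000)
prob_est = sum(s for _, s in prob_sample) / len(prob_sample)

# Non-probability sample: self-selection, young people opt in 3x as often.
volunteers = [p for p in population
              if random.random() < (0.03 if p[0] else 0.01)]
nonprob_sample = random.sample(volunteers, 1_000)
nonprob_est = sum(s for _, s in nonprob_sample) / len(nonprob_sample)

print(f"true support:           {true_support:.3f}")
print(f"probability sample:     {prob_est:.3f}")    # close to the truth
print(f"non-probability sample: {nonprob_est:.3f}")  # biased upward
```

Taking a bigger non-probability sample does not help here: the self-selection mechanism, not the sample size, drives the bias.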

2. The 2012 presidential elections. See a good post by Mark Blumenthal on this topic. There were many sessions on likely-voter models, shifting demographics in the U.S., and the rise of the cell-phone-only generation.

3. Responsive designs. The idea of responsive (or adaptive) survey designs is that response rates are balanced across important subgroups of the population. For example, in a survey on attitudes towards immigrants, it is important to get equal response rates for Hispanics, blacks, and whites if you believe that attitudes towards immigrants differ among ethnic subgroups.
During fieldwork, response rates can be monitored, and when response rates for Hispanics stay low, resources can be shifted towards targeting Hispanics, either by contacting them more often or by switching them to a more expensive contact mode. If this is successful, the amount of nonresponse bias in the survey should decrease (a minimal code sketch of this monitoring step follows below).
The idea of responsive designs has been around for about 15 years, but until now I had not seen many successful applications. A panel session by the U.S. Census Bureau did show that responsive design can work, but it requires survey organisations to redesign their entire fieldwork operations. For more information on this topic, see the excellent blog by James Wagner.
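As a rough sketch of what the monitoring step in a responsive design could look like in code: the case records, the subgroup labels, and the 0.5 target rate below are all assumptions for illustration, not anyone’s actual fieldwork system.

```python
# Toy sketch of the monitoring step in a responsive design.

TARGET_RATE = 0.5  # desired response rate per subgroup (assumed)

def response_rates(cases):
    """Compute the response rate per subgroup.

    `cases` is a list of dicts like
    {"group": "hispanic", "responded": True}.
    """
    totals, responded = {}, {}
    for case in cases:
        g = case["group"]
        totals[g] = totals.get(g, 0) + 1
        responded[g] = responded.get(g, 0) + case["responded"]
    return {g: responded[g] / totals[g] for g in totals}

def plan_interventions(cases):
    """Flag subgroups lagging behind the target rate, so extra effort
    (more contact attempts, a more expensive mode) can be directed
    at them during the remainder of fieldwork."""
    return [g for g, rate in response_rates(cases).items()
            if rate < TARGET_RATE]

cases = [
    {"group": "hispanic", "responded": False},
    {"group": "hispanic", "responded": False},
    {"group": "hispanic", "responded": True},
    {"group": "white", "responded": True},
    {"group": "white", "responded": True},
    {"group": "black", "responded": True},
    {"group": "black", "responded": False},
]
print(plan_interventions(cases))  # ['hispanic'] -> shift resources here
```

In a real survey this check would be re-run throughout fieldwork, which is exactly why it forces organisations to rebuild their fieldwork operations around continuous monitoring rather than a fixed protocol.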
