Around this time last year, I wrote a blog post about the take-home messages from the International Survey Non-Response Workshop in Oslo. Once a year, researchers, statisticians and academics from around the world get together for three days to discuss the issues surrounding participation in social surveys.
This year, I returned to the workshop, this time in the beautiful Dutch city of Utrecht, to see how the debate had moved on over the past 12 months.
Spoiler alert! I’ll start with the end: response rates are still falling in major household surveys across the world. On the final morning of the workshop, Annemieke Luiten from Statistics Netherlands delivered a sobering presentation on global trends in response rates, pulling together data from 25 countries over the past 25 years.
Despite huge increases in effort by data collection organisations, response rates are still falling, and we have not slowed the decline on aggregate (although we have identified some interventions that can help).
Discouraging as this was, it did put things into perspective. For those of us working in data collection, it can be disheartening to put in phenomenal effort and keep innovating, only to watch response rates continue to slide. These data corroborate our sense that the public’s willingness to take part in research has changed, not just in the UK, but across the world.
Response rate trends 1991-2015
Originality in adversity
So far, so familiar. But what was distinct this year, or perhaps drawn into sharper focus, was the variety and ingenuity shown by researchers attempting to stem this tide across face-to-face, telephone and online surveys.
Financial incentives remain the stock tool for increasing participation, with the most robust evidence base, and the workshop duly featured a number of presentations on them. But thinking has moved on to softer, cheaper interventions, many drawing on new technology or behavioural science.
I had an early slot on the first day of the workshop and presented a successful test we ran at NatCen to see whether text message reminders could boost participation in face-to-face surveys. It turned out that giving our interviewers this option led to them making more appointments and completing more interviews. We were really pleased that this simple, common-sense intervention made a positive impact.
While we had simply transferred a bit of technology we knew to work in other contexts into survey methods, others have developed more imaginative interventions. Marieke Haan from Utrecht University presented a test to understand how telephone interviewers’ conversational style might influence participation.
They trained interviewers to either use a ‘personal’ style (where they emphasised their personal interest in the research and were asked not to mention the university sponsor) or a ‘formal’ style (where they stressed the importance of the study to the university and science in general).
It turned out that interview style made no discernible difference to participation. However, when the research team looked a bit deeper, they noticed that quite a few interviewers had broken the rules and used arguments they were not supposed to use. Strikingly, these rule-breaking interviewers were among the most successful. It goes to show that while training and protocols are important, sometimes leaving things to the judgement and skill of interviewers can pay off.
In terms of encouraging online participation, Wojciech Jablonski from the University of Lodz in Poland gave a fascinating presentation on the use of humanising cues. So, what are humanising cues exactly? These are small details intended to make online surveys feel a bit more like an interaction with a real-life human being. The cues were applied at different levels of intensity, from a standard survey, to giving the ‘interviewer’ a name, then a photo, and then a video intro. As it turns out, people actually prefer online surveys just as they are; the more humanised the survey, the less likely people were to take part.
In ten years’ time?
I could finish by predicting the main theme at next year’s workshop, but I think I will take it a few steps further and consider the questions we might be asking in a decade’s time.
Will response rates still be falling? My expectation is that response rates will bottom out, and I think it will happen sooner than 2027. Whatever the case, I think that our attention will have shifted to ensuring representativeness in our samples, not simply maximising headline response rates. It’s up to the research community to ensure commissioners have widely accepted, easy-to-interpret indicators of sample quality.
How will we be encouraging participation? I imagine we will be as reliant, if not more so, on financial incentives. What will have changed is how intelligently we use them. NatCen has already overseen a successful implementation of targeted incentives to enhance sample quality on the British Social Attitudes Survey, and I expect approaches like this to become more widespread. Technology also offers another avenue for encouraging participation, from providing personalised feedback to sending prompts based on real-time participant behaviour.
Will face-to-face surveys be a quaint historical anachronism? There’s no doubt that face-to-face surveys will become rarer as internet and mobile penetration edge towards 100%. But there will still be instances where home visits are needed. For example, on several NatCen health surveys we collect physical data like blood samples. And while online data collection may become the conventional first step, it’s likely we will use face-to-face follow-ups to ensure coverage of hard-to-reach groups. There are many challenges ahead in working out how best to combine the skills of our field force with the many possibilities offered by technology.
Follow me on Twitter: @matthewcjonas