Understanding Society Survey Manager Jon Burton blogs on a series of presentations at this year’s conference on attrition.
A morning session, Survey Attrition, had prepared us for how to handle attrition in longitudinal surveys – through weighting or imputation – and how to characterise attrition among migrants and across the life of the British Household Panel Survey. The afternoon session looked at what can be done to reduce non-response in the first place.
The first presentation, by Mark Wooden, set out the reasons why we spend a disproportionate amount of effort chasing the hard-to-get cases.
The analyses presented showed that those who were hard-to-get differed on a number of characteristics from those who were either easy-to-get or non-responders. Using data from the Household, Income and Labour Dynamics in Australia (HILDA) Survey, he also demonstrated that a majority (around 70%) of those who were hard-to-get at one wave were easy-to-get at the next, suggesting that the level of ‘persistently hard-to-get’ respondents was relatively low.
Although those who were hard-to-get were more likely not to respond at the next wave, most of them did take part (80%+ participated at the next wave). However, there may be some implications for data quality: those hard-to-get at a particular wave were more likely than the easy-to-get to not answer income questions, or to round to the nearest $1,000 when reporting wages or pension income.
The hard-to-get were also less likely to complete the self-completion portion of the interview. However, since most of the hard-to-get were easy-to-get at the next wave, it was worth the effort to keep them in the sample.
The second presentation, by Oliver Tatum and Angie Osborn from the Office for National Statistics, used an experiment on the Wealth and Assets Survey to show the effect of a keep-in-touch exercise (KITE) on response and updated information on movers.
The KITE was generally carried out 4 months before the interview – 20 months after the previous interview. Oliver and Angie experimented with this KITE: one quarter of the sample received the standard telephone catch-up, one quarter received a new newsletter which asked respondents to update their details, one quarter received both the telephone catch-up and the newsletter, and the final quarter received nothing.
They found that the telephone KITE was effective for identifying movers and tracing them to their new location. There was an additional positive effect on final response rates for those who received both the telephone and newsletter KITE. However, the timing of the newsletter (which ranged from 4 to 9 months before the interview) had no effect.
The final presentation, by Richard Boreham of NatCen Social Research, used data from the fifth wave of the Understanding Society Innovation Panel mixed-mode experiment to create a model which would optimise the allocation of households to a web-based survey (CAWI).
There was a trade-off: allocating more households to CAWI lowered the web response rate, as more of the households who were less likely to participate online were included. Likewise, the more households allocated to CAWI, the lower the response rate achieved by the face-to-face interviewers, who would be left with the harder households.
This model was used to predict a range of allocations which would optimise the combined response rate across the web and face-to-face interviews.
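To make the trade-off concrete, here is a toy sketch of an allocation model of this general kind. It is not the NatCen model: every number and propensity below is hypothetical and purely illustrative. Each simulated household gets an assumed web and face-to-face response propensity, the households most likely to respond online are allocated to CAWI first, and we scan allocation shares to find the one giving the highest expected overall response rate.

```python
# Toy illustration (NOT the NatCen model): hypothetical response
# propensities, used only to show the allocation trade-off described above.
import random

random.seed(0)

# Assumed propensities: households keen on the web tend to be easier
# cases in either mode, which is what creates the trade-off.
households = []
for _ in range(1000):
    p_web = random.random()                 # hypothetical web propensity
    p_f2f = 0.5 + 0.4 * p_web               # hypothetical face-to-face propensity
    households.append((p_web, p_f2f))

# Sort so the best web prospects are allocated to CAWI first.
households.sort(key=lambda h: h[0], reverse=True)

def expected_response_rate(share_to_web):
    """Expected overall response rate if the top `share_to_web` fraction
    (by web propensity) goes to CAWI and the rest go face-to-face."""
    k = int(share_to_web * len(households))
    total = sum(p_web for p_web, _ in households[:k])        # web allocation
    total += sum(p_f2f for _, p_f2f in households[k:])       # F2F allocation
    return total / len(households)

# Scan candidate allocation shares for the best overall response rate.
best_rate, best_share = max(
    (expected_response_rate(s / 10), s / 10) for s in range(11)
)
```

In this toy setup, sending everyone to the web is worst (many households have low web propensity), sending no one misses the households who respond better online, and the optimum lands somewhere in between, which is the shape of the trade-off the presentation described.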