I want to draw attention to a study recently published in the New Zealand Medical Journal on non-response bias to a national hospital inpatient experience survey run using Cemplicity technology.
The study was conducted by Michael Thomson, Megan Pledger, Richard Hamblin, Jackie Cumming and Essa Tawfiq (2018) and can be read in full here.
Response rates are one of the most common topics of discussion we have with clients. Getting a good response rate to a survey is critical for two reasons. Firstly, to ensure the sample of respondents is large enough to be statistically reliable. Secondly, to ensure the results are representative of the population that has been surveyed.
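To make the first point concrete, here is a minimal sketch of the standard margin-of-error calculation for a proportion, using the conservative assumption p = 0.5 and a 95% confidence level. The function name and the example figures are illustrative only, not taken from the study.

```python
import math

def margin_of_error(n: int, p: float = 0.5, z: float = 1.96) -> float:
    """Approximate 95% margin of error for a proportion estimated from n responses."""
    return z * math.sqrt(p * (1 - p) / n)

# Illustrative only: roughly 400 respondents gives about a +/-5-point margin,
# which is why the absolute number of responses matters as much as the rate.
print(round(margin_of_error(400), 3))
```

Quadrupling the sample halves the margin, which is why reaching as many patients as possible pays off even when the percentage response rate is unchanged.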
However, there is much debate about how big a response rate is big enough. Cemplicity’s approach of trying to reach as many patients as possible often helps solve the problem of overall sample size. But what percentage response rate is acceptable? And, importantly, do the opinions of people who respond differ from those of people who do not?
The specific answer to this question may vary across contexts, but we are reassured by the results of this recent publication, which found no significant difference between the opinions of those who responded and those who did not, despite some differences in sociodemographic characteristics. This is the first study we have seen in which non-respondents to a national survey were resurveyed by phone. The follow-up survey repeated some of the questions from the original survey and explored why people did not respond in the first place.
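The comparison the study makes can be illustrated with a standard two-proportion z-test. The counts below are entirely hypothetical, chosen only to show the shape of the calculation; they are not figures from the publication.

```python
import math

def two_proportion_z(x1: int, n1: int, x2: int, n2: int) -> float:
    """Z statistic for the difference between two independent proportions (pooled)."""
    p1, p2 = x1 / n1, x2 / n2
    pooled = (x1 + x2) / (n1 + n2)
    se = math.sqrt(pooled * (1 - pooled) * (1 / n1 + 1 / n2))
    return (p1 - p2) / se

# Hypothetical counts: 780 of 1,000 respondents vs 150 of 200 resurveyed
# non-respondents rating their care positively. |z| < 1.96 means the
# difference is not significant at the 5% level.
z = two_proportion_z(780, 1000, 150, 200)
print(round(abs(z), 2))
```

With these made-up numbers the statistic falls well inside the non-significance region, which is the pattern the study reports for opinions, even though respondent demographics differed.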
For people responsible for measuring and reporting patient experiences of care in public settings, this will be interesting reading. For Cemplicity and our clients, it gives us confidence in our methodology and in the acceptability of the response rates we are achieving.