“If we don’t have a response rate of 80% or more, I’m not going to use the results. The people who don’t respond might be quite different to the ones that do.” (Anon.)

Welcome to our world of response rate discussions in healthcare settings, where every patient voice matters.

Our clients are experts in running health services and are dedicated to achieving good care experiences and good outcomes for their patients. (I’ve never worked in an industry where our clients care so deeply about their ‘customers’, and it’s one of the main reasons we love what we do.)

However, when it comes to response rates, knowledge and opinions vary widely about what constitutes an acceptable one. So we have written a paper on just this topic – ‘Determining good and statistically significant response rates’ – answering questions such as ‘what is a good response rate compared with healthcare averages?’ and ‘when are results safe to use, and when should you be cautious?’.

One of the remarkable things about our clients is that they generally want every single patient to give feedback. That’s why they choose Cemplicity – because we implement continuous surveying and think carefully about patient engagement as well as research.

No one would be happier than us to achieve 100% participation in our surveys but there is an endless list of reasons why this isn’t achievable. This is when statistics are important.

In a past life, working in market research, we’d often set up sample quotas so that the demographic profile of the final survey data set closely reflected the population we were measuring. Once we had, say, enough men aged 18-29, we’d shut that quota and any further men in that age group who tried to take part would find the survey closed. When we used public research panels for these projects, the response rates were very poor and you’d plug away with incentives and reminders until you achieved the targets. Because the final dataset was ‘representative’ everyone was happy to use the results, despite the poor response rate.
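The quota mechanics described above can be sketched in a few lines. This is a minimal illustration, not any real fieldwork system; the cell names and targets are made up.

```python
# Illustrative quota-sampling sketch: each demographic cell has a target
# count, and once a cell fills, further respondents in it see a closed survey.
quotas = {("male", "18-29"): 50, ("female", "18-29"): 50}  # target per cell
counts = {cell: 0 for cell in quotas}

def try_admit(gender: str, age_band: str) -> bool:
    """Admit a respondent only while their quota cell is still open."""
    cell = (gender, age_band)
    if cell not in quotas or counts[cell] >= quotas[cell]:
        return False  # this respondent finds the survey closed
    counts[cell] += 1
    return True
```

Once every cell hits its target, the final dataset matches the planned demographic profile regardless of how many people were invited along the way, which is why a poor response rate mattered less in that world.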

Our healthcare work takes a different approach. We want every single person through the service to know that their view is important, and we care about their experience of care or their health outcomes. Using mixed mode surveying (commonly email, SMS, QR codes and WhatsApp) we try to contact everyone. If there are groups of people who cannot be reached digitally, our clients may resort to interviews or paper surveying as well.

Receiving healthcare is a ‘high involvement’ interaction (compared to buying a burger or shopping at a hardware store) so patients are more likely to take part and to work their way diligently through quite long surveys. You’d be amazed at the richness and depth of the comments that people provide, especially when they are reminded that their feedback will help the people who come after them.

With this mixed-mode approach and respondents’ high involvement, we consistently achieve good response rates (up to 63% for PREMs), which results in large sample sizes. Clients can then segment and analyse their results with good confidence that the samples are safe to work with.
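To make “safe to work with” concrete, here is a minimal sketch (not taken from the whitepaper) of the standard margin-of-error calculation for a proportion, with a finite population correction. The patient numbers are purely illustrative.

```python
import math

def margin_of_error(sample_size: int, population: int,
                    confidence_z: float = 1.96, p: float = 0.5) -> float:
    """Approximate margin of error for a proportion at 95% confidence
    (z = 1.96), using the conservative p = 0.5 and a finite population
    correction for small populations."""
    standard_error = math.sqrt(p * (1 - p) / sample_size)
    fpc = math.sqrt((population - sample_size) / (population - 1))
    return confidence_z * standard_error * fpc

# e.g. 630 responses from 1,000 discharged patients (a 63% response rate)
print(round(margin_of_error(630, 1000) * 100, 1))  # → 2.4 percentage points
```

At that kind of sample size the margin of error is small enough that results remain usable even after segmenting by ward, demographic group or time period.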

We are always conscious that the people who haven’t responded may hold different views from those who have, but this is no reason not to act on the feedback you have received, provided the profile of respondents spans the demographics of your patient population. (Non-response bias in patient experience surveying is a challenging area to explore, but one published paper on the topic, from the NZ national inpatient experience survey, found no significant difference between the experiences of people who responded and those who didn’t.)

It’s a fascinating area and if you’d like to read more, click here to download our Whitepaper “Determining good and statistically significant response rates”.
