We get a lot of questions about when and how to send surveys (e.g., survey timing post-discharge, email vs SMS), which questions to use (e.g., general or condition-specific validated question sets) and what surveys should look like to be mobile- and patient-friendly. But above all else, the two questions we hear most often are: “What kind of response rates do you get?” and “What does a good response rate look like?”

Now, these are fair questions. Considerable research has been devoted to understanding the importance of response rates and the extent to which factors like selection bias and non-response bias can influence the validity of survey data [1-7]. Not all authors agree on their importance, however: recent research and meta-analyses have found low response rates to be a poor predictor of non-response bias, and have found only modest differences between the feedback submitted by responders and non-responders [8,9].
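
To see why a low response rate does not by itself imply biased results, it helps to write out the standard decomposition of non-response bias [2,9]: the bias in the respondent mean is roughly the non-response rate multiplied by the difference between responders and non-responders. Here is a minimal sketch of that calculation, using purely hypothetical satisfaction scores:

```python
# Non-response bias decomposition: the bias of the respondent mean is
# approximately (proportion of non-responders) * (respondent mean - non-respondent mean).
# All numbers below are hypothetical, for illustration only.

def nonresponse_bias(response_rate: float, mean_resp: float, mean_nonresp: float) -> float:
    """Approximate bias of the respondent mean."""
    return (1 - response_rate) * (mean_resp - mean_nonresp)

# A 30% response rate with similar groups biases the mean far less
# than an 80% response rate with very different groups (0-10 scale).
print(nonresponse_bias(0.30, mean_resp=8.0, mean_nonresp=7.9))  # ≈ 0.07
print(nonresponse_bias(0.80, mean_resp=8.0, mean_nonresp=6.0))  # ≈ 0.40
```

In other words, what matters is how different non-responders are from responders, not the response rate alone.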

Much has already been written about these topics elsewhere, and we have previously outlined the value of Patient-Reported Outcome Measures (PROMs) in clinical settings, so I won’t discuss them further here. Instead, I would like to highlight several examples where PROMs programmes, implemented with thoughtful strategies for capturing patient responses, have achieved high response rates and patient uptake, and to show that 95%+ participation is indeed possible.

The importance of going digital

When it comes to collecting PROMs, their benefits to clinical care are already widely accepted [10-12]. They improve the patient experience, enable early screening and are incredibly informative for clinicians [13-16]. However, capturing data at regular intervals has proved challenging in some settings, and response rates have consequently been highly variable [17,18].

While numerous studies have compared the success of PROMs captured via paper versus web (i.e., digital) surveys, their focus is often narrow and does not adequately address the issues inherent in traditional, paper-based PROMs or the cost of managing such an approach [19,20].

We facilitate the collection of PROMs through an approach informed by our own experience and by the ever-growing body of literature, which shows that carefully considered strategies, coupled with a digital-first focus, can dramatically improve response rates while reducing ongoing costs and improving efficiency.

Overcoming the barriers to high response rates

Studies have already shown that focusing on digital mechanisms, such as email and SMS, can achieve response rates of up to 97%, and that these can be increased further by supplementing the process with on-site options. Instead of traditional paper forms, which add cost and effort, we provide the on-site option through our platform running on tablet devices [21].

Similar studies have found that staff education, real-time monitoring, and communicating to patients how their PROMs feed into their treatment lead to response rates of 95% [22]. These results should not be viewed as outliers, but rather as examples of what can be achieved when a system is integrated seamlessly into a clinical setting and openly encourages active patient participation.
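
To make “real-time monitoring” concrete, here is a minimal sketch of one rule such a system might apply: flagging any patient whose latest score has dropped sharply since their previous response. The two-point threshold, the 0-10 scale and the function name are illustrative assumptions, not a published algorithm.

```python
# A sketch of real-time monitoring: flag patients whose most recent
# PROM score fell sharply relative to their previous response.
# The 2-point threshold on a 0-10 scale is an illustrative assumption.

def flag_deteriorations(scores_by_patient: dict[str, list[float]],
                        drop_threshold: float = 2.0) -> list[str]:
    """Return IDs of patients whose latest score dropped by at least
    `drop_threshold` compared with the response before it."""
    flagged = []
    for patient_id, scores in scores_by_patient.items():
        if len(scores) >= 2 and scores[-2] - scores[-1] >= drop_threshold:
            flagged.append(patient_id)
    return flagged

# Example: patient "b" dropped from 8 to 5 and would be surfaced to staff.
print(flag_deteriorations({"a": [6.0, 7.0, 7.0], "b": [7.0, 8.0, 5.0]}))  # ['b']
```

Alerts like this are what turn a PROMs programme from passive data collection into something clinicians can act on between visits.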

At Cemplicity, we use a combination of the above to achieve our results. We:
- reach patients electronically through several modes and offer multi-lingual options;
- send carefully timed reminders via email and SMS;
- use real-time monitoring to alert staff to changes in a patient’s feedback from one response to the next; and
- let staff review who has responded before the next scheduled survey, so that compliance remains high.
This is what is needed to succeed and to make 95%+ possible; a simplified sketch of the reminder logic follows below.
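
As an illustration only, here is a minimal sketch of what carefully timed reminders and a pre-survey compliance check might look like. The intervals (an email nudge at day 3, an SMS fallback at day 7), the field names and the data model are assumptions made for the sketch, not our actual configuration.

```python
from dataclasses import dataclass
from datetime import datetime, timedelta

@dataclass
class Invite:
    patient_id: str
    sent_at: datetime
    responded: bool = False
    reminders_sent: int = 0   # the caller records each reminder once sent

def due_reminders(invites: list[Invite], now: datetime) -> list[tuple[str, str]]:
    """Return (patient_id, channel) pairs for reminders that are due:
    an email nudge after 3 days, then an SMS fallback after 7."""
    due = []
    for inv in invites:
        if inv.responded:
            continue
        age = now - inv.sent_at
        if age >= timedelta(days=7) and inv.reminders_sent == 1:
            due.append((inv.patient_id, "sms"))
        elif age >= timedelta(days=3) and inv.reminders_sent == 0:
            due.append((inv.patient_id, "email"))
    return due

def non_responders(invites: list[Invite]) -> list[str]:
    """List patients for staff to review before the next scheduled survey."""
    return [inv.patient_id for inv in invites if not inv.responded]
```

The point of the sketch is the shape of the workflow: reminders are driven by elapsed time and stop as soon as a patient responds, and staff always have a current list of non-responders to follow up before the next survey goes out.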

[1] Hartge P. Raising response rates: Getting to yes. Epidemiology. 1999;10:105–107.
[2] Groves RM. Nonresponse rates and nonresponse bias in household surveys. Public Opinion Q. 2006;70:646–675.
[3] Draugalis JR, Plaza CM. Best practices for survey research reports revisited: Implications of target population, probability sampling, and response rate. Am J Pharm Educ. 2009;73:142.
[4] Johnson TP, Wislar JS. Response rates and nonresponse errors in surveys. J Am Med Assoc. 2012;307:1805–1806.
[5] Mazor KM, Clauser BE, Field T, Yood RA, Gurwitz JH. A demonstration of the impact of response bias on the results of patient satisfaction surveys. Health Serv Res. 2002;37(5):1403–1417. https://doi.org/10.1111/1475-6773.11194
[6] Tyser AR, Abtahi AM, McFadden M, et al. Evidence of non-response bias in the Press-Ganey patient satisfaction survey. BMC Health Serv Res. 2016;16:350. https://doi.org/10.1186/s12913-016-1595-z
[7] Compton J, Glass N, Fowler T. Evidence of selection bias and non-response bias in patient satisfaction surveys. Iowa Orthop J. 2019;39(1):195–201.
[8] Groves RM. Nonresponse rates and nonresponse bias in household surveys. Public Opinion Q. 2006;70:646–675.
[9] Groves RM, Peytcheva E. The impact of nonresponse rates on nonresponse bias: A meta-analysis. Public Opinion Q. 2008;72:167–189.
[10] Black N. Patient reported outcome measures could help transform healthcare. BMJ. 2013;346:f167. https://doi.org/10.1136/bmj.f167
[11] Davis JC, Bryan S. Patient Reported Outcome Measures (PROMs) have arrived in sports and exercise medicine: why do they matter? Br J Sports Med. 2015;49(24):1545–1546. https://doi.org/10.1136/bjsports-2014-093707
[12] Weldring T, Smith SMS. Patient-Reported Outcomes (PROs) and Patient-Reported Outcome Measures (PROMs). Health Serv Insights. 2013;6:61–68.
[13] Greenhalgh J, Meadows K. The effectiveness of the use of patient-based measures of health in routine practice in improving the process and outcomes of patient care: a literature review. J Eval Clin Pract. 1999;5(4):401–416.
[14] Skevington SM, Day R, Chisholm A, Trueman P. How much do doctors use quality of life information in primary care? Testing the trans-theoretical model of behaviour change. Qual Life Res. 2005;14(4):911–922.
[15] Chen J, Ou L, Hollis SJ. A systematic review of the impact of routine collection of patient reported outcome measures on patients, providers and health organisations in an oncologic setting. BMC Health Serv Res. 2013;13:211. https://doi.org/10.1186/1472-6963-13-211
[16] Chow A, Mayer EK, Darzi AW, Athanasiou T. Patient-reported outcome measures: the importance of patient satisfaction in surgery. Surgery. 2009;146(3):435–443. https://doi.org/10.1016/j.surg.2009.03.019
[17] Hutchings A, Neuburger J, Grosse Frie K, Black N, van der Meulen J. Factors associated with non-response in routine use of patient reported outcome measures after elective surgery in England. Health Qual Life Outcomes. 2012;10:34. https://doi.org/10.1186/1477-7525-10-34
[18] Schamber EM, Takemoto SK, Chenok KE, Bozic KJ. Barriers to completion of patient reported outcome measures. J Arthroplasty. 2013;28(9):1449–1453. https://doi.org/10.1016/j.arth.2013.06.025
[19] Pronk Y, Pilot P, Brinkman JM, van Heerwaarden RJ, van der Weegen W. Response rate and costs for automated patient-reported outcomes collection alone compared to combined automated and manual collection. J Patient Rep Outcomes. 2019;3(1):31. https://doi.org/10.1186/s41687-019-0121-6
[20] Triplet JJ, Momoh E, Kurowicki J, Villarroel LD, Law TY, Levy JC. E-mail reminders improve completion rates of patient-reported outcome measures. JSES Open Access. 2017;1(1):25–28. https://doi.org/10.1016/j.jses.2017.03.002
[21] Nielsen LK, King M, Möller S, et al. Strategies to improve patient-reported outcome completion rates in longitudinal studies. Qual Life Res. 2020;29:335–346. https://doi.org/10.1007/s11136-019-02304-8
[22] Wang K, Eftang CN, Jakobsen RB, et al. Review of response rates over time in registry-based studies using patient-reported outcome measures. BMJ Open. 2020;10:e030808. https://doi.org/10.1136/bmjopen-2019-030808

Making Patient Outcome Measures work

Download our paper to learn how to run programmes for impact vs. compliance.
