Managing an Acute Care Continuum physician services organization that sees approximately five million patient encounters annually requires data. Lots of data. This includes information on meeting defined clinical guidelines, such as the Centers for Medicare & Medicaid Services (CMS) performance metrics, as well as data on resource utilization, practitioner productivity, patient flow, quality metrics, and risk.
Physicians are trained to first “do no harm,” and we typically don’t adopt medical practices until research has demonstrated a “statistically significant” improvement in clinical outcomes. We expect and demand a high degree of certainty that recommended changes in our clinical practices will make a meaningful difference in patient care and outcomes. Our practice lexicon is peppered with terms such as “triple-blinded,” “evidence-based” and “meta-analysis.”
However, we often can’t, and shouldn’t, apply this same methodology of data analysis to practice issues pertaining to operational improvements. Here it is not only accepted, but necessary, to rely on an approach more akin to a “case study.” One such area of our practice is improving the patient experience.
For the past fourteen years, Vituity has utilized our own patient experience survey, in addition to Press-Ganey and other hospital surveys. Patient satisfaction data not only tells us how well we are meeting patient needs and expectations, but also can be a leading indicator of overall emergency department performance. Patient satisfaction is dependent on such factors as room cleanliness, wait times, perceived provider competence, team relationships, and effectiveness of communication.
Unlike controlled clinical trials, patient satisfaction survey responses are not random and are certainly not blinded, and given the number of responses, they often do not reach “statistical significance,” at least not at the 95 percent confidence level.
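To see why modest survey volumes rarely clear the 95 percent bar, consider a minimal sketch (using illustrative numbers, not actual survey data) of a normal-approximation confidence interval for a satisfaction proportion:

```python
import math

def satisfaction_ci(positive, n, z=1.96):
    """Normal-approximation confidence interval for a satisfaction
    proportion; z=1.96 corresponds to 95 percent confidence."""
    p = positive / n
    margin = z * math.sqrt(p * (1 - p) / n)
    return p - margin, p + margin

# Hypothetical month: 40 of 50 respondents rate their visit favorably.
low, high = satisfaction_ci(40, 50)
print(f"{low:.2f} to {high:.2f}")  # roughly 0.69 to 0.91
```

With only 50 responses, the interval spans more than 20 percentage points, so a real month-over-month change can easily hide inside it. That is the statistical reality behind small survey samples, and it is exactly why waiting for significance before acting operationally can mean waiting a very long time.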
Because of this, I occasionally hear physicians questioning the validity of patient satisfaction surveys. This is a mistake. Surveys that are neither random nor weighted can still provide useful information regarding important issues that require addressing, and can shed light on what or who should be singled out for praise and why. They can generate understanding about patient perceptions and how they vary by practice location, season of the year, time of day, or even presenting medical condition. They can give insight into what is working well and what types of problems are most significant – precisely the data needed to help manage our clinical practices.
While statistical significance is often necessary for the development of clinical algorithms, it is not a prerequisite for operational significance, and waiting for such data may delay valuable operational improvements. In much of business, and life, 95 percent confidence is not required in order to glean important information. Operational decisions are often based on the impact of failing to intervene rather than on the statistical significance of the occurrence.
Even in medicine, we often make changes based on exception reporting and not statistical significance. Risk, patient safety and quality cases are all examples of single events used as leading indicators of possible areas for improvement. For example, when a physician misses tombstone T-waves, we don’t need to go back and look at all of that practitioner’s ECGs to determine whether that practitioner needs intervention. If someone misses a single ECG interpretation, we may find that one error important enough to intervene. Similarly, implementing process improvement as a result of a patient survey response is a reflection of how highly we value the patient’s perception of care. Survey information can give strategic guidance to improving the organization before problems reach statistical significance.
I recall reading, a number of years ago, a study of top business executives which concluded that at more successful companies, executives made decisions based on a general understanding of their business and environment. On the flip side, companies that fared comparatively worse were those whose executives deferred decisions until more definitive data became available. (I have no idea whether that study was statistically significant.) As medical managers, I think most of us are not surprised by that result. Let’s not wait for Godot before implementing process changes to improve the patient experience.