Quality Chasm? An Alternative Interpretation of ACEP's Emergency Medicine Report Card


Prentice Tom, MD

Vituity Futurist

Published February 14, 2014

On Jan. 16, 2014, ACEP released the latest version of its National Report Card on the State of Emergency Medicine. The report purports to rate the nation and individual states in five areas:

  • Access to Emergency Care
  • Quality & Patient Safety Environment
  • Medical Liability Environment
  • Public Health & Injury Prevention
  • Disaster Preparedness

 

Overall, the report awarded our country's emergency medical system low marks. Findings were announced in a press release titled "Nation's Grade Drops to a Dismal 'D+' for Failure to Support Emergency Patients."

In the area of "Quality & Patient Safety Environment," the report gave our nation a grade of "C" (down from a "C+" in 2009). Two-thirds of this grade is based on state systems criteria such as funding for emergency medical services (EMS), the presence of a state-level EMS medical director and mandated reporting of adverse events. The remaining third is based on institution-level indicators, including factors such as the use of computerized order entry and the number of Joint Commission sentinel events.

As a practicing emergency physician for over 20 years and Chief Medical Officer for Vituity, a physician-services partnership seeing over 5,000,000 patients annually, I respectfully but strongly disagree with this report's conclusions. I believe it greatly understates both hospitals' ability to provide necessary emergency medical care and the quality of the care they deliver.

To say that the safety and quality of American emergency departments deserve a "C" rating defies not only my experience but (I believe) the experience of a large majority of patients who rely on these services. I am confident that if I were suffering an acute myocardial infarction, having a stroke or in respiratory distress from any number of conditions such as asthma, COPD, pneumonia or CHF, I could walk into almost any ED in the United States and receive excellent, state-of-the-art medical care. I would have access to some of the most sophisticated medical technology available and would receive treatment that meets the standard of practice. And for truly emergent conditions, this care would typically be delivered within minutes of my arrival.

I work in a "typical" emergency department. At my hospital, we have stroke teams, sepsis/shock teams and regularly get our myocardial infarction patients from the door to the cath lab within 45 minutes. Patients have almost immediate access to technologies such as MRI, CT and ultrasound. We have 24/7 access to intensivists, surgeons and other specialists, plus the ability to provide rapid and appropriate care for all life-and-limb-threatening emergencies. And we are not alone.

I am not saying that ED crowding is not a problem or that care quality and access could not be improved. Like the authors of the report, I believe that emergency departments play a vital role in healthcare delivery, not only for our sickest citizens but also as a safety net providing episodic primary care for many who would otherwise be without access. I fully acknowledge that patients sometimes wait hours to access care through a crowded emergency department, and there are certainly instances in which patients did not receive timely care and suffered significant adverse outcomes as a result. But such events are fairly rare, and the large majority of emergency patients receive both timely and appropriate care. Our data suggest that most emergency patients see a healthcare provider within minutes of arrival and that total time spent in the emergency department is usually less than four hours.

So I am prompted to wonder why the authors would paint such a negative picture of emergency medical care in the U.S. By giving poor marks to our emergency services system and emergency departments, are the authors trying to draw attention to issues affecting the delivery of emergency care in the United States? Are they trying to increase the resources available for emergency medical services by emphasizing areas of deficiency? I don't believe the way to draw attention to the issues facing emergency medical care is to paint a falsely negative picture of the state of our EMS system. This approach is misguided; it could erode confidence in our emergency care system and prove deleterious to the field of emergency medicine.

Below are some of my concerns with the methodology used in this report.

1. Lack of data relating to clinical quality or patient outcomes. The "Quality & Patient Safety Environment" indicators encompass very little clinical data and no outcome data at all. Although we have made real progress toward defining clinical quality indicators, of the dozens of measures used in this report, only two are clinical quality measures: the percentage of AMI patients given PCI within 90 minutes of arrival and the percentage of AMI patients given aspirin within 24 hours.

The report is full of structural and operational process measures, but it lacks any real assessment of either clinical quality (e.g., What percentage of patients with pneumonia went home from the ED with the appropriate medications? What percentage of patients with a surgical abdomen were diagnosed on the first ED visit?) or patient outcomes (e.g., How long did it take patients with an ankle fracture to return to work? What was the mortality rate for patients with heart attacks? What was the long-term functional outcome of stroke patients?).

It is impossible to make any meaningful judgments about healthcare safety and quality without knowing the end result of that care.

2. Overreliance on systems and process data that may not reflect the quality of care. If we accept that patient outcomes (and, to a lesser degree, physicians' clinical practices) are the most meaningful measures of "safety and quality" available, we must question ACEP's decision to give the "state systems" criterion twice the weight of the institution-level indicators, so that it accounts for 66 percent of the safety and quality grade.
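To make the weighting concrete, here is a minimal sketch in Python of how a two-thirds/one-third split lets systems measures dominate the composite even when the clinical indicators are strong. The subscores and the simple weighted average below are my own illustration, not ACEP's actual scoring formula (which is laid out on page 129 of the report):

```python
# Hypothetical illustration only: the subscores are invented, and ACEP's
# actual scoring methodology is more detailed than a single weighted average.
def composite_grade(state_systems: float, institution: float) -> float:
    """Weighted composite using the two-thirds / one-third split described above."""
    return (2 / 3) * state_systems + (1 / 3) * institution

# A strong institution-level (clinical) score of 90 is pulled down to a
# middling composite by a weak state-systems score of 55.
print(round(composite_grade(55, 90), 1))  # 66.7
```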

While system improvements are worthy aims and can certainly enhance safety and quality in some cases, they don't constitute "quality and safety." Having a statewide EMS medical director or a prescription drug-monitoring program in place certainly has value. However, the existence of such resources and programs is no guarantee of quality or safety.

I believe that the emergency medical teams throughout the United States do an exceptional job of ensuring both the quality and safety of our emergency care, and I believe that more rigorous outcome data would strongly support this position.

3. The grading system used in this report is misleading, and the authors have assigned subjective grades under the guise of a quantitative methodology. ACEP lays out its formula for arriving at individual state grades on page 129 of the report. But is it really meaningful to assign a single letter grade for all of the indicators in the "Quality & Patient Safety Environment" criterion (or any criterion)? Not only is the selection of criteria questionable, but the authors have also attempted to lend "scientific validation" to a subjective process by applying a quantitative methodology to it.

4. Many measures are not related to emergency medicine, but all the public will remember is the "D+." Many of the measures used in this report are no doubt valuable in reviewing our nation's health (e.g., the percentage of adults who smoke or the percentage of adults with a BMI over 30). However, these are not measures of the "State of Emergency Medicine." Few will read the entire report closely enough to understand that the authors are assessing a wide range of public health issues, yet the authors incorporate these measures into their overall grade for emergency medicine. What much of the public will remember is that our professional organization, ACEP, rated emergency medicine in the U.S. as a "D+."

In summary, I wholly support ACEP's goal of strengthening our country's emergency care system and drawing attention to the many issues EDs and emergency physicians face. And some of my colleagues would argue that a negative message will have more impact on policymakers than a measured one, or that such a message is a key defensive measure against further cuts to the system.

But I disagree that characterizing America's emergency care system as "barely adequate" or "close to failure" is fair or appropriate. There is a real risk that such an approach could erode our profession's long-term credibility or undermine public faith in our system. It also diminishes the credit and respect that the many dedicated emergency physicians, nurses, paramedics and others deserve for the exceptional job they do day in and day out.
