Here's a dirty little secret that I'll share with you: the clinical usability of current-generation electronic medical record (EMR) systems is nothing short of atrocious.
If the Geneva Conventions' prohibition of torture extended to healthcare information technology (HIT), most vendors would be out of business and behind bars.
But you probably already knew that: a November 2013 article in the American Journal of Emergency Medicine (AJEM) found that community emergency physicians spend 44 percent of their time interacting with EMRs and click up to 4,000 times in a 10-hour shift.
A follow-up in Emergency Physicians Monthly extrapolated the AJEM data and pegged the cost of those clicks at $168,000 per year in lost physician productivity for a typical emergency department (ED).
Little wonder then that a recent report commissioned by the American Medical Association and authored by the RAND Corporation found that "…for many physicians, the current state of [EMR] technology appeared to significantly worsen professional satisfaction in multiple ways."
How did we get to the point where we click away unhappily every nine seconds of every single clinical shift? Here are some relevant facts:
- Clinical EMR systems grew, for the most part, out of practice management and billing systems. Clinical record-keeping systems were bolted on to these financial systems ex post facto.
- Adoption of these clinical systems was fairly slow for many years, and spiked when the federal government introduced financial incentives for the "Meaningful Use" (MU) of EMRs.
- Meaningful Use is defined by the Centers for Medicare & Medicaid Services (CMS) and is a moving target that increases in complexity and stringency over time. (Whether these criteria are truly clinically meaningful is a debate that is beyond the scope of this article.)
- Vendors have been sprinting like mad to add the requisite MU functionality to their systems so that their customers become (and remain) eligible for incentive dollars. This ties up their software engineers so that building in clinical usability becomes an afterthought to ticking off the MU boxes. In a very real sense, MU has become both the driving force for EMR adoption, and one of the main reasons why EMRs remain so painful for their users.
- The MU push aside, most clinical systems are designed by non-clinicians, and implemented by software engineers with limited knowledge of clinical workflows. So the CBC you're trying to look up gets buried behind six mouse clicks, and if you want to compare it to last visit's CBC, that'll cost you another four clicks.
But is this current state of affairs the way it has to be? Is a future of EMR drudgery a foregone conclusion?
I strongly believe the answer to both questions is "no."
We're in a kind of gangly, pimply, awkward-teen phase in the evolution of EMR technology, a phase which perpetually annoys its clinician parents, but which fortunately will not (must not!) last forever.
So whence will come our deliverance from these nine circles of adolescent EMR hell?
The answer lies in a field known as "human factors engineering" (HFE), whose focus is designing technology to serve the humans who use it. This represents a figure-ground reversal of the metaphor underlying many current HIT systems, with their implicit "my way or the highway" hubris.
HFE has long been employed in the design of many types of complex systems, such as the cockpits of jetliners and fighter planes, where the consequences of poor design are potentially disastrous.
In a healthcare context, HFE takes into account the complex physical and intellectual tasks that practicing clinicians are charged with and designs systems to help us perform them, while minimizing avoidable risk, reducing cognitive overload and optimizing outcomes. Well-designed systems allow our attention to remain focused where it belongs, i.e., on taking care of patients.
Incorporating these human factors into the design of complex HIT systems focuses on several areas:
- Relevant information should be easily accessible. If the information is important enough, push it to the user rather than having them pull it from the EMR.
- Information should be clearly presented, with the most relevant information highlighted and clutter avoided. Dashboards are useful for this.
- Use alerts judiciously. Over-alerting is self-defeating: if everything sets off an alarm, how do you discriminate what truly needs your attention from what's merely informational? You don't.
- Make use of context. Location, time of day, patient lists, appointments and calendar items can all be used to customize the information being presented.
- Use the right interaction method for the job. The mouse-keyboard paradigm is overdone. Voice and gesture control can save time over point-click-text.
- Make it easy for the user to get what they want. Ideally, provide several ways to accomplish a task.
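To make the "push, don't pull" and judicious-alerting principles concrete, here is a minimal sketch in Python of how a system might decide which lab results to interrupt a clinician with versus leave quietly available on demand. Every name and threshold here is an illustrative assumption, not part of any real EMR API, and real critical-value ranges would come from laboratory and clinical policy:

```python
# Sketch: push only results that cross a criticality threshold;
# everything else stays available for on-demand ("pull") review.
# All test names and ranges below are invented examples.

CRITICAL_RANGES = {
    # test name: (low critical bound, high critical bound)
    "potassium": (2.8, 6.0),
    "hemoglobin": (7.0, None),  # only a low bound is treated as critical here
    "glucose": (50, 400),
}

def should_push(test, value):
    """Return True if a result is critical enough to interrupt the clinician."""
    low, high = CRITICAL_RANGES.get(test, (None, None))
    if low is not None and value < low:
        return True
    if high is not None and value > high:
        return True
    return False

def triage_results(results):
    """Split incoming (test, value) results into push-now and pull-on-demand."""
    push, pull = [], []
    for test, value in results:
        (push if should_push(test, value) else pull).append((test, value))
    return push, pull

pushed, on_demand = triage_results(
    [("potassium", 6.8), ("glucose", 110), ("hemoglobin", 6.2)]
)
```

The same predicate could be widened with the contextual signals listed above (location, patient roster, time of day) so that only results for patients currently under your care reach your display.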
I've become intrigued with the idea of using wearable computers to address many of the human factors gaps in EMRs. Wearables come in many forms, including fitness trackers, smart watches and immersive head-mounted displays that create virtual realities.
But when it comes to interfacing with the EMR, discreet head-mounted displays like Google Glass and its ilk (Vuzix M100 and Recon Jet) seem to hit a sweet spot: they're out of the way, but there to consult when you need them. They offer a small screen to display text and images as well as a microphone, camera, speaker, a variety of sensors (accelerometers, gyroscopes) and connectivity through Bluetooth and WiFi.
What these devices bring to the table (or to the head) has the potential to revolutionize the way we interact with EMRs and patients:
- The device can use geolocation and knowledge of your schedule or patient roster to push timely, critical information (such as lab and radiology results or patient-centric task reminders) to your heads-up display so that you don't have to pull it from the EMR.
- The microphone recognizes your speech, allowing you to navigate the EMR, enter orders, document your care and communicate with other care team members using your voice.
- Images and text that don't fit on the tiny screen can be sent to other displays for easier reading.
- The integrated camera can be used to provide photographic and video documentation of wounds, injuries and other lesions and to transmit those images to others in real-time.
- You can get real-time checklists for critical procedures, leaving your hands free for central lines, intubations and so on.
The possibilities are endless.
What's more, you're free from the bonds of your mouse-keyboard-computer and can spend more time directly interacting with your patients.
While there will doubtless be an awkward period during which the etiquette and protocols for using these devices in clinical settings become codified, I have no doubt that wearables will soon become an accepted part of medical practice, much as smartphones and other mobile devices already have.