Individually, they are significant forces, but together they reinforce each other, and, although it will take years, these ideas will likely lead to disruptive changes in all medical specialties and how they are practiced.
In the next ten years, we’ll see diagnostics as well as medicine change. And that will come from both medicine and devices, and the information systems that support them. – Jonathan Rothberg, CEO, Butterfly Network; Adjunct Professor, Yale School of Medicine
Underlying all of these ideas is “big data,” which refers to our increasing ability to collect, store, and analyze information on things that were previously thought of as “non-measurable.”
For example, video data, the binary description of the pictures on our screen, can be stored and automatically analyzed to determine what the video is about, to identify specific people in the video, or even the emotions they are expressing. As doctors, we rely heavily on what we observe, so the increasing ability of computers to accurately analyze video and photographic data will eventually present new opportunities — for example, faster, more accurate melanoma detection.
Similarly, auditory data can be collected and analyzed across the acoustic spectrum from subsonic to ultrasonic, allowing recognition of not only speech content but also “acoustic fingerprints” outside of human hearing. Industries outside of medicine are already taking advantage of novel ways to analyze acoustic data, such as identifying the location from which a gun was fired and the type of ammunition used. This technology is already deployed in cities across the United States. Going forward, it may provide ways to reduce injuries and deaths associated with guns.
Medicine has increasingly been using acoustic technology in the form of ultrasound imaging. A fascinating example is the tech startup Butterfly Network, which hopes to design an artificially intelligent ultrasound system that can automate the diagnosis of some conditions. When integrated with telemedicine, this platform could expand global access to diagnostic imaging by providing quality care in remote clinics that don't otherwise have access to a trained technician.
Making Machines Smarter
"I’m convinced that if it’s not already the world’s best diagnostician, it will be soon."
– Andrew McAfee, MIT, on the IBM Watson supercomputer
Of course, the increase in the amount of data we collect, including the digitization of our vision, hearing, and other senses, creates new problems. Namely, once enormous sets of data have been collected, what do we do with them? How do we write programs to analyze data when the data in question is too complex for us to understand?
That’s where machine learning makes its impact. It gives us the ability to examine data sets that are so complex, our human brains struggle to create algorithms that make sense of them.
Machine learning allows computers to improve their own understanding of data sets, find interconnections between discrete data points, and make predictions and recommendations based on the connections within those immense data sets.
I want to emphasize a point that surprised me. Machine learning is exactly what it sounds like. The machine (i.e., the computer program) is teaching itself how to analyze the data and deal with complex situations for which there may be no “right” answer.
In other words, machine learning is not getting the program to follow a complex algorithm written by humans. It is having the computer write its own program based on its ability to examine, test, and iterate at an amazing speed. And this is not just a fancy calculator. Although humans are involved in creating the situation that allows the “machine” to learn, the improvements that occur happen at a pace and scale that only computers can achieve.
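To make that idea concrete, here is a deliberately tiny sketch (a toy example of my own, not from any medical system): instead of a human writing the rule y = 2x + 1 into the program, the program is shown only example data and discovers the rule itself by guessing, measuring its error, and adjusting — the guess-and-correct loop at the heart of machine learning.

```python
# Toy machine-learning example: the program "learns" the slope and intercept
# of a line from examples, rather than being told the rule by a human.

def learn_line(points, steps=5000, lr=0.01):
    """Fit y = w*x + b to (x, y) pairs by iteratively reducing the error
    (gradient descent)."""
    w, b = 0.0, 0.0  # start knowing nothing
    n = len(points)
    for _ in range(steps):
        # Measure, on average, how wrong the current guess is.
        grad_w = sum(2 * (w * x + b - y) * x for x, y in points) / n
        grad_b = sum(2 * (w * x + b - y) for x, y in points) / n
        # Nudge the parameters in the direction that shrinks the error.
        w -= lr * grad_w
        b -= lr * grad_b
    return w, b

# Examples generated from the hidden rule y = 2x + 1 (never shown to the code).
data = [(x, 2 * x + 1) for x in range(-5, 6)]
w, b = learn_line(data)
print(round(w, 2), round(b, 2))  # the learned parameters approach 2 and 1
```

Real medical machine learning replaces this single line with models holding millions of parameters, but the principle is the same: the “program” the machine ends up following is shaped by the data, not written out by a person.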
Machine learning is already being put to use in playfully creative applications like recipe design and games. But, more importantly, it is increasingly being used in fields like medicine and transportation. Physicians will eventually need to know how to effectively deal with machine learning applications so we can provide the best patient care.
Ramping Up Robotics
Autonomous robotic surgery—removing the surgeon’s hands—promises enhanced efficacy, safety, and improved access to optimized surgical techniques. – Azad Shademan and colleagues, Science Translational Medicine
No matter how far artificial intelligence has advanced, humans have always had an advantage: we have hands connected to our minds that give us the ability to touch and manipulate our surroundings. When it comes to interacting with the environment, we win. However, with advancements in robotics, that may be changing.
A recent article in The Economist describes the Smart Tissue Autonomous Robot, or STAR, an autonomous surgical robot created at the Children’s National Health System in Washington, D.C. In lab trials, STAR has been trained to use haptic feedback (touch data) along with visual feedback and other inputs to re-anastomose severed pig intestines… by itself.
This is in contrast to the robots currently used in surgery, which are controlled by a surgeon, usually seated ten feet away from the patient.
It will still be quite a while before we see any autonomous surgical robots in hospitals or clinics. Although STAR successfully reconnected the severed pig intestines, the entire process was set up by humans, which allowed the robot to complete a very specific and limited task.
However, anesthesiologists have already faced a similar development: the Sedasys machine.
Sedasys was an automated system for delivering sedation to patients, typically for colonoscopies. Sedasys could measure various patient vital signs to control sedation levels, keeping patients comfortable for their procedure without deepening the sedation into general anesthesia.
The American Society of Anesthesiologists opposed Sedasys' approval by the FDA, claiming machine intelligence would be ill-equipped to make nuanced clinical decisions or cope with emergencies. Some hospitals using Sedasys reported cost savings and improved efficiencies. However, Sedasys may ultimately have been ahead of its time. The machine was recently withdrawn from the market because of sales difficulties.
Bracing for Disruption?
Despite the removal of Sedasys from the market, it would be a mistake to dismiss it — or other nascent technologies.
Disruptive innovation depends on the incumbents ignoring technological upstarts as “not ready for prime time.” This gives new technologies time and space to become effective. Even more importantly, it gives new companies time to develop new, effective business models.
The combination of innovative technologies and innovative business models has led to disruption for other industries. The practice of medicine and the business of healthcare are not immune.
Whether one dismisses AI as hype, or considers its arrival to be anywhere from wonderful to catastrophic, reality is likely to land somewhere in the middle. The technologies being created today have mind-boggling potential… but they are not a guaranteed panacea for patient care.
There is a significant amount of trial and error left to go before we can reliably and easily use AI in medicine. However, based on the trajectory and pace of its development, it seems inevitable that artificially intelligent decision support tools will be part of our patient care, especially for those of us whose remaining careers are measured in decades.
Despite the justified concerns and appropriate skepticism about artificial intelligence in medicine, I remain cautiously optimistic. I believe the advances in big data, machine learning, and robotics will allow us to practice in ways we never expected, and to treat patients more effectively than we thought possible. Which, in my opinion, makes big data a big deal.