The Droids You’re Looking for: The Slow Pace of AI Implementation in Healthcare

October 30, 2019

No one can deny that we are witnessing tectonic changes in the area of personal and business technology.  Most of us have grown dependent on digital devices, and they will only become more sophisticated, more efficient and more ubiquitous as we enter a new decade.  That goes for the healthcare space, as well.  We’ve already seen evidence of this reliance on new technologies in medical facilities.  Robotic surgery has been a reality for years, and artificial intelligence (AI) promises to dramatically enhance the healing profession in the years to come.

“AI” is a term we hear more and more, but for many, uncertainty still surrounds it.  Many are unaware that it refers to machines with the capacity to learn and make judgment calls—machines that can out-think humans, at least from a standpoint of pure speed.  Some experts envision not only robots that can walk and talk, but cybernetic systems that can think on the fly as they perform life-saving surgery.  Clearly, these are the droids that some in the healthcare industry are looking for—to borrow a classic line and growing meme from the Star Wars franchise.  The problem is that they have yet to be found.

The Big Letdown

If the healthcare industry is so wide-eyed with wonder about AI’s ability to revolutionize the practice of medicine, you would think we would already have these modern marvels in place; yet, for the most part, we don’t.  So, why is that?  It’s not as if these AI solutions haven’t already been developed or don’t already exist.  They have and they do.  Devices containing this technology are currently found in sectors ranging from banking to insurance to government.  One 2019 survey of 1,000 senior executives from various industries indicated that 72 percent now use AI in their business practices, up from only 48 percent in 2018.

Why, then, have we seen so little progress in the actual installation and implementation of AI solutions in the local hospital?  The fact is that some of these new technologies are on a deliberately slow path to deployment in healthcare venues, and there are clear reasons for this.

Impediments to Progress

According to Roger Kuan, an attorney in the health and life sciences sector, three factors are contributing to the general go-slow approach to AI implementation.  In a Harvard Business Review article released earlier this month, Kuan addresses each of these factors, which we briefly outline below:

  1. Undeveloped Regulatory Frameworks.  The U.S. Food and Drug Administration (FDA) has taken incremental steps in the last few years to update its regulations, doing its best to keep up with ongoing advances in the digital healthcare market.  For example, in 2017, the FDA released its Digital Health Innovation Action Plan; and, last month, the agency published its “Policy for Device Software Functions and Mobile Medical Applications.”  Going forward, the FDA plans to create regulations governing “higher-risk software functions,” including those utilizing machine learning (ML)-based algorithms.

     All of these bureaucratic assessments and regulatory efforts take time, and more time will be needed.  Regulators are racing to catch up in their understanding of the nature and capabilities of these new and evolving technologies.  This explains, in part, the slow progress currently being made relative to their real-world application in the clinical setting.
  2. Obstacles to Official Approval.  As alluded to immediately above, AI-driven health tools will continue to evolve.  Given the inevitable product updates, AI vendors may risk having to renew FDA approval with each iteration of the product—again slowing the overall implementation process.  According to attorney Kuan, taking a “version-based” approach to the FDA approval process might be in the developer’s best interest (a simplified sketch of this approach follows the list below).  Elaborating on this suggestion, Kuan states:

    In this approach, a new version of software is created each time the software’s internal ML algorithm(s) is trained by a new set of data, with each new version being subjected to independent FDA approval.  Although cumbersome, this approach sidesteps FDA concerns about approving software products that functionally change post-FDA approval.

  3. Technical Uncertainties.  One of the challenges to utilizing AI in the healthcare space is our current uncertainty over its safety, security and reliability.  For example:
    • Tracking.  Ideally, the user of the technology would like to be able to determine the root cause of any decision the AI application produces, so that the technology can be continually improved.  For example, should the technology lead to a negative outcome for a patient, the ability to trace the cause of that event would be critical to understanding and correcting any flaw in the device’s programming.  This way, similar errors can be more readily prevented going forward.

      Unfortunately, our current ability to track the root causes of AI output has been called into question.  So, until AI developers can assure clinicians, facility managers and government overseers of the viability of such a tracking mechanism, AI’s role in the health setting will continue to be confined to modest and mundane tasks.
    • Hacking.  Another concern raised by attorney Kuan is that errant data or programming might be introduced into the system—whether by mistake or by deliberate act—ultimately producing negative outcomes for the patient.  As sophisticated as these systems are, they are still dependent, at some point, on human interaction; and humans are imperfect.
    • Backing.  Many physicians and physician certifying bodies have been reluctant to jump on the AI bandwagon due to a fear of untoward events, as well as a general lack of understanding concerning how this technology works.  For example, the American College of Radiology (ACR) has only recently issued formal guidelines on how AI software tools can be reliably used.  Patients are also hesitant to back a “decision-making machine,” according to Mr. Kuan.
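
For readers who prefer to see the idea in code, below is a minimal, purely illustrative sketch of the bookkeeping that the version-based approach implies: each retraining on a new data set yields a new version that must be reviewed independently, and only an approved version is eligible for clinical deployment.  The Python names used here (ModelVersion, VersionedModelRegistry) are hypothetical assumptions for this sketch and are not drawn from Kuan’s article, any FDA guidance or any actual product.

```python
from dataclasses import dataclass, field
from datetime import date
from hashlib import sha256
from typing import List, Optional

# Hypothetical illustration only: these classes model the "new version per
# retraining, each independently approved" idea, not a real FDA workflow.

@dataclass
class ModelVersion:
    version: str                           # e.g., "2.0"
    training_data_digest: str              # fingerprint of the data set used to train this version
    submitted_for_approval: Optional[date] = None
    approved: bool = False                 # each version must clear review on its own


@dataclass
class VersionedModelRegistry:
    versions: List[ModelVersion] = field(default_factory=list)

    def retrain(self, training_data: bytes) -> ModelVersion:
        """Record a new, unapproved version whenever the ML algorithm is retrained."""
        next_number = len(self.versions) + 1
        new_version = ModelVersion(
            version=f"{next_number}.0",
            training_data_digest=sha256(training_data).hexdigest(),
        )
        self.versions.append(new_version)
        return new_version

    def deployable_version(self) -> Optional[ModelVersion]:
        """Only the most recent approved version may be deployed clinically."""
        approved = [v for v in self.versions if v.approved]
        return approved[-1] if approved else None


if __name__ == "__main__":
    registry = VersionedModelRegistry()
    v1 = registry.retrain(b"initial training data set")
    v1.approved = True                           # version 1.0 clears review
    registry.retrain(b"new training data set")   # version 2.0 awaits its own review
    print(registry.deployable_version().version)  # -> 1.0 until 2.0 is approved
```

Under such a scheme, retraining never silently alters the product in clinical use; the previously approved version remains deployed until its successor clears its own review, which is the very concern Kuan’s version-based approach is meant to address.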

A Look at the Future

Though obstacles to AI’s wider adoption currently exist, they will not be there forever.  One by one, each impediment will be removed, and providers and patients alike will face a brave new world of machine-driven healing.  This gradual introduction of AI technology into the practice of medicine will be transformative, to be sure.  At the very least, it will (a) bring continual shifts in the standard of care, (b) improve the efficiency of delivering diagnostic and therapeutic modalities, and (c) assist clinicians in making more informed decisions.  For many, these are reasons enough to be bullish, if not ebullient, about the future of healthcare.

To sum up, physician groups and facility administrators should continue to acquaint themselves with this growing technology and discuss how to integrate these AI solutions when they finally become available in the OR, the exam room, or the lab.  Besides their clinical applications, executives will want to assess the business impact of these AI devices and solutions.  As their presence and capabilities increase, they will surely impact staffing and budgets.  As with previous shifts in technology, we will simply need to adapt.  Those who are most agile in doing so stand a better chance of prospering.

No, the droids you’re looking for aren’t likely to be found at the local health facility just yet.  However, you can rest assured that they are being readied to take their place among us.  It’s enough to make a gold-plated robot smile.

We want to hear from you. Do you have a topic you would like to see covered in an ABC eAlert? Please send your suggestions to info@anesthesiallc.com.

With best wishes,

Tony Mira
President and CEO