This blog post is part of a series about artificial intelligence in healthcare.
Data-rich, technology-based solutions like artificial intelligence are lining up to transform many facets of healthcare, but before we start planning for the transformation that this technology can bring, we might want to do a reality check: Will AI truly be disruptive in an industry that has been slow to adopt new technologies, or will the change be incremental? And even if AI can improve care delivery, what do doctors—and patients—think about inviting AI into the exam room?
My colleagues and I are currently conducting research on how healthcare stakeholders feel about AI. As part of our study, we talked to doctors and patients to get their thoughts on AI’s use in healthcare delivery, and on the inherent data use and privacy issues that arise as data-driven care becomes more the rule than the exception.
Early findings from our physician survey have revealed some interesting insights—chief among them the physicians’ concerns about handing over patient-related decisions to a machine. In fact, according to our research, most physicians draw a hard line between AI’s capacity to tackle administrative and operational tasks and the role that AI could play in actual care delivery. What it all boils down to is a lack of trust. But if AI can demonstrate its medical worth—and that’s a big if—physicians may learn to open the exam room door to robot “assistants.”
What’s the Role of the Robot vs. the MD?
Many healthcare players are betting on technologies like AI to go beyond lifting the administrative burden by helping to make the shift toward value-based care possible. We’ve seen point-of-care applications bubbling up with increasing frequency. AI is being used to create new care paths. Telehealth continues to gain a foothold. New technologies like 3-D printers are making their way into operating rooms and surgical residents are being trained in robotic procedures with varying degrees of success. Data-savvy corporations are getting in on the game, too, looking to AI and other technology to help tackle some of healthcare’s biggest challenges.
As the conversation gains momentum, the promise of AI becomes more and more grandiose. But before we begin scripting AI as a given at the point of care, let’s consider where doctors land on the spectrum of AI adoption—and whether demographics like age have anything to do with it. After all, this is a group of professionals who are having trouble parting with pagers and needed CMS to impose a deadline (2020) to remove fax machines from their arsenal of communication tools. It could take years for AI to reach the level of “disruptor” in care delivery.
There seems to be some low-hanging fruit, though. Many providers are comfortable engaging AI to improve the efficiency of administrative tasks like note taking and appointment scheduling. Preliminary survey results indicate that physicians see AI as a way to increase efficiency in their healthcare practice, allow for more personal time with their patients and improve the personalized care that patients receive. Two-thirds of the doctors we spoke to would like AI to help with workflow management, administrative assistant tasks, and patient experience analysis.
Even where technology could help alleviate physician burnout, physicians want clear lines drawn regarding AI-patient interactions. We found that physicians are twice as likely to allow AI to serve as a voice-activated medical assistant that looks up information as they are to give AI license to interact directly with patients in a virtual care capacity. Similarly, only one-quarter of doctors believe that AI should help with diagnosis or genetic analysis. In these cases, doctors were adamant that AI could offer suggestions but the doctor should own the final decision. Clearly, a line is being drawn: AI can help at the front desk, in the back office or with research, but it will not be invited into the examination room.
How Will AI Overcome the Hurdles?
There’s no quick path to getting doctors to warm up to the idea of AI in care delivery. Instead, the journey will start with laying the foundation for incremental changes at a pace that matches physician adoption of other technologies.
First, we have to consider the physician’s motivations and fears. For example, a physician might say that AI enables her to provide higher-quality care, or she might fear a disaster akin to the fallout from Intel’s Pentium floating-point (FDIV) bug. There’s also a great opportunity to use AI to directly lower costs for the health system, but it remains to be seen whether that will be enough of a motivator for physicians, given their care-related concerns.
Even if the physician is gung-ho on AI, her employer might not share her enthusiasm—or might not be equipped to act on it. A variety of factors might block AI’s entry into a health system, not the least of which is budget. Conversations about how to solve data privacy issues are ongoing. Then there’s the issue of interoperability, a widely known concern among provider organizations. The successful introduction of new technology, particularly one as complex as AI, hinges on its ability to communicate and exchange information with existing electronic systems.
There’s also the question of whether AI makes the provider-patient experience more impersonal. Will AI’s introduction into healthcare give more weight to the technical and “accurate” over the human and interpersonal? Or does AI free up the physician to spend more time with the patient?
If AI lives up to the hype, it could propel healthcare forward. But at this point, we’re faced with the question of just how much AI will change the patient-provider experience, and whether the result will be a better version of what we know today. How we contend with the tension building between human judgment and robotic efficiency may well determine the trajectory that AI takes in healthcare.