Navigating the Risks: AI’s Rapid Rise in Health Care
Artificial intelligence, and its supposed influence, is seemingly everywhere. For health care, an industry that houses some of the most sensitive data about a person, the rate of adoption is slower than in most other sectors. But that doesn’t mean it isn’t already in place at your local hospital.
In fact, implementation across the sector, including for patient care, is growing fast, which has both positive and negative implications for patients.
“Technically, artificial intelligence is the ability or capacity for machines to look at massive amounts of data and use algorithms and a type of reasoning to come up with a kind of output,” explained Bill Bower, SVP and director of health care at Gallagher Specialty.
“Within that is a component that I call ‘intelligent automation.’ If you think of AI as replicating the way that humans think, intelligent automation replicates how we behave — things like rules-based tasks, chatbots, administrative-type functions.”
Emerging Technology, Emerging Regulation
The “intelligent automation” area is the most prevalent right now in health care, but as tools become more advanced, that is poised to change rapidly.
Before joining Gallagher, Bower spent his career as an executive in a large hospital system. He describes the health care industry’s functions as falling into four buckets: administration, operations, finance and clinical care. AI adoption varies across each of those four components.
He noted that automation requiring zero human intervention, such as admissions or appointment scheduling, is already in place across the board and will continue to grow as the industry faces increased pressure to drive efficiencies. The operations pillar is similarly a prime opportunity for low-risk AI: supply chain and materials management tools can take patient census into account and keep supply levels at an optimum. Even finance can benefit readily through automated billing.
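To make that operations-side use case concrete, here is a minimal sketch of how a supply forecast might be tied to patient census. It is purely illustrative: the item names, usage rates, safety stock and census figure are assumptions, not drawn from Bower or any real system.

```python
# Hypothetical sketch: sizing daily supply par levels from a patient census forecast.
# Item names, usage rates, safety stock and the census figure are illustrative only.

from dataclasses import dataclass


@dataclass
class SupplyItem:
    name: str
    units_per_patient_day: float  # average consumption per occupied bed, per day
    safety_stock: int             # buffer held regardless of census


def par_level(item: SupplyItem, forecast_census: int) -> int:
    """Return the stocking target for one day at the forecast census."""
    expected_use = item.units_per_patient_day * forecast_census
    return round(expected_use) + item.safety_stock


items = [
    SupplyItem("IV start kit", 1.4, 25),
    SupplyItem("Saline 1L bag", 2.1, 40),
]

forecast_census = 212  # e.g., tomorrow's predicted occupied beds
for item in items:
    print(f"{item.name}: stock to {par_level(item, forecast_census)} units")
```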
Most of the excitement, and most of the concern, lies in the final and most important bucket: patient care.
“Right now, artificial intelligence is not well regulated at all in the U.S.,” said Bower. “The EU is just now getting on board. Where you find crossover of regulations into health care is when a product or medical device employs AI and so has to be regulated by the FDA. That’s a small subset.”
Indeed, a 2021 FDA release (introducing what the industry calls “precise regulation” of AI in medicine) notes that “the FDA’s traditional paradigm of medical device regulation was not designed for adaptive artificial intelligence and machine learning technologies.”
This does not mean there isn’t a plan for it, however: premarket review, paired with a commitment to monitoring real-world performance, could allow algorithms to be adjusted in ways that safeguard all parties involved.
“What we’re seeing a lot of right now is those things that are under development and being beta tested,” said Bower.
“There’s a real desperate want for AI to address care provision or for AI to assist in analysis or decision support. Some is there, but it’s not to the level of adoption that we see in the other aspects of health care. It’ll get there.” Addressing care provision would include processes like analyzing symptoms or reading chest X-rays for abnormalities.
Mapping AI’s Health Care Risks
Despite the nascence of the technology, health care executives are betting big on AI in the near term. Market research from Morgan Stanley published last August indicated that 94% of health care companies already have some form of AI, and that “the industry’s average estimated budget allocation to these technologies is projected to grow from 5.7% in 2022 to 10.5% in 2024.”
This level of rapid investment belies the fact that concerns about AI’s efficacy and actual efficiency remain, not to mention its more insidious and persistent risks.
“My top concern for AI is actually one that already existed in the health care industry, and that’s IT and cybersecurity,” said Bower.
Health care institutions, like all businesses, are working to build ever more impenetrable IT infrastructure to keep threat actors out of their systems, a task made especially urgent by the personal health information (PHI) they generate and hold.
“Historically, we worry about PHI. We worry about anybody going in and seeing PHI. What I have a concern with [when it comes to] AI is the contamination of data,” said Bower. “That data is what the AI is drawing from to analyze and assist in providing care. It’s an old concern with a new risk associated with it. So many health care organizations have large enterprise data warehouses that are fed every night with patient information, financial information and operational information.”
Data analysts work within those warehouses and apply AI, and if the underlying data has been contaminated, the output, meaning the decision support that could in some cases influence patient care, is unreliable.
“Organizations need to understand that the weakest link within their system is still someone falling victim to a phishing campaign. With all of this data, they need to make sure that not only do they have the systems in place, but also that they know when they’ve been breached and have the right detection systems,” said Bower. A threat actor could be in the system for months contaminating data before they make themselves known, shut everything down and hold it for ransom.
Bower emphasized the basics of cybersecurity as a good starting place: risk assessment, audit of security measures and detection.
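What that detection piece might look like in practice varies, but as one small, hypothetical example, a data team could run sanity checks on each nightly warehouse load and flag feeds whose volume or values drift far from historical norms. The thresholds, ranges and figures below are assumptions for illustration, not a prescribed control.

```python
# Hypothetical sketch: basic anomaly checks on a nightly data warehouse load.
# Historical load counts, the z-score limit and the lab-value range are illustrative assumptions.

from statistics import mean, stdev


def load_volume_is_unusual(recent_row_counts: list[int], tonight: int, z_limit: float = 3.0) -> bool:
    """Flag tonight's load if its row count sits far outside the recent norm."""
    mu, sigma = mean(recent_row_counts), stdev(recent_row_counts)
    return sigma > 0 and abs(tonight - mu) / sigma > z_limit


def out_of_range(values: list[float], low: float, high: float) -> list[float]:
    """Return values falling outside a clinically plausible range."""
    return [v for v in values if not (low <= v <= high)]


# Two weeks of nightly row counts vs. tonight's load (all made up)
history = [50_120, 49_880, 50_440, 50_010, 49_760, 50_230, 50_090,
           49_950, 50_310, 50_180, 49_840, 50_050, 50_270, 49_990]
if load_volume_is_unusual(history, tonight=83_400):
    print("Alert: tonight's feed volume deviates sharply from its recent baseline")

# Hemoglobin results (g/dL) outside a plausible range get routed for review
print("Values needing review:", out_of_range([13.2, 14.1, 250.0, 12.8], low=3.0, high=25.0))
```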
Slow and Steady
The second major risk in bringing AI to health care is a similarly broad one: a pace of adoption that isn’t measured and deliberate. “In patient care, we’ve seen this in the radiology world, where algorithms and machine learning have been able to replicate a chest X-ray reading from a radiologist. Is that impressive? Yes. Are we ready for the machines to take over? Not even close. But the breathless desire to do it is there,” said Bower.
“Not even close” is right. Much has been made of IBM Watson’s face-plant on cancer care, as well as studies indicating that radiologists actually still perform better than AI on chest X-ray assessment. The technology didn’t meet the high expectations of replacing humans, although it could assist. “The conservative support of AI with human intervention at the end is the state of play,” said Bower.
However, the possibilities of AI are exciting for patient care, especially as it relates to discharge care. “Thirty-day readmissions, when someone is discharged from a hospital and returns in short order — this is a big problem for hospitals, and we’ll see AI targeting that. More and more algorithms will be able to identify those patients that are at high risk for readmission so that we can apply a different modality of follow-up,” said Bower. This in combination with AI tracking data we currently get from wearables, like oxygen saturation and heart rate, could mean the difference between life and death in an emergency.
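As a rough, hypothetical sketch of what that kind of readmission-risk scoring could look like under the hood, the example below fits a simple logistic regression to synthetic records and flags a high-risk discharge for extra follow-up. The features, training data and threshold are invented for illustration; a real model would require validated clinical data, bias testing and clinical governance.

```python
# Hypothetical sketch: scoring 30-day readmission risk with a simple model.
# Features, synthetic training records and the follow-up threshold are illustrative only.

import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(seed=0)

# Synthetic training set: columns are [age, prior admissions in past year, length of stay (days)]
X = np.column_stack([
    rng.integers(40, 90, 500),
    rng.integers(0, 6, 500),
    rng.integers(1, 15, 500),
])
# Synthetic labels loosely driven by prior admissions and length of stay
y = (0.6 * X[:, 1] + 0.2 * X[:, 2] + rng.normal(0, 1, 500) > 2.5).astype(int)

model = LogisticRegression(max_iter=1000).fit(X, y)

# Score one newly discharged (invented) patient: age 78, 3 prior admissions, 9-day stay
risk = model.predict_proba(np.array([[78, 3, 9]]))[0, 1]
if risk > 0.6:  # illustrative cutoff for a more intensive follow-up pathway
    print(f"High readmission risk ({risk:.0%}): schedule early follow-up")
else:
    print(f"Lower readmission risk ({risk:.0%}): standard discharge plan")
```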
So what’s next on the horizon? The frontier of precision medicine, which takes a person’s genome into account. “Rather than putting a patient on a statin because their cholesterol is high, AI will be able to take into account a person’s genomic makeup and accompanying studies in similar populations to come up with a care provision that is tailored to that individual,” said Bower.
That level of individualized care could have benefits up and down the chain, including improved efficiency, as long as it’s appropriately controlled through IT risk management.