How is Artificial Intelligence Affecting Health Care?

Business leaders across nearly every sector are bewitched by AI’s potential to automate repetitive tasks, improve efficiency and save money. Health care is no exception.
Doctors and other providers are curious about the ways AI can be used to automate patient communication, improve response times, manage tasks like scheduling and inventory and help improve care outcomes in radiology and imaging.
These tools are promising, but like any new technology they could expose health care firms to unexpected risk. Right now, it’s a little unclear how insurance policies will cover AI, should it make a mistake that results in a claim.
As hospitals and others in the sector adopt these tools, it will be important to carefully understand how the AI works, when it’s appropriate to use it and what the potential ramifications could be. Everyone involved should remember that these tools aren’t a replacement for the years of education doctors, nurses and other medical professionals bring to the table.
“AI is not meant to wholly replace physician independent judgment, and we don’t see a space for that to happen anytime soon. There is no true substitute for the training, skills, expertise, and experience of any individual physician,” said Jennifer Freeden, JD, CPHRM, Southwest Regional Risk Manager, ProAssurance. “AI is simply just another tool in the toolbox of a physician.”
“The human component cannot be replicated,” added Bradley E. Byrne, Jr., JD, Southeast Regional Risk Manager, ProAssurance.
The Risks of AI Implementation in Health Care

Jennifer Freeden, JD, CPHRM, Southwest Regional Risk Manager, ProAssurance
Right now, health care facilities of various sizes and specialties and in most parts of the country, are primarily using AI administratively to help with things like “scheduling, medical supply inventory, staffing needs, and surgery and exam room availability,” Freeden explained. “These implementations of AI are lower risk while still enhancing patient care and satisfaction, with measurable benefit for future planning.”
These tools also have advantages in responding to patient requests through a health system’s online portal. If a patient needs a record of a particular treatment they received, for instance, the system can find it and upload it swiftly.
“It never sleeps. It never tires,” Byrne said. “If a patient sends in a request in the patient portal to get a copy of their record, artificial intelligence can generate a custom response and supply the requested records in real-time.”
Others are seeing AI’s potential for riskier tasks, like taking notes during patient visits or evaluating medical imaging. AI has shown some success here, but it’s important that doctors verify its outputs to make sure there are no errors. Over-relying on these tools could invite claims.
“Humans tend to become overly reliant on technology, especially the longer that we are exposed to it, the more comfortable we get with it,” Byrne said. “People, including physicians, who use these technologies may focus primarily on the positives, without a full understanding of their limitations.”
One reason why AI might provide inaccurate results is data. Any artificial intelligence system is only as good as the data it trains on. If the data set is biased or incomplete, it could give health care providers inaccurate results.
“If the underlying dataset is based on an adult population, a pediatrician probably should not be utilizing that particular AI solution when serving pediatric patients under the age of 18, and it is the responsibility of the clinician to understand the AI model they are using,” Freeden said.
Going forward, it will be important for doctors to ask: “Is our underlying dataset diversified and robust enough to take into account every patient that’s in our demographic set?” Byrne said.
AI and Insurance

Bradley E. Byrne, Jr., JD, Southeast Regional Risk Manager, ProAssurance
To address the risks AI poses in the health care space, many enterprises will turn to their insurance policies. Health care providers are covered by medical malpractice insurance, and product liability policies cover new technologies, so the thinking goes that something should protect against the risks AI poses.
Right now, it’s unclear whether the risks of AI are covered under general liability or medical malpractice policies. If an AI system gets something wrong, is that the fault of the product? Or of the physician who relied on its judgment? At present there’s no standalone, AI-specific insurance product that could clear up some of these questions. Nor are AI-specific policy exclusions widespread.
“An emerging gray area is the intersection between a products issue and a medical malpractice issue, and where the liability will fall in the event of patient harm,” Freeden said. “Liability will likely be apportioned between medical device tools and functions that may be specifically identified as defective, juxtaposed against whether the standard of medical care was met via independent judgment with the use of these tools. Courts face a difficult and nuanced task of differentiating these liabilities.”
On the other end, some in health care are hopeful that AI can help health care providers avoid mistakes that could lead to claims. “There’s probably a little bit of optimism that AI may ultimately help prevent adverse events and claims that could turn into a nuclear verdict,” Byrne said.
What Are the Best Risk Management Practices?
Like any new technology, AI comes with risks, especially in these early days when everything is rapidly changing. But many in the health care industry are optimistic about the technology’s potential. That’s part of why so many hospitals and other health care systems, even smaller, rural facilities, have been quick to embrace it.
“It’s probably going to be part of the solution with regard to the physician shortage issues that are on the horizon,” Byrne said.
Health care companies that are considering implementing AI might feel inundated by the quick developments and the number of products on the market. “It’s very easy to be overwhelmed by the sheer amount of new and evolving information about AI in the healthcare space,” Freeden said. “It would be prudent to have dedicated members within your office charged with staying current with AI healthcare developments.”
In addition to keeping up with general trends, it’s important for health care firms to know their particular AI systems inside and out. Knowing what data a system has been trained on and which use cases are appropriate will reduce exposures.
“Intimately knowing the underlying AI to be used in clinical decision-making is going to be crucial for practices and hospitals moving forward. It won’t be enough to just say ‘because the AI told me so,’” Freeden said. “The physician will need to be comfortable sitting down with a patient and discussing how he or she will be using AI to help develop a diagnosis or treatment plan.”
Doctors who are using AI as part of imaging analysis or even just as a note taker should inform their patients and make sure they understand and agree to the parameters in which the tech will be used. People also want to know that their sensitive medical data will be safely stored and protected.
“The informed consent process does not go away with the utilization of AI in patient care. In fact, the physician’s ethical obligations remain the same, to keep patients apprised of the tools used to determine their treatment recommendations,” Freeden said.
What’s most important is ensuring these tools are always being carefully used and evaluated by a human. That will help patients to feel more secure and doctors to avoid the risks that can come with overly relying on technology.
“We’re seeing that while there is certainly reason to be optimistic about what AI brings to healthcare, most patients are still more comfortable with a human being,” Freeden said.