Why Harnessing AI’s Potential in Health Care Is a Double-Edged Sword

AI’s transformation of health care practice brings both great potential and significant consequences, requiring careful consideration of its impact.

While generative artificial intelligence (GenAI) has dominated headlines lately, AI is hardly new, especially in the health care industry.

AI applications were first used in the 1970s to help with biomedical issues such as blood infections. In the 1980s and 1990s, AI systems helped power advancements in faster data collection, surgical procedures and the implementation of electronic health records.

Today, more than one-third of U.S. hospitals and imaging centers report using AI in patient care. AI now supports work in many specialties, including radiology, psychiatry and primary care, as well as tasks such as disease diagnosis.

Although AI has changed, and continues to change, health care, providers would be well advised to weigh its pros and cons.

The Pros of AI in Health Care

AI brings tangible, undeniably positive change to the delivery of health care, helping providers improve patient outcomes.

First, AI helps improve the quality and accessibility of medical information.

Today, providers turn to AI to assist with processing medical information. In some respects, this marks a move away from reliance on colleagues and online medical research tools for information and data. A direct benefit is that providers gain more well-rounded advice and recommendations while gathering information specific to a particular patient’s condition.


AI further improves the accessibility of medical information with 24/7/365 availability, usually with no delay in retrieving that information. In addition, AI applications are constantly updated, giving providers access to real-time information.

Second, AI helps improve how providers process information.

Providers can use the information AI provides as a learning and brainstorming tool. This translates into more rigorous medical thinking that can inspire out-of-the-box medical suggestions and opinions. Because AI can help shed light on a patient’s particular condition, it can also alert physicians to potential issues with treatment, pain and drug tolerance.

Third, AI can improve providers’ interactions with their patients. There are times when AI can outperform nurses and physicians in explaining medical matters, improving patient understanding and confidence and, in turn, potentially helping with informed consent issues.

Fourth, AI can ingest and process large amounts of data in connection with laboratory results, imaging studies, radiology, fitness tracker data, genetic testing and other sources, and then connect dots that might not be evident to even the most experienced physician.

In this way, AI can help diagnose current illnesses, predict the development of future ones and provide clinical decision support.

Fifth, AI can assist with robotic surgery by helping surgeons navigate patients’ vital areas with less dissection and position their instruments more precisely. These capabilities can improve surgical outcomes while making procedures less invasive.

The Cons of AI in Health Care

Despite the benefits AI offers health care providers, its use is not without risks and concerns.

First, AI can analyze data inaccurately. While AI can help with clinical decision support systems, the humans who enter the data that AI analyzes may introduce errors. Health care data is complex and not as clean or structured as data in other domains. Some medical data is also provisional, as medicine constantly evolves.

Together, these factors can lead AI to produce inaccurate or erroneous interpretations of information, which could expose providers to malpractice or other legal claims if their reliance on AI results in patient injury.

Second, using AI can lead providers to act in discriminatory or unlawful ways.

AI systems built by humans with biases can amplify those biases. Underrepresentation of some populations in datasets and non-representative data collection can then lead to uninformed or underinformed responses to queries, including recommendations for diagnoses or treatments.


AI also can facilitate unlawful activity, such as pharmaceutical and medical supply companies using it for corrupt purposes. For example, these parties (or others) could unlawfully access AI-powered clinical decision support tools to steer physicians toward prescribing their products.

Third, when providers use AI, they risk compromising the human component of medicine. As providers increasingly rely on AI to save time and resources, they may begin to defer to it.

This could cause AI to displace human intelligence in the provision of health care, resulting in the loss of the human element of medicine.

Finally, providers’ use of AI can cause confidentiality issues and ethical dilemmas.

When providers enter patient-related information into an AI system, they could be intruding on that patient’s privacy, in violation of HIPAA. Beyond this confidentiality issue, AI also poses ethical dilemmas.

For example, to what extent should providers rely on AI regarding monumental considerations such as end-of-life decisions? In addition, the use of AI could lead to difficult ethical and policy dilemmas if small providers in low-income regions struggle to afford to deploy AI, resulting in a lower level of care than well-funded providers can offer.

An Industry-Changing Tool That’s Not Without Risks

AI is already leaving its mark on the health care industry, yet we’re still in the early days of witnessing how drastically AI might change it.

Advancements in AI’s capabilities bring not only groundbreaking potential to health care but also significant consequences.

As health care providers evaluate how incorporating AI into health care delivery can help improve patient outcomes, and do so more efficiently, they would be prudent to consider the positives and negatives these advancements present for their patients, their organizations, their physicians and their ethical obligations.

Noelle Sheehan is a partner in the Orlando, Florida, office of national law firm Wilson Elser. She focuses her practice on complex civil litigation matters involving insurance and general liability defense of matters including nursing home negligence and medical malpractice, personal injury, premises liability, product liability, wrongful death, automobile/trucking liability, negligent security, contract disputes, indemnification disputes, and Americans with Disabilities Act compliance. Dov Sternberg is a partner in the New York office of national law firm Wilson Elser. He represents hospitals, physicians, and other medical and health care providers in state courts throughout the metropolitan area as well as in New York federal district court. His personal injury defense practice includes general and premises liability claims as well.
