Supercharge Your WC Strategy with Group Health Data

Milliman's Mike Paczolt discusses an AI-based claims solution that uses group health benchmarking data to help workers’ comp payers and TPAs reduce costs and get injured workers the targeted care they need — faster.
By: Raquel Moreno | April 25, 2023

Workers’ comp pays far more than group health plans for injured workers to receive treatment for the exact same injuries.

This startling realization is part of what is driving Mike Paczolt to help workers’ comp payers apply artificial intelligence and natural language processing to advance claims handling processes, a topic of keen interest at the National Comp conference and one slated for further discussion on this year’s program agenda.

Paczolt, principal and consulting actuary, property & casualty for Milliman, will present “Leverage Group Health Data with the Power of AI and Predictive Analytics” on Sept. 21 at National Comp in Las Vegas.

Paczolt shared some of his thoughts with Risk & Insurance in advance of the presentation, including how a mere 10% of workers’ comp claims end up accounting for 80% of costs. Through the use of text mining, high-risk claims can be identified early on, improving outcomes for both patients and payers.

This conversation has been edited for length and clarity.

Risk & Insurance: Can you tell us about your role at Milliman and how this area of study came across your desk?

Mike Paczolt: I manage our claims AI solution, which we call Milliman Nodal. Early in my career as a consulting actuary, I got to work on unique projects that involved lots of different data and predictive modeling.

Through those experiences, I got to see firsthand the challenges that many of our clients face when it comes to claims, and how important data and analytics are in addressing those issues. And that’s really what inspired me to come up with Nodal.

I wanted to reduce the cost of claims by harnessing the power of natural language processing (NLP) and machine learning on top of the very rich data set that claims contain.

R&I: In your work, you’ve been able to highlight stark differences in costs between injuries covered with workers’ comp and those paid for with group health coverage. Which organizations in the workers’ comp arena are really able to access and distill group health data into something that can be immediately put to use to improve injured worker recoveries?

MP: It [can be used by] a variety of groups depending on exactly what problem we’re trying to solve.


Part of what we do is on the predictive side: identifying claims early on.

But there are other elements to Nodal as well, where we’re benchmarking the cost of medical claims against outcomes in group health.

That sort of information is valuable to any organization that participates in workers’ comp. For employers, because these are their claims, understanding how to prevent costs [is important], as is knowing what the appropriate cost is for a claim.

If insurers are writing first-dollar coverage, they’re going to be very interested in that. For TPAs handling the claims, obviously, it’s of interest to them as well.

But even for providers and other stakeholders, having insight into predictive analytics and modeling, or just benchmarks based on the group health data, really gives them a much better view.

And on the medical side, we compare their data against our benchmarks. Once they’ve made that comparison, we help them develop a strategy to improve patient outcomes.

The whole idea is to make sure that claims end up with the right provider, the right treatment plan, and at the right price. Those are really the three critical things in our view.

R&I: In your National Comp session overview, you mention the value of tapping into analytic platforms with specific capacities such as text mining and machine learning. Why did you highlight those two elements?

MP: Those are absolutely missing from the industry — for a few reasons.

One, performing text mining, which is a type of natural language processing, is not the most straightforward thing to do.

The algorithms are constantly improving, and much of that work falls within the family of what we call large language models, which you have probably seen in the news with ChatGPT, a generative text tool.

What we’re doing with NLP is trying to understand and see valuable information within the adjuster notes and other unstructured data. It’s a lot faster than [using] the structured data — things like date of birth, state, body part, nature of injury, as well as all the transaction data.

We’ll see things happening in the notes before that structured data even gets created.

Getting that information as soon as possible and using predictive models will allow you to identify claims a lot faster than waiting for the treatment to actually happen and for the subsequent medical coding to come through potentially weeks after that treatment occurred.
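To make the idea concrete, here is a minimal Python sketch of the kind of early text mining Paczolt describes. It is purely illustrative, not Milliman’s Nodal: the risk phrases, notes and escalation rule are all invented for the example.

```python
import re
from datetime import date

# Hypothetical risk phrases an analyst might scan for in adjuster notes.
# A production system would learn its signals from data; this list is
# invented purely for illustration.
RISK_PATTERNS = [
    r"\bopioid", r"\bsurgery\b", r"\battorney\b",
    r"\bfracture\b", r"\bMRI\b", r"\bout of work\b",
]

def flag_note(note_text: str) -> list[str]:
    """Return the risk phrases found in a single adjuster note."""
    return [p for p in RISK_PATTERNS
            if re.search(p, note_text, flags=re.IGNORECASE)]

# Toy claim: the note mentions an MRI and surgery weeks before any medical
# bill (and its diagnosis coding) would reach the structured data.
notes = [
    (date(2023, 3, 2), "Claimant reports knee pain, seen at urgent care."),
    (date(2023, 3, 6), "Adjuster call: MRI ordered, surgery being discussed."),
]

for note_date, text in notes:
    hits = flag_note(text)
    if hits:
        print(f"{note_date}: high-risk signals {hits} -> escalate for review")
```

The point of the sketch is the timing: the flag fires the day the note is written, not weeks later when the medical coding arrives.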

Once we start to see high-risk characteristics, we need more focus on that claim. If you think about a broad set of workers’ compensation claims, the top 10% of those claims account for 80% of the cost, so it’s a very skewed distribution.

To the degree that we can, we find that 10% and focus all of our cost containment strategies on it. That’s critical, because a predictive model that identifies a claim when it’s 60 to 90 days or six months old is not very useful.

By that point, there’s not much we can do to control the cost of that claim, so it’s all about identifying it as soon as possible and getting it to an adjuster with experience in that jurisdiction, enough seniority, and a good handle on how to manage one of these claims.
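As a rough illustration of how skewed that distribution is, the toy calculation below draws synthetic claim costs from a lognormal distribution and measures the share of total cost carried by the top 10% of claims. The parameters are invented; they simply produce the kind of heavy-tailed book the interview describes.

```python
import numpy as np

rng = np.random.default_rng(0)

# Synthetic claim severities from a lognormal distribution. The mean and
# sigma are made-up illustration values, not fitted to any real book.
costs = rng.lognormal(mean=8.0, sigma=2.1, size=100_000)

costs_sorted = np.sort(costs)[::-1]           # largest claims first
top_10pct = costs_sorted[: len(costs) // 10]  # the top 10% of claims

share = top_10pct.sum() / costs.sum()
print(f"Top 10% of claims carry {share:.0%} of total cost")
# For these made-up parameters the share lands in the neighborhood of 80%.
```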

R&I: What are some of the other ways that NLP can have a long-term positive impact on claims organizations and employers?

MP: For a long time, adjusters have had to subjectively use all the tools in their toolbelts. Now, we’re trying to make this more objective [and] based on data.

NLP is also generally more consistent because the algorithm is reviewing all of the data in the same way, whereas when you have humans either coding data or making decisions about claims, we all have our own inherent biases, and we may each code the same thing differently.

By leveraging the unstructured data, we’re able, in some ways, to handle and treat high-risk claims consistently.

And then lastly, NLP is more robust. Structured data can be incomplete or even outdated. An injury, for example, may get coded initially as a contusion because they don’t yet know the full diagnosis.

We’ll see in the notes on day two that this is a fracture. And if you’re doing your analytics based on just the structured data, you’re not going to be bucketing claims in the right way. A contusion claim may cost $500, but in reality, it’s a fracture, which can cost thousands of dollars.

The structured data also oftentimes does not capture pre-existing medical conditions: comorbidities like obesity, diabetes, hypertension and smoking.

There are a few claim systems that try to track that information, but it’s generally inconsistent. But when we [use NLP] to read through the notes, we can see a lot of that data.

So, it really gives you a much more complete picture: It tells you a story about the claim. And where the technology is now, we’re able to really read that story and teach it to the algorithm so that it can learn from that and inform the adjuster.
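Both of those robustness points, re-bucketing a claim when the notes contradict the structured diagnosis, and pulling comorbidity mentions out of note text, can be sketched in a few lines. The expected costs and term list below are hypothetical, chosen only to mirror the contusion-versus-fracture example above.

```python
import re

# Hypothetical expected costs per diagnosis bucket; real figures would
# come from benchmarking data.
EXPECTED_COST = {"contusion": 500, "fracture": 8_000}

COMORBIDITY_TERMS = ["obesity", "diabetes", "hypertension", "smoker", "smoking"]

def enrich_claim(structured_dx: str, notes: str) -> dict:
    """Re-bucket a claim when the notes contradict the structured diagnosis,
    and pull comorbidity mentions the structured fields rarely capture."""
    text = notes.lower()
    dx = structured_dx
    # If the notes reveal a fracture, override the stale contusion coding.
    if dx == "contusion" and re.search(r"\bfracture\b", text):
        dx = "fracture"
    comorbidities = [t for t in COMORBIDITY_TERMS if t in text]
    return {
        "diagnosis": dx,
        "expected_cost": EXPECTED_COST[dx],
        "comorbidities": comorbidities,
    }

claim = enrich_claim(
    structured_dx="contusion",
    notes="Day 2: X-ray shows a fracture. Hx of diabetes; claimant is a smoker.",
)
print(claim)  # diagnosis is now 'fracture'; expected cost re-bucketed upward
```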

The technology is not going to handle the claim for the adjuster; we never tell clients that the technology is here to replace people. It’s a supplement.

What we’re trying to do is use this technology to make people more efficient and uncover things they might not have thought about. If a junior adjuster is on a claim and high-risk characteristics are emerging, we want to notify the supervisor as soon as that happens.

What we find is that the combination of natural language processing and machine learning really allows you to predict [high-risk] claims much faster and much more accurately.
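As a minimal sketch of that combination, the scikit-learn pipeline below feeds note text through a TF-IDF vectorizer, one-hot encodes a structured field, and scores claims with a logistic regression. The data, features and model choice are all invented for illustration; this is not Milliman’s approach.

```python
import pandas as pd
from sklearn.compose import ColumnTransformer
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.pipeline import Pipeline
from sklearn.preprocessing import OneHotEncoder

# Tiny invented training set: note text plus one structured field,
# labeled by whether the claim ultimately became high-cost.
data = pd.DataFrame({
    "notes": [
        "minor contusion, returned to work same day",
        "MRI ordered, surgery discussed, attorney involved",
        "sprain, physical therapy going well",
        "fracture confirmed, opioid prescription, out of work",
    ],
    "body_part": ["hand", "knee", "ankle", "leg"],
    "high_risk": [0, 1, 0, 1],
})

features = ColumnTransformer([
    ("text", TfidfVectorizer(), "notes"),                            # unstructured
    ("cat", OneHotEncoder(handle_unknown="ignore"), ["body_part"]),  # structured
])

model = Pipeline([("features", features), ("clf", LogisticRegression())])
model.fit(data[["notes", "body_part"]], data["high_risk"])

new_claim = pd.DataFrame({"notes": ["attorney retained, surgery scheduled"],
                          "body_part": ["shoulder"]})
print(model.predict_proba(new_claim)[0, 1])  # probability the claim is high-risk
```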

Our whole objective is to give these tools to everyone within the industry, not just the insurance companies or the TPAs. The employers should have access to this. The providers, really all the stakeholders within the workers’ comp system, can get a lot of value out of this type of analytics.

I think the usage is, honestly, much lower than it should be in the industry, but I do think that this will be leveraged more and more. &


Want to learn more about how group health data can be used to achieve savings in workers’ comp? Mike Paczolt will share more about the latest advancements in AI and predictive analytics and how they can be used to better compare group health and workers’ comp data at National Comp 2023.

Raquel Moreno is a staff writer with Risk & Insurance. She can be reached at [email protected].
