The AI Balancing Act: Navigating Innovation in Workers’ Compensation

In the rapidly evolving landscape of workers’ compensation, artificial intelligence emerges as both a powerful ally and a potential pitfall. Michael Cwynar, Senior Vice President of Product Delivery for Enlyte’s Casualty Solutions Group, offers a nuanced perspective on how technology is reshaping an industry built on precision and human expertise.
The world of workers’ compensation has always been defined by intricate details, complex regulations, and the critical need for accuracy. Now, generative AI promises to transform how professionals approach everything from claims processing to fraud detection. But this technological revolution comes with significant caveats—a reality Cwynar understands intimately.
The Promise of Intelligent Assistance
“One of the big potential benefits is using generative AI in a controlled and governed manner,” he explains. Traditionally, extracting meaningful insights from massive datasets required highly specialized technical skills—extensive SQL programming or advanced Python knowledge. Now, tools like Microsoft Copilot for Power BI are democratizing data exploration, allowing domain experts to ask sophisticated questions without deep technical training.
Imagine being able to simply query, “Give me the top ten procedures by provider in California for the last three years,” and receive immediate, actionable insights. This represents a fundamental shift in how professionals interact with complex information. Yet Cwynar is quick to temper excitement with a dose of pragmatic caution.
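For context, the "extensive SQL programming" such a question used to require looks something like the following. This is a rough sketch using an invented claims schema and invented sample rows, purely for illustration; the real schema and tooling are not described in this article:

```python
# Sketch: the hand-written SQL a specialist might previously have needed
# to answer "top ten procedures by provider in California for the last
# three years." Table and column names are hypothetical.
import sqlite3

conn = sqlite3.connect(":memory:")
conn.executescript("""
    CREATE TABLE claims (
        provider_id TEXT,
        procedure_code TEXT,
        state TEXT,
        service_year INTEGER
    );
    INSERT INTO claims VALUES
        ('P001', '99213', 'CA', 2023),
        ('P001', '99213', 'CA', 2024),
        ('P002', '97110', 'CA', 2024),
        ('P002', '97110', 'NY', 2024);
""")

rows = conn.execute("""
    SELECT provider_id, procedure_code, COUNT(*) AS n
    FROM claims
    WHERE state = 'CA' AND service_year >= 2022
    GROUP BY provider_id, procedure_code
    ORDER BY n DESC
    LIMIT 10
""").fetchall()

for provider, code, count in rows:
    print(provider, code, count)
```

With a natural-language layer, the domain expert simply asks the question and a tool like Copilot generates and runs a query of roughly this shape behind the scenes.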
The Hallucination Hazard
The risks of AI are as significant as its potential. He recalls a stark example from the legal world that illustrates the technology’s most dangerous pitfall: a lawyer in New York who used a generative AI engine to cite case law. The citations sounded so compelling that even the attorney didn’t realize they were entirely fabricated. “That’s the danger right now,” Cwynar warns. “The reasoning logic sounds so accurate that if you didn’t know better, you would literally not know any better.”
This “hallucination” phenomenon becomes especially treacherous in workers’ compensation, where decisions can directly impact an individual’s medical treatment and financial stability. Cwynar is unequivocal: “When it comes to adjusting claims or deciding treatment or giving care, there should always be a human in the loop.”
The regulatory landscape is rapidly evolving to address these emerging challenges. The National Association of Insurance Commissioners (NAIC) is developing a governance framework, with approximately 40 states already moving to adopt portions of these guidelines. For Cwynar, transparency isn’t just a buzzword—it’s a fundamental requirement. “If my personal medical information is in a language model, I want to know about it,” he states firmly.
The Chatbot Conundrum
Chatbot technology represents another complex frontier. Unlike traditional robotic process automation with predefined parameters, generative AI introduces unprecedented variability. A chatbot designed for a specific, controlled environment—like a provider call center with a curated dataset—might provide reliable information. But a broader application, such as a chatbot accessible to an injured worker, could potentially generate wildly inconsistent or misleading responses.
“Depending on how we ask a question, we could get very different answers,” Cwynar notes. The challenge becomes particularly acute when considering nuanced scenarios like explaining complex billing procedures or treatment recommendations.
A Strategic Approach: Crawl, Walk, Run
His recommended approach is methodical: the “crawl, walk, run” strategy. Organizations should start by clearly defining the problems they aim to solve, then test the technology against scenarios they already understand. This approach serves as both a validation method and a familiarization process.
A critical underlying concern is the fundamental opacity of generative AI models. “Some of these generative AI models have trillions of records of data, and none of us have any idea where it came from,” he explains. The same query could yield dramatically different results based on subtle variations in phrasing—a reality that demands constant vigilance.
Fraud Detection: A New Frontier
The most promising application might well be fraud detection. By analyzing patterns in temporary procedure codes, Cwynar discovered billions of dollars being billed through specific providers—insights that would previously have required extensive, time-consuming technical investigation.
“The beauty is that as a domain expert, you can now explore data in ways you couldn’t before,” he reflects. The ability to ask open-ended questions and then iteratively refine them represents a transformative approach to data analysis.
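The pattern Cwynar describes, surfacing outsized billing on temporary procedure codes, can be sketched as a simple aggregation. The providers, codes, amounts, and review threshold below are invented for illustration and are not drawn from the article:

```python
# Sketch: flag providers with unusually large totals billed under
# temporary ("T") procedure codes. All data here is hypothetical.
from collections import defaultdict

billing_records = [
    ("Provider A", "T1015", 4_000_000_000),
    ("Provider A", "99213", 1_200),
    ("Provider B", "T1015", 2_500_000_000),
    ("Provider C", "97110", 900),
]

# Sum billed amounts per provider across temporary procedure codes.
totals = defaultdict(int)
for provider, code, amount in billing_records:
    if code.startswith("T"):
        totals[provider] += amount

# Surface providers whose temporary-code billing exceeds a review threshold.
THRESHOLD = 1_000_000_000
flagged = {p: amt for p, amt in totals.items() if amt > THRESHOLD}
print(flagged)
```

The point is not the code itself but the workflow: a domain expert can now pose this question in plain language, see the result, and iteratively refine it without writing the aggregation by hand.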
Looking Forward: A Balanced Perspective
As industries across the board grapple with AI integration, workers’ compensation stands at a critical juncture. The technology offers unprecedented potential for efficiency and insight—but only when implemented with careful, strategic human guidance.
Ultimately, artificial intelligence in workers’ compensation isn’t about wholesale replacement of human expertise, but intelligent augmentation. It’s a powerful tool that requires disciplined, ethical handling. The most successful organizations will be those that view AI not as a solution, but as a sophisticated assistant—always subordinate to human judgment, expertise, and compassion.
Top Takeaway from Enlyte’s Michael Cwynar: Artificial intelligence is a powerful tool in workers’ compensation, but it remains just that—a tool. Human expertise, critical thinking, and ethical considerations must remain at the forefront of its application.