Risk Insider: Jason Beans
When Yelp Reviews Are Better Than Hospital Rating Systems
There is widespread industry agreement that moving towards reimbursing quality rather than quantity of care is an important means of controlling medical costs. But how do we define “quality”? And how do we quantify it?
A recent Health Affairs study illustrates the difficulty of those questions.
The study reviewed four popular hospital rating services (Consumer Reports, Leapfrog, Healthgrades, U.S. News & World Report) and found the measures they used so divergent that the resulting rankings bore little resemblance to one another:
- Not one hospital received high marks from all services.
- Only 10 percent of the hospitals rated highly by one service also received top marks from another.
- Twenty-seven hospitals were simultaneously rated among the nation’s best and worst by different services.
We deal with this frequently in our networks. We’ll have one client “absolutely” refuse to work with a provider, while another “absolutely” demands that same provider in their network.
Why such amazing disparity? It’s apparent that hospital rating services and our clients alike use different factors to measure quality, and weigh those factors differently.
One scoring system may value cost per episode, while another values cost per diem. Another system might reward great valet parking, while another focuses on infection rates. Even slight variances can massively impact ratings. At this point, a Yelp review is likely just as good … or better.
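To see how little it takes to flip a ranking, consider a toy sketch, where the hospitals, factor scores, and weights are all made-up numbers: two services score the same two hospitals on identical data and reach opposite conclusions, simply because they weigh the factors differently.

```python
# Two hypothetical rating services scoring the same two hospitals on the
# same raw data; only the factor weights differ (all numbers illustrative).

hospitals = {
    # factor scores on a 0-100 scale
    "Hospital A": {"infection_control": 90, "patient_experience": 60, "cost": 55},
    "Hospital B": {"infection_control": 60, "patient_experience": 90, "cost": 85},
}

service_1 = {"infection_control": 0.7, "patient_experience": 0.2, "cost": 0.1}
service_2 = {"infection_control": 0.1, "patient_experience": 0.5, "cost": 0.4}

def composite(factors, weights):
    """Weighted composite: sum of factor score times factor weight."""
    return sum(factors[f] * w for f, w in weights.items())

for name, factors in hospitals.items():
    s1 = round(composite(factors, service_1), 1)
    s2 = round(composite(factors, service_2), 1)
    print(name, s1, s2)
# Hospital A: 80.5 under service 1, 61.0 under service 2
# Hospital B: 68.5 under service 1, 85.0 under service 2
# Service 1 ranks A first; service 2 ranks B first -- opposite conclusions.
```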
So how do we get to meaningful provider ratings? The problem is clearly pervasive: in Rising’s 2014 Workers’ Compensation Benchmarking Study, medical management ranked as the top core competency impacting claim outcomes, yet only 29 percent of respondents rate their medical providers. As the Health Affairs study demonstrates, it’s genuinely hard to delineate the best from the worst, and trying to make those determinations can cause organizational paralysis.
So I recommend starting simple. First, evaluate which outcomes are most important. Do you value customer experience, clinical outcomes, or financial outcomes, and to what degree? Do you weigh factors differently by service type (e.g., for MRIs convenience weighs heavily, while for surgeries clinical outcomes dominate)? If your measurements don’t correlate with your goals, your process won’t produce valuable results.
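As a concrete starting point, here’s a minimal sketch of that first step. The service types, factor names, and weights are purely illustrative assumptions; substitute your organization’s own priorities.

```python
# Step one, sketched: declare which outcomes matter, and how much, per
# service type. Every name and weight here is an illustrative assumption.

FACTOR_WEIGHTS = {
    "mri":     {"convenience": 0.5, "clinical_outcome": 0.2, "cost": 0.3},
    "surgery": {"convenience": 0.1, "clinical_outcome": 0.7, "cost": 0.2},
}

def provider_score(service_type: str, factor_scores: dict) -> float:
    """Composite score for one provider on one service type (0-100 inputs)."""
    weights = FACTOR_WEIGHTS[service_type]
    return sum(factor_scores[f] * w for f, w in weights.items())

# The same provider profile can rate very differently by service type:
profile = {"convenience": 95, "clinical_outcome": 65, "cost": 70}
print(round(provider_score("mri", profile), 1))      # 81.5 -- strong for MRIs
print(round(provider_score("surgery", profile), 1))  # 69.0 -- middling for surgery
```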
After determining your most important factors, the second step is to carve providers from the bottom. This avoids the inertia that can come from trying to rate “top” providers too soon. It’s much easier to eliminate the outlier providers that cause the majority of bad outcomes, and doing so instantly improves your program.
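In code terms, step two might look like the sketch below. The 10 percent cutoff is an illustrative assumption; the right threshold should come from your own outcome data.

```python
# Step two, sketched: rather than ranking everyone, carve off the bottom
# tail of performers. The cutoff fraction is an illustrative assumption.

def carve_bottom(composite_scores: dict, cutoff_fraction: float = 0.10) -> set:
    """Return the providers whose composite score falls in the bottom tail."""
    ranked = sorted(composite_scores, key=composite_scores.get)  # worst first
    n_to_carve = max(1, int(len(ranked) * cutoff_fraction))
    return set(ranked[:n_to_carve])

# Illustrative scores for ten providers; flag the outlier for network review.
scores = {"Provider A": 82, "Provider B": 77, "Provider C": 31, "Provider D": 85,
          "Provider E": 79, "Provider F": 28, "Provider G": 80, "Provider H": 74,
          "Provider I": 76, "Provider J": 81}
print(carve_bottom(scores))  # {'Provider F'} -- the bottom 10% of ten
```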
Only after these steps would I recommend trying to establish the “best” providers. The “best” often handle the most difficult cases, with the longest recovery periods and, possibly, the “worst” outcomes. It’s easy to see how a gifted surgeon might suffer under many quality rating systems. On a positive note, the transition to ICD-10 will allow provider quality comparisons at a level of specificity never possible with ICD-9. In other words, we’ll actually be able to compare apples to apples over time.
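The sketch below illustrates the idea with made-up claim records and field names: group claims by their specific diagnosis code before comparing providers, so the surgeon taking the hardest cases is only measured against peers treating the same conditions.

```python
# Apples-to-apples, sketched: compare providers within the same ICD-10
# diagnosis code rather than across a mixed caseload. All data is made up.

from collections import defaultdict
from statistics import mean

claims = [
    # (provider, icd10_code, days_to_recovery) -- illustrative records
    ("Dr. Smith", "S82.201A", 120),  # complex tibia fracture
    ("Dr. Smith", "S82.201A", 135),
    ("Dr. Jones", "S93.401A", 21),   # routine ankle sprain
    ("Dr. Jones", "S93.401A", 25),
]

by_diagnosis = defaultdict(lambda: defaultdict(list))
for provider, code, days in claims:
    by_diagnosis[code][provider].append(days)

# Average recovery per provider, within a single diagnosis code only --
# Dr. Smith's long recoveries never get compared to Dr. Jones's sprains.
for code, providers in by_diagnosis.items():
    for provider, recoveries in providers.items():
        print(code, provider, mean(recoveries))
```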
With this three-step iterative approach, you can create and refine measurements that bring real, long-term value to your organization … making your system better than Yelp.