Getting quality measures right: Solving the maze of MIPS, HEDIS, NQF, numerators, and denominators

Kanav Hasija

Pre-Text

Innovaccer has dedicated itself to aiding healthcare organizations on their journey toward providing high-quality, value-based care through the right use of technology and data. For the past three years, we have been crunching data on over two million managed lives for more than a dozen value-based care organizations and dozens of value-based contracts, with a team of over one hundred engineers, data scientists, product managers, and customer relationship managers. The time has come to disseminate our knowledge, our best practices, and, most importantly, the best use of our products to efficiently drive value-based care. This blog series is our attempt to reach out to customers and share our stories.

 

Background

I have sat in on various ACO leadership meetings, and interestingly enough, half of the time was often spent on postmortem discussions of measure numbers. This observation led us to incorporate more transparency into how the measures are computed and to further dissect the outputs to find the root cause of problems. Still, one problem persists for our customers: how do we select the correct definition for each scenario and each individual payer, and are we getting the right numbers? In this post, I use anecdotes to demystify the choice of measure definitions and lay out a scientific framework for analyzing why measure numbers are not what we think they should be, covering both the analytics and the workflow designs needed to stay on top of quality measures.

 

Importance of quality measures

I will not spend much time on the “why” of quality measures, as I expect many readers are well versed in it; however, I do want to highlight the financial implications. A good quality score in value-based contracts not only helps increase your share of savings or earn pay-for-performance dollars; our research suggests that performing well on quality measures also reduces the total cost of care for your population, which leads us back to the very premise of why quality measures were built in the first place.

 

Share in Shared Savings: Many value-based contracts like MSSP and UHC have a share of savings proportional to quality measure performance.

Pay for Performance (P4P): Various P4P contracts pay out dollars for every patient who met a quality measure, or based on a population-level quality score.

Reduced cost, thus higher savings: As illustrated below, improvement in quality measures reduces the total cost of care, thus increasing the probability of shared savings.

 

Graph 1. MSSP ACO Performance: Risk-adjusted PMPY vs. Composite Quality Score

The data represented above covers 476 MSSP ACOs for 2015-17. Each dot represents one ACO and performance year, with over 990 ACO-year pairs in total. The horizontal axis is the composite quality score across the 33 ACO measures, while the vertical axis is the risk-adjusted PMPY for the ACO population, i.e., average expenditure per person-year divided by average risk score. As observed, ACOs with a higher composite quality score usually have a lower risk-adjusted PMPY, and the trend line slopes downward.
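To make the risk adjustment concrete, here is a minimal sketch of the computation behind each dot in the graph, assuming one record per ACO-year; the field names and figures are hypothetical, not actual MSSP data.

```python
# A minimal sketch of the risk-adjusted PMPY computation described above,
# assuming hypothetical ACO-year records (all field names and figures invented).

def risk_adjusted_pmpy(total_expenditure: float,
                       person_years: float,
                       avg_risk_score: float) -> float:
    """Average expenditure per person-year, divided by the average risk score."""
    pmpy = total_expenditure / person_years
    return pmpy / avg_risk_score

aco_years = [
    {"total_expenditure": 105_000_000, "person_years": 10_000,
     "avg_risk_score": 1.05, "composite_quality_score": 92.0},
    {"total_expenditure": 98_000_000, "person_years": 9_500,
     "avg_risk_score": 0.98, "composite_quality_score": 88.5},
]

for aco in aco_years:
    adj = risk_adjusted_pmpy(aco["total_expenditure"],
                             aco["person_years"],
                             aco["avg_risk_score"])
    print(f"quality={aco['composite_quality_score']:.1f} "
          f"risk-adjusted PMPY=${adj:,.0f}")
```

Dividing by the average risk score is what lets ACOs with sicker populations be compared against the same trend line.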

 

Why might a measure output be wrong?

Usually, the hunch that a measure output might be wrong comes from one of two places: a) another payer or vendor has reported a different output, or b) based on past experience in your health network, you have a hunch that the number should be different. There are only three possible reasons why a measure output might be wrong:

  1. Using the wrong measure definition: We have observed cases wherein the payer reported a measure number based on a claims-based measure definition while we were comparing it with a clinical-based measure definition, which will always yield different results. Also, in some cases payers deviate from standard definitions, like HEDIS for commercial payers, and thus differences in measure outputs occur.
  2. Data is not standardized: Even if you have evidence of an event that should qualify a patient for a measure, it might not be associated with the right codes per the measure definition. In other words, structured clinical data does not exist; what can be found in the EMR is unstructured data such as notes.

“Various MIPS measures, for example Diabetes Eye Exam, recommend finding eye exams using SNOMED codes in clinical data. If your clinical systems do not generate SNOMED codes from provider notes, you will miss a lot of qualifying eye exams for this particular measure.”

  3. Data is not comprehensive: Your population health data system might not be fetching data from all clinical or claims sources, leading to lower measure outputs. In addition, sometimes you may think the data is present, but it might not be usable.

“Clinical data from a practice is consumed by the population health system using a C-CDA export. The C-CDA generated from the practice’s EMR does not link procedures with providers. The HEDIS definition of Diabetes Eye Exam, which is used by most commercial payers, recommends finding evidence of an eye exam using CPT/HCPCS procedure codes, but only for providers who are optometrists. If the C-CDA does not link a procedure with a provider, the population health system will never be able to report this measure using that practice’s clinical data.”
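To make the first two pitfalls concrete, here is a minimal sketch (using abbreviated sample codes from the value-set table later in this post) of how the same completed eye exam can satisfy a claims-based definition yet fail a clinical one when the EMR never produced SNOMED codes.

```python
# Illustrative sketch: the same eye exam qualifies (or not) depending on which
# definition's value set you check against. Code lists are abbreviated samples
# from the value sets shown later in this post.

WEB_INTERFACE_SNOMED = {"252779009", "252780007", "410451008"}  # sample only
NQF_CPT = {"2022F", "2024F", "2026F", "3072F"}                   # sample only

# Events as (code_system, code) pairs pulled from claims and clinical feeds.
patient_events = [
    ("CPT", "2022F"),                             # eye exam billed on a claim
    ("TEXT", "dilated retinal exam performed"),   # unstructured EMR note
]

def meets_numerator(events, system, value_set):
    """True if any event carries a code from the given system and value set."""
    return any(s == system and c in value_set for s, c in events)

print("Claims (NQF) definition met:",
      meets_numerator(patient_events, "CPT", NQF_CPT))                  # True
print("Clinical (Web Interface) definition met:",
      meets_numerator(patient_events, "SNOMED", WEB_INTERFACE_SNOMED))  # False

# The exam clearly happened, but without SNOMED coding the clinical
# definition misses it; hence two vendors can report different numbers.
```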

I would like to address ways to overcome these pitfalls in the following sections.

Is your data comprehensive, recent, and standardized?

There are three facets of data quality for quality measures:

  1. Comprehensiveness: Are we fetching all data from clinical and claims sources?
  2. Recency: Is it fetched frequently, and is it recent enough?
  3. Standardization: Are we getting standardized codes for diagnoses, procedures, labs, vitals, medications, social history, assessments, location of service, revenue center codes, etc?
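As a rough illustration, the sketch below runs one simple check per facet over a hypothetical event schema (source, code system, service date); the source names and code-system list are assumptions, not a prescribed format.

```python
from datetime import date

# A minimal sketch of the three data-quality checks, assuming each ingested
# event records its source, code system, and service date (hypothetical schema).

EXPECTED_SOURCES = {"emr_practice_a", "emr_practice_b", "payer_claims"}
KNOWN_CODE_SYSTEMS = {"SNOMED", "CPT", "HCPCS", "ICD-10", "LOINC", "RXNORM"}

events = [
    {"source": "emr_practice_a", "code_system": "SNOMED", "service_date": date(2024, 5, 2)},
    {"source": "payer_claims",   "code_system": "CPT",    "service_date": date(2024, 3, 18)},
    {"source": "emr_practice_a", "code_system": None,     "service_date": date(2024, 5, 9)},  # free text only
]

# 1. Comprehensiveness: are all expected sources actually feeding data?
missing_sources = EXPECTED_SOURCES - {e["source"] for e in events}

# 2. Recency: how stale is the newest event we have?
staleness = date.today() - max(e["service_date"] for e in events)

# 3. Standardization: what share of events carry a recognized code system?
coded = sum(1 for e in events if e["code_system"] in KNOWN_CODE_SYSTEMS)
standardization_rate = coded / len(events)

print(f"missing sources: {missing_sources or 'none'}")
print(f"newest event is {staleness.days} days old")
print(f"{standardization_rate:.0%} of events carry standardized codes")
```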

 

Innovaccer has built a feature called “Data Gaps” to stay on top of this issue.

Clinical vs. Claims vs. Hybrid measure definitions

We have all heard the well-known acronyms for quality measures: GPRO, MIPS, HEDIS, and NQF. It is important to understand which definition covers what, and the differences across these measure-authoring bodies. There are three kinds of quality measure definitions:

  1. Clinical: Definitions geared to pick up evidence from clinical data sources only
  2. Claims: Definitions geared to pick up evidence from claims data sources only
  3. Hybrid: Definitions geared to pick up evidence from both data sources

 

GPRO/Web Interface: Clinical definitions; used for MSSP/Medicare populations.

NQF: Claims definitions; used for government and commercial populations.

HEDIS: Hybrid (claims/clinical) definitions; used primarily for commercial/MA populations.
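One way to keep this mapping operational is a small registry recording which data sources each authoring body's definitions draw on; the sketch below is hypothetical and not a depiction of any product's internals.

```python
# Hypothetical registry encoding the mapping above: which data sources each
# authoring body's definitions draw evidence from.

MEASURE_DEFINITIONS = {
    "GPRO/Web Interface": {"sources": {"clinical"},           "population": "MSSP/Medicare"},
    "NQF":                {"sources": {"claims"},             "population": "Gov't and Commercial"},
    "HEDIS":              {"sources": {"claims", "clinical"}, "population": "Commercial/MA primarily"},
}

def definitions_usable_with(available_sources: set) -> list:
    """Which definitions can be fully evaluated given the feeds we ingest?"""
    return [name for name, d in MEASURE_DEFINITIONS.items()
            if d["sources"] <= available_sources]

print(definitions_usable_with({"claims"}))              # ['NQF']
print(definitions_usable_with({"claims", "clinical"}))  # all three
```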

 

MSSP ACOs have to report quality measures back to CMS based on Web Interface definitions during the Jan-Mar reporting period that follows each calendar-year performance period. Let us take the example of the Diabetes Eye Exam measure again. The Web Interface definition suggests that an eye exam be validated using SNOMED codes. What if your clinical network is not good at converting provider notes to SNOMED codes? And what about eye exams done outside of your clinically integrated network, given that an eye exam could be completed by any optometrist?

 

The big question in front of us is: “How do you track measure performance during the year?” If we use Web Interface definitions and track month-by-month improvement on that particular measure, it will usually be underreported for the reasons mentioned above.

Differences in measure definitions for the Diabetes Eye Exam: case in point, the codes used to identify retinal screening

GPRO / Web Interface (SNOMED codes): 252779009, 252780007, 252781006, 252782004, 252783009, 252784003, 252788000, 252789008, 252790004, 274795007, 274798009, 308110009, 314971001, 314972008, 410451008, 410452001, 410453006, 410455004, 420213007, 425816006, 427478009, 6615001

NQF (CPT / HCPCS codes): 2022F, 2024F, 2026F, 3072F

HEDIS (CPT / HCPCS codes): 67028, 67030, 67031, 67036, 67039, 67040, 67041, 67042, 67043, 67101, 67105, 67107, 67108, 67110, 67112, 67113, 67121, 67141, 67145, 67208, 67210, 67218, 67220, 67221, 67227, 67228, 92002, 92004, 92012, 92014, 92018, 92019, 92134, 92225, 92226, 92227, 92228, 92230, 92235, 92240, 92250, 92260, 99203, 99204, 99205, 99213, 99214, 99215, 99242, 99243, 99244, 99245

 

Tracking versus reporting: The special mention for MSSP contracts

Innovaccer does not recommend using Web Interface definitions to track how well you are doing on quality measures throughout the year; they should be used only for reporting purposes during the Jan-Mar window for the previous performance year. For tracking purposes, Innovaccer finds that combining definitions from Web Interface, HEDIS, and NQF makes the most sense. We call this a “Hybrid Definition.” Combining these definitions casts a wider net for evidence and gives a truer sense of where you stand on a measure; however, such a solution calls for a careful workflow design, discussed below.
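A minimal sketch of the union logic behind such a hybrid definition, assuming the abbreviated sample value sets shown earlier: a patient counts toward the numerator for tracking purposes if the evidence qualifies under any of the three definitions.

```python
# A minimal sketch of a "Hybrid Definition" for tracking: the numerator is met
# if evidence qualifies under ANY authoring body's value set. Code lists are
# abbreviated samples from the table above, not the complete value sets.

DEFINITIONS = {
    "Web Interface": ("SNOMED", {"252779009", "252780007", "410451008"}),
    "NQF":           ("CPT",    {"2022F", "2024F", "2026F", "3072F"}),
    "HEDIS":         ("CPT",    {"67028", "92002", "92250"}),
}

def hybrid_numerator(events):
    """Return the list of definitions under which this patient's evidence qualifies."""
    hits = []
    for name, (system, value_set) in DEFINITIONS.items():
        if any(s == system and c in value_set for s, c in events):
            hits.append(name)
    return hits

# A patient whose only evidence is a claims-billed CPT eye exam:
events = [("CPT", "92250")]
hits = hybrid_numerator(events)
print("met for tracking:", bool(hits), "via", hits)  # True, via ['HEDIS']
```

Note that a patient who qualifies only via HEDIS or NQF codes still needs SNOMED documentation before Web Interface reporting, which is exactly the workflow gap the next section addresses.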

 

Workflow design

Hybrid definitions give a true sense of measure performance; however, if the workflow is not designed well, they will add more pressure during the Web Interface reporting period. Thus, Innovaccer recommends the following workflow for patients who meet measures purely through claims data:

 

  1. Was the service location of the encounter that met the measure outside the network? If so, a care coordinator should obtain evidence from the out-of-network practice and immediately document it in the EHR with the relevant SNOMED code.

  2. Was the service location in-network? If so, the relevant SNOMED code must be documented in the EHR after reviewing the corresponding provider note, by opening the encounter for the service date on which the measure was met.
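Putting the two questions together, here is a hedged sketch of the routing logic: any patient who met the measure for tracking but lacks the SNOMED evidence needed for Web Interface reporting lands on a documentation worklist, with the action depending on where the service occurred. The network roster and field names are hypothetical.

```python
# Hypothetical sketch of the workflow trigger: patients who met the measure
# for tracking (e.g., via claims) but lack the SNOMED evidence required for
# Web Interface reporting are routed to a documentation worklist.

IN_NETWORK_LOCATIONS = {"clinic_main", "clinic_east"}  # hypothetical roster

def route_for_documentation(patient):
    """Return the documentation task for this patient, or None if reportable."""
    has_snomed = any(s == "SNOMED" for s, _ in patient["evidence"])
    if has_snomed:
        return None  # already reportable via Web Interface
    if patient["service_location"] in IN_NETWORK_LOCATIONS:
        return ("Review the provider note for the service date; "
                "document the SNOMED code in the EHR")
    return ("Care coordinator: obtain evidence from the out-of-network "
            "practice, then document it in the EHR with the SNOMED code")

patient = {"evidence": [("CPT", "92250")], "service_location": "outside_optometrist"}
print(route_for_documentation(patient))
```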

 

We would love to know your thoughts on this new approach. Also, if you have any stories to share about quality management, please feel free to write in the comments section below or reach out to me at kanav.hasija@innovaccer.com.

 

 

Kanav Hasija

Chief Product Officer @ Innovaccer
