Health Dialog Connections

Considerations in Evaluating your Chronic Care Management and Other Population Health Programs

In a previous blog post, we emphasized the three-pronged strategy that risk-bearing organizations should employ when implementing chronic care management and other population health management programs. The second post in the series discussed the identification and stratification (ID and Strat) analytic. In this post, I will review the second component of the strategy we have outlined: Evaluation, the hygiene factor.

Part of an analytic team’s role is to conduct an unbiased evaluation of your intervention program. A review of the Population Health Alliance/Health Enhancement Research Organization’s publication “Core Metrics for Employee Health Management” is worth your time. You will find a comprehensive assessment of the various methodologies for evaluating the performance of a program and for interpreting changes in the health status of a population (these methods apply to other groups in the same way that they apply to employers, so don’t let the name of the publication deter you).

In order to determine the right evaluation methodology for your group, you must balance:

  1. The goal of the program – Are you evaluating qualitative or quantitative outcomes? If you are looking at qualitative outcomes, such as whether your employees feel more loyalty toward you as a result of the program, an employee survey may best suit your needs. If you would like to understand an outcome such as cost savings or quality of care improvements, you will need to collect and analyze clinical or administrative data.
  2. The data at your disposal – If you are evaluating quantitative outcomes, you will need to understand what data you have available or can gain access to. For example, paid administrative claims data is required to evaluate cost and utilization outcomes, whereas biometric and/or HRA data is required to evaluate health and behavioral outcomes.
  3. The certainty of the result – The “gold standard” in evaluative research is a double-blind randomized controlled trial, in which neither participants nor evaluators know which arm of the trial a participant is in. For obvious reasons, this is difficult or impossible to implement in commercial settings. As outlined in the PHA/HERO publication, there are other options that provide less certain results but are more realistic to implement, such as a participant vs. non-participant comparison or an assessment with matched controls in a non-exposed population.
  4. The time/resources you are willing to commit – The analytic staff resources at your disposal will be a key factor in the type of evaluation you are able to produce. The time frame of the data available will also be a factor. For example, a cost savings analysis often requires a year of data and at least three months of run-out before a thorough evaluation can be completed, whereas for a behavioral outcomes analysis, biometric data at two time points (pre vs. post) may be sufficient.
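To make the participant vs. non-participant design concrete, here is a minimal sketch of a difference-in-differences comparison on per-member costs. All figures and variable names below are hypothetical, invented for illustration only; a real evaluation would use your own claims data and, typically, statistical tests and matched controls.

```python
# Illustrative sketch only: a participant vs. non-participant comparison
# using a simple difference-in-differences on hypothetical per-member costs.

def mean(values):
    return sum(values) / len(values)

def difference_in_differences(part_pre, part_post, nonpart_pre, nonpart_post):
    """Change among participants minus change among non-participants.

    A negative result suggests costs grew more slowly (or fell more)
    in the participant group than in the comparison group.
    """
    participant_change = mean(part_post) - mean(part_pre)
    comparison_change = mean(nonpart_post) - mean(nonpart_pre)
    return participant_change - comparison_change

# Hypothetical per-member-per-month costs, pre- and post-intervention
participants_pre = [410.0, 520.0, 380.0, 610.0]
participants_post = [420.0, 510.0, 400.0, 590.0]
nonparticipants_pre = [400.0, 530.0, 390.0, 600.0]
nonparticipants_post = [450.0, 570.0, 440.0, 640.0]

effect = difference_in_differences(
    participants_pre, participants_post,
    nonparticipants_pre, nonparticipants_post)
print(round(effect, 2))  # negative: participants' costs rose less
```

The comparison-group subtraction is what distinguishes this design from a plain pre/post analysis: it nets out cost trends that would have occurred anyway, though unlike a randomized trial it cannot rule out selection effects between participants and non-participants.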

As the population health management services industry has evolved, the application of evaluation has evolved with it. The old approach was to implement an intervention and then follow up at least 12 months later with a retrospective look at the cost effectiveness of the program.

While cost effectiveness is still important, risk-bearing entities today want proof, or leading indicators, that change in engagement and/or health status is happening in their populations. Evaluation of these “intermediate” metrics (such as engagement, goal-setting, and behavioral impacts) is important, and we will review it in detail in the final blog post in this series next month.

Check out Health Dialog’s Care Pathways analytic framework, and for even more insight, read our latest white paper: Delaying Disease Progression Across a Population.
