March 17, 2026
For years, the annual cycle for National Committee for Quality Assurance (NCQA) HEDIS® measure development and deployment has been optimized for retrospective measurement. That design made sense when quality measurement was largely about looking backward at performance. But as health plans increasingly invest in digital HEDIS® measurement, the timeline that governs measure releases and implementations must change too.
Rebecca Jacobson, MD, MS, FACMI
Co-Founder, CEO, and President
This two-part blog addresses two related “time” problems facing health plans pursuing digital HEDIS® measurement:

1. The annual timeline governing how digital measures are released, validated, and deployed, which must put validated measures in health plans’ hands early enough to support prospective improvement (the focus of this post).
2. The multi-year timeline for transitioning from traditional HEDIS® operations to a fully digital measurement ecosystem (the focus of Part 2).
These two timelines are deeply connected. Without a stable annual cycle, the industry will struggle to operationalize digital measurement. And without a realistic multi-year transition plan, health plans will never reach the point where they can leverage the full benefits of digital quality.
The traditional HEDIS® timeline was designed for retrospective reporting. That made it operationally manageable but limited quality teams’ ability to intervene during the measurement year.
Digital measurement using technologies like FHIR and Clinical Quality Language (CQL) creates the opportunity for much faster quality improvement cycles. But those cycles only work if the operational timeline enables measures to be deployed early enough in the measurement year to influence care.
NCQA has already made important improvements toward prospective digital quality workflows. However, the current digital timeline still reflects many of the constraints of the legacy retrospective model.
To unlock the real value of digital measurement, the industry needs a stable and predictable annual cadence for:

- releasing measure specifications and implementation guides (IGs)
- releasing executable CQL with test decks
- validating vendor implementations
- deploying validated measures to health plans
Vendors especially need a reliable, well-understood timeline aligned with their own development cycles and customer implementations.
The new digital ecosystem enables much faster deployment cycles. Ideally, health plans should receive validated measures as close as possible to the start of the measurement year, giving them the time and data needed to drive prospective compliance.
Here’s how the three timelines compare:
**Traditional HEDIS® timeline**

| Milestone | Timing |
|---|---|
| Initial spec released | August 1 of year preceding Measurement Year |
| Final spec released | March 31 of Measurement Year |
| Vendors convert to code | Up to July 1 of Measurement Year |
| Certification deadline | July 1 of Measurement Year |
| Certified measures released to customers | July through October of Measurement Year |
In this model, many health plans do not receive final operational measures until well into the measurement year, making prospective intervention nearly impossible. Plans often try to anticipate measure changes and run prior-year logic with adjusted dates, but this adds administrative burden and manual effort.
**Current digital timeline**

| Milestone | Timing |
|---|---|
| Initial spec released | August 1 of year preceding Measurement Year |
| New IG released | March of Measurement Year |
| CQL released with test decks | June of Measurement Year |
| Validation begins | June of Measurement Year |
| Vendors release validated CQL to customers | As early as July of Measurement Year |
This is a meaningful step forward, but it still delivers operational measures too late to maximize prospective impact. The architecture and process are right; everything just needs to happen a few months earlier.
**Proposed digital timeline**

| Milestone | Timing |
|---|---|
| Spec released | August 1 of year preceding Measurement Year |
| New IG released | August 1 of year preceding Measurement Year |
| CQL released with test decks | Initial: October of year preceding Measurement Year / Final: January of Measurement Year |
| Validation begins | January of Measurement Year |
| Vendors release validated CQL to customers | As early as February of Measurement Year |
This approach would allow health plans to begin the measurement year with validated measures already deployed, unlocking the full promise of digital quality.
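As a rough illustration of what is at stake, the delivery dates from the three tables above can be translated into remaining intervention windows. The delivery months below are assumptions drawn from the tables (using the latest date for the traditional timeline and the earliest dates for the digital timelines); this is a sketch, not an official calculation.

```python
# Assumed month in which validated measures reach health plans
# under each timeline described above (1 = January, 12 = December).
delivery_month = {
    "traditional": 10,       # certified measures released July-October
    "current digital": 7,    # validated CQL as early as July
    "proposed": 2,           # validated CQL as early as February
}

for timeline, month in delivery_month.items():
    # Months of the measurement year left for prospective intervention,
    # counting the delivery month itself.
    months_remaining = 12 - month + 1
    print(f"{timeline}: ~{months_remaining} months left for prospective intervention")
```

Under these assumptions, the proposed timeline roughly triples the intervention window relative to today's traditional cycle (about 11 months versus 3), which is the core operational argument for shifting every milestone earlier.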
Several practical factors make this shift essential.
New implementation guides should ideally be released several months before measure logic so health plans can adjust their data pipelines. If that isn’t possible, the IG should at minimum arrive alongside the initial measure release.
Today, NCQA releases all measures simultaneously. But in reality, most annual updates are relatively simple, while a smaller subset involves major logic changes or entirely new measures. A staged release model could accelerate the entire ecosystem.
A practical approach might look like this:

1. Release the majority of measures, those with only minor annual updates, on the earliest possible schedule so vendors can begin conversion immediately.
2. Release measures with major logic changes or entirely new measures on a later, clearly communicated date.
This would allow vendors and health plans to begin validation and implementation on the majority of measures much earlier, rather than waiting for everything to be finalized at once.
In Part 2, we’ll break down what a realistic transition from traditional HEDIS® to digital quality measurement by 2030 looks like for health plans.
The path to digital quality measurement depends on solving these two interrelated time problems simultaneously.
First, the industry needs a stable annual timeline that puts validated measures in health plans’ hands early enough to support prospective care improvement, not just retrospective reporting.
Second, health plans need a deliberate multi-year transition strategy that systematically moves them from traditional HEDIS® operations to a fully digital measurement ecosystem.
Solve both, and digital measurement can finally deliver on its promise: using quality data during the measurement year to improve care, not just to report on it after the fact.