
Why Interoperability Still Fails and How We Can Fix It

For nearly two decades, interoperability has been described as healthcare’s great unlock - the prerequisite for coordinated care, AI-driven insights, population health, and a functioning digital ecosystem. Policymakers have ratcheted up the pressure at every turn. The 21st Century Cures Act outlawed information blocking. USCDI created a national floor for what counts as essential clinical data. And in December 2023, TEFCA’s long-awaited national “network of networks” went live, with the first QHINs designated to make nationwide exchange predictable and standardized.

On paper, the United States has never been closer to seamless data exchange. Yet in practice, clinicians still wait for faxed notes, patients still carry discharge summaries in folders, and health systems still spend millions reconciling duplicate records. The gap between legal interoperability and functional interoperability remains stubbornly wide.

To understand why, we need to stop framing interoperability as a purely technical problem. The real blockers are semantic mismatches, workflow failures, governance gaps, and misaligned incentives that undermine even the most sophisticated technical exchange frameworks. And if healthcare wants to move from a model of “data movement” to “data usefulness,” the industry needs a fundamentally different approach, one that prioritizes usability, trust, and real-world integration over compliance checklists.

The Road to Interoperability: A Decade of Ambition, Half a Decade of Acceleration

It’s easy to forget how far the industry has come. In the early 2010s, interoperability meant connecting local systems using C-CDA documents - XML files that, in practice, read like multi-page PDFs dressed up as structured data. Health information exchanges (HIEs) attempted to unify regional data, but struggled with sustainability and uneven participation.

Then the 21st Century Cures Act took a more forceful stance: no more information blocking; patient access is mandatory; and certified health IT must support a national standard for core data elements.

USCDI emerged from this mandate, expanding annually to include more detailed, computable data, from clinical notes to social determinants, diagnostic imaging references, payer information, and, as of the 2025 update, metadata around functional status and insurance coverage. This expansion was meant to increase the granularity and utility of exchanged records, not simply their volume.
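
To ground what “computable” means here, the sketch below shows a lab result expressed as a FHIR R4 Observation, the kind of USCDI-aligned record certified systems are expected to exchange. The resource structure and the LOINC code follow the public FHIR and LOINC specifications; the patient reference and the values are illustrative assumptions, not data from any real system.

```python
# A minimal, illustrative FHIR R4 Observation for a lab result.
# Structure follows the public FHIR spec; the values are invented.
observation = {
    "resourceType": "Observation",
    "status": "final",
    "category": [{
        "coding": [{
            "system": "http://terminology.hl7.org/CodeSystem/observation-category",
            "code": "laboratory",
        }]
    }],
    "code": {
        # LOINC 2345-7: Glucose [Mass/volume] in Serum or Plasma.
        "coding": [{
            "system": "http://loinc.org",
            "code": "2345-7",
            "display": "Glucose [Mass/volume] in Serum or Plasma",
        }]
    },
    "subject": {"reference": "Patient/example"},  # hypothetical patient ID
    "valueQuantity": {
        "value": 95,
        "unit": "mg/dL",
        "system": "http://unitsofmeasure.org",
        "code": "mg/dL",
    },
}
```

Because every field is coded against a shared terminology, a receiving system can act on the value without human reinterpretation - which is exactly the property USCDI’s expansion is meant to guarantee.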

The culmination of these efforts came with TEFCA. Launched formally in 2022 and strengthened through the 2024 HTI-2 rule, TEFCA established a governance blueprint and a new class of entities - Qualified Health Information Networks, or QHINs - responsible for enabling trusted national exchange. In theory, connecting to a QHIN means connecting to the nation.

But as of early 2025, TEFCA still doesn’t guarantee consistency in data formats, completeness of clinical histories, or meaningful integration into care workflows. It solves the “pipes” but not the “plumbing.”

Why Interoperability Keeps Failing Despite Progress

Despite a decade of policy pressure and a rapidly maturing technical landscape, interoperability continues to fall short because the industry has consistently addressed the problem at the infrastructure level rather than at the meaning-making level. The most persistent barrier is semantic misalignment: even when systems exchange data according to FHIR or USCDI requirements, they rarely interpret that data in the same way. A lab result might appear as a clean, LOINC-coded entry in one EHR, a shorthand custom value in another, and unstructured text in a clinical note somewhere else. The container is standardized, but the content remains fragmented, forcing clinicians to manually reinterpret information that should have been computable.
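
To see what that fragmentation costs, consider a hypothetical normalization layer that tries to map whatever a source system sends into a canonical LOINC-coded result. The local-code table, field names, and regex below are assumptions for illustration, not any vendor’s actual mapping, but they show the translation work that standardized containers still leave behind.

```python
# Hypothetical normalization layer: map heterogeneous lab-result
# representations onto one canonical LOINC-coded form.
import re

# Assumed site-specific shorthand codes mapped to LOINC (illustrative only).
LOCAL_TO_LOINC = {
    "GLU": "2345-7",       # one EHR's shorthand for serum glucose
    "GLUC-SER": "2345-7",  # another vendor's custom value
}

def normalize_lab(raw: dict) -> dict | None:
    """Return {'loinc', 'value', 'unit'} or None when the source data
    is too unstructured to map safely without human review."""
    code = raw.get("code", "")
    if code.startswith("http://loinc.org|"):
        loinc = code.split("|", 1)[1]          # already LOINC-coded
    elif code in LOCAL_TO_LOINC:
        loinc = LOCAL_TO_LOINC[code]           # site shorthand, mappable
    else:
        # Free text in a note: try to salvage a glucose value, else give up.
        match = re.search(r"glucose\D*(\d+)\s*mg/dL", raw.get("text", ""), re.I)
        if not match:
            return None                        # route to human review
        return {"loinc": "2345-7", "value": int(match.group(1)), "unit": "mg/dL"}
    return {"loinc": loinc, "value": raw["value"], "unit": raw.get("unit", "")}

# e.g. normalize_lab({"code": "GLU", "value": 102, "unit": "mg/dL"})
# -> {"loinc": "2345-7", "value": 102, "unit": "mg/dL"}
```

Three systems, three representations of the same measurement; the mapping table and the regex are stand-ins for the large, mostly manual curation effort that real semantic normalization demands.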

Workflow integration further complicates the picture. Health systems often treat interoperability as a technology milestone rather than a clinical one, satisfying compliance requirements by enabling data retrieval without ensuring that external information fits naturally into clinical routines. As a result, incoming data frequently lands in isolated tabs, external viewers, or cluttered timelines where it adds cognitive burden rather than reducing it. In many organizations, the exchange of data is considered the final goal, even if the people delivering care cannot meaningfully use what arrives.

Compounding these issues are identity and consent challenges that undermine the reliability of shared information. The absence of a national patient identifier forces organizations to rely on probabilistic matching based on inconsistent demographic data, leading to duplicates, overlays, and mismatched records. Consent rules add another layer of complexity: while TEFCA attempts to harmonize policies nationally, state-based protections around behavioral health, reproductive care, HIV status, or minors still require granular filtering. When clinicians cannot trust the provenance, identity, or completeness of imported data, they will avoid relying on it, regardless of how technically interoperable the system may be.
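
A toy scoring function makes the fragility of probabilistic matching easy to see. The field weights and threshold below are invented for illustration - production master patient indexes use far richer techniques such as phonetic encoding and referential data - but the failure mode is identical: inconsistent demographics push true matches below the cutoff.

```python
# Toy probabilistic patient matcher. Weights and threshold are
# illustrative assumptions, not values from any real matching engine.
from dataclasses import dataclass

@dataclass
class Demographics:
    last_name: str
    first_name: str
    dob: str       # ISO date, e.g. "1980-04-12"
    zip_code: str

# How much exact agreement on each field contributes to the score.
WEIGHTS = {"last_name": 0.35, "first_name": 0.20, "dob": 0.35, "zip_code": 0.10}
MATCH_THRESHOLD = 0.80  # illustrative cutoff

def match_score(a: Demographics, b: Demographics) -> float:
    score = 0.0
    for field, weight in WEIGHTS.items():
        va = getattr(a, field).strip().lower()
        vb = getattr(b, field).strip().lower()
        if va and va == vb:
            score += weight
    return score

# A nickname ("Liz" vs "Elizabeth") plus an outdated ZIP code silently
# drops this true match below the threshold, so a duplicate record is born.
a = Demographics("Carter", "Elizabeth", "1980-04-12", "30301")
b = Demographics("Carter", "Liz", "1980-04-12", "30318")
print(match_score(a, b) >= MATCH_THRESHOLD)  # False
```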

Incentives also skew behavior. Vendors are rewarded for meeting certification checkboxes, not for delivering high-value interoperability experiences. Providers face penalties for information blocking but rarely receive meaningful benefits for creating clinically impactful exchange workflows. The result is a culture of minimal compliance, where interoperability is treated as a requirement to fulfill rather than a capability to optimize.

Finally, the pace of standards evolution continues to create uneven readiness across the ecosystem. USCDI expands every year, FHIR releases accumulate, and TEFCA deadlines stretch into 2026 and beyond. Large vendors can keep up with this cadence; smaller organizations struggle, often running mismatched versions of standards that break expectations between systems. Even when the technical pipes connect, the underlying data models rarely align.
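
One defensive pattern, sketched below, is to interrogate a peer server’s declared capabilities before exchanging anything. The /metadata endpoint and the fhirVersion element are defined by the published FHIR specification; the base URL and the accepted-version policy here are illustrative assumptions.

```python
# Check a peer FHIR server's declared version before attempting exchange.
import requests

ACCEPTED_VERSIONS = {"4.0.1"}  # e.g. this system only speaks FHIR R4

def peer_is_compatible(base_url: str) -> bool:
    resp = requests.get(
        f"{base_url}/metadata",
        headers={"Accept": "application/fhir+json"},
        timeout=10,
    )
    resp.raise_for_status()
    capability = resp.json()  # a CapabilityStatement resource
    return capability.get("fhirVersion") in ACCEPTED_VERSIONS

# Hypothetical usage:
# if not peer_is_compatible("https://partner.example.org/fhir"):
#     route_to_manual_reconciliation()
```

Even this trivial check is more than many integrations perform today; version mismatches are often discovered only when a parse fails in production.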

Taken together, these issues reveal that interoperability has never been a purely technical challenge. It fails not because data cannot move, but because the surrounding ecosystem - semantics, workflows, identity, consent, incentives, and operational maturity - remains misaligned. Until the industry shifts its focus from data movement to data usefulness, interoperability will continue to deliver far less than what policymakers envisioned.

Conclusion: Interoperability Is Not a Technical Problem, It's the Next Major Investment Frontier

After two decades of federal mandates, standards frameworks, and half-implemented data exchange networks, the lesson is clear: interoperability fails not because health data cannot move, but because the systems meant to use that data remain fundamentally misaligned. Yet within this persistent failure lies one of the most overlooked opportunities in digital healthcare.

The market is shifting. Health systems are under pressure to reduce administrative waste, expand virtual care, and manage increasingly complex patient populations across fragmented ecosystems. Payers are accelerating value-based care models. Life sciences companies depend on real-world data streams for drug development and post-market surveillance. Regulators are tightening expectations around information blocking, FHIR APIs, and cross-network data exchange. This convergence of needs creates a rare moment - a structural demand for solutions that go beyond compliance and deliver true operational intelligence.

Interoperability, once viewed as a bureaucratic requirement, is becoming a strategic differentiator. The next generation of digital health winners will not be defined by elegant interfaces or isolated AI features but by their ability to unify, interpret, and operationalize clinical data at scale. Investors looking closely at the landscape can already see the pattern: companies that succeed in harmonizing data, whether through semantic normalization, longitudinal patient records, identity resolution, or real-time analytics, rapidly increase their value, reduce integration friction, and become indispensable partners to health systems and payers.

This is especially true as AI transitions from static models to adaptive clinical intelligence. Without high-quality interoperable data, healthcare AI will remain siloed, brittle, and clinically unreliable. But with it, the entire industry moves closer to safe automation, predictive care, and continuous-learning health infrastructure. Interoperability is the substrate that allows AI to function in real-world medicine, transforming what today is a technical barrier into tomorrow’s innovation engine.

For investors, the message is unambiguous: the companies capable of solving the “last mile” of interoperability will define the next era of digital health. They will capture the value currently lost in redundant workflows, incompatible systems, and data trapped behind hospital walls. They will unlock the trust required for scalable AI adoption. And they will build the connective tissue that enables every major healthcare trend, from virtual care to decentralized trials to population health analytics, to operate with precision rather than guesswork.

Interoperability has often failed. That is precisely why it is ripe for reinvention. The sector is ready for solutions that treat data not as a compliance artifact but as an actionable, continuously flowing asset. The organizations that deliver this shift will not simply support the healthcare system; they will transform it. And for those willing to invest in that transformation, the upside is not incremental; it is foundational.

Author

Kateryna Churkina (Copywriter), technical translator and writer at BeKey
