ConnectSphere

Why integration costs don't come down

Most enterprises spend more on integration this year than last, even without a proportional increase in source-system count. CIOs largely accept this as a given. The line item is "integration and middleware": it grew 12% this year and 14% the year before, it will grow again next year, and that's just how it is.

It isn't. The growth is structural, and the structure can be changed. This essay is about why integration spend behaves the way it does, and what to look for if you want to break the curve.

The maintenance economy

Across most large enterprises, somewhere between 60% and 80% of IT spend goes to keeping the lights on — maintaining systems that already exist rather than building new capabilities. Within that maintenance share, integration is one of the heaviest contributors. Each connector, ETL job, and middleware route is a long-tail liability: it works on Monday, drifts on Tuesday, breaks on Wednesday, gets a change order on Thursday.

The integration line item is the integral of point-to-point complexity, not the derivative. Adding a new system this year produces a one-time integration cost plus a stream of recurring maintenance cost that persists for the system's lifetime. Those recurring costs accumulate: year-over-year integration spend is the sum of every connector ever built, weighted by its current change rate.
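The accumulation can be sketched as a toy model. The figures below are purely illustrative assumptions, not real budget numbers: each new system carries a one-time build cost plus a recurring maintenance stream for every year it stays in service.

```python
# Toy model of cumulative integration spend. The cost figures are
# made-up assumptions for illustration, not data from any real budget.
BUILD_COST = 100           # one-time cost to integrate a new system
MAINTENANCE_PER_YEAR = 20  # recurring cost per connector per year

def annual_spend(systems_added_per_year, years):
    """Integration spend per year: this year's builds plus maintenance
    on every connector built in any prior year (none are retired)."""
    spend = []
    installed = 0
    for _ in range(years):
        installed += systems_added_per_year
        spend.append(systems_added_per_year * BUILD_COST
                     + installed * MAINTENANCE_PER_YEAR)
    return spend

# Adding a constant 2 systems per year still produces rising spend,
# because the maintenance term grows with the installed base.
print(annual_spend(2, 5))  # [240, 280, 320, 360, 400]
```

Even with a flat rate of new systems, the line item climbs every year — which is the "integral, not derivative" behavior described above.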

This explains the shape of the curve. Integration spend rises roughly with the count of integrations, not with the revenue the integrations support. As long as the underlying architecture is point-to-point — system A talking directly to systems B, C, D, with bespoke logic for each pair — the integration line item is structurally bound to grow.

The engagement model

Most enterprise data integration is sold under one of two contract types: time-and-materials, where the seller is paid by the hour, or fixed-price-with-change-orders, where the seller is paid the original bid plus a stream of additions for everything the bid didn't anticipate. Both models reward systems that need ongoing attention. A system that ships clean and stays clean produces no follow-on revenue.

This is not a claim about the people doing integration work. Most of them are competent and careful, and they are doing what the contract pays them to do. It is a claim about the contract.

An engagement structured around hours billed for maintenance work will, on average, produce systems that need maintenance work. The selection effect runs in both directions: vendors that build self-stabilizing systems lose money under these contracts and exit the market; vendors that build maintenance-prone systems are profitable and persist. Over time, the population of integration vendors converges to the engagement model that pays them.

The result is a market in which the structurally cheaper architecture — one that requires less ongoing attention — is structurally underrepresented in vendor proposals.

Why more middleware doesn't help

The reflex when integration spend grows is to add a layer that promises to manage it: a fabric, a hub, an ETL platform, a modernization roadmap. Each of these is sold as an answer to integration sprawl. None of them remove the underlying point-to-point relationships; they translate them into a different syntax. The new layer adds its own maintenance surface to the existing one.

After the modernization, the organization runs the same N×N integration topology with one more abstraction layer between source and destination. The change-order density doesn't drop. The integration line item doesn't bend. And the new layer's maintenance contract becomes its own line item, on top of the ones it was meant to retire.

The deeper structural cause is described in The messy middle: contradictions in the source data force every integration to encode interpretation logic, and the interpretation logic is what generates the change orders.

The architecture that breaks the curve

The integration line item shrinks only when the underlying topology changes. A single normalized substrate — one model that all sources are normalized into — replaces an N×N web of bespoke bridges with N integrations into a shared schema. The math changes from quadratic to linear in source count. The follow-on cost per added system drops because the substrate absorbs the new source as a structural extension rather than as a new bilateral relationship.
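The topology claim reduces to simple counting. A sketch, using the standard pair count N(N−1)/2 for bilateral links:

```python
# Number of integration relationships to build and maintain,
# as a function of source-system count n.
def point_to_point(n):
    """Every pair of systems gets a bespoke bilateral bridge."""
    return n * (n - 1) // 2   # quadratic in n

def shared_substrate(n):
    """Each system gets one integration into the shared schema."""
    return n                  # linear in n

for n in (5, 10, 20):
    print(n, point_to_point(n), shared_substrate(n))
# At 5 systems the gap is 10 vs 5; at 20 it is 190 vs 20.
```

The divergence is what makes the architecture, rather than any individual connector, the cost driver: every system added to the point-to-point web creates relationships with all the systems already in it.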

This isn't an integration tool with better UX. It's a different shape of integration. How the substrate gets built is covered in the cardinality-driven normalization deep-dive.

The reason most integration vendors don't lead with this architecture is the same selection effect described earlier. The architecture that compresses integration cost compresses the seller's revenue along with it.

What a buyer can look for

You don't need to know the architecture under your integration program to recognize whether it's stuck in the maintenance pattern. The diagnostic signs are observable from the budget side.

Signs you're in the maintenance economy:

  • Year-over-year integration spend grows faster than the count of new business systems. If you added two new systems and integration spend grew by more than the cost of two integrations, the existing surface is generating change orders that exceed its own baseline.
  • Vendor proposals grow rather than shrink as you add detail. A proposal that gets larger when you ask clarifying questions is selling complexity. A proposal that gets smaller is selling structure.
  • Integration team headcount scales with system count, not with revenue. Linear scaling is the symptom of point-to-point architecture; sub-linear scaling indicates the team is leveraging shared structure.
  • Change-order density is high relative to baseline scope. A program where 30%+ of total spend lands as change orders is one where the original scope didn't anticipate the actual maintenance surface.
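The budget-side signs above reduce to a couple of ratios. A minimal check, with hypothetical figures chosen only to illustrate the arithmetic:

```python
# Hypothetical one-year budget figures, for illustration only.
total_integration_spend = 1_200_000
change_order_spend = 420_000
systems_added = 2
spend_growth = 180_000          # year-over-year increase
cost_per_new_integration = 60_000

# First sign: spend grew by more than the cost of the systems added,
# so the existing surface is generating change orders beyond baseline.
excess_growth = spend_growth - systems_added * cost_per_new_integration
print(excess_growth)            # 60000 of growth not explained by new systems

# Fourth sign: change-order density relative to total spend.
density = change_order_spend / total_integration_spend
print(f"{density:.0%}")         # 35%, above the 30% threshold in the text
```

Neither ratio requires visibility into the architecture; both are computable from the invoices.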

Signs you've broken the curve:

  • The integration line item is roughly flat year-over-year despite new systems being added.
  • New sources are template-able from day one — adding a system looks more like configuration than like a project.
  • Vendors propose to shrink their own scope over time, because the substrate is absorbing work they used to be paid to do.

What this changes

The maintenance economy is not a moral failing. It is the natural outcome of an engagement model that pays for hours of attention, applied to an architecture that requires perpetual attention. Changing it requires changing the architecture, which forces the engagement to change with it.

ConnectSphere addresses the architectural side directly: a normalized substrate that turns the N×N web of bilateral integrations into N→1 connections to a shared schema. The 6-month production POC delivers a working version of that substrate, after which the cost shape of new integrations is structurally different from what the prior topology produced.

Integration costs grow because the architecture rewards growth. The architecture is the only thing that can shrink them.

Ready to Map Your Fragmented Landscape — and See the Path to One Logical Truth?

In a 30-minute diagnostic call, we:

  • Review your current data landscape for redundancy hotspots and contradictions
  • Show a high-level redundancy map tailored to your systems
  • Outline your exact 6-month POC timeline and expected outcomes

No slides. No sales pitch. Just honest architecture insight to decide if this keystone makes sense for your environment.

Prefer email first? hello@connect-sphere.ai

Or message us on LinkedIn

We typically respond within 24 hours and work with enterprises ready for architectural change.