Why the Hardest Part of Every Implementation Is Never the Software

66% of technology projects end in partial or total failure — and after years of directing CRM and ERP implementations, the failure point is almost always the same. Not the build. Not the testing. The data mapping. Here's why reference data is the structural choke point of every enterprise migration, and what an actual fix looks like.

James Watt

28 April 2026 · 5 min read

66% of technology projects end in partial or total failure. That figure has been cited so often it's starting to lose its impact. But as someone who spent years directing CRM and ERP implementations, I can tell you the number isn't the interesting part. The interesting part is where the failure actually happens.

It isn't the software.

Key Takeaways

  • Implementation projects fail at the same point across different vendors, sectors, and partners — and it's almost never the build, the configuration, or the testing
  • The real choke point is reference data mapping: the codes, classifications, and hierarchies that need to be reconciled between old and new systems
  • The tool that gets used for this work is, almost universally, a spreadsheet — with no version control, no audit trail, and no way for resolved data to flow consistently
  • The problem isn't a failure of effort or intent. It's structural: when reference data improvement only happens through a project, it stops happening when the project ends
  • Fixing this requires a platform purpose-built for ongoing reference data governance — not another spreadsheet, and not another time-boxed project

The same pattern, across every sector

I've worked across mid-market and large enterprise organisations — different vendors, different sectors, different implementation partners. And the projects that ran into serious trouble almost always ran into it at the same point.

Not in the build phase. Not in the configuration. Not in testing.

In the data mapping.

Specifically: taking every meaningful piece of data from the old system and establishing where it lives in the new one. Not just technically — which field maps to which field — but semantically. What does this code actually mean? Is this hierarchy still the right structure? Who owns this classification, and do they agree on the definition?

These questions don't come up in the sales conversation. They don't feature prominently in the statement of work. They surface six weeks before go-live, when the migration extract arrives and the project team discovers that nobody can agree on what a "customer" actually was in the old system.

Why this work is harder than the project plan assumes

The data in question is what I'd call reference data: the classifications, hierarchies, codes, and definitions that the system relies on to function. Customer types. Product categories. Cost centres. Region codes. The controlled vocabulary that everything else hangs off.

In most mature organisations, this data has never been formally governed. It accumulated over years — in different systems, managed by different people, for different purposes. Nobody needed to define it rigorously, because nobody ever had to move it.

Then the migration arrives. And the problems appear: three departments with three different answers to what a "customer" is. Fields repurposed years ago that still carry the original field name. Codes added by someone who's since left, still active in the source system. Classification hierarchies that made sense in 2018 but don't map cleanly onto the structure the new platform requires.

Every one of those questions needs a human decision. The decisions need to be documented. And there are thousands of them.
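To make the scale of that concrete, here's a rough sketch in Python of the minimum a single documented mapping decision needs to capture. The field names and values are purely illustrative assumptions, not any product's schema — the point is that each of the thousands of decisions carries this much context.

```python
from dataclasses import dataclass
from datetime import date

# Hypothetical sketch: the minimum context one mapping decision
# needs to carry to be answerable later. Names are illustrative.
@dataclass(frozen=True)
class MappingDecision:
    source_system: str  # where the code lives today
    source_code: str    # e.g. a legacy customer-type code
    target_code: str    # its home in the new system
    meaning: str        # the agreed business definition
    decided_by: str     # the person accountable for the call
    decided_on: date    # when the decision was made
    rationale: str      # why this mapping, and not another

decision = MappingDecision(
    source_system="legacy_crm",
    source_code="CUST_T3",
    target_code="customer.segment.smb",
    meaning="Small/medium business customer",
    decided_by="j.watt",
    decided_on=date(2026, 3, 12),
    rationale="Finance and Sales agreed T3 means SMB, not mid-market",
)
print(decision.target_code)
```

Multiply that record by every code, classification, and hierarchy node in the source system, and the scale of the documentation burden becomes clear.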

The tool everyone reaches for

Almost without exception, the tool that gets used for this work is a spreadsheet: no version control, no audit trail, and no way for resolved decisions to flow consistently to the systems that depend on them.

The team works hard. Decisions get made. But by go-live, the process has accumulated months of informal decisions with no formal record. Six months later, when a regulator, an auditor, or a programme director asks why a particular field was mapped a particular way, the honest answer is usually: nobody knows.

What doesn't change after go-live

This isn't a story about teams not trying hard enough. Every implementation team I've worked with treated the data mapping as genuinely important work.

The problem is structural. When reference data improvement only happens through a project, it stops happening when the project ends.

The data problem doesn't disappear at go-live. It just stops being anyone's formal responsibility.

And so the cycle repeats. A new system. A new project. The same data, in slightly better shape this time, migrated the same way. The spreadsheet reappears. The go-live pressure returns. The audit trail is missing again.

Why I changed sides

I watched this pattern repeat for long enough — across different industries, different systems, different teams — to conclude that the right response wasn't to keep finding workarounds for it.

The problem is real. It's structural. And it doesn't fix itself through effort alone.

It needs a platform purpose-built for it. One that governs reference data properly — with a full audit trail, version control, a configuration-based interface that lets data stewards manage changes directly without raising IT tickets, and a mechanism for the resolved data to flow consistently to every system that depends on it.
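As a thought experiment — in plain Python, and emphatically not Graphshare's actual interface — the difference between a spreadsheet cell and governed reference data comes down to something like this: changes are appended with who, when, and why, never overwritten in place, so "why was this mapped that way?" always has an answer.

```python
from datetime import datetime, timezone

class ReferenceDataRegistry:
    """Toy illustration of audit-trailed reference data: every change
    is appended to a log, never overwritten in place."""

    def __init__(self):
        self._log = []  # append-only change log

    def set_mapping(self, code, target, changed_by, reason):
        self._log.append({
            "code": code,
            "target": target,
            "changed_by": changed_by,
            "changed_at": datetime.now(timezone.utc),
            "reason": reason,
        })

    def current(self, code):
        # The live value is simply the latest entry for the code.
        entries = [e for e in self._log if e["code"] == code]
        return entries[-1]["target"] if entries else None

    def history(self, code):
        # The audit trail: every decision ever made about the code.
        return [e for e in self._log if e["code"] == code]

registry = ReferenceDataRegistry()
registry.set_mapping("CUST_T3", "customer.segment.mid_market",
                     changed_by="a.smith", reason="Initial migration mapping")
registry.set_mapping("CUST_T3", "customer.segment.smb",
                     changed_by="j.watt", reason="Finance confirmed T3 is SMB")

print(registry.current("CUST_T3"))       # latest agreed mapping
print(len(registry.history("CUST_T3")))  # every prior decision retained
```

A spreadsheet gives you only the `current` view; the `history` — the part the auditor asks for — is exactly what gets lost.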

That's what we're building at Graphex. Graphshare is the reference data management platform we wished had existed during every implementation we've been part of.

If your organisation is mid-implementation, heading into one, or managing the aftermath of one — and the data is where the pressure is — it's worth a conversation. We deploy in weeks, without custom code, and work alongside your existing migration rather than requiring you to stop and restart.


James Watt is Product Owner at Graphex Software. Before joining Graphex, he directed CRM and ERP implementations for mid-market and large organisations across multiple sectors.
