Whitepaper · Updated April 2026

Rescuing a Stalled CRM or ERP Implementation: Triage Before Re-Scope

When a Salesforce, NetSuite, or Dynamics implementation has stalled (over budget, behind schedule, with users threatening to revolt), the first instinct is usually to re-scope or to replace the integrator. Both moves are almost always premature. This paper covers the triage framework that names the actual stall cause within two weeks, separates rescuable programs from genuinely failed ones, and produces the small, decisive intervention that gets the program back to delivering value.

Salesforce · NetSuite · ERP · CRM · Implementation Rescue · Project Recovery · Vendor Management

Whitepaper · Implementation Rescue · ~12 min read

A stalled CRM or ERP implementation almost never fails the way the steering committee thinks it is failing. The visible symptoms (missed dates, scope creep, executive frustration) are real, but they are downstream of one of four root causes that the program leadership has typically not yet named. Naming the cause is what makes a rescue possible; without it, every intervention either rebuilds momentum that the underlying problem then erodes again, or replaces an integrator who was not actually the source of the failure.

This paper covers the two-week triage we run with executives staring at a stalled implementation: the four root causes, the diagnostic that separates them, and the rescue patterns that work for each.

The wrong rescues

There are two rescues that executives reach for first. Both are almost always wrong.

Re-scope. Cut features, slip dates, reset expectations. This works at the moment of the announcement and rarely past the second sprint. If the underlying cause was structural, the re-scoped program will stall again at the smaller scope, and the credibility cost of re-scoping twice is steeper than doing it once.

Replace the integrator. Fire the SI partner, bring in a new one, restart. This works when the SI was genuinely incompetent (it happens) and is a disaster when the SI was competent but mismanaged from the client side. Replacing a competent SI in a struggling program loses the institutional knowledge they have built and resets the team-formation cycle, both of which are large losses.

The correct sequence is triage first, intervention second. The triage takes about two weeks and is small enough that even an executive with no patience left can authorize it.

The four root causes

Every stalled implementation we have triaged reduces to one of four root causes, sometimes in combination. Each has a different rescue pattern.

1. Requirements never converged

The program is building against a requirements set that the business never actually agreed to. Different stakeholders have different mental models, the SI is building to whichever model was loudest in the last requirements session, and rework cycles eat the timeline.

Diagnostic signals: requirements documents have changed materially in the last quarter; multiple stakeholders contradict each other in the same workshop; "we didn't ask for it that way" is a common phrase in UAT.

Rescue pattern: a focused re-baselining of requirements with a single named business owner per process area, signed off in writing, with explicit out-of-scope language. The SI does not lead this; the client does, with the SI in the room as a translator. Until the requirements are baselined, no further build work has a stable target.

2. Configuration was treated as code

The program has accumulated heavy custom development against a platform that was supposed to be configured. Every customer-specific exception was implemented as code, the codebase has grown into a parallel ERP layered on top of the actual ERP, and now upgrades, support, and even basic changes are expensive and slow.

Diagnostic signals: the SI staffs three or more developers for every configurator; custom code makes up a meaningful percentage of the total platform surface; vendor support tickets routinely come back as "not supported in customized configuration."

Rescue pattern: a config-vs-code audit. For every custom-coded capability, ask whether a process change, a configuration option, or a different process design would deliver the same business value. Most of the time, 30-60% of the custom code can be removed in a controlled refactor that takes one to two quarters and dramatically reduces the run-rate cost of the platform thereafter.
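Where it helps to make the audit concrete, the sketch below tallies a hypothetical inventory of custom components once each has been given a disposition during the review. The file name, column names, and disposition labels are illustrative assumptions, not a fixed template.

```python
import csv
from collections import Counter

def audit_summary(path: str) -> None:
    """Summarize a config-vs-code audit from a reviewed component inventory."""
    with open(path, newline="") as f:
        rows = list(csv.DictReader(f))

    dispositions = Counter(row["disposition"] for row in rows)
    total = len(rows)
    removable = (
        dispositions["move_to_config"] + dispositions["retire_via_process_change"]
    )

    print(f"Custom components reviewed: {total}")
    for disposition, count in dispositions.most_common():
        print(f"  {disposition}: {count} ({count / total:.0%})")
    print(f"Removable in a controlled refactor: {removable / total:.0%}")

if __name__ == "__main__":
    # Assumed export: one row per custom-coded capability, with a "disposition"
    # column of keep_as_code / move_to_config / retire_via_process_change
    # filled in during the audit review.
    audit_summary("custom_components.csv")
```

The point of the tally is not precision; it is to put a defensible number in front of the steering group so the refactor can be scoped as a finite effort rather than an open-ended cleanup.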

3. Data migration was never validated

The program will not be able to go live because the data migration plan does not survive contact with the production data. Every test cutover surfaces a new class of data issue, the timeline for "data ready" keeps slipping, and the team is rebuilding migration logic instead of finishing the build.

Diagnostic signals: the data team is talking about "exception cases" that turn out to represent 20-40% of the records; the source system has fields nobody can definitively explain; the migration code has been rewritten more than twice for the same entities.

Rescue pattern: stop. Run a focused data quality and reconciliation effort against the source system before any further migration work. Most of the time, the source system is dirtier than the program assumed, and cleaning it in place is faster than perpetually building migration code to handle the dirt. Once the source is clean, the migration becomes tractable.
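As a starting point for that reconciliation, a minimal data-profiling sketch like the one below can quantify how much of the source actually fails validation. It assumes a hypothetical flat extract (customers.csv) and a few illustrative rules rather than the program's real ones.

```python
import pandas as pd

# Illustrative reference data; a real profile would use the program's own
# validation rules and master data.
VALID_COUNTRIES = {"US", "CA", "GB", "DE"}

df = pd.read_csv("customers.csv", dtype=str)  # hypothetical source extract

checks = {
    "missing required fields": df[["customer_id", "billing_country"]].isna().any(axis=1),
    "duplicate customer_id": df["customer_id"].duplicated(keep=False),
    "invalid country code": ~df["billing_country"].isin(VALID_COUNTRIES),
}

exceptions = pd.DataFrame(checks)
print(f"Records in extract: {len(df)}")
for rule, flagged in exceptions.items():
    print(f"  {rule}: {flagged.sum()} ({flagged.mean():.0%})")
print(f"Records failing at least one rule: {exceptions.any(axis=1).mean():.0%}")
```

Running this kind of profile against the source system, not the migration staging area, is what distinguishes cleaning the data in place from perpetually patching the migration code around it.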

4. Governance was theatrical

The program has a steering committee, a status report, and a RAID log. None of them are actually used to make decisions. Risks accumulate without resolution, escalations are political rather than substantive, and the program leader is making the hard calls in the hallway because the committee cannot.

Diagnostic signals: steering committee meetings produce updates rather than decisions; the same red items appear week after week without action; "the committee has not yet decided" is the answer to multiple program-blocking questions.

Rescue pattern: replace the steering committee with a smaller decision body of three to five executives with clear decision rights, meeting weekly with a published decision log. Move the broader steering committee to a monthly cadence as an information forum rather than a decision body. Most stalled programs gain a quarter of velocity from this change alone.
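If it helps to picture what "a published decision log" means in practice, the sketch below shows one minimal shape for an entry. The fields and the example record are illustrative assumptions, not a prescribed template.

```python
from dataclasses import dataclass, field
from datetime import date

@dataclass
class Decision:
    decided_on: date
    question: str          # the program-blocking question put to the body
    decision: str          # the call that was made, in one sentence
    owner: str             # the single executive accountable for the call
    unblocks: list[str] = field(default_factory=list)  # workstreams waiting on it

# One illustrative entry; the weekly log simply appends to this list.
decision_log = [
    Decision(
        decided_on=date(2026, 4, 7),
        question="Do we migrate closed opportunities older than 24 months?",
        decision="No; archive them in the legacy reporting layer.",
        owner="VP Sales Operations",
        unblocks=["data migration", "UAT scenario build"],
    ),
]

for d in decision_log:
    print(f"{d.decided_on} | {d.question} -> {d.decision} (owner: {d.owner})")
```

The discipline the log enforces matters more than its format: every entry names one question, one call, and one accountable owner, which is exactly what a theatrical steering committee never produces.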

The two-week triage

The triage is intentionally fast and intentionally narrow. Its only deliverable is a named root cause and a recommended rescue. It is not a re-scope, not a re-architecture, not an integrator replacement.

Week 1, days 1-3: Stakeholder interviews. Twenty to thirty 45-minute conversations across the executive sponsor, business process owners, the program leadership team, the SI partner leadership, and a representative sample of end users. The same five questions in each: what is the program supposed to deliver, what is it actually delivering, what is in the way, what would success look like next quarter, and what have you stopped saying out loud.

Week 1, days 4-5: Document and artifact review. Requirements baseline, change log, RAID log, status reports, the last two months of steering committee minutes, the codebase health report (if available), the migration test results. The goal is to corroborate or contradict the stakeholder narrative.

Week 2, days 1-3: Synthesis. The four root causes are scored against the evidence. Most stalled programs land cleanly on one or two; a small minority are mixtures. The triage names the dominant cause and the rescue pattern.

Week 2, days 4-5: Recommendation and decision. A single document, no more than fifteen pages, presented to the executive sponsor and the program leadership team. The recommendation is a small, concrete intervention. The decision is whether to authorize it.

Rescuable programs and genuinely failed programs

Most stalled programs are rescuable. Some are not. Naming the difference early is part of what the triage is for.

A program is rescuable when:

  • The platform choice is fundamentally fit for the business. Implementations of platforms that were genuinely wrong for the use case do not become right ones with a rescue.
  • The executive sponsor still has the credibility to authorize and protect a rescue. Sponsors who have lost credibility on the program cannot drive its recovery.
  • The SI partner, if competent, is willing to participate in the rescue, including the uncomfortable conversations about what they themselves got wrong. SI partners who refuse to engage in the diagnostic are typically replaced.
  • The end-user community has not yet hardened into refusal. Users who have lost faith but are willing to be re-engaged can be re-engaged. Users who have built workarounds and are now defending them are a much harder problem.

A program is genuinely failed when the platform choice was wrong from the start, the sponsor cannot get one more cycle of organizational support, the SI is not engageable, or the user community has actively moved on. Each of these is identifiable in week one of the triage. Naming the failure early is the kindest move; the alternative is to spend another year and another budget on a program that cannot recover.

What a leader can do this week

Three concrete moves:

  1. Stop authorizing more scope, more contractors, or more workstreams until the root cause is named. Each of these spends money against the wrong diagnosis. The triage costs less than two weeks of the existing program's burn.

  2. Pick the diagnostic question that bothers you the most: requirements convergence, custom code volume, data quality, or governance discipline. The one that bothers you is usually the right place to start the triage.

  3. Authorize the triage as an explicit pause, not as one more workstream layered on top of the current program. A two-week triage with the current program continuing in full motion is a worse triage. The pause itself is part of the rescue.

If a program needs an outside set of eyes for the triage, the Salesforce Solutions, CRM Solutions, and ERP Solutions practices run rescue triages as a fixed two-week engagement. The deliverable is the diagnostic and the recommendation; the rescue itself is a separate decision the executive owns.

