"We need more testers." "We already spend too much on QA." Neither argument wins without a frame.
Cost of Quality is the frame.
Every dollar you spend preventing bugs reduces the dollars you spend paying for bugs. Up to a point.
Past that point, prevention gets disproportionately expensive and returns shrink. The total cost of quality curve is U-shaped, and the bottom of the U is what an enterprise quality function should target.
This piece gives you the frame in three definitions.

Cost of Conformance (CoC): what you pay to prevent and detect defects. Test labor, tooling, environments, reviews, training. Rises as Defect Detection Percentage (DDP) rises.

Cost of Nonconformance (CoN): what you pay because of defects. Rework, support, warranty, lost CLV, regulatory fines. Falls as DDP rises.

CoQ = CoC + CoN. The function to minimize. That's the whole framework.
Sweep DDP from 0 to 1. The sum of CoC and CoN bottoms out, then explodes. Three regimes emerge.
Under-invested (low DDP). CoN dominates. Customers find the defects. Support, rework, churn: all high. The cheapest option is more testing.
Economic equilibrium. CoC and CoN cross. Total CoQ is minimized. This is the answer to "how much testing is enough."
Over-invested (DDP > 0.85). CoC explodes. The last defects are the hardest to find; you pay exponentially more for each additional percent. For most commercial software this is value-negative.
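The sweep above can be sketched in a few lines of Python. The cost shapes and coefficients are assumptions for illustration (CoC modeled as a*d/(1-d) so it explodes near DDP = 1, CoN falling linearly), not calibrated data:

```python
# Illustrative static CoQ model. Cost shapes and coefficients are
# assumptions for the sketch, not calibrated figures.

def coc(ddp, a=10_000):
    # Cost of Conformance: explodes as DDP -> 1 (last defects are hardest to find)
    return a * ddp / (1 - ddp)

def con(ddp, b=500_000):
    # Cost of Nonconformance: falls as fewer defects escape to customers
    return b * (1 - ddp)

def coq(ddp):
    return coc(ddp) + con(ddp)

# Sweep DDP from 0 to just under 1 and find the bottom of the U.
grid = [i / 1000 for i in range(1000)]
optimum = min(grid, key=coq)
print(f"optimal DDP ~ {optimum:.3f}, CoQ there ~ {coq(optimum):,.0f}")
```

With these made-up coefficients the bottom of the U lands near DDP 0.86; where your curve bottoms out depends entirely on your own calibrated costs.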
Safety-critical: aerospace, medical, automotive. CoN includes human-life cost. Optimum moves to 0.95+.
Consumer software: feature velocity compounds; CoN per escape is low. Optimum moves to ~0.7.
Regulated financial services sit near 0.85 because CoN includes fines and brand damage. Know your band. Don't argue the generic curve with a CFO; argue your industry band.
The static model says what the target is. The dynamic model says what it costs to hit it.
Programs whose cumulative CoQ rises steeply after month 3 are buying less marginal quality per dollar than the model predicted. That is a signal to re-examine the test basis, not to add more testers.
Low-investment program. Month 0 CoC is lower. By month 6, cumulative CoQ is higher because CoN balloons.
Higher-investment program. Month 0 CoC is higher. By month 6, cumulative CoQ is lower because CoN is much smaller.
The shape that surprises non-practitioners: the program that spent more on testing cost less overall. The Metrics Part 4 whitepaper shows the same result. Pay now, or pay more later.
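The two trajectories can be sketched with hypothetical monthly cost streams. Every number below is invented to show the crossover, not taken from the whitepaper:

```python
from itertools import accumulate

# Hypothetical monthly cost streams in $k over six months.
low_coc  = [20, 20, 20, 20, 20, 20]    # low-investment: flat, modest test spend
low_con  = [10, 25, 45, 70, 100, 140]  # escapes compound: rework, support, churn
high_coc = [60, 55, 50, 45, 40, 40]    # higher-investment: heavier up-front spend
high_con = [5, 8, 10, 12, 14, 15]      # few escapes reach customers

# Cumulative CoQ = running sum of CoC + CoN per month.
low_cum  = list(accumulate(c + n for c, n in zip(low_coc, low_con)))
high_cum = list(accumulate(c + n for c, n in zip(high_coc, high_con)))

print("start:  ", low_cum[0], "vs", high_cum[0])    # low-investment starts cheaper
print("month 6:", low_cum[-1], "vs", high_cum[-1])  # and ends more expensive
```

The crossover, not the absolute numbers, is the point: the program that starts cheaper loses by the end of the window.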
Use the Internal / External Failure worksheet in the test budget template as the calibration path. Industry averages are sanity checks, not substitutes for your data.
This ratio of external to internal failure cost is critical for the static model, and surprisingly few organizations compute it. Knowing this one ratio unlocks every CoQ argument.
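Computing the ratio is a one-screen exercise once the worksheet categories are filled in. The category names and figures below are hypothetical stand-ins for your own data:

```python
# Hypothetical internal/external failure costs, $k per quarter.
internal_failure = {   # defects caught before release
    "rework": 120,
    "retest": 45,
    "build_breaks": 15,
}
external_failure = {   # defects that escaped to customers
    "support_tickets": 90,
    "hotfix_releases": 60,
    "warranty_credits": 200,
    "churned_clv": 350,
}

cost_internal = sum(internal_failure.values())
cost_external = sum(external_failure.values())
escape_multiplier = cost_external / cost_internal

print(f"an escaped defect costs ~{escape_multiplier:.1f}x an internally caught one")
```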
Test automation. Per-test-case CoC goes down. The optimum shifts right: higher DDP is newly economical.
Production telemetry catches escapes before customers do. CoN goes down. The optimum shifts left: less testing is needed to hit the same risk level.
AI-assisted test authoring pushes per-test-case CoC down further, but introduces a new CoN category (AI-system unsafe outputs) that needs its own DDP target. Net effect: the curve is still U-shaped, but at a lower altitude for most programs than it was ten years ago.
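The direction of each lever can be checked on an assumed model. With CoC(d) = a*d/(1-d) and CoN(d) = b*(1-d), setting the derivative of the sum to zero gives the optimum d* = 1 - sqrt(a/b). The coefficients below are illustrative, not calibrated:

```python
import math

def optimal_ddp(a, b):
    # d* = 1 - sqrt(a/b): where marginal prevention cost equals marginal failure cost
    return 1 - math.sqrt(a / b)

baseline   = optimal_ddp(a=10_000, b=500_000)
automation = optimal_ddp(a=5_000,  b=500_000)  # cheaper per-test CoC: optimum shifts right
telemetry  = optimal_ddp(a=10_000, b=250_000)  # cheaper escapes: optimum shifts left

print(f"baseline {baseline:.2f}, automation {automation:.2f}, telemetry {telemetry:.2f}")
```

Halving a moves the optimum from about 0.86 up to 0.90; halving b moves it down to 0.80. Direction, not magnitude, is the takeaway.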