Template · Cost of Quality

The economic proof that more testing is not always more value.
The static COQ curve finds the economic optimum; the dynamic curve tracks what it costs in practice.

A worksheet pairing the classic static cost-of-quality curve (where increasing defect-detection percentage first reduces total cost, then increases it) with a dynamic month-over-month model comparing a low-investment program to a high-investment program. Use the static model to set DDP targets; use the dynamic model to track actuals against plan.

Models
Static + dynamic
Economic optimum
~0.7–0.8 DDP
Horizon
Month-by-month

Cost of Quality answers the board-level question: "how much testing is enough?" It quantifies the two halves of the trade-off — cost of conformance (testing, reviews, prevention) and cost of nonconformance (failures, rework, support, lost customers) — and finds the defect-detection percentage that minimizes their sum. Without the CoQ frame, testing budgets are argued in absolute terms; with it, they are defended as economic optimization.

Key Takeaways

Four things to remember.

01

CoN falls, CoC rises, CoQ is the sum

As DDP climbs from 0 to 1, cost of nonconformance falls (fewer bugs escape). But cost of conformance rises — slowly at first, then exponentially as the team pursues the last few percent of defects. Total CoQ is U-shaped; the minimum is the economic optimum.

02

The optimum is usually 0.7–0.8 DDP, not 1.0

For most commercial software, pursuing DDP above ~0.8 costs more in test effort than it saves in support, rework, and revenue loss. Safety-critical and regulated contexts push the optimum higher (0.9–0.95). Knowing which band your product sits in sets the DDP target.

03

Dynamic CoQ tracks what the static model predicted

The static model sets the target; the dynamic model (month-by-month CoC and CoN) is what you report against. A program whose CoQ curve rises steeply after month 3 is buying less marginal quality per dollar than the model predicted — that is a signal to re-examine the test basis, not to add more testers.

04

Invest earlier, pay less total

The worked "Cad" vs. "Heroine" comparison shows the same effect the Metrics Part 4 whitepaper documents: programs with higher DDP targets from month 0 accumulate lower total CoQ over the program life, even when month-0 CoC is higher. Pay now or pay more later.

Why this exists

What this template is for.

The downloaded .xls has three sheets: the static COQ table (CoN, CoC, CoQ across 11 DDP levels from 0 to 1), a dynamic curve for a lower-DDP program, and a dynamic curve for a higher-DDP program. The column reference below documents every field; the instructions tell you how to calibrate the model against your own cost structure.

The columns

What each field means.

Defects delivered (to test)

Expected latent defects entering the test phase. The static model normalizes to 1,000 for cleanliness; substitute your estimated count (see the bug-find-fix estimation template for this input).

Defects removed (by test)

Delivered × DDP. The count the test function actually finds.

DDP — Defect Detection Percentage

Defects removed by test ÷ defects delivered to test. The single independent variable in the static model; sweep from 0 to 1 to trace the curve.

Quality (scalar 0–1)

Normalized measure of customer-perceived quality. In the worksheet, quality maps monotonically to DDP but is lower than DDP because some escaped defects are low-impact. Use as directional, not absolute.

Cost of Nonconformance (CoN)

Costs you pay because of defects: rework during development, support costs, warranty, lost customer lifetime value, regulatory fines. Falls as DDP rises. Calibrate against your prior programs — CoN per escaped defect typically ranges $500–$10,000 depending on industry.

Cost of Conformance (CoC)

Costs you pay to prevent and detect defects: test labor, tooling, environments, reviews, training. Rises as DDP rises; rises exponentially near DDP = 1 because the last defects are the hardest to find.

Cost of Quality (CoQ)

CoN + CoC. The function to minimize. The worksheet's core output.

Month (dynamic only)

Calendar position within the program. Month 0 = program start; each row represents a reporting period.

Defects remaining (dynamic only)

Latent defects still present at month M. Decays as testing removes defects; simplest model is exponential decay at the program's DDP rate, compounded by cycle.
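The simplest decay model described above can be sketched in a few lines. The 1,000-defect starting count and the monthly cycle length are illustrative assumptions, not worksheet values:

```python
# Defects remaining at month M under the simplest model the column
# describes: each monthly cycle removes DDP of whatever is left,
# so the latent count decays exponentially.

def defects_remaining(latent0: float, ddp_per_cycle: float, month: int) -> float:
    return latent0 * (1 - ddp_per_cycle) ** month

# Illustrative: 1,000 latent defects, DDP 0.46 per monthly cycle.
for m in range(4):
    print(m, round(defects_remaining(1000, 0.46, m)))
```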

Static model · the U-curve

CoN falls. CoC rises. CoQ is U-shaped.

Sweep DDP from 0 to 1. The sum of conformance and nonconformance costs bottoms out around DDP 0.7–0.8 for most commercial software, then explodes as the last few defects get disproportionately expensive to catch. The optimum is rarely DDP = 1.
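The sweep can be sketched as below. The cost shapes are assumptions for illustration (CoN falls linearly with DDP; CoC grows like ddp / (1 − ddp), going near-vertical toward DDP = 1) and the dollar scaling is arbitrary; the worksheet's actual formulas may differ:

```python
# Static CoQ sweep with illustrative cost shapes (assumptions, not the
# worksheet's formulas): CoN falls linearly as fewer defects escape;
# CoC rises steeply as DDP approaches 1.

def static_coq_curve(latent_defects=1000, con_per_escape=1.0, coc_base=25.0):
    rows = []
    for i in range(11):                          # DDP 0.0 .. 1.0 in 0.1 steps
        ddp = i / 10
        con = latent_defects * (1 - ddp) * con_per_escape
        coc = coc_base * ddp / max(1 - ddp, 0.02)
        rows.append((ddp, con, coc, con + coc))
    return rows

optimum = min(static_coq_curve(), key=lambda row: row[3])
print(f"optimum DDP ~ {optimum[0]:.1f}")         # lands in the 0.7-0.8 band
```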

Static COQ model

CoC / CoN / CoQ across DDP — illustrative $K

Optimum band at DDP ≈ 0.89; right of that the CoC curve goes near-vertical.

[Chart: CoN (cost of nonconformance), CoC (cost of conformance), and CoQ total, in $K, plotted against DDP from 0.46 to 1.00, with the optimum band marked.]

The knee at DDP ≈ 0.93 is the inflection point — past it, you pay exponentially more per marginal percent. Safety-critical and regulated contexts push the knee right; consumer SaaS pushes it left.

Live preview

What it looks like populated.

Static COQ curve — sum is minimized between DDP 0.7 and 0.8 for most programs, then rises steeply as CoC explodes.

DDP    Quality   CoN     CoC       CoQ
0.46   0.1       $553    $46       $599
0.58   0.2       $437    $58       $495
0.67   0.3       $355    $67       $422
0.74   0.4       $290    $74       $364
0.79   0.5       $236    $79       $315
0.84   0.6       $188    $85       $272
0.89   0.7       $145    $96       $241  ← optimum band
0.93   0.8       $106    $200      $306
0.97   0.9       $70     $1,270    $1,341
1.00   1.0       $37     $10,100   $10,137

Dynamic model · Cad vs. Heroine

Pay now or pay more later.

Two programs on the same product. “The Cad” targets DDP 0.46 — low monthly CoC in early months, but CoN grows every month because defects accumulate. “The Heroine” targets DDP 0.67 — higher monthly CoC from the start, but stays flatter over the program life. By month 6, cumulative CoQ tells the story.

Dynamic COQ · cumulative over 6 months

Cumulative CoQ — Cad vs. Heroine

Lower-investment program (Cad) ends with higher total. Counterintuitive until you draw it.

[Chart: cumulative CoQ ($K) over months M1–M6 for The Cad (DDP 0.46) and The Heroine (DDP 0.67).]

Same result documented in the Metrics Part 4 whitepaper: programs that invest earlier for higher DDP accumulate less total CoQ over the program life. The static model predicts the target; the dynamic model tracks what it actually costs.
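The two dynamic curves can be sketched in a few lines. All dollar parameters here are illustrative assumptions, not the worksheet's figures: monthly CoC scales with the DDP target, monthly CoN is the defects still latent times a per-defect escape cost, and the latent count decays by DDP each cycle:

```python
# Dynamic CoQ sketch of the Cad (DDP 0.46) vs Heroine (DDP 0.67) pattern.
# Parameter values are assumptions chosen only to show the shape.

def cumulative_coq(ddp, months=6, latent0=1000,
                   coc_per_full_ddp=100.0, con_per_defect=0.5):
    total, remaining = 0.0, latent0
    for _ in range(months):
        total += coc_per_full_ddp * ddp          # steady monthly test spend
        total += remaining * con_per_defect      # cost of defects still latent
        remaining *= (1 - ddp)                   # decay by DDP each cycle
    return total

cad, heroine = cumulative_coq(0.46), cumulative_coq(0.67)
print(cad > heroine)   # True: lower investment, higher total by month 6
```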

How to use it

6 steps, in order.

  1. Calibrate CoN per escaped defect. Use the Internal / External Failure worksheet from the test budget template as a starting point: internal test-cost-per-bug plus external maintenance-cost-per-bug weighted by likelihood of escape. Industry ranges are a sanity check only.

  2. Calibrate CoC per detected defect. Total test-function investment ÷ total defects found = per-defect CoC. Expect higher per-defect CoC in your first year of measurement, because fixed-cost investments amortize over fewer finds.

  3. Sweep the static model across DDP 0 to 1 in 0.1 increments. Plot the three curves (CoN, CoC, CoQ) and mark the minimum. That minimum is your DDP target.

  4. Set up the dynamic model: define CoC as a steady monthly spend (your test-function budget ÷ months) and CoN as defects-remaining-at-month-M × per-defect CoN. Month-over-month CoQ should track the projection within ±10%.

  5. If actuals diverge from the model after month 2, re-examine inputs: a runaway CoC says the program is investing in low-yield test activity; a stubborn CoN says DDP is lower than projected (see the Metrics Part 4 whitepaper for diagnosis).

  6. Compare programs in the same portfolio using the dynamic model. The two worked examples ("Cad" and "Heroine") show the typical pattern: the higher-DDP program has higher CoC from month 0 but lower cumulative CoQ by month 6 — earlier investment, lower total cost.
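The calibration steps reduce to two small formulas. The argument names below are hypothetical stand-ins for the Internal / External Failure worksheet fields, and the example figures are invented for illustration:

```python
# Calibration helpers for the first two steps. Field names are hypothetical,
# chosen to mirror the Internal / External Failure worksheet inputs.

def con_per_escaped_defect(internal_fix_cost, external_maintenance_cost,
                           escape_likelihood):
    # Internal rework is always paid; external cost only when the bug escapes.
    return internal_fix_cost + external_maintenance_cost * escape_likelihood

def coc_per_detected_defect(total_test_investment, defects_found):
    return total_test_investment / defects_found

print(con_per_escaped_defect(400, 5_000, 0.3))    # 1900.0
print(coc_per_detected_defect(600_000, 1_200))    # 500.0
```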

Context knobs · where does your band sit

Not one optimum. Three bands.

The U-curve is universal. Its minimum is not. Know which industry band your product sits in before you defend a DDP target to finance.

~0.70
Consumer SaaS
Feature velocity compounds, CoN per escape is low, observability catches most escapes. Optimum sits lower.
~0.85
Regulated financial
CoN includes fines, brand damage, and remediation cost. Optimum sits mid-band.
≥0.95
Safety-critical
Aerospace, medical, automotive. CoN includes human-life cost. Optimum sits high.
re-calibrate
2026 shift
Automation lowers per-test CoC. Observability lowers CoN. AI unsafe outputs add a new CoN category. Curve still U-shaped, at lower altitude.

Methodology

The thinking behind it.

Cost of Quality originated in manufacturing quality economics (Crosby, Juran, Deming). Software engineering adopted it through Kaplan, Krishnan, and others in the 1980s–90s. The U-curve shape is empirical across industries; the position of the optimum varies.

The near-DDP = 1 asymptote is real. The last defects are disproportionately expensive because (a) they evade every test case you have designed, (b) they require new test-design thinking (mutation testing, property-based testing, chaos engineering, adversarial testing for AI systems), (c) they may require environments (production-scale load, rare inputs) that are expensive to construct. Most commercial software is value-negative past DDP 0.85–0.9.

Different product contexts move the optimum: safety-critical (aerospace, medical, automotive) moves it to DDP 0.95+ because CoN includes human-life cost; consumer SaaS moves it to DDP 0.7 because feature velocity compounds and CoN-per-escape is low; regulated financial services sit near 0.85 because CoN includes fines and brand damage.

In 2026 the static model still holds, but three things have changed the shape: (a) automation drives per-test-case CoC down, shifting the optimum right (higher DDP is newly economical); (b) production-telemetry and observability drive CoN down by catching escapes before customers do, shifting the optimum left (less test needed to hit the same risk level); (c) AI-assisted test authoring is pushing per-test-case CoC down further but introducing a new CoN category — AI-system unsafe outputs — that needs its own DDP target. Net effect: the curve is still U-shaped, but at a lower altitude for most programs than it was 10 years ago.

Take it with you

Download the piece you just read.

We keep this library free. All we ask is that you tell us who you are, so we know who to follow up with if we release an updated version. One-time form, this browser remembers you after that.

Need a QA program to back this up in your organization?

If a checklist is not enough and you want help applying it to a live engagement, we can have a call this week.

Related reading

Articles, talks, guides, and case studies tagged for the same audience.