Talk · Rex Black, Inc.

Four ways testing adds value. Bugs fixed, bugs found, risks mitigated, projects guided.

A smart test group can add measurable value in four ways, not one: by finding bugs that get fixed (or are prevented entirely), by finding bugs that don't get fixed but are documented and worked around, by mitigating expensive quality risks like insurance does, and by providing credible information that guides the project to success. This talk walks through each of the four with real numbers, building up from the ROI calculation in "Investing in Software Testing" into a composite case that — in the worked example — lands at 738% ROI.

Slides: 20
Composite ROI: 738%
Value categories: 4
Format: analyst talk

Abstract

Piet or Bonnie?

Piet says: "Testing is just a big black hole at the end of the project. The more money we throw at it, the more it consumes." Bonnie says: "Testing is a value-adding activity that occurs throughout the project. By making smart test investments, we reap big rewards." Most test managers want to work for Bonnie. The hard part is convincing Piet — and Piet's boss, and Piet's CFO — that Bonnie is right. The only way to do that is to speak the same language management speaks: return on investment.

This talk gives you that language, decomposed into four ways a smart test group adds measurable value. (1) Finding bugs that get fixed — the classic "cost of quality" argument; in the worked example, layered manual + automated + static testing reaches 627% ROI from this category alone. (2) Finding bugs that don't get fixed — known bugs documented for support, workarounds published for users, support-call time shortened — adds another $32,500 of value at no increased investment. (3) Mitigating expensive quality risks, as insurance would — another $26,000 equivalent. (4) Providing credible, timely information for project tracking — another $33,000 against a medium-sized project budget. The composite in the case study: 738%.

The specific numbers are illustrative. The structure — four value categories, each independently quantifiable — is what matters. Build your own version of this table before the budget conversation, and you go into the room with a defensible ROI, not a request.

Without a measurement, we have no solid sense of our work's value. In many cases, managers will not fund work with no measurable ROI. Learning to estimate testing ROI is a critical success factor for testers.


Outline

What the talk covers, in order.

01

The base case — finding bugs that get fixed

Cost of (poor) quality = cost of conformance + cost of nonconformance. Conformance costs include testing and quality assurance. Nonconformance costs include fixing bugs, retesting, dealing with angry customers, damage to corporate reputation, lost business. The classic cost-of-quality escalation is $1 to find a bug in review, $10 in programmer testing, $100 in tester testing, $1,000 in customer usage. In a hypothetical quarterly release with 1,000 must-fix bugs and no formal testing, every bug that ships is a $1,000 bug — quality costs exceed $750,000, customers are furious, and the testing budget is zero. The opportunity is obvious.
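
The baseline arithmetic can be checked with a short script. The assumption that programmer testing still catches 250 bugs in the no-formal-testing baseline is mine (it makes the totals line up with the 250 developer-found bugs quoted in stage 1); the talk itself quotes only the "exceed $750,000" figure.

```python
# Classic cost-of-quality escalation: cost to find and fix one bug, per stage.
COST = {"review": 1, "programmer_testing": 10, "tester_testing": 100, "customer": 1_000}

# Hypothetical quarterly release: 1,000 must-fix bugs, no formal test team.
# Assumption (not stated in the talk): programmer testing still catches 250,
# so the remaining 750 escape to customers as $1,000 bugs.
total_bugs = 1_000
caught_by_programmers = 250
escaped = total_bugs - caught_by_programmers

baseline_quality_cost = (caught_by_programmers * COST["programmer_testing"]
                         + escaped * COST["customer"])
print(baseline_quality_cost)  # 752500 -- "quality costs exceed $750,000"
```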

02

Stage 1 — manual testing (ROI 350%)

Add an independent test team. Developers find 250 bugs pre-release; testers find another 350. Customers find ~40% fewer bugs. Quality costs drop by roughly a third. ROI on the test investment: 350%. Definition reminder: ROI = (benefit − cost) / cost. The denominator is what the test team costs; the numerator is the difference between the old quality cost and the new.
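
The ROI definition can be written down directly. The $82,500 test-team cost is the budget figure quoted later in the deck's tracking example; the $371,250 quality-cost savings is back-solved from the 350% figure and is purely illustrative.

```python
def roi(benefit, cost):
    """ROI = (benefit - cost) / cost, expressed as a fraction."""
    return (benefit - cost) / cost

test_cost = 82_500        # test-team cost per release (budget quoted later in the deck)
quality_savings = 371_250 # old quality cost minus new (back-solved from 350%, illustrative)
print(f"{roi(quality_savings, test_cost):.0%}")  # 350%
```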

03

Stage 2 — manual + automated (ROI 445%)

Add $150,000 of tools investment, amortized over twelve quarterly releases. Complement the manual team with automation where it pays — regression, load, performance, structural API checks. Quality costs halved versus baseline. Customers find ~66% fewer bugs than baseline. Test ROI: 445%. This is where the argument in "Investing in Software Testing" ends.
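
The amortization is straightforward, but worth making explicit because it is what keeps the per-release denominator small:

```python
tool_investment = 150_000
releases = 12  # twelve quarterly releases, i.e. a three-year amortization window

per_release_tool_cost = tool_investment / releases
print(per_release_tool_cost)  # 12500.0 added to each release's test cost
```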

04

Stage 3 — manual + automated + static (ROI 627%)

Layer static testing on top: testers review design and requirements specs, ask smart questions, and prevent ~150 bugs from ever being built. Customers now find ~90% fewer bugs than baseline. Quality costs down by two-thirds. Testing ROI: 627%. The static-testing layer is cheap per prevented bug (review-stage cost at $1 per bug versus customer-stage $1,000) and the math moves sharply in its favor.

  • Prevented bugs are the cheapest bugs.
  • Static testing is the highest-leverage test investment per dollar, because it acts earliest in the pipeline.
  • A good test group runs static, dynamic manual, and automated — not any one of the three alone.
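
The per-bug leverage of prevention can be sketched in two lines. The assumption that every prevented bug would otherwise have reached customers is mine, chosen to show the upper bound; in practice some would have been caught downstream at $10 or $100 instead.

```python
prevented_bugs = 150
review_cost_per_bug = 1        # cost to catch a bug in review
customer_cost_per_bug = 1_000  # cost if the same bug ships

spend = prevented_bugs * review_cost_per_bug        # 150
avoided = prevented_bugs * customer_cost_per_bug    # 150,000 (upper bound)
print(avoided - spend)  # 149850 net avoided per release, at the upper bound
```
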
05

Value category 2 — bugs found that don't get fixed

"What the heck good is that?" — the objection every test group has heard. The answer: if we know where a bug is, even if we don't fix it, we can (1) prevent the user from encountering it through documentation, UI changes, or defaults, (2) warn users in the release notes so they can avoid it, (3) provide workarounds and tips to help/support so calls get shorter and users get answers. The value is real; the trick is measuring it. The deck's worked example does that via one of the most tractable mechanisms: tech-support call time saved.

06

Quantifying the "not-fixed" value — the support-call math

Assume a call for a known bug is 15 minutes shorter than a call for an unknown bug; each bug generates five calls to support on average; a support person costs the organization $40 per hour fully loaded. Then each known-but-unfixed bug saves $40 × (15/60) × 5 = $50 in support time alone. Over 650 additional bugs found during the same test cycle, that's $32,500 of value at no additional investment in testing — ROI rises from 627% to 666%. And that's just the support-call-shortening lever; the user-time-saved lever and the UI-mitigation lever compound on top.

  • $40/hour × 0.25 hour saved × 5 calls per bug = $50 per known bug.
  • 650 extra bugs documented × $50 = $32,500 saved at zero marginal test cost.
  • Composite ROI at this stage: 666%.
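
The support-call math from the bullets above, as a check:

```python
hourly_rate = 40     # fully loaded cost of a support person, $/hour
minutes_saved = 15   # known-bug call vs. unknown-bug call
calls_per_bug = 5    # average support calls generated per bug
known_bugs = 650     # additional bugs found and documented, not fixed

saving_per_bug = hourly_rate * (minutes_saved / 60) * calls_per_bug
total_saving = saving_per_bug * known_bugs
print(saving_per_bug, total_saving)  # 50.0 32500.0
```
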
07

Value category 3 — risk mitigation as insurance

Testing reduces what risk managers call the "cost of exposure." That's structurally identical to what insurance does — a statistical mechanism for pooling risk, where expected payout = probability of loss × cost of loss. You can substitute testing for insurance against quality risks. Estimate the likely cost and likelihood of each quality-risk category from comparable projects: performance problems ($100k × 10%), functionality problems ($5k × 50%), security problems ($250k × 5%), other problems ($10k × 10%). The "insurance premium" for these risks is $100,000 × 0.1 + $5,000 × 0.5 + $250,000 × 0.05 + $10,000 × 0.1 = $26,000. A test program that already covers these areas is already providing $26,000 of insurance value. ROI rises to 698%.

  • Three caveats: estimates come from small samples; some orgs aren't risk-averse enough to buy quality insurance even at fair price; testers don't actually cover losses the way an insurer would.
  • Still, the substitution argument holds — and it's the language CFOs understand.
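
The "premium" is just an expected-loss sum over the risk categories:

```python
# (estimated cost of loss, probability of loss) per quality-risk category
risks = {
    "performance":   (100_000, 0.10),
    "functionality": (  5_000, 0.50),
    "security":      (250_000, 0.05),
    "other":         ( 10_000, 0.10),
}

# Expected payout = sum of probability x cost -- the fair insurance premium.
premium = sum(cost * prob for cost, prob in risks.values())
print(premium)  # 26000.0
```
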
08

Value category 4 — information for project tracking

Capers Jones' Estimating Software Costs identifies poor project tracking as a primary cause of project failure. The risk of a medium-sized project failing drops from ~40% to ~20% with good tracking; for very small projects the risk is 2% with tracking, and for very large projects it reaches 85% without. Accurate, credible, timely testing metrics — defects, tests completed, coverage by risk area — are a key part of the tracking information. If good testing provides half of the tracking risk-reduction benefit, testing claims 10% of the project's at-risk value. On a project with $82,500 of testing budget and $247,500 of development budget, that's ($82,500 + $247,500) × 0.1 = $33,000 of tracking value. Composite ROI at this stage: 738%.
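
The tracking-value arithmetic, with the 50% testing share stated as the explicit assumption it is:

```python
test_budget = 82_500
dev_budget = 247_500
risk_reduction = 0.40 - 0.20  # good tracking cuts medium-project failure risk 40% -> 20%
testing_share = 0.5           # assumption: testing provides half the tracking benefit

tracking_value = (test_budget + dev_budget) * risk_reduction * testing_share
print(tracking_value)  # 33000.0
```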

09

Looking at the composite

The four categories independently quantified: bug-fix value (stages 1–3), information-about-known-bugs value ($32,500), risk-mitigation insurance value ($26,000), project-tracking information value ($33,000). Add them and the composite test ROI in the worked example is 738%. Industry analyst estimates from around the same period put testing ROI at ~800% — in the same order of magnitude. Your own numbers will differ, perhaps significantly. The practice is what transfers, not the exact values.
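
Back-solving the stage 3 benefit from the 627% figure (with the $82,500 test investment quoted in the tracking example), the whole ladder reproduces to the percentage:

```python
test_cost = 82_500        # test investment per release, held constant after stage 3
stage3_roi = 6.27         # 627% from the bug-fix ladder
benefit = test_cost * (1 + stage3_roi)  # implied benefit at stage 3, back-solved

for label, extra in [("known-but-unfixed bugs", 32_500),
                     ("risk-mitigation insurance", 26_000),
                     ("project-tracking information", 33_000)]:
    benefit += extra
    print(f"+ {label}: ROI {(benefit - test_cost) / test_cost:.0%}")
# + known-but-unfixed bugs: ROI 666%
# + risk-mitigation insurance: ROI 698%
# + project-tracking information: ROI 738%
```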

10

Do the math — even a bad metric beats no metric

Many testers are allergic to finance. That's a career-limiting instinct. Without a measurement we have no defensible sense of the work's value, and in most organizations managers will not fund work with no measurable ROI. Start the ROI calculation. Be conservative so the number survives scrutiny. Apply Gilb's Law: even a bad metric is better than no metric, because you can iteratively improve a bad metric into a good one. You cannot improve what you don't measure.

Key takeaways

Four things to remember.

01

Testing adds value in at least four distinct ways.

Bugs fixed, bugs found-not-fixed, risks mitigated, and projects guided. Quantify each separately and sum them — that's how you get from a 627% bug-finding ROI to a composite 700%+ ROI argument.

02

Static testing is the highest-leverage test investment.

A bug caught in review costs $1 to find and fix. The same bug caught by customers costs $1,000. Pushing testing left into design and requirements reviews is the single biggest ROI move available.

03

Documented known bugs have measurable value.

Support calls get shorter, users get workarounds, UI mitigations steer people away from landmines. The "we found it but didn't fix it" work has a defensible dollar figure.

04

Testing metrics are project tracking.

Defect trends, completed-test counts, and coverage by risk area aren't QA trivia — they're the principal tracking signal for the project as a whole. Treat them accordingly.

Worked examples


The composite ROI ladder — 350% to 738%.

The running case study, stage by stage. Investment is held constant after stage 3 — stages 4–6 add measurable value without adding cost, because the testing that produces the value is already being done for the bug-finding argument.

Stages 1–3 — bugs fixed

Stage 1: manual test team → ROI 350% (customers find ~40% fewer bugs).

Stage 2: + $150k automation / 12 releases → ROI 445% (customers find ~66% fewer).

Stage 3: + static testing (reviews) → ROI 627% (customers find ~90% fewer, quality costs −2/3).

Stage 4 — bugs found, not fixed

Testers document 650 non-must-fix bugs.

Tech support: 5 calls per bug × 15 min shorter per call × $40/hr = $50/bug saved.

650 bugs × $50 = $32,500 value at no additional cost.

Composite ROI → 666%.

Stage 5 — risks mitigated

Quality-risk insurance premium estimate: performance ($100k × 10%), functionality ($5k × 50%), security ($250k × 5%), other ($10k × 10%).

Total "premium" = $26,000.

Test program already covers these areas — no added cost.

Composite ROI → 698%.

Stage 6 — projects guided

Medium-sized project, tracking reduces failure risk 40% → 20%; testing claims half of that reduction.

10% of (test + dev budget) at risk = 10% × ($82,500 + $247,500) = $33,000.

Composite ROI → 738%.

Industry analyst benchmarks put testing ROI at ~800% in the same era — same order of magnitude.

Closing

Two practical notes to close. First, run this calculation with your own numbers. The shape of the ladder is robust; the absolute values are yours to fill in. Substitute your product's bug count per release, your team's cost per stage, your support-call economics, your risk-category estimates, your project's size and tracking baseline. The exercise itself changes how you think about what the test group is actually producing.

Second, be conservative. The goal of the ROI argument is not to maximize the headline number — it is to survive the first pushback from someone who wants to cut the budget. A number you can defend with specific assumptions is worth more than a number you can't. Apply Gilb's Law: even a bad ROI metric beats no ROI metric, because a bad metric can be iteratively improved. A missing one cannot.


Want this talk delivered in-house?

Rex Black, Inc. delivers every talk on this site as a live workshop, a keynote, or a conference session. Tailored to your stack, your team, and your timeline.