Charting the progress of system development. Five bug-data charts every test lead should run every week.
Bug data is the most honest project dashboard you will ever build. Five simple charts — cumulative opened vs. closed, daily and rolling closure period, root-cause breakdown, and subsystem breakdown — give you a crisp read on product stability, developer responsiveness, and where the project is going sideways. This talk covers the method, the SpeedyWriter worked example, and the interpretive patterns you'll see in real data.
Abstract
Bug data is the most honest dashboard you have.
Every project runs some kind of status report. Most of them are green because the author wants them to be. Bug data is different — the numbers are what they are, and the patterns they form on a chart tell you things the weekly status email will never tell you.
This talk walks through five charts that a test lead can pull out of any bug-tracking database in about an hour a week. Cumulative opened versus closed shows whether the product is stabilizing. Closure period shows whether bugs are actually getting fixed. Root-cause and subsystem breakdowns show where to invest next. None of the math is complicated — a COUNTIF in Excel plus two running totals gets most of it. The interpretive patterns are what take practice.
We use SpeedyWriter as the worked example throughout — three cycles each of component, integration, and system test running from mid-July through a 13 September first customer ship. The numbers are modest on purpose. The point is to see what each chart says when you have 100 bugs, not 10,000.
“Manage key indicators, not the crisis du jour.”
— Rex Black, International Software Quality Week
Outline
What the talk covers, in order.
Why chart bug data at all
Two reasons. First, to assess product, process, and project quality — stability, defect-removal trends, root cause, bug management, and hot spots. Second, to communicate status to peers and management in a way that summarizes key facts, surfaces underlying trends, and gets the point across quickly.
- Summarize key facts and underlying trends, not incident counts.
- Get the point across quickly — one chart, one argument.
- Manage the key indicators, not the crisis du jour.
The SpeedyWriter worked example
SpeedyWriter runs three cycles each of component, integration, and system test. Component test: weeks of 7/19, 7/26, 8/2 — 25, 20, 5 bugs. Integration test: 8/2, 8/9, 8/16 — 20, 15, 5 bugs. System test: 8/16, 8/23, 8/30 — 10, 5, 0 bugs. First customer ship: 13 September. That's ~105 bugs over two months. The zero in system-test cycle 3 is the release signal; everything else on the chart is context.
- Two fields from bug tracking carry most of the signal: opened date and closed date.
- Export to a spreadsheet to chart.
- Bug-report counts track underlying bug counts roughly linearly (they run about 27% high because of duplicates and rejections), so the charts hold up.
Chart 1 — cumulative opened vs. closed
The flagship chart. The cumulative opened curve flattens as the system stabilizes and the test team finds what it can find. The cumulative closed curve converges to the opened curve as the product becomes customer-ready. A closed curve that tracks the opened curve closely indicates crisp bug management. Milestones in the project show up as changes in the shape of the curves — phase transitions, drop-and-recover events around major merges, and the final close-out.
- In Excel: COUNTIF by date for opened and closed, then two running totals.
- Watch the gap between the two curves; that gap is the quality backlog.
- The derivative of the opened curve is your daily find rate — also worth plotting.
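The COUNTIF-plus-running-totals recipe translates directly to a script. Here is a minimal Python sketch, assuming each bug exports as an (opened_date, closed_date) pair with closed_date empty while the bug is still open; the toy data is invented for illustration:

```python
from collections import Counter
from datetime import date, timedelta

# Toy export: each bug is (opened_date, closed_date); None means still open.
bugs = [
    (date(2024, 7, 19), date(2024, 7, 22)),
    (date(2024, 7, 19), date(2024, 7, 25)),
    (date(2024, 7, 20), None),                  # still open
    (date(2024, 7, 21), date(2024, 7, 23)),
]

opened = Counter(o for o, _ in bugs)            # COUNTIF by opened date
closed = Counter(c for _, c in bugs if c)       # COUNTIF by closed date

start = min(opened)
end = max(max(opened), max(closed))

cum_opened = cum_closed = 0
series = []
day = start
while day <= end:                               # two running totals, one row per day
    cum_opened += opened.get(day, 0)
    cum_closed += closed.get(day, 0)
    series.append((day, cum_opened, cum_closed, cum_opened - cum_closed))
    day += timedelta(days=1)

for day, o, c, backlog in series:
    print(day, o, c, backlog)                   # last column is the quality backlog
```

Plot columns two and three for the chart itself; the fourth column is the opened/closed gap, i.e. the quality backlog, ready to plot on its own axis.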
Chart 1 — the three trouble patterns
Three distinct failure modes show up unmistakably on the opened/closed chart: endless bug discovery (the opened curve never flattens — the product is not stabilizing), ignored bug reports (a persistent, growing gap between opened and closed — the dev team is not keeping up), and poor report management (the gap is noisy and non-monotonic — reports are opening, closing, and re-opening sloppily). Each has a different remedy, which is the whole point of looking at the pattern rather than the totals.
Chart 2 — closure period
Closure period is the age of a bug at the moment it closes; it measures developer responsiveness. Two versions: the daily closure period is the average age of all bugs closed on a given day; the rolling closure period is the average age of all bugs closed to date. A stable closure period shows as a smooth rolling line with daily values bounded at the top by two to three test-cycle durations. In the SpeedyWriter example, each cycle is one week, so a well-run project sees daily closure periods mostly under three weeks.
How to calculate closure period
Four mechanical steps. (1) For each closed bug, compute closure period = closed_date − opened_date. (2) Sum closure periods by closed_date. (3) Count the bugs closed on each date. (4) Divide sum by count for the daily closure period; maintain a running total of sum and count for the rolling closure period. All four steps are pivot-table primitives in any modern spreadsheet.
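The four steps above can be sketched in a few lines of Python; the toy records are invented, and the per-day granularity mirrors the pivot-table version:

```python
from collections import defaultdict
from datetime import date

# Toy closed-bug records: (opened_date, closed_date).
closed_bugs = [
    (date(2024, 7, 19), date(2024, 7, 22)),   # age 3 days at close
    (date(2024, 7, 19), date(2024, 7, 22)),   # age 3 days
    (date(2024, 7, 20), date(2024, 7, 24)),   # age 4 days
]

age_sum = defaultdict(int)                    # step 2: sum of closure periods per close date
count = defaultdict(int)                      # step 3: bugs closed per date

for opened_on, closed_on in closed_bugs:
    period = (closed_on - opened_on).days     # step 1: closure period
    age_sum[closed_on] += period
    count[closed_on] += 1

rows = []
run_sum = run_count = 0
for day in sorted(age_sum):
    daily = age_sum[day] / count[day]         # step 4: daily closure period
    run_sum += age_sum[day]
    run_count += count[day]
    rolling = run_sum / run_count             # rolling closure period to date
    rows.append((day, daily, rolling))
    print(day, daily, round(rolling, 2))
```

The daily line is noisy by construction; the rolling line is the one to watch for trend.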
Chart 3 — root cause breakdown
Tag every bug with a root-cause code at close — requirements, design, coding, configuration, environment, documentation, test case. The distribution tells you where the mistakes are being made, which drives both course corrections during the project and long-term development-process improvements afterwards. You're not naming names; you're naming phases.
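Once each closed bug carries a cause code, the distribution is a one-pass tally. A minimal Python sketch; the cause codes and counts here are invented for illustration:

```python
from collections import Counter

# Illustrative root-cause tags, one per closed bug (codes are examples).
causes = ["coding", "coding", "design", "requirements", "coding",
          "configuration", "design", "coding", "documentation", "coding"]

tally = Counter(causes)
total = len(causes)
for cause, n in tally.most_common():      # biggest bucket first
    print(f"{cause:15s} {n:3d}  {100 * n / total:4.0f}%")
```

A bar chart of that output, largest bucket on the left, is the whole of Chart 3.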
Chart 4 — subsystem breakdown
Tag every bug with the affected subsystem. Two actionable patterns fall out. First, the subsystems with the most bug reports should get more testing — where there is one bug, there is often another. Second, the subsystems that are most error-prone are candidates for development-process improvement too, because preventing the additional bugs upstream is cheaper than finding them downstream.
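The "hot subsystems" read is a Pareto calculation: sort subsystems by bug count and watch the cumulative share climb. A small Python sketch with invented subsystem names:

```python
from collections import Counter

# Illustrative subsystem tags, one per bug report (names are examples).
subsystems = ["editor", "editor", "file-io", "editor", "printing",
              "editor", "file-io", "ui", "editor", "file-io"]

counts = Counter(subsystems)
total_bugs = len(subsystems)

running = 0
for name, n in counts.most_common():      # Pareto order: hottest subsystem first
    running += n
    print(f"{name:10s} {n:3d}  cumulative {100 * running / total_bugs:4.0f}%")
```

In this toy data the top two subsystems account for 80% of the reports, which is where the extra test effort and the process-improvement attention should go.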
Key takeaways
Six things to remember.
Five charts, one hour a week
Opened vs. closed, closure period (daily + rolling), root cause, subsystem. That's it. Anything more is a project for the data team; anything less is inadequate.
The shape of the curve carries the signal
Totals lie. Curves do not. Learn the three failure patterns on the opened/closed chart and you'll spot them on every project for the rest of your career.
Track the quality backlog, not just the find rate
Cumulative opened minus cumulative closed is your quality backlog. It should trend toward zero before release. If it doesn't, nothing else matters.
Closure period is developer responsiveness
Stable daily closure period bounded by two or three test cycles means the development team is keeping up. Exploding closure period means they are not — regardless of how green the status report is.
Root cause drives process improvement
Tag causes at close, not at triage. Requirements-driven bugs have a different remedy than coding-driven bugs, and neither shows up in the raw find count.
Subsystem breakdown tells you where to re-invest
Hot subsystems get more testing now, and more development attention next release. The Pareto holds in almost every system you'll ever ship.
Worked examples
SpeedyWriter — the weekly find count
The raw data behind every chart in this talk. Three cycles of component test, three of integration, three of system test. Bugs trail off cleanly toward first customer ship on 13 September.
Component test
Cycle 1 · 7/19–7/25 · 25 bugs
Cycle 2 · 7/26–8/1 · 20 bugs
Cycle 3 · 8/2–8/8 · 5 bugs
The drop from 20 → 5 in cycle 3 is the first "this phase is stabilizing" signal. Expect the same pattern in the later test phases with a one- to two-week lag.
Integration test
Cycle 1 · 8/2–8/8 · 20 bugs
Cycle 2 · 8/9–8/15 · 15 bugs
Cycle 3 · 8/16–8/22 · 5 bugs
Integration test starts on top of component test, so cycle 1 overlaps cycle 3 of component. Expected. The 20 → 15 → 5 taper mirrors component test — good news.
System test
Cycle 1 · 8/16–8/22 · 10 bugs
Cycle 2 · 8/23–8/29 · 5 bugs
Cycle 3 · 8/30–9/5 · 0 bugs
First customer ship on 9/13. The zero in cycle 3 is the release signal — but only because the opened/closed gap closed at the same time. Without that, zero means a tired test team, not a stable product.
Reading the opened/closed chart — three trouble patterns
Same chart template, three different projects. Each pattern has a different remedy. Name the shape before you debate the totals.
Pattern 1 — endless bug discovery. Shape: the opened curve never flattens. Through the planned release date, test is still finding bugs at near-peak rate.
Reading: the product is not stabilizing. Either the test team is exploring net-new territory each cycle, or every build is introducing new defects faster than the old ones are fixed.
Remedy: stop the feature stream, hold an integration line, run a full regression against the last good build, and compare.
Pattern 2 — ignored bug reports. Shape: the opened curve flattens normally, but the closed curve lags farther and farther behind. The gap between the two curves widens instead of closing.
Reading: the development team is not keeping up with bug closure, for any of the usual reasons — understaffed, overcommitted to features, or not incentivized to close.
Remedy: rebalance dev effort toward close-out, or formally accept the quality backlog as release risk. Do not hide it.
Pattern 3 — poor report management. Shape: both curves are noisy. The closed curve jumps around. Bugs keep re-opening.
Reading: the bug-report process itself is breaking down — sloppy verification, disputed closes, missing regression tests.
Remedy: tighten the close-out criteria, add a re-verify step before close, and consider running the `bug-reporting-process` checklist from the QA Library with the team.
Closing
Five charts, one hour a week. That's all this is. The reason it works is that bug data is the most honest dashboard you will ever build — the numbers are what they are, the curves form what they form, and the patterns they form are durable across every project you will ever run.
Learn the three trouble patterns, learn how to calculate closure period, and tag every bug with a root-cause code at close. The rest is repetition. And at the next project status meeting, you will be the one with an argument that management can actually make decisions from.