Talk · International Software Quality Week, San Francisco

Charting the progress of system development. Five bug-data charts every test lead should run every week.

Bug data is the most honest project dashboard you will ever build. Five simple charts — cumulative opened vs. closed, daily closure period, rolling closure period, root-cause breakdown, and subsystem breakdown — give you a crisp read on product stability, developer responsiveness, and where the project is going sideways. This talk is the method, the SpeedyWriter worked example, and the interpretive patterns you'll see in real data.

Slides: 10 · Charts: 5 · Worked example: SpeedyWriter

Abstract

Bug data is the most honest dashboard you have.

Every project runs some kind of status report. Most of them are green because the author wants them to be. Bug data is different — the numbers are what they are, and the patterns they form on a chart tell you things the weekly status email will never tell you.

This talk walks through five charts that a test lead can pull out of any bug-tracking database in about an hour a week. Cumulative opened versus closed shows whether the product is stabilizing. Closure period shows whether bugs are actually getting fixed. Root-cause and subsystem breakdowns show where to invest next. None of the math is complicated — a COUNTIF in Excel plus two running totals gets most of it. The interpretive patterns are what take practice.

We use SpeedyWriter as the worked example throughout — three cycles each of component, integration, and system test running from mid-July through a 13 September first customer ship. The numbers are modest on purpose. The point is to see what each chart says when you have 100 bugs, not 10,000.

Manage key indicators, not the crisis du jour.

Rex Black, International Software Quality Week

Outline

What the talk covers, in order.

01

Why chart bug data at all

Two reasons. First, to assess product, process, and project quality — stability, defect-removal trends, root cause, bug management, and hot spots. Second, to communicate status to peers and management in a way that summarizes key facts, surfaces underlying trends, and gets the point across quickly.

  • Summarize key facts and underlying trends, not incident counts.
  • Get the point across quickly — one chart, one argument.
  • Manage the key indicators, not the crisis du jour.
02

The SpeedyWriter worked example

SpeedyWriter runs three cycles each of component, integration, and system test. Component test: weeks of 7/19, 7/26, 8/2 — 25, 20, 5 bugs. Integration test: 8/2, 8/9, 8/16 — 20, 15, 5 bugs. System test: 8/16, 8/23, 8/30 — 10, 5, 0 bugs. First customer ship: 13 September. That's 105 bug reports over roughly two months. The zero in system-test cycle 3 is the first real stability signal; everything else on the chart is context.

  • Two fields from bug tracking carry most of the signal: opened date and closed date.
  • Export to a spreadsheet to chart.
  • Bug reports approximate underlying bugs linearly (roughly 27% high because of duplicates and rejections) — charts hold up.
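The cycle totals above are easy to sanity-check once the data is exported. A minimal sketch in Python (my choice of tool here, not the talk's — the talk uses a spreadsheet; the week labels come from the slide):

```python
# SpeedyWriter weekly find counts, keyed by test phase (from the slide).
finds = {
    "component":   {"7/19": 25, "7/26": 20, "8/2": 5},
    "integration": {"8/2": 20, "8/9": 15, "8/16": 5},
    "system":      {"8/16": 10, "8/23": 5, "8/30": 0},
}

# Totals per phase, then the grand total across all nine cycles.
phase_totals = {phase: sum(weeks.values()) for phase, weeks in finds.items()}
grand_total = sum(phase_totals.values())

print(phase_totals)  # {'component': 50, 'integration': 40, 'system': 15}
print(grand_total)   # 105
```

Fifty bugs in component, forty in integration, fifteen in system: the taper across phases is itself part of the stability story.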
03

Chart 1 — cumulative opened vs. closed

The flagship chart. The cumulative opened curve flattens as the system stabilizes and the test team finds what it can find. The cumulative closed curve converges to the opened curve as the product becomes customer-ready. A closed curve that tracks the opened curve closely indicates crisp bug management. Milestones in the project show up as changes in the shape of the curves — phase transitions, drop-and-recover events around major merges, and the final close-out.

  • In Excel: COUNTIF by date for opened and closed, then two running totals.
  • Watch the gap between the two curves; that gap is the quality backlog.
  • The derivative of the opened curve is your daily find rate — also worth plotting.
04

Chart 1 — the three trouble patterns

Three distinct failure modes show up unmistakably on the opened/closed chart: endless bug discovery (the opened curve never flattens — the product is not stabilizing), ignored bug reports (a persistent, growing gap between opened and closed — the dev team is not keeping up), and poor report management (the gap is noisy and non-monotonic — reports are opening, closing, and re-opening sloppily). Each has a different remedy, which is the whole point of looking at the pattern rather than the totals.

05

Chart 2 — closure period

Closure period is the age of a bug at the moment it closes. Measures developer responsiveness. Two versions: daily closure period is the average age of all bugs closed on a given day; rolling closure period is the average age of all bugs closed to date. A stable closure period means a smooth rolling line and daily values bounded at the top by two to three test-cycle durations. In the SpeedyWriter example, each cycle is one week, so a well-run project sees daily closure periods mostly under three weeks.

06

How to calculate closure period

Four mechanical steps. (1) For each closed bug, compute closure period = closed_date − opened_date. (2) Sum closure periods by closed_date. (3) Count the bugs closed on each date. (4) Divide sum by count for the daily closure period; maintain a running total of sum and count for the rolling closure period. All four steps are pivot-table primitives in any modern spreadsheet.
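The four steps can be sketched directly (field layout and sample dates are assumptions; the talk does this with pivot tables):

```python
from collections import defaultdict
from datetime import date

# Each closed bug as (opened_date, closed_date); sample data.
bugs = [
    (date(2024, 7, 19), date(2024, 7, 22)),
    (date(2024, 7, 20), date(2024, 7, 22)),
    (date(2024, 7, 21), date(2024, 7, 25)),
]

# Steps 1-3: closure period per bug, then sum and count by closed date.
sums, counts = defaultdict(int), defaultdict(int)
for opened, closed in bugs:
    period = (closed - opened).days      # step 1: closure period in days
    sums[closed] += period               # step 2: sum of periods by closed date
    counts[closed] += 1                  # step 3: bugs closed on that date

# Step 4: daily average, plus running totals for the rolling average.
run_sum = run_count = 0
for day in sorted(sums):
    daily = sums[day] / counts[day]
    run_sum += sums[day]
    run_count += counts[day]
    rolling = run_sum / run_count
    print(day, daily, rolling)
```

For the sample data this prints a daily closure period of 2.5 days on 7/22 and 4.0 on 7/25, with the rolling value landing at 3.0.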

07

Chart 3 — root cause breakdown

Tag every bug with a root-cause code at close — requirements, design, coding, configuration, environment, documentation, test case. The distribution tells you where the mistakes are being made, which drives both course corrections during the project and long-term development-process improvements afterwards. You're not naming names; you're naming phases.
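Once every closed bug carries a root-cause code, the breakdown is a one-line tally. A sketch using the talk's code list with invented counts:

```python
from collections import Counter

# Root-cause code recorded at close for each bug (sample data).
causes = ["coding", "coding", "design", "coding", "requirements",
          "design", "configuration", "coding", "documentation", "coding"]

breakdown = Counter(causes)
total = sum(breakdown.values())

# Print the distribution, largest cause first, with its share.
for cause, n in breakdown.most_common():
    print(f"{cause:14s} {n:2d}  {100 * n / total:.0f}%")
```

Here coding accounts for half the sample, which would point the process-improvement effort at code review rather than, say, requirements workshops.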

08

Chart 4 — subsystem breakdown

Tag every bug with the affected subsystem. Two actionable patterns fall out. First, the subsystems with the most bug reports should get more testing — where there is one bug, there is often another. Second, the subsystems that are most error-prone are candidates for development-process improvement too, because preventing the additional bugs upstream is cheaper than finding them downstream.
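The same tally sorted by count gives the Pareto view of which subsystems should soak up testing effort first. A sketch with invented subsystem names and counts:

```python
from collections import Counter

# Affected subsystem recorded on each bug report (sample data).
subsystems = ["editor"] * 12 + ["file-io"] * 6 + ["printing"] * 2 + ["ui"] * 2

ranked = Counter(subsystems).most_common()
total = sum(n for _, n in ranked)

# Cumulative share shows how few subsystems account for most bugs.
running = 0
for name, n in ranked:
    running += n
    print(f"{name:10s} {n:3d}  cumulative {100 * running / total:.0f}%")
```

In this sample two of four subsystems account for over 80% of the reports, which is the pattern that justifies reallocating both test time now and development attention next release.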

Key takeaways

6 things to remember.

01

Five charts, one hour a week

Opened vs. closed, daily closure period, rolling closure period, root cause, subsystem. That's it. Anything more is a project for the data team; anything less is inadequate.

02

The shape of the curve carries the signal

Totals lie. Curves do not. Learn the three failure patterns on the opened/closed chart and you'll spot them on every project for the rest of your career.

03

Track the quality backlog, not just the find rate

Cumulative opened minus cumulative closed is your quality backlog. It should trend toward zero before release. If it doesn't, nothing else matters.

04

Closure period is developer responsiveness

Stable daily closure period bounded by two or three test cycles means the development team is keeping up. Exploding closure period means they are not — regardless of how green the status report is.

05

Root cause drives process improvement

Tag causes at close, not at triage. Requirements-driven bugs have a different remedy than coding-driven bugs, and neither shows up in the raw find count.

06

Subsystem breakdown tells you where to re-invest

Hot subsystems get more testing now, and more development attention next release. The Pareto holds in almost every system you'll ever ship.

Worked examples


SpeedyWriter — the weekly find count

The raw data behind every chart in this talk. Three cycles of component test, three of integration, three of system test. Bugs trail off cleanly toward first customer ship on 13 September.

Component test

Cycle 1 · 7/19–7/25 · 25 bugs

Cycle 2 · 7/26–8/1 · 20 bugs

Cycle 3 · 8/2–8/8 · 5 bugs

Drop from 20 → 5 in cycle 3 is the first "this subsystem is stabilizing" signal. Expect to see the same pattern across the other phases with a one- to two-week lag.

Integration test

Cycle 1 · 8/2–8/8 · 20 bugs

Cycle 2 · 8/9–8/15 · 15 bugs

Cycle 3 · 8/16–8/22 · 5 bugs

Integration test starts on top of component test, so cycle 1 overlaps cycle 3 of component. Expected. The 20 → 15 → 5 taper mirrors component test — good news.

System test

Cycle 1 · 8/16–8/22 · 10 bugs

Cycle 2 · 8/23–8/29 · 5 bugs

Cycle 3 · 8/30–9/5 · 0 bugs

First customer ship on 9/13. The zero in cycle 3 is the release signal — but only because the opened/closed gap closed at the same time. Without that, zero means a tired test team, not a stable product.

Reading the opened/closed chart — three trouble patterns

Same chart template, three different projects. Each pattern has a different remedy. Name the shape before you debate the totals.

Endless bug discovery

Shape: the opened curve never flattens. Through the planned release date, test is still finding bugs at near-peak rate.

Reading: the product is not stabilizing. Either the test team is exploring net-new territory each cycle, or every build is introducing new defects faster than the old ones are fixed.

Remedy: stop the feature stream, hold an integration line, run a full regression against the last good build, and compare.

Ignored bug reports

Shape: the opened curve flattens normally, but the closed curve lags farther and farther behind. The gap between the two curves widens instead of closing.

Reading: the development team is not keeping up with bug closure, for any of the usual reasons — understaffed, overcommitted to features, or not incentivized to close.

Remedy: rebalance dev effort toward close-out, or formally accept the quality backlog as release risk. Do not hide it.

Poor report management

Shape: both curves are noisy. The closed curve jumps around. Bugs keep re-opening.

Reading: the bug-report process itself is breaking down — sloppy verification, disputed closes, missing regression tests.

Remedy: tighten the close-out criteria, add a re-verify step before close, and consider running the `bug-reporting-process` checklist from the QA Library with the team.

Closing

Five charts, one hour a week. That's all this is. The reason it works is that bug data is the most honest dashboard you will ever build — the numbers are what they are, the curves form what they form, and the patterns they form are durable across every project you will ever run.

Learn the three trouble patterns, learn how to calculate closure period, and tag every bug with a root-cause code at close. The rest is repetition. And at the next project status meeting, you will be the one with an argument that management can actually make decisions from.


Want this talk delivered in-house?

Rex Black, Inc. delivers every talk on this site as a live workshop, a keynote, or a conference session. Tailored to your stack, your team, and your timeline.