The fine art of writing a good bug report. Ten tips that turn sloppy notes into a document developers will act on.
A bug report is a technical document — the only tangible product of testing. This talk walks through the ten tips Rex Black teaches every test team, anchored by a single worked example that evolves from a one-line rant into a clear, neutral, reproducible defect report that management can read and developers can fix.
Abstract
Why bug reports are worth the effort
Most bug reports that get returned as unreproducible are not unreproducible at all. They are poorly conceived and poorly written. The symptom was real. The report was bad. And a bad report costs the tester the time they took to write it, costs the developer the time it takes to bounce it, and buys the product nothing, because the bug does not get fixed.
A good bug report is a technical document. It describes the failure mode of the system under test. It is written to increase product quality, not to escalate a schedule slip or vent at a developer. It is read by management, by development, and often by people outside the engineering organization. It is, in almost every test effort, the only tangible artifact the test team ever ships.
This talk walks through the ten tips we teach every test team at Rex Black, Inc., and shows how a single real-feeling bug grows from a one-line complaint into a crisp, neutral, reproducible report. You will not agree with every wording choice in the worked example — two good bug reports on the same problem can differ in style and content without differing in substance. That is fine. The point is the ten habits underneath.
“Software testing should be more like a carefully designed laboratory experiment than a random walk in an electrical storm. Testers are engineers, after all.”
— Rex Black, Quality Week
Outline
What the talk covers, in order.
Structure — test carefully
Good bug reports rest on structured testing. If the underlying testing is sloppy — no written test cases, no notes, no process — the report will be sloppy too. Bug reporting begins the moment expected and observed results diverge, which means you need to know the expected result in writing before you run the test.
- Use a deliberate, documented approach to testing.
- Follow written test cases, or run automated ones against written procedures.
- Take notes as you go. The note is the raw material of the report.
Reproduce — test it again
Always verify the failure is reproducible as part of writing the report. Three tries is a reasonable rule of thumb. Document a crisp sequence of actions that reproduces the failure, and state the incidence rate openly if the failure is intermittent. Clean steps to reproduce head off the "cannot reproduce" bounce before it starts.
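The re-test habit can even be mechanized. A minimal sketch, assuming a hypothetical `reproduce_once()` hook that runs the failing steps one time and reports whether the symptom appeared (not a real API, just a stand-in for the manual steps):

```python
# Hypothetical helper: reproduce_once() runs the failing steps once and
# returns True if the symptom appeared. It stands in for manual execution.
def incidence_rate(reproduce_once, tries=3):
    """Run the repro steps `tries` times; return (failures, tries)."""
    failures = sum(1 for _ in range(tries) if reproduce_once())
    return failures, tries

# A bug that fails every time yields the "3 out of 3 tries" line
# that belongs in the report; an intermittent one yields its incidence rate.
failures, tries = incidence_rate(lambda: True)
print(f"Reproduced {failures} out of {tries} tries")
```

Whether the count comes from a script or a notepad, the point is the same: the report states the incidence rate as an observed fact, not a guess.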
Isolate — test it differently
Change one variable at a time and watch whether the symptom changes. Isolation takes thought and an understanding of the system under test, and it can be as much work as the original test run. Match effort to severity. The goal is not to debug — the goal is to give the developer a head start by eliminating the obvious dead ends.
Generalize — test it elsewhere
Look for related failures. Does the same bug show up in other modules, with other data, on other platforms? Are there more severe occurrences of the same fault? Generalizing reduces duplicate reports, and it often reveals that the real bug is two layers down from the symptom you first caught.
Compare — review results of similar tests
Check whether the failure is a regression. Did the same test pass against an earlier build? On which build did it start failing? Not always possible — the feature may be new, or the earlier build may be impractical to reinstall — but when you can answer "new to build X," you have just cut the developer's search space in half.
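The "new to build X" hunt is just a search over build history. A sketch under stated assumptions: `test_passes(build)` is a hypothetical hook that re-runs the same test case against an installed build, and the build numbers mirror the worked example.

```python
# Regression hunt: walk the build history in order and find the first
# failing build. test_passes(build) is a hypothetical re-run hook.
def first_failing_build(test_passes, builds):
    """Return the first build (in order) where the test fails, or None."""
    for build in builds:
        if not test_passes(build):
            return build
    return None

BUILDS = [f"1.1.{n:03d}" for n in range(7, 19)]   # 1.1.007 .. 1.1.018
```

When reinstalling builds is expensive, a binary search over the same list cuts the number of reinstalls to roughly log n, provided there is a single point where the test flips from pass to fail.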
Summarize — relate the test to the customer
Put a short, bumper-sticker summary on every report. Capture the failure and its impact on the customer in one line. Harder than it looks: testers need to spend real time on this sentence. A good summary gets management attention, names the bug for developers, and helps priority sort itself out without a meeting.
Condense — trim unnecessary information
Reread the report and cut extraneous words or steps. Everyone's time is precious. Don't waste any of it on verbiage — and don't cut any meat either. Cryptic one-liners are as bad as droning on. The target is clear prose at minimum length.
Disambiguate — use clear words
Remove, rephrase, or expand vague, misleading, or subjective statements. The goal is clear, indisputable statements of fact that cannot be misread. Lead the developer by the hand to the bug. "Trashed the text" is rhetoric. "Converted all text to control characters, numbers, and apparently random binary data" is a statement of fact.
Neutralize — express the problem impartially
Deliver bad news gently. Be fair-minded in wording and implications. Do not attack developers, do not criticize the underlying error, do not try to be funny, do not use sarcasm. Confine the report to statements of fact. You never know who will read it — executives, auditors, customers, lawyers. Write for that audience every time.
Review — be sure
Every tester should submit each bug report to one or more peers for review. Reviewers make suggestions, ask clarifying questions, and challenge whether the finding really is a bug when that's warranted. The test team should only submit the best possible report, given the time budget appropriate to the bug's priority.
Key takeaways
Six things to remember.
A bug report is a technical document
Not an escalation tool, not a vent, not a joke. Accurate, concise, well-conceived, high-quality — those are the four words to hold the work against.
The summary is half the value
A strong one-line summary gets management attention, names the bug for developers, and sets priority without a meeting. Spend real time on it.
Isolate before you submit
Change one variable at a time. The developer should inherit eliminations, not a list of everything you tried in a panic.
Neutralize — always
Strip rhetoric, sarcasm, and humor. You never know who reads the report. Executives, auditors, and customers read bug databases more often than you think.
Review each other's reports
Peer review on reports is the single cheapest quality lever on a test team. If you do nothing else from this talk, start tomorrow.
Process pays off
Faster bug lifecycles. Fewer reopens. Better tester–developer relationships. More credibility with senior management. All of it compounds.
Worked examples
One bug. Eight drafts.
The SpeedyWriter font-corruption bug
One bug, eight drafts. Each draft adds one of the ten tips. This is the worked example from the original deck, rebuilt as readable prose.
Summary: Nasty bug trashed contents of new file that I created by formatting some text in Arial font, wasting my time.
Steps to reproduce:
1. Started SpeedyWriter. Created a new file.
2. Typed four lines of "The quick fox jumps over the lazy brown dog," using bold, italic, strikethrough, and underline in turn.
3. Highlighted the text, pulled down the font menu, selected Arial.
4. This nasty bug trashed all the text into meaningless garbage, wasting the user's time.
5. Reproduced three out of three tries.
Adds: On the vague suspicion that this was a formatting problem, I saved the file, closed SpeedyWriter, and reopened it. The garbage remained.
- If you save the file before Arializing the contents, the bug does not occur.
- The bug does not occur with existing files.
- This only happens under Windows 98.
Adds: Also happens with Wingdings and Symbol fonts.
Rewords the isolation: If you save the file before changing the font of the contents, the bug does not occur.
The same bug, continued
The next four drafts add comparison, summary, condensation, and disambiguation. The final version reads like a document a developer can act on without asking a single clarifying question.
Adds: New to build 1.1.018. Same test case passed against builds 1.1.007 (System Test entry) through 1.1.017.
Summary: Arial, Wingdings, and Symbol fonts corrupt new files.
— the single line that management will read, that developers will use as the bug's name, and that helps the triage meeting prioritize without asking.
Trims pronouns and filler. Replaces rhetoric ("nasty bug trashed all text") with fact ("all text converted to control characters, numbers, and other apparently random binary data"). Fixes "highlighted the text" to "highlighted all four lines of text."
Summary: Arial, Wingdings, and Symbol fonts corrupt new files.
Steps to reproduce:
1. Started SpeedyWriter. Created a new file.
2. Typed four lines of "The quick fox jumps over the lazy brown dog."
3. Highlighted all four lines of text. Pulled down the font menu. Selected Arial.
4. All text converted to control characters, numbers, and other apparently random binary data.
5. Reproduced three out of three tries.
Isolation: New to build 1.1.018; same test case passed against builds 1.1.007–1.1.017. Reproduced with the same steps using Wingdings and Symbol fonts. On the suspicion that this was a formatting problem, saved the file, closed SpeedyWriter, and reopened it — garbage remained. Saving the file before changing the font prevents the bug. Does not occur with existing files. Only under Windows 98 — not Solaris, Mac, or other Windows flavors.
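The final draft's completeness is checkable, which is part of what peer review verifies. A minimal sketch: the field names are an illustrative assumption, not a real bug-tracker schema, and the report content is condensed from the worked example.

```python
# A bug report reduced to its required fields, plus the completeness check
# a peer reviewer (or a submission form) could run. Field names are an
# illustrative assumption, not a real tracker schema.
REQUIRED_FIELDS = ("summary", "steps", "incidence", "isolation")

report = {
    "summary": "Arial, Wingdings, and Symbol fonts corrupt new files",
    "steps": [
        "Start SpeedyWriter; create a new file.",
        "Type four lines of text.",
        "Highlight all four lines; select Arial from the font menu.",
    ],
    "incidence": "3 out of 3 tries",
    "isolation": "New to build 1.1.018; new files only; Windows 98 only.",
}

def missing_fields(report):
    """Return the required fields that are absent or empty."""
    return [f for f in REQUIRED_FIELDS if not report.get(f)]
```

A report that passes this check can still be vague or inflammatory, which is why the summarize, disambiguate, and neutralize habits are about wording, not structure.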
Closing
A bug report is a technical document. Accurate. Concise. Well-conceived. High-quality. Move quickly from opened to closed.
The ROI: improved test-team communication, more credibility with senior management, better tester–developer relationships, faster bug lifecycles, fewer reopens. All of it compounds into increased product quality — which is what the test team is there for.
Want this talk delivered in-house?
Rex Black, Inc. delivers every talk on this site as a live workshop, a keynote, or a conference session. Tailored to your stack, your team, and your timeline.