One workbook for everything you track during test execution.
Team, environment, coverage, status, metrics.
A multi-sheet workbook that gives a test manager a single pane of glass over an execution cycle. Designed for the volume of data a real program produces, not the volume a demo produces.
Dashboards without source data are theater. This workbook is the source data — the chart is a byproduct.
Key Takeaways
Four things to remember.
One book, six sheets
Test Environment, Test Team, Quality Risk Coverage, Test Case Summary, Bug Metrics, Release Readiness. Each sheet feeds the others; none stands alone.
Every test case ties to a risk
The Quality Risk Coverage sheet cross-tabulates test cases against the FMEA risks. An uncovered high-RPN risk shows up immediately.
Estimate vs. actual, side by side
The Test Case Summary sheet tracks estimated and actual effort for every case. Velocity signals emerge from the difference — not from guesswork.
Track the environment explicitly
The Test Environment sheet names every host and its availability status. A "blocked" cycle has a root cause traceable to a named system.
Why this exists
What this template is for.
Most test tracking we inherit is a README, a shared doc, and three Slack channels. This workbook replaces all three with a structured record the whole team edits together.
The Basic variant covers the core tracking sheets; the Advanced variant adds historical snapshots and the release-readiness view. Pick the one that matches how many releases you run per year; both are useful.
The columns
What each field means.
Test Environment: one row per test host or environment. Track system role, hostname, IP, OS, installed software, and current availability.
Example: DB1 / Kursk / 192.168.6.10 / Solaris / Oracle 9i / Available
Test Team: roster of testers, including initials (used elsewhere in the book), full name, title, and working shift.
Example: JHB / Jamal Brown / Test Manager / Split
Quality Risk Coverage: matrix cross-tabulating test suites / cases against the FMEA risks. Cell values indicate coverage strength (e.g., 9 = full, 3 = partial, 1 = incidental, 0 = none).
Example: Test 1.001 (File) covers risk 1.001 with weight 9 (full coverage).
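The coverage math is simple enough to sketch in a few lines. The risk IDs, RPN values, and threshold below are illustrative, not from the template; only the 9 / 3 / 1 / 0 weights are the sheet's own convention:

```python
# risk_id -> RPN from the FMEA (illustrative values)
risks = {"1.001": 240, "1.002": 90, "1.003": 320}

# (test_id, risk_id) -> coverage weight: 9 full, 3 partial, 1 incidental
coverage = {
    ("1.001", "1.001"): 9,
    ("1.002", "1.002"): 3,
}

def uncovered_high_rpn(risks, coverage, rpn_threshold=200):
    """Return high-RPN risks whose best coverage weight falls short of full (9)."""
    best = {rid: 0 for rid in risks}
    for (_test, rid), weight in coverage.items():
        best[rid] = max(best[rid], weight)
    return sorted(
        (rid for rid, rpn in risks.items()
         if rpn >= rpn_threshold and best[rid] < 9),
        key=lambda rid: -risks[rid],
    )

print(uncovered_high_rpn(risks, coverage))  # → ['1.003'] (no coverage at all)
```

This is exactly the check the sheet makes visual: an uncovered high-RPN risk is an empty cell in a row you cannot afford to leave empty.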
Test Case Summary: one row per test case. Tracks developer, executor, Test ID, name, status, type (Auto / Manual), phase, estimated effort, actual effort, estimated duration, and comments.
Example: ATE / TT / 1.001 / File / Update / Auto / IS / 2 est / 6 act / 4 duration
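The velocity signal this sheet surfaces is just the ratio of actual to estimated effort. A minimal sketch, with invented case data and field names:

```python
from dataclasses import dataclass

@dataclass
class TestCase:
    test_id: str
    name: str
    est_effort: float   # person-hours estimated before execution
    act_effort: float   # person-hours actually spent

def effort_ratio(cases):
    """Overall actual/estimated ratio; above 1.0 means the plan was optimistic."""
    est = sum(c.est_effort for c in cases)
    act = sum(c.act_effort for c in cases)
    return act / est if est else float("nan")

cases = [TestCase("1.001", "File", 2, 6), TestCase("1.002", "Update", 4, 4)]
print(round(effort_ratio(cases), 2))  # 10 actual / 6 estimated → 1.67
```

A ratio tracked per cycle, not per case, is what turns this column pair into a forecast.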
Bug Metrics: running counts of bugs by severity × priority, opens vs. closes per cycle, fix velocity, and regression rate. Feeds the dashboard charts.
Example: Severity 1 × Priority 1 = 3 open; 2 closed this cycle; regression rate 4%.
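These tallies reduce to two small computations: a severity-by-priority cross-tab of open bugs and a regression rate. A sketch with invented bug records:

```python
from collections import Counter

# Hypothetical defect records; field names are illustrative.
bugs = [
    {"severity": 1, "priority": 1, "status": "open",   "regression": False},
    {"severity": 1, "priority": 1, "status": "open",   "regression": True},
    {"severity": 2, "priority": 2, "status": "closed", "regression": False},
    {"severity": 3, "priority": 2, "status": "closed", "regression": False},
]

# severity x priority cross-tab of currently open bugs
open_cells = Counter(
    (b["severity"], b["priority"]) for b in bugs if b["status"] == "open"
)

# share of all reported bugs that broke previously working behavior
regression_rate = sum(b["regression"] for b in bugs) / len(bugs)

print(open_cells[(1, 1)], regression_rate)  # 2 open sev-1/pri-1 bugs, 0.25
```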
Release Readiness: summary view rolling up coverage, status, and bug metrics into release go/no-go signals for leadership.
Example: Coverage: 78% of high-risk; Status: 62% passed; Bugs: 2 sev-1 open.
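How a roll-up like this becomes a gate signal can be sketched as follows; the thresholds and signal names here are assumptions for illustration, not the template's own rules:

```python
def release_signal(high_risk_coverage, pass_rate, open_sev1):
    """Roll coverage, status, and bug metrics into a single gate signal."""
    if open_sev1 > 0:
        return "no-go: open severity-1 bugs"
    if high_risk_coverage < 0.90:
        return "no-go: high-risk coverage below 90%"
    if pass_rate < 0.95:
        return "caution: pass rate below 95%"
    return "go"

# Values from the example row above: 78% coverage, 62% passed, 2 sev-1 open
print(release_signal(0.78, 0.62, 2))  # no-go: open severity-1 bugs
```

The point of naming thresholds up front is that the gate meeting argues about the data, not about what "ready" means.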
Live preview
What it looks like populated.
Test Environment sheet from the Basic Sumatra Test Tracking workbook.
| System | Name | IP Address | OS | Other SW | Status |
|---|---|---|---|---|---|
| Server Cluster East (SE) | | | | | |
| DB1 | Kursk | 192.168.6.10 | Solaris | Oracle 9i | Available |
| Web1 | Leningrad | 192.168.6.20 | Solaris | Netscape | Available |
| App1 | Stalingrad | 192.168.6.30 | HP/UX | Oracle 9AS | Available |
| Server Cluster West (SW) | | | | | |
| DB2 | Dunkirk | 192.168.6.11 | AIX | Sybase | Available |
| Web2 | Bulge | 192.168.6.21 | AIX | Domino | Available |
How to use it
7 steps, in order.
1. Populate the Test Team sheet with every person who will log test results. Use initials consistently in downstream sheets.
2. Populate the Test Environment sheet with every host the test lab will use. Mark availability before each cycle.
3. Import your FMEA into the Quality Risk Coverage sheet as column headers. Create one row per test suite / case.
4. Fill coverage values (9 / 3 / 1 / 0) for each test case against each risk. Sort by risk RPN to spot uncovered high-risk cells.
5. In Test Case Summary, enter estimated effort and duration for every case before execution starts. Fill in actual effort as execution progresses.
6. Update Bug Metrics daily during execution. Use the shift transition as the natural checkpoint.
7. At each release gate, review the Release Readiness roll-up with stakeholders. If it cannot answer their questions, iterate the underlying sheets.
Methodology
The thinking behind it.
The Basic variant is enough for teams running 3–6 test cycles per release. The Advanced variant adds historical snapshot sheets and a read-only dashboard tab suitable for programs running 20+ cycles per year.
The coverage weights (9 / 3 / 1 / 0) come from QFD (Quality Function Deployment). They force a step-function judgment instead of arbitrary percentages.
Take it with you
Download the piece you just read.
We keep this library free. All we ask is that you tell us who you are, so we know who to follow up with if we release an updated version. It's a one-time form; this browser remembers you after that.
Related in the library
Pair this with.
Need a QA program to back this up in your organization?
If a template is not enough and you want help applying it to a live engagement, we can have a call this week.
Related reading
Articles, talks, guides, and case studies tagged for the same audience.
- Whitepaper
Evaluation Before Shipping: How to Test an AI Application Before It Hits Production
The release-gate playbook for AI features. Covers the five evaluation dimensions, how to build a lean golden set, where LLM-as-judge is trustworthy and where it lies, rollout mechanics with named exit criteria, and the regression suite that keeps a shipped AI feature from quietly rotting in production.
- Whitepaper
Choosing the Right Model (and Knowing When to Switch)
A practical framework for matching LLM model tier to task. Covers the four axes (capability, latency, cost, reliability), cascade routing patterns that cut cost 60 to 80 percent without measurable quality loss, switching costs you did not plan for, and the worked economics at 10K, 100K, and 1M decisions per day.
- Whitepaper
Beyond ISTQB: A Multi-Domain Certification Roadmap for Technical L&D
Most engineering L&D programs over-index on a single certification family (usually ISTQB on the QA side, AWS on the infrastructure side) and under-invest across the rest of the technical domains the org actually needs. This paper covers a multi-domain certification roadmap (QA, AI, cloud, data, security, project management, software engineering) with sequencing logic for each level of the engineering ladder, plus the maintenance discipline that keeps the roadmap relevant as the technology shifts underneath it.
- Guide
The ISTQB Advanced Level path, mapped
The Advanced Level landscape keeps changing — CTAL-TA v4.0 shipped May 2025, CTAL-TM is on v3.0, CTAL-TAE is on v2.0. This guide maps all four core modules, prerequisites, exam formats, sunset dates, and which module a given role should take first. Links directly to the authoritative istqb.org syllabi.
- Whitepaper
Bug Triage: A Cross-Functional Framework for Deciding Which Defects to Fix
Bug triage is the cross-functional decision process that converts raw defect reports into prioritized action. Done well, it optimizes limited engineering capacity against risk; done poorly, it becomes a backlog-management ritual that neither fixes the important defects nor drops the unimportant ones. This whitepaper covers the triage process, the participants, the six action outcomes, the four decision factors, and the governance disciplines that keep triage effective in continuous-delivery environments.
- Whitepaper
Building Quality In: What Engineering Organizations Do from Day One
Testing at the end builds confidence, but the most efficient quality assurance is building the system the right way from day one. This whitepaper covers the upstream disciplines — requirements clarity, lifecycle selection, per-unit programmer practices, and continuous integration — that make system-level testing cheap and fast rather than the only thing holding a release together.
Where this leads
- Service · Quality engineering
Software Quality & Security
Independent test programs, security testing, and quality engineering for systems where defects cost real money.
- Solution
Risk Reduction & Clear Decisions
Quality programs and decision frameworks that shift risk discussions from anecdote to evidence.
- Solution
Reliable Software at Scale
Quality engineering programs for organizations whose software is now operationally critical.