Three test case templates, one workbook.
Step-by-step, screen-by-screen, and IEEE 829 formal.
The test case styles you will need for most software engagements, in one workbook. Pick the variant that matches how your team thinks; they are equivalent in rigor, different in presentation.
A test case is a written question to the system: "Does this behave correctly?" The template is the grammar of the question.
Key Takeaways
Four things to remember.
Template 1 — step-by-step
For procedural, command-driven, or API-level testing. Numbered major / minor steps with a result column. Works well for automated and manual test cases alike.
Template 2 — screen-by-screen
For GUI-heavy applications. Rows map to screens and fields; expected result is captured once at the end. Easier to author; reads faster during execution.
Template 3 — IEEE 829 formal
For regulated or safety-critical environments. Full input / output specifications with states, timing, and inter-case dependencies. Takes longer to write; reads like a specification.
All three share the same core metadata
Test ID, suite, priority, hardware, software, duration, effort, setup, teardown. Pick a variant for the body; the header stays the same.
Why this exists
What this template is for.
The three variants exist because one size does not fit all. A DevOps team doing API regression uses Template 1. A consumer product team doing manual GUI regression uses Template 2. A medical-device team producing audit-visible test protocols uses Template 3.
The columns below are the union of fields across all three. Each variant populates a subset that matches the testing style.
The columns
What each field means.
Test Case Name: Mnemonic identifier. Short enough to reference in conversation; long enough to recognize at a glance.
Test ID: Five-digit hierarchical ID, XX.YYY, where XX is the suite number and YYY is the test number within that suite.
Test Suite(s): The name(s) of the test suite(s) that use this case. A case may belong to multiple suites.
Priority: Derived from the quality risk coverage analysis. Drives selection order during compressed cycles.
Hardware Required: One row per required hardware item. Match exactly to the Test Environment sheet entries.
Software Required: One row per required software item, including versions.
Duration: Elapsed clock time to run the test. Distinct from effort: a 2-hour soak test has 2h duration but ~5 min effort.
Effort: Person-hours required to execute the test.
Setup: Steps to bring the system under test into the required initial state. Kept separate from the test body so the initial state can be saved and reused.
Teardown: Steps to return the SUT to its pretest state. Often mirrors setup in reverse.
Body: numbered major/minor steps with a result column (Template 1); screen × field × input table (Template 2); IEEE 829 input/output spec with states, timing, and dependencies (Template 3).
Execution Summary: Status, system config ID, tester, date completed, actual effort, actual duration, linked bug IDs.
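As an illustration, the shared header metadata could be modeled as a record in tooling that generates or tracks cases. This is a hypothetical sketch: the field names and units are ours, not the workbook's, and the workbook's column labels remain canonical.

```python
from dataclasses import dataclass, field

@dataclass
class TestCaseHeader:
    """Common metadata shared by all three template variants (illustrative)."""
    name: str                  # mnemonic identifier
    test_id: str               # five-digit hierarchical ID, XX.YYY
    suites: list[str]          # a case may belong to multiple suites
    priority: int              # from quality risk coverage analysis
    hardware: list[str] = field(default_factory=list)
    software: list[str] = field(default_factory=list)
    duration_minutes: int = 0  # elapsed clock time
    effort_minutes: int = 0    # tester attention actually required
    setup: list[str] = field(default_factory=list)
    teardown: list[str] = field(default_factory=list)

# The duration-vs-effort distinction from the column notes:
soak = TestCaseHeader(
    name="login-soak",
    test_id="02.004",
    suites=["Reliability"],
    priority=1,
    duration_minutes=120,  # 2-hour soak...
    effort_minutes=5,      # ...but only ~5 minutes of tester attention
)
```

Keeping duration and effort as separate fields makes the scheduling math honest: total calendar time and total staffing are computed from different columns.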
Live preview
What it looks like populated.
Header fields of Template 1 — the step-by-step variant.
| Field | Description |
|---|---|
| Test Case Name | Mnemonic identifier |
| Test ID | Five-digit ID, XX.YYY: XX suite number, YYY test number. |
| Test Suite(s) | The name of the test suite(s) that use this test case. |
| Priority | From quality risk coverage analysis |
| Hardware Required | List hardware in rows |
| Software Required | List software in rows |
| Duration | Elapsed clock time |
| Effort | Person-hours |
| Setup | List steps needed to set up the test |
| Teardown | List steps needed to return SUT to pretest state |
| — Body — | ID / Step / Result / Bug ID / Bug RPN |
| Execution Summary | Status, config, tester, dates, effort, duration |
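The XX.YYY ID convention in the header lends itself to a mechanical check. A minimal sketch, with a hypothetical helper that is not part of the workbook:

```python
import re

def parse_test_id(test_id: str) -> tuple[int, int]:
    """Split a five-digit hierarchical ID 'XX.YYY' into (suite, test).

    Raises ValueError if the string does not match the convention.
    """
    match = re.fullmatch(r"(\d{2})\.(\d{3})", test_id)
    if match is None:
        raise ValueError(f"not a valid XX.YYY test ID: {test_id!r}")
    return int(match.group(1)), int(match.group(2))

# Suite 3, test 17 within that suite.
print(parse_test_id("03.017"))  # → (3, 17)
```

A check like this is worth running over the tracking workbook before a cycle starts; malformed IDs break cross-sheet lookups silently.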
How to use it
6 steps, in order.
1. Pick the variant that matches how your team thinks. Do not try to mix variants inside a single test suite.
2. Populate the common header fields (Name, ID, Suite, Priority, etc.) for every case; these drive the tracking workbook.
3. Write the body with one atomic check per step or row. A step that checks three things is three steps, not one.
4. Keep Setup and Teardown explicit, even when they seem obvious. Automated runners depend on them.
5. Leave the Execution Summary fields blank in the template; they are filled in when the case runs, not when it is designed.
6. Review authored cases with the test lead before they enter the tracking workbook. A bad case in tracking is harder to remove than a bad case in review.
Methodology
The thinking behind it.
Template 3 follows the IEEE 829-2008 Test Case Specification. Inter-case dependencies, timing constraints, and explicit environmental needs are what distinguish it from the lighter-weight variants.
For automated cases, Template 1 is almost always correct. For highly visual or wizard-style UIs, Template 2 is easier to maintain. Template 3 is reserved for regulated environments where the case itself is a controlled document.
Take it with you
Download the piece you just read.
We keep this library free. All we ask is that you tell us who you are, so we know who to follow up with if we release an updated version. It is a one-time form; this browser remembers you after that.
Related in the library
Pair this with.
Need a QA program to back this up in your organization?
If a checklist is not enough and you want help applying it to a live engagement, we can have a call this week.
Related reading
Articles, talks, guides, and case studies tagged for the same audience.
- Whitepaper
Evaluation Before Shipping: How to Test an AI Application Before It Hits Production
The release-gate playbook for AI features. Covers the five evaluation dimensions, how to build a lean golden set, where LLM-as-judge is trustworthy and where it lies, rollout mechanics with named exit criteria, and the regression suite that keeps a shipped AI feature from quietly rotting in production.
Read →
- Whitepaper
Choosing the Right Model (and Knowing When to Switch)
A practical framework for matching LLM model tier to task. Covers the four axes (capability, latency, cost, reliability), cascade routing patterns that cut cost 60 to 80 percent without measurable quality loss, switching costs you did not plan for, and the worked economics at 10K, 100K, and 1M decisions per day.
Read →
- Whitepaper
Beyond ISTQB: A Multi-Domain Certification Roadmap for Technical L&D
Most engineering L&D programs over-index on a single certification family, usually ISTQB on the QA side, AWS on the infrastructure side, and under-invest across the rest of the technical domains the org actually needs. This paper covers a multi-domain certification roadmap (QA, AI, cloud, data, security, project management, software engineering) with sequencing logic for each level of the engineering ladder, plus the maintenance discipline that keeps the roadmap relevant as the technology shifts underneath it.
Read →
- Guide
The ISTQB Advanced Level path, mapped
The Advanced Level landscape keeps changing — CTAL-TA v4.0 shipped May 2025, CTAL-TM is on v3.0, CTAL-TAE is on v2.0. This guide maps all four core modules, prerequisites, exam formats, sunset dates, and which module a given role should take first. Links directly to the authoritative istqb.org syllabi.
Read →
- Whitepaper
Bug Triage: A Cross-Functional Framework for Deciding Which Defects to Fix
Bug triage is the cross-functional decision process that converts raw defect reports into prioritized action. Done well, it optimizes limited engineering capacity against risk; done poorly, it becomes a backlog-management ritual that neither fixes the important defects nor drops the unimportant ones. This whitepaper covers the triage process, the participants, the six action outcomes, the four decision factors, and the governance disciplines that keep triage effective in continuous-delivery environments.
Read →
- Whitepaper
Building Quality In: What Engineering Organizations Do from Day One
Testing at the end builds confidence, but the most efficient quality assurance is building the system the right way from day one. This whitepaper covers the upstream disciplines — requirements clarity, lifecycle selection, per-unit programmer practices, and continuous integration — that make system-level testing cheap and fast rather than the only thing holding a release together.
Read →
Where this leads
- Service · Quality engineering
Software Quality & Security
Independent test programs, security testing, and quality engineering for systems where defects cost real money.
Learn more →
- Solution
Risk Reduction & Clear Decisions
Quality programs and decision frameworks that shift risk discussions from anecdote to evidence.
Learn more →
- Solution
Reliable Software at Scale
Quality engineering programs for organizations whose software is now operationally critical.
Learn more →