Four phases, applied in order, every project.
Plan. Prepare. Perform. Perfect.
The top-level testing process used on every engagement. Four phases contain eleven sub-processes, each documented in its own checklist in this library.
Testing is a sequence of interlocking activities that produce a defensible assessment of risk to quality. Skip a phase and you skip the defense.
Key Takeaways
Four things to remember.
Plan first, test later
Understand context, prioritize risk, estimate the effort, and get stakeholder sign-off before a single test case is written.
Prepare the team and the test system
A test team without the right skills and a test system without the right coverage are two sides of the same failure.
Perform in cycles against releases
Every test release is an opportunity to either confirm quality or reveal risk. Assign, track, and manage cases against each one.
Perfect what you ship and how you ship it
Document bugs, report status to stakeholders, and adjust the process itself. The last phase is where teams stop repeating mistakes.
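"Prioritize risk" in the first takeaway is concrete enough to sketch. One common way to rank quality risks is a risk priority number: likelihood times impact, each scored on a small ordinal scale. The class, field names, and 1-to-5 scales below are illustrative assumptions, not part of this process document.

```python
from dataclasses import dataclass

@dataclass
class QualityRisk:
    name: str
    likelihood: int  # assumed scale: 1 (rare) .. 5 (almost certain)
    impact: int      # assumed scale: 1 (cosmetic) .. 5 (showstopper)

    @property
    def priority(self) -> int:
        # Risk priority number: higher means test it first and deepest.
        return self.likelihood * self.impact

# Hypothetical risks for illustration only.
risks = [
    QualityRisk("Payment rounding errors", likelihood=2, impact=5),
    QualityRisk("Slow search on large catalogs", likelihood=4, impact=3),
    QualityRisk("Broken layout on legacy browsers", likelihood=3, impact=1),
]

for r in sorted(risks, key=lambda r: r.priority, reverse=True):
    print(f"{r.priority:>2}  {r.name}")
```

The ranked list is what step 1.B puts in front of stakeholders: agreement on the order, and on how far down the list the testing budget reaches, is the consensus the phase asks for.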
Why this exists
The problem this process fixes.
Every engagement starts with the same question: how do we know this system is ready? This process is the answer we give. Four phases, eleven sub-processes, no shortcuts.
Each phase contains its own detailed process, each documented as a standalone checklist in this library. Use this page as the map; follow the cross-links below for the territory.
The checklist
Fifteen steps, in order.
- Phase 1
Plan: Understand the testing effort
- 1.A
Understand the operational (system, project, and process) context and the organizational context in which the testing will be performed.
- 1.B
Define and prioritize the risks to system quality, and obtain stakeholder consensus on the extent of testing to mitigate these risks.
- 1.C
Estimate and obtain management support for the time, resources, and budget required to perform the testing agreed upon in step 1.B.
- 1.D
Develop a plan for the tasks, dependencies, and participants required to mitigate the risks to system quality, and obtain stakeholder support for this plan.
- Phase 2
Prepare: Assemble the people and tests
- 2.A
Through staffing and training, build a team of test professionals with the appropriate skills, attitudes, and motivation.
- 2.B
Design, develop, acquire, and verify the test system which the test team uses to assess the quality of the system under test.
- Phase 3
Perform: Run the tests and gather the results
- 3.A
Acquire and install a test release consisting of some or all of the components in the system under test.
- 3.B
Assign, track, and manage the set of test cases to be run against each test release.
- Phase 4
Perfect: Guide adaptation and improvement
- 4.A
Document the bugs found during test execution.
- 4.B
Communicate test results to key stakeholders.
- 4.C
Adjust to changes and refine the testing process.
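Step 3.B ("assign, track, and manage the set of test cases to be run against each test release") can be sketched as a small tracking structure. This is a minimal illustration under assumed names and statuses, not a prescribed schema; real teams use a test management tool for the same bookkeeping.

```python
from dataclasses import dataclass, field

# Assumed status vocabulary for the sketch.
PASS, FAIL, BLOCKED, NOT_RUN = "pass", "fail", "blocked", "not run"

@dataclass
class TestCycle:
    """Tracks assigned test cases and their outcomes for one release."""
    release: str
    results: dict = field(default_factory=dict)

    def assign(self, case_ids):
        # Newly assigned cases start as not run; reassignment keeps results.
        for cid in case_ids:
            self.results.setdefault(cid, NOT_RUN)

    def record(self, case_id, status):
        self.results[case_id] = status

    def summary(self):
        # Count outcomes per status for the stakeholder report (step 4.B).
        counts = {}
        for status in self.results.values():
            counts[status] = counts.get(status, 0) + 1
        return counts

cycle = TestCycle("build 42")
cycle.assign(["TC-001", "TC-002", "TC-003"])
cycle.record("TC-001", PASS)
cycle.record("TC-002", FAIL)
print(cycle.summary())  # → {'pass': 1, 'fail': 1, 'not run': 1}
```

Each new test release gets a fresh cycle against the same case inventory, which is what makes release-over-release comparison, and the Perfect phase's status reporting, possible.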
One more thing
Each row in this checklist is its own discipline. The detailed checklists linked below show the sub-steps for each one. Start here to orient yourself, then drill down into whichever phase your team is in now.
Take it with you
Download the piece you just read.
We keep this library free. All we ask is that you tell us who you are, so we know who to follow up with if we release an updated version. It is a one-time form; this browser remembers you after that.
Related in the library
Pair this with.
Need a QA program to back this up in your organization?
If a checklist is not enough and you want help applying it to a live engagement, we can have a call this week.
Related reading
Articles, talks, guides, and case studies tagged for the same audience.
- Whitepaper
Evaluation Before Shipping: How to Test an AI Application Before It Hits Production
The release-gate playbook for AI features. Covers the five evaluation dimensions, how to build a lean golden set, where LLM-as-judge is trustworthy and where it lies, rollout mechanics with named exit criteria, and the regression suite that keeps a shipped AI feature from quietly rotting in production.
Read →
- Whitepaper
Choosing the Right Model (and Knowing When to Switch)
A practical framework for matching LLM model tier to task. Covers the four axes (capability, latency, cost, reliability), cascade routing patterns that cut cost 60 to 80 percent without measurable quality loss, switching costs you did not plan for, and the worked economics at 10K, 100K, and 1M decisions per day.
Read →
- Whitepaper
Beyond ISTQB: A Multi-Domain Certification Roadmap for Technical L&D
Most engineering L&D programs over-index on a single certification family, usually ISTQB on the QA side or AWS on the infrastructure side, and under-invest in the rest of the technical domains the org actually needs. This paper covers a multi-domain certification roadmap (QA, AI, cloud, data, security, project management, software engineering) with sequencing logic for each level of the engineering ladder, plus the maintenance discipline that keeps the roadmap relevant as the technology shifts underneath it.
Read →
- Guide
The ISTQB Advanced Level path, mapped
The Advanced Level landscape keeps changing — CTAL-TA v4.0 shipped May 2025, CTAL-TM is on v3.0, CTAL-TAE is on v2.0. This guide maps all four core modules, prerequisites, exam formats, sunset dates, and which module a given role should take first. Links directly to the authoritative istqb.org syllabi.
Read →
- Whitepaper
Bug Triage: A Cross-Functional Framework for Deciding Which Defects to Fix
Bug triage is the cross-functional decision process that converts raw defect reports into prioritized action. Done well, it optimizes limited engineering capacity against risk; done poorly, it becomes a backlog-management ritual that neither fixes the important defects nor drops the unimportant ones. This whitepaper covers the triage process, the participants, the six action outcomes, the four decision factors, and the governance disciplines that keep triage effective in continuous-delivery environments.
Read →
- Whitepaper
Building Quality In: What Engineering Organizations Do from Day One
Testing at the end builds confidence, but the most efficient quality assurance is building the system the right way from day one. This whitepaper covers the upstream disciplines — requirements clarity, lifecycle selection, per-unit programmer practices, and continuous integration — that make system-level testing cheap and fast rather than the only thing holding a release together.
Read →
Where this leads
- Service · Quality engineering
Software Quality & Security
Independent test programs, security testing, and quality engineering for systems where defects cost real money.
Learn more →
- Solution
Risk Reduction & Clear Decisions
Quality programs and decision frameworks that shift risk discussions from anecdote to evidence.
Learn more →
- Solution
Reliable Software at Scale
Quality engineering programs for organizations whose software is now operationally critical.
Learn more →