Estimate testing the way projects actually work.
WBS, schedule, budget, approval — in that order.
Test estimates fail either because they skip the work breakdown or because they skip stakeholder buy-in. This process does both, in sequence, so the resulting schedule and budget survive contact with management.
An estimate without a work breakdown is a guess. A work breakdown without stakeholder review is a fantasy. Both fail for different reasons; this process solves both.
Key Takeaways
Four things to remember.
Decompose until tasks are real
Each leaf task must be measurable, have start and end criteria, one responsible owner, and a bounded duration. If it does not, keep decomposing.
Sanity check at every level
Subtask, task, activity, phase. Compare against prior project data and industry metrics. Document discrepancies instead of smoothing them over.
Review with owners and with the team
Hidden assumptions and dependencies surface in conversation, not in spreadsheets. Walk the WBS with the people who will own each task.
Earn commitment by negotiating scope
When management will not fund the full estimate, negotiate which testing to drop — not how fast to go. Scope conversations are honest; velocity conversations are not.
Why this exists
The problem this process fixes.
Most test estimates arrive at management as a single number — with no breakdown, no assumptions, no linkage to risk. They get cut in half. The team spends the rest of the project explaining why the half that remained was not enough.
This process produces an estimate that resists that fate. It forces explicit decomposition, sanity checks against prior data, and an iterative negotiation with management about scope rather than velocity.
The checklist
23 steps, in order.
- Phase 1
Working one-on-one or as a group with the assigned test team, develop the work breakdown structure (WBS) and estimated schedule.
- 1.A
Decompose the test project into phases.
- 1.B
Decompose each phase into constituent activities.
- 1.C
Decompose each activity into tasks and subtasks until each task or subtask at the lowest level of decomposition satisfies the criteria below.
- 1.D
Taking risk priority into account, set up the task sequencing, dependencies, and resource assignments internal to the test subproject. Document the dependencies, resources, and tasks external to the test subproject (i.e., those that involve collaborative processes).
- 1.E
Sanity check the durations and efforts at the subtask, task, activity, and phase levels. If possible, augment professional judgment and gut instinct with previous project data, industry metrics, and so forth. Identify and, if possible, resolve discrepancies between the test subproject schedule and the project schedule. Where discrepancies cannot be resolved, document the obstacles.
- 1.F
Review the work breakdown structure and schedule with the individuals to whom you've assigned responsibility for each task, taking special care to surface any hidden assumptions or dependencies.
- 1.G
Review the work breakdown structure and schedule with the entire test team along with any subject matter experts available within or outside your organization.
- Phase 2
Use the work breakdown structure and schedule to develop a budget.
- 2.A
Extract from your work breakdown structure a complete list of resources. For each resource, determine the first and last day of assignment to the project. Where a resource is shared across multiple test projects, determine the percentage of that resource's time allocated to each project in each time period.
- 2.B
Identify any incidental resources (for example, licenses, lab space, or administrative support) required to support the resources on that list.
- 2.C
Categorize the resources into staff, travel, tools, test environments, and, if applicable, outsourcing costs. Total by time periods and categories.
- 2.D
Sanity check the budget details and totals. If possible, augment your professional judgment and gut instinct with previous project data, industry metrics, and so forth. Identify and, if possible, resolve discrepancies between the test subproject budget and the overall budget. Should resolution prove impossible, document the obstacles.
- 2.E
Amortize budget items that are long-term investments, documenting the reuse opportunities and the period of time over which you expect to recoup the costs.
- 2.F
If required or desirable, analyze return on investment. If the return on investment is negative, revisit your amortization assumptions from step 2.E and recompute. If it remains negative, review the items in your estimated work breakdown structure that consume the most money while contributing the least return, tracing each back to the quality risks it covers. Document the money-losing activities.
- 2.G
If permitted by your management, review the budget with your test team. Take special care to identify any missing resources, especially incidental ones. Iterate steps 2.D through 2.F if necessary.
- Phase 3
Obtain management support for the estimated schedule and budget.
- 3.A
Present the benefits of the test subproject.
- 3.B
Outline the time and money commitment required to receive those specific benefits.
- 3.C
Understand and attempt to resolve any objections to the estimate through iteration of steps 3.A and 3.B.
- 3.D
If management commitment to the proposed budget and schedule cannot be gained, discuss specific areas of testing to cut, setting the cost and/or schedule targets that must be met to win management support.
- 4
Repeat Phases 1 through 3 as necessary, fine-tuning the estimated schedule and budget, until you secure resources and management commitment adequate to the (possibly adjusted) scope of the test effort.
- 5
Check the approved budget and schedule documents into the project library or configuration management system. Place the documents under change control.
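The sanity checks in steps 1.E and 2.D are easier to keep honest if the comparison against prior-project data is mechanical. A minimal Python sketch; the task names, figures, and the 25 percent tolerance are illustrative assumptions, not part of the checklist:

```python
def flag_discrepancies(estimates, historical, tolerance=0.25):
    """Compare estimated task effort (hours) against prior-project actuals.
    Returns the discrepancies to document, not smooth over."""
    flags = []
    for task, est in estimates.items():
        actual = historical.get(task)
        if actual is None:
            flags.append((task, "no prior data"))
        elif abs(est - actual) / actual > tolerance:
            flags.append((task, f"estimated {est}h vs {actual}h on prior projects"))
    return flags

# Illustrative figures only:
prior = {"design test cases": 60, "configure test environment": 40}
current = {"design test cases": 100, "configure test environment": 44,
           "run exploratory charters": 30}
for task, note in flag_discrepancies(current, prior):
    print(f"{task}: {note}")
```

Anything the function flags goes into the estimate's documented assumptions, per step 1.E, rather than being quietly adjusted away.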
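Step 2.C is plain arithmetic, but doing it mechanically keeps the category and period totals consistent as the estimate iterates. A sketch with illustrative categories and figures:

```python
from collections import defaultdict

# Each entry: (period, category, cost). All figures are illustrative.
budget_items = [
    ("2025-Q1", "staff",             126000),
    ("2025-Q1", "tools",               8000),
    ("2025-Q1", "test environments",  12000),
    ("2025-Q2", "staff",             126000),
    ("2025-Q2", "travel",              3500),
]

by_category, by_period = defaultdict(int), defaultdict(int)
for period, category, cost in budget_items:
    by_category[category] += cost
    by_period[period] += cost

print(dict(by_category))  # totals by category, per step 2.C
print(dict(by_period))    # totals by time period
```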
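Steps 2.E and 2.F can be sketched the same way. Straight-line amortization is an assumption here (the checklist does not mandate a method), and every figure is illustrative:

```python
def amortize(cost, periods):
    """Straight-line amortization of a long-term investment (step 2.E):
    spread the cost evenly over the periods in which it is reused."""
    return [cost / periods] * periods

# Illustrative: a test automation framework reused across 4 releases.
charges = amortize(60000, 4)          # 15000.0 charged to each release

# Step 2.F in miniature: compare the per-release charge to the benefit
# attributed to that release (here, estimated cost of field failures avoided).
benefit = 22000
roi = (benefit - charges[0]) / charges[0]
print(f"ROI per release: {roi:.0%}")  # positive means the investment pays back
```

If the result is negative, the checklist says to revisit the amortization period first, then trace the costliest low-return items back to the quality risks they cover.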
One more thing
In a well-formed WBS, each leaf task has: measurable status, clear start and end criteria, one or more deliverables, resource and duration estimates, short duration, independence from other tasks, one responsible owner, a mapping to the quality risks, and a position in the schedule aligned to risk priority. Anything less and the estimate will not survive the first change request.
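Those criteria can be expressed as a mechanical check on each leaf task. The field names and the ten-day bound for "short duration" are illustrative assumptions, not a standard schema:

```python
def is_well_formed_leaf(task, max_days=10):
    """True if a leaf task meets the WBS criteria above; if not, keep decomposing.
    The max_days bound for "short duration" is an assumption; pick your own."""
    return (
        bool(task.get("owner"))                         # one responsible owner
        and bool(task.get("start_criteria"))            # clear start criteria
        and bool(task.get("end_criteria"))              # clear end / measurable status
        and bool(task.get("deliverables"))              # one or more deliverables
        and 0 < task.get("duration_days", 0) <= max_days
        and bool(task.get("quality_risks"))             # traceable to quality risks
    )

task = {"owner": "Priya", "start_criteria": "build deployed to test lab",
        "end_criteria": "all planned cases run once", "deliverables": ["test log"],
        "duration_days": 3, "quality_risks": ["data corruption"]}
print(is_well_formed_leaf(task))  # True
```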
Take it with you
Download the piece you just read.
We keep this library free. All we ask is that you tell us who you are, so we know whom to follow up with if we release an updated version. It's a one-time form; this browser remembers you after that.
Related in the library
Pair this with:
Need a QA program to back this up in your organization?
If a checklist is not enough and you want help applying it to a live engagement, we can have a call this week.
Related reading
Articles, talks, guides, and case studies tagged for the same audience.
- Whitepaper
Evaluation Before Shipping: How to Test an AI Application Before It Hits Production
The release-gate playbook for AI features. Covers the five evaluation dimensions, how to build a lean golden set, where LLM-as-judge is trustworthy and where it lies, rollout mechanics with named exit criteria, and the regression suite that keeps a shipped AI feature from quietly rotting in production.
Read →
- Whitepaper
Choosing the Right Model (and Knowing When to Switch)
A practical framework for matching LLM model tier to task. Covers the four axes (capability, latency, cost, reliability), cascade routing patterns that cut cost 60 to 80 percent without measurable quality loss, switching costs you did not plan for, and the worked economics at 10K, 100K, and 1M decisions per day.
Read →
- Whitepaper
Beyond ISTQB: A Multi-Domain Certification Roadmap for Technical L&D
Most engineering L&D programs over-index on a single certification family, usually ISTQB on the QA side, AWS on the infrastructure side, and under-invest across the rest of the technical domains the org actually needs. This paper covers a multi-domain certification roadmap (QA, AI, cloud, data, security, project management, software engineering) with sequencing logic for each level of the engineering ladder, plus the maintenance discipline that keeps the roadmap relevant as the technology shifts underneath it.
Read →
- Guide
The ISTQB Advanced Level path, mapped
The Advanced Level landscape keeps changing — CTAL-TA v4.0 shipped May 2025, CTAL-TM is on v3.0, CTAL-TAE is on v2.0. This guide maps all four core modules, prerequisites, exam formats, sunset dates, and which module a given role should take first. Links directly to the authoritative istqb.org syllabi.
Read →
- Whitepaper
Bug Triage: A Cross-Functional Framework for Deciding Which Defects to Fix
Bug triage is the cross-functional decision process that converts raw defect reports into prioritized action. Done well, it optimizes limited engineering capacity against risk; done poorly, it becomes a backlog-management ritual that neither fixes the important defects nor drops the unimportant ones. This whitepaper covers the triage process, the participants, the six action outcomes, the four decision factors, and the governance disciplines that keep triage effective in continuous-delivery environments.
Read →
- Whitepaper
Building Quality In: What Engineering Organizations Do from Day One
Testing at the end builds confidence, but the most efficient quality assurance is building the system the right way from day one. This whitepaper covers the upstream disciplines — requirements clarity, lifecycle selection, per-unit programmer practices, and continuous integration — that make system-level testing cheap and fast rather than the only thing holding a release together.
Read →
Where this leads
- Service · Quality engineering
Software Quality & Security
Independent test programs, security testing, and quality engineering for systems where defects cost real money.
Learn more →
- Solution
Risk Reduction & Clear Decisions
Quality programs and decision frameworks that shift risk discussions from anecdote to evidence.
Learn more →
- Solution
Reliable Software at Scale
Quality engineering programs for organizations whose software is now operationally critical.
Learn more →