Prioritize risk before you prioritize tests.
Six steps from stakeholder consensus to approved risk register.
Testing without a prioritized quality risk analysis is testing by politics. This six-step process produces a risk register with explicit stakeholder consensus — the basis for every downstream testing decision.
Any test effort that cannot point to an agreed-upon list of risks is really a test effort that nobody has agreed upon.
Key Takeaways
Four things to remember.
Start from stakeholders, not from features
Identify who cares about quality and commit them to participate. Without their ideas in the room, the register is yours alone to defend.
Pick the technique, then stick to it
FMEA, informal QA, or something in between — match the technique to the team. A merely adequate technique applied consistently beats the ideal one applied haphazardly.
Capture incipient bugs as you go
The analysis surfaces bad requirements, design flaws, and missing specs. File them; do not let them wait for test execution.
Close the loop with configuration management
A quality risk analysis that is not under change control is a document that will drift. Check it in and require change requests to alter it.
Why this exists
The problem this process fixes.
When the schedule compresses — and it always compresses — the test team needs a defensible ordering of what to test first, what to test less, and what to cut. That ordering is the quality risk analysis.
These six steps produce that ordering. Every subsequent decision in the testing process — estimation, case design, execution sequencing, release criteria — traces back to the register built here.
The checklist
Six steps, in order.
- 1
Identify the key testing and quality stakeholders. Obtain stakeholder commitment to participate in a quality risk analysis.
- 2
Survey the key stakeholders about techniques and methods for quality risk analysis. If appropriate, propose a technique. Obtain consensus on the technique and method selected.
- 3
Gather ideas from the key stakeholders about the quality risks, the failure modes associated with those risks, the quality impact of such failures, and the priority of the risks. Identify the recommended action to mitigate each risk.
- 4
Report any incipient bugs identified in other project documents during the analysis, such as bad or missing requirements, design problems, and so forth.
- 5
Document the quality risks as appropriate for the technique used. Circulate the document to the stakeholders for approval. Iterate steps three, four, and five as necessary to finalize the quality risks, their priorities, and the recommended actions.
- 6
Check the quality risk analysis document(s) into the project library or configuration management system. Place the document under change control.
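The outputs of step three map naturally onto a register record: each risk carries its failure mode, its quality impact, its likelihood, and a recommended mitigation. A minimal sketch in Python — the field names and the likelihood-times-impact priority scheme are illustrative assumptions, not part of the process itself; an FMEA-style analysis would use its own rating scales:

```python
from dataclasses import dataclass

@dataclass
class QualityRisk:
    # One row in the quality risk register (step-three outputs).
    risk: str           # the quality risk itself
    failure_mode: str   # how the risk would show up as a failure
    impact: int         # quality impact of that failure, 1 (low) to 5 (high)
    likelihood: int     # 1 (rare) to 5 (near certain)
    mitigation: str     # recommended action to mitigate the risk

    @property
    def priority(self) -> int:
        # Illustrative priority scheme: likelihood x impact.
        # Higher number = test earlier and deeper.
        return self.likelihood * self.impact

# Hypothetical entries, for illustration only.
register = [
    QualityRisk("Payment gateway timeout", "Checkout hangs past 30s",
                impact=5, likelihood=3,
                mitigation="Load-test the gateway integration"),
    QualityRisk("Locale formatting", "Wrong currency symbol shown",
                impact=2, likelihood=4,
                mitigation="Add a locale matrix to the regression suite"),
]

# Step five: circulate the register, ordered by priority, for approval.
for r in sorted(register, key=lambda r: r.priority, reverse=True):
    print(f"[{r.priority:>2}] {r.risk}: {r.mitigation}")
```

Whatever the rating scales, the point of the structure is the same: every risk ends the analysis with an agreed priority and a named mitigation, so the ordering survives schedule compression.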
One more thing
The quality risk analysis is the single most-cited document in a healthy test program. Release criteria point back to it. Test estimates are justified by it. Change requests are re-prioritized against it. Treat it as load-bearing.
Take it with you
Download the piece you just read.
We keep this library free. All we ask is that you tell us who you are, so we know who to follow up with if we release an updated version. It is a one-time form; this browser remembers you after that.
Related in the library
Pair this with.
Need a QA program to back this up in your organization?
If a checklist is not enough and you want help applying it to a live engagement, we can have a call this week.
Related reading
Articles, talks, guides, and case studies tagged for the same audience.
- Whitepaper
Evaluation Before Shipping: How to Test an AI Application Before It Hits Production
The release-gate playbook for AI features. Covers the five evaluation dimensions, how to build a lean golden set, where LLM-as-judge is trustworthy and where it lies, rollout mechanics with named exit criteria, and the regression suite that keeps a shipped AI feature from quietly rotting in production.
Read →
- Whitepaper
Choosing the Right Model (and Knowing When to Switch)
A practical framework for matching LLM model tier to task. Covers the four axes (capability, latency, cost, reliability), cascade routing patterns that cut cost 60 to 80 percent without measurable quality loss, switching costs you did not plan for, and the worked economics at 10K, 100K, and 1M decisions per day.
Read →
- Whitepaper
Beyond ISTQB: A Multi-Domain Certification Roadmap for Technical L&D
Most engineering L&D programs over-index on a single certification family (usually ISTQB on the QA side, AWS on the infrastructure side) and under-invest across the rest of the technical domains the org actually needs. This paper covers a multi-domain certification roadmap (QA, AI, cloud, data, security, project management, software engineering) with sequencing logic for each level of the engineering ladder, plus the maintenance discipline that keeps the roadmap relevant as the technology shifts underneath it.
Read →
- Guide
The ISTQB Advanced Level path, mapped
The Advanced Level landscape keeps changing — CTAL-TA v4.0 shipped May 2025, CTAL-TM is on v3.0, CTAL-TAE is on v2.0. This guide maps all four core modules, prerequisites, exam formats, sunset dates, and which module a given role should take first. Links directly to the authoritative istqb.org syllabi.
Read →
- Whitepaper
Bug Triage: A Cross-Functional Framework for Deciding Which Defects to Fix
Bug triage is the cross-functional decision process that converts raw defect reports into prioritized action. Done well, it optimizes limited engineering capacity against risk; done poorly, it becomes a backlog-management ritual that neither fixes the important defects nor drops the unimportant ones. This whitepaper covers the triage process, the participants, the six action outcomes, the four decision factors, and the governance disciplines that keep triage effective in continuous-delivery environments.
Read →
- Whitepaper
Building Quality In: What Engineering Organizations Do from Day One
Testing at the end builds confidence, but the most efficient quality assurance is building the system the right way from day one. This whitepaper covers the upstream disciplines — requirements clarity, lifecycle selection, per-unit programmer practices, and continuous integration — that make system-level testing cheap and fast rather than the only thing holding a release together.
Read →
Where this leads
- Service · Quality engineering
Software Quality & Security
Independent test programs, security testing, and quality engineering for systems where defects cost real money.
Learn more →
- Solution
Risk Reduction & Clear Decisions
Quality programs and decision frameworks that shift risk discussions from anecdote to evidence.
Learn more →
- Solution
Reliable Software at Scale
Quality engineering programs for organizations whose software is now operationally critical.
Learn more →