The master test plan for a first-generation Internet appliance.
Hardware, client software, server software, and operations — one plan.
Master test plan for a consumer Internet appliance program. Four phases (integration, system, DVT, PVT), nineteen risk categories, and written entry/exit criteria for each transition. The client name is scrubbed ('Some IA Maker') but the methodology is intact.
Key Takeaways
Four things to remember.
IS / IS-NOT is the first section
The scope table naming what the test organization owns — and what it explicitly does not — is the single most useful paragraph in any test plan. It eliminates weeks of arguments before they happen.
Four phases, each with real criteria
Integration, System, DVT, PVT. Every phase transition has written entry, continuation, and exit criteria — not "when we feel ready."
Hardware and software risks live together
Nineteen risk categories span reliability (infant mortality, MTBF), safety (sharp edges, RSI), environment (heat/cold/spills), and software behavior. One risk register, one priority ordering.
Test plan owns the execution process
Bug tracking, bug isolation, release management, and test cycle cadence are all defined in the plan. This is how arguments stop being personal.
Overview
This is the master test plan Rex Black authored for the independent test organization of a first-generation consumer Internet appliance — a product that combined a set-top device, client software, proprietary server-side services, and billing operations.
The client name and identifying details have been scrubbed ("Some IA Maker"). The structure, methodology, and section-by-section discipline of the plan are preserved verbatim. Use it as a working template for device-plus-service quality programs, not as a fill-in-the-blanks form.
01
Overview
The following test plan describes the testing to be performed and/or managed by the "Some IA Maker" independent test team. It covers the items included in the test project, the specific risks to product quality we intend to address, timeframes, the test environment, problems that could threaten the success of testing, the test tools and harnesses we will need to develop, and the test execution process. The independent test team is the quality control organization for "Some IA Maker". Some testing occurs outside the independent test team's area, such as user testing and unit testing.
02
Scope — What the test organization IS and IS NOT
The plan opens with an explicit scope table. This is the most often copied and least often used section in the test plans we read in the wild.
IS
- Functionality (including client boot, client update, mail, Web, channels, etc.)
- Capacity and volume
- Operations (e.g., billing)
- Client configuration
- Error handling and recovery
- Standards compliance (UL, FCC, etc.)
- Hardware reliability (MTBF, etc.)
- Software reliability (qualitative)
- Date and time (including Y2K) processing
- Distributed (leveraging third-party labs and supplier testing)
- Performance
- Data flow or data quality
- Test system architecture (including unit, FVT and regression)
- Client-side and server-side test tool development
- Test database development
- Testing of the complete system
- Horizontal (end-to-end) integration
- Software integration and system test
- Hardware DVT and PVT test
- Black-box / behavioral testing
IS NOT
- Usability or user interface (supporting role only)
- Documentation
- Code coverage
- Security (handled under a third-party contract)
- Unit, EVT, or FVT testing (except for test system architecture)
- White-box / structural testing
03
Quality risks — the nineteen categories
The plan enumerates quality risks across hardware and software, each with specific failure modes and a priority placeholder. We have reproduced the category structure here; failure-mode detail lives in the PDF download.
Hardware risk categories
- Reliability (infant mortality, premature failure, battery memory, pixel death)
- Radiation (regulatory compliance)
- Safety (sharp surfaces, electrified areas, RSI, child/pet issues, hot spots)
- Power (brown-outs, transients, electrostatic discharge, consumption)
- Fragility (moving parts, drop / bump / slap, shaking / vibration)
- Environmental (hot / cold / humid / dry, heat dissipation, spills)
- Packaging (protection, ease of opening)
- Signal quality (external I/O interfaces)
- Display quality (bad pixels, contrast, color, brightness consistency)
- Power management (battery life, suspend / standby)
- Performance (modem throughput, memory access, CPU bandwidth)
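The MTBF figure in the reliability category is typically estimated from burn-in or life-test data. A minimal sketch, assuming a constant failure rate; the unit counts and failure numbers are invented for illustration, and the plan itself does not prescribe a calculation:

```python
# Hypothetical sketch: point-estimating MTBF from life-test data.
# All numbers below are invented for illustration.

def mtbf_hours(total_unit_hours: float, failures: int) -> float:
    """Point estimate of mean time between failures (constant failure rate)."""
    if failures == 0:
        raise ValueError("no failures observed; use a confidence bound instead")
    return total_unit_hours / failures

# 200 units on a 1,000-hour life test, 4 failures observed:
estimate = mtbf_hours(200 * 1_000, 4)
print(f"MTBF estimate: {estimate:,.0f} hours")  # MTBF estimate: 50,000 hours
```

A zero-failure run is the usual edge case: the point estimate is undefined, which is why reliability engineers fall back to a confidence bound rather than reporting "infinite" MTBF.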
Software risk categories
- Functionality (client boot, update, mail, Web, channels, applications)
- Load / capacity / volume under peak concurrent usage
- Compatibility with the target client configurations
- Error handling and recovery paths
- Reliability under extended use
- Security boundaries with the service back end
- Date and time including Y2K rollover
- Performance under realistic network conditions
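The point of keeping one register is that hardware and software risks compete for the same priority ordering. A sketch of what that looks like as a data structure; the category names come from the plan, while the failure modes and priority values are invented for illustration:

```python
# Illustrative sketch of a single risk register spanning hardware and
# software. Failure modes and priorities below are invented examples.
from dataclasses import dataclass

@dataclass
class QualityRisk:
    category: str      # e.g. "Reliability", "Functionality"
    domain: str        # "hardware" or "software"
    failure_mode: str
    priority: int      # 1 = highest; assigned during risk analysis

register = [
    QualityRisk("Reliability", "hardware", "infant mortality", 2),
    QualityRisk("Functionality", "software", "client fails to boot", 1),
    QualityRisk("Safety", "hardware", "hot spots on enclosure", 3),
]

# One ordering across both domains, not two separate lists:
for risk in sorted(register, key=lambda r: r.priority):
    print(f"P{risk.priority} [{risk.domain}] {risk.category}: {risk.failure_mode}")
```

The single `sorted()` call is the whole argument: a hardware failure mode can outrank a software one, and vice versa, because they live in the same list.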
04
Transitions — phase entry and exit criteria
Four phases, each gated by written criteria. The full criteria lists are in the PDF.
- Integration Test Entry Criteria
- System Test Entry Criteria
- System Test Continuation Criteria
- System Test Exit Criteria
- DVT Test Entry Criteria
- DVT Test Exit Criteria
- PVT Test Entry Criteria
- PVT Test Exit Criteria
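Written criteria have the useful property of being mechanically checkable. A hedged sketch of a phase-gate check; the criteria names and thresholds here are invented, and the real lists live in the plan:

```python
# Hypothetical sketch: encoding phase exit criteria as predicates.
# Criteria and thresholds are invented for illustration.

def system_test_exit_criteria(metrics: dict) -> list[str]:
    """Return the criteria NOT yet met (empty list means ready to exit)."""
    checks = {
        "all planned test cases executed": metrics["cases_run"] >= metrics["cases_planned"],
        "no open severity-1 bugs": metrics["open_sev1"] == 0,
        "regression suite green on final build": metrics["regression_pass"],
    }
    return [name for name, ok in checks.items() if not ok]

unmet = system_test_exit_criteria(
    {"cases_run": 412, "cases_planned": 420, "open_sev1": 0, "regression_pass": True}
)
print(unmet)  # ['all planned test cases executed']
```

The output is a list of named criteria, not a boolean, so the phase-gate conversation is "these two criteria are unmet", never "we don't feel ready."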
05
Test execution process
The plan describes how the test organization actually runs, not just what it tests.
- Human resources — who sits in the chair, what they own, who they escalate to
- Test case and bug tracking — lifecycle, states, SLAs
- Bug isolation — the sequence we follow before filing
- Release management — revision control and what counts as a "build"
- Test cycles — the cadence that keeps the program moving
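The bug-tracking lifecycle above can be modeled as a small state machine with explicitly legal transitions. The state names here are common industry practice, not necessarily the ones the plan uses:

```python
# Illustrative bug-lifecycle state machine; state names are assumptions,
# not taken from the plan itself.

TRANSITIONS = {
    "open":     {"assigned", "rejected"},
    "assigned": {"fixed", "rejected"},
    "fixed":    {"verified", "reopened"},
    "reopened": {"assigned"},
    "rejected": {"closed", "reopened"},
    "verified": {"closed"},
    "closed":   set(),
}

def advance(state: str, new_state: str) -> str:
    """Move a bug to new_state, refusing transitions the lifecycle forbids."""
    if new_state not in TRANSITIONS[state]:
        raise ValueError(f"illegal transition {state} -> {new_state}")
    return new_state

state = "open"
for step in ("assigned", "fixed", "verified", "closed"):
    state = advance(state, step)
print(state)  # closed
```

Encoding the lifecycle this way is what makes disagreements impersonal: a bug cannot jump from "open" to "closed" because the transition table, not a person, refuses it.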
06
Risks and contingencies
The plan closes by naming what can go wrong with the test program itself — schedule compression, supplier delays, environment instability — and what the test organization will do in response. This is the section most often missing from first-attempt test plans.
Take it with you
Download the piece you just read.
We keep this library free. All we ask is that you tell us who you are, so we know who to follow up with if we release an updated version. It is a one-time form; this browser remembers you after that.
Need a QA program to back this up in your organization?
If a checklist is not enough and you want help applying it to a live engagement, we can have a call this week.
Related reading
Articles, talks, guides, and case studies tagged for the same audience.
- Whitepaper
Evaluation Before Shipping: How to Test an AI Application Before It Hits Production
The release-gate playbook for AI features. Covers the five evaluation dimensions, how to build a lean golden set, where LLM-as-judge is trustworthy and where it lies, rollout mechanics with named exit criteria, and the regression suite that keeps a shipped AI feature from quietly rotting in production.
Read →
- Whitepaper
Choosing the Right Model (and Knowing When to Switch)
A practical framework for matching LLM model tier to task. Covers the four axes (capability, latency, cost, reliability), cascade routing patterns that cut cost 60 to 80 percent without measurable quality loss, switching costs you did not plan for, and the worked economics at 10K, 100K, and 1M decisions per day.
Read →
- Whitepaper
Beyond ISTQB: A Multi-Domain Certification Roadmap for Technical L&D
Most engineering L&D programs over-index on a single certification family (usually ISTQB on the QA side, AWS on the infrastructure side) and under-invest across the rest of the technical domains the org actually needs. This paper covers a multi-domain certification roadmap (QA, AI, cloud, data, security, project management, software engineering) with sequencing logic for each level of the engineering ladder, plus the maintenance discipline that keeps the roadmap relevant as the technology shifts underneath it.
Read →
- Guide
The ISTQB Advanced Level path, mapped
The Advanced Level landscape keeps changing — CTAL-TA v4.0 shipped May 2025, CTAL-TM is on v3.0, CTAL-TAE is on v2.0. This guide maps all four core modules, prerequisites, exam formats, sunset dates, and which module a given role should take first. Links directly to the authoritative istqb.org syllabi.
Read →
- Whitepaper
Bug Triage: A Cross-Functional Framework for Deciding Which Defects to Fix
Bug triage is the cross-functional decision process that converts raw defect reports into prioritized action. Done well, it optimizes limited engineering capacity against risk; done poorly, it becomes a backlog-management ritual that neither fixes the important defects nor drops the unimportant ones. This whitepaper covers the triage process, the participants, the six action outcomes, the four decision factors, and the governance disciplines that keep triage effective in continuous-delivery environments.
Read →
- Whitepaper
Building Quality In: What Engineering Organizations Do from Day One
Testing at the end builds confidence, but the most efficient quality assurance is building the system the right way from day one. This whitepaper covers the upstream disciplines — requirements clarity, lifecycle selection, per-unit programmer practices, and continuous integration — that make system-level testing cheap and fast rather than the only thing holding a release together.
Read →
Where this leads
- Service · Quality engineering
Software Quality & Security
Independent test programs, security testing, and quality engineering for systems where defects cost real money.
Learn more →
- Solution
Risk Reduction & Clear Decisions
Quality programs and decision frameworks that shift risk discussions from anecdote to evidence.
Learn more →
- Solution
Reliable Software at Scale
Quality engineering programs for organizations whose software is now operationally critical.
Learn more →