Case Study · Master Test Plan

The master test plan for a first-generation Internet appliance.
Hardware, client software, server software, and operations — one plan.

Master test plan for a consumer Internet appliance program. Four phases (integration, system, DVT, PVT), nineteen risk categories, and written entry/exit criteria for each transition. The client name is scrubbed ("Some IA Maker") but the methodology is intact.

Phases
4
Risk Categories
19
Scope
Device + Service

Key Takeaways

Four things to remember.

01

IS / IS-NOT is the first section

The scope table naming what the test organization owns — and what it explicitly does not — is the single most useful paragraph in any test plan. It eliminates weeks of arguments before they happen.

02

Four phases, each with real criteria

Integration, System, DVT, PVT. Every phase transition has written entry, continuation, and exit criteria — not "when we feel ready."

03

Hardware and software risks live together

Nineteen risk categories span reliability (infant mortality, MTBF), safety (sharp edges, RSI), environment (heat/cold/spills), and software behavior. One risk register, one priority ordering.

04

Test plan owns the execution process

Bug tracking, bug isolation, release management, and test cycle cadence are all defined in the plan. This is how arguments stop being personal.

Overview

This is the master test plan Rex Black authored for the independent test organization of a first-generation consumer Internet appliance — a product that combined a set-top device, client software, proprietary server-side services, and billing operations.

The client name and identifying details have been scrubbed ("Some IA Maker"). The structure, methodology, and the section-by-section discipline of the plan are preserved verbatim. Use it as a working template for device-plus-service quality programs, not as a fill-in-the-blanks form.

01

Overview

The following test plan describes the testing to be performed and/or managed by the "Some IA Maker" independent test team. It covers the items included in the test project, the specific risks to product quality we intend to address, timeframes, the test environment, problems that could threaten the success of testing, test tools and harnesses we will need to develop, and the test execution process. The independent test team is the quality control organization for "Some IA Maker". Some testing occurs outside of the independent test team's area, such as user testing and unit testing.

02

Scope — What the test organization IS and IS NOT

The plan opens with an explicit scope table. In the test plans we see in the wild, this is the section most often copied and least often used.

IS

  • Functionality (including client boot, client update, mail, Web, channels, etc.)
  • Capacity and volume
  • Operations (i.e., billing)
  • Client configuration
  • Error handling and recovery
  • Standards compliance (UL, FCC, etc.)
  • Hardware reliability (MTBF, etc.)
  • Software reliability (qualitative)
  • Date and time (including Y2K) processing
  • Distributed (leverage third-party labs and supplier testing)
  • Performance
  • Data flow or data quality
  • Test system architecture (including unit, FVT and regression)
  • Client-side and server-side test tool development
  • Test database development
  • Testing of the complete system
  • Horizontal (end-to-end) integration
  • Software integration and system test
  • Hardware DVT and PVT test
  • Black-box / behavioral testing

IS NOT

  • Usability or user interface (supporting role only)
  • Documentation
  • Code coverage
  • Security (handled under a third-party contract)
  • Unit, EVT, or FVT testing (except for test system architecture)
  • White-box / structural testing

03

Quality risks — the nineteen categories

The plan enumerates quality risks across hardware and software, each with specific failure modes and a priority placeholder. We have reproduced the category structure here; failure-mode detail lives in the PDF download.

Hardware risk categories

  • Reliability (infant mortality, premature failure, battery memory, pixel death)
  • Radiation (regulatory compliance)
  • Safety (sharp surfaces, electrified areas, RSI, child/pet issues, hot spots)
  • Power (brown-outs, transients, electrostatic discharge, consumption)
  • Fragility (moving parts, drop / bump / slap, shaking / vibration)
  • Environmental (hot / cold / humid / dry, heat dissipation, spills)
  • Packaging (protection, ease of opening)
  • Signal quality (external I/O interfaces)
  • Display quality (bad pixels, contrast, color, brightness consistency)
  • Power management (battery life, suspend / standby)
  • Performance (modem throughput, memory access, CPU bandwidth)

Software risk categories

  • Functionality (client boot, update, mail, Web, channels, applications)
  • Load / capacity / volume under peak concurrent usage
  • Compatibility with the target client configurations
  • Error handling and recovery paths
  • Reliability under extended use
  • Security boundaries with the service back end
  • Date and time including Y2K rollover
  • Performance under realistic network conditions
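The plan's point is that these two lists live in one register with one priority ordering, so hardware and software risks compete for the same test effort. A minimal sketch of that structure in Python (category names are from the plan; the priority values are illustrative placeholders, not the plan's actual ranking):

```python
from dataclasses import dataclass, field

@dataclass(order=True)
class QualityRisk:
    priority: int                        # lower number = higher priority (placeholder)
    category: str = field(compare=False)
    domain: str = field(compare=False)   # "hardware" or "software"
    failure_modes: list = field(compare=False, default_factory=list)

# One register, one ordering: hardware and software risks sort together.
register = sorted([
    QualityRisk(1, "Functionality", "software", ["client boot", "update", "mail"]),
    QualityRisk(2, "Reliability", "hardware", ["infant mortality", "MTBF"]),
    QualityRisk(3, "Safety", "hardware", ["sharp surfaces", "RSI"]),
    QualityRisk(4, "Date and time", "software", ["Y2K rollover"]),
])

for risk in register:
    print(risk.priority, risk.domain, risk.category)
```

Keeping both domains in one sorted structure is what prevents the common failure mode of a hardware priority list and a software priority list that never get reconciled.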

04

Transitions — phase entry and exit criteria

Four phases, each gated by written criteria. The full criteria lists are in the PDF.

  • Integration Test Entry Criteria
  • System Test Entry Criteria
  • System Test Continuation Criteria
  • System Test Exit Criteria
  • DVT Test Entry Criteria
  • DVT Test Exit Criteria
  • PVT Test Entry Criteria
  • PVT Test Exit Criteria
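Written criteria like these amount to an explicit gate: each criterion is a named check, and the phase transition happens only when every check passes. A minimal sketch of that discipline (the criteria shown are illustrative examples, not the plan's actual lists, which are in the PDF):

```python
def gate(phase: str, criteria: dict) -> bool:
    """Approve a phase transition only if every written criterion is met."""
    failed = [name for name, met in criteria.items() if not met]
    if failed:
        print(f"{phase}: blocked by {', '.join(failed)}")
        return False
    print(f"{phase}: all criteria met, transition approved")
    return True

# Illustrative System Test entry criteria -- placeholders, not the plan's list.
system_test_entry = {
    "integration test exit criteria met": True,
    "test environment installed and configured": True,
    "bug tracking system operational": False,
}

gate("System Test entry", system_test_entry)   # blocked
```

The value is not the code but the forcing function: "when we feel ready" becomes a list of booleans that someone has to flip honestly.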

05

Test execution process

The plan describes how the test organization actually runs, not just what it tests.

  • Human resources — who sits in the chair, what they own, who they escalate to
  • Test case and bug tracking — lifecycle, states, SLAs
  • Bug isolation — the sequence we follow before filing
  • Release management — revision control and what counts as a "build"
  • Test cycles — the cadence that keeps the program moving
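A bug-tracking lifecycle of the kind the plan defines can be enforced as a small state machine, where a report moves only along allowed transitions. A sketch with illustrative states (the plan's actual lifecycle and states live in the full document):

```python
# Illustrative bug lifecycle -- state names and transitions are placeholders,
# not the lifecycle defined in the plan itself.
TRANSITIONS = {
    "reported": {"opened", "rejected"},
    "opened":   {"assigned"},
    "assigned": {"fixed"},
    "fixed":    {"verified", "reopened"},
    "reopened": {"assigned"},
    "verified": {"closed"},
    "rejected": set(),
    "closed":   set(),
}

def advance(state: str, new_state: str) -> str:
    """Move a bug report along an allowed transition, or refuse."""
    if new_state not in TRANSITIONS[state]:
        raise ValueError(f"illegal transition: {state} -> {new_state}")
    return new_state

state = "reported"
for step in ("opened", "assigned", "fixed", "verified", "closed"):
    state = advance(state, step)
print(state)   # closed
```

Encoding the lifecycle this way is one reason "arguments stop being personal": a disputed state change is a question about the transition table, not about the tester.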

06

Risks and contingencies

The plan closes by naming what can go wrong with the test program itself — schedule compression, supplier delays, environment instability — and what the test organization will do in response. This is the section most often missing from first-attempt test plans.
