Since 1994

The risk taxonomy

every FMEA starts from.

Sixteen categories, three test-level views, and the seven 2026 additions nobody used to need. · Rex Black, Inc.

A risk taxonomy is

a completeness check.


Not a filing cabinet.
Walk every category. Answer "applies or not." Move on.


Three minutes. Zero missed classes of failure.

REX BLACK, INC. · QUALITY RISK CATEGORIES
What this is about

In plain English.


Every risk-based test strategy starts with a list. A quality risk category is a class of failure a system could have; the risk items inside each category are specific failures a specific engagement is exposed to.


Most programs waste their first FMEA session reinventing the category list. This piece publishes the one we use.


Three views of the same taxonomy:

  • The flat 16-category reference.
  • The per-test-level cross-reference (component / integration / system-and-acceptance).
  • The seven categories we added since 2002, because products and their failure modes changed.
The 16 categories

Flat taxonomy.

Describe, don't nest.

Categories · 1–5

Function, load, reliability, stress, date.


  • Functionality: features don't work as specified.
  • Load, Capacity, and Volume: scaling to peak concurrent usage and data volumes.
  • Reliability / Stability: availability, MTBF, resource leaks, degradation over uptime.
  • Stress, Error Handling, Recovery: beyond-peak behavior; deliberate failures; recovery from partition, power loss, upstream outage.
  • Date Handling: time-zone boundaries, DST, leap seconds, year boundaries, fiscal-vs-calendar, internal epoch expiry.
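Date handling is the category teams most often wave through; a minimal sketch of one DST-boundary check, with the time zone and dates chosen purely for illustration:

```python
from datetime import datetime, timedelta, timezone
from zoneinfo import ZoneInfo  # stdlib since Python 3.9

NY = ZoneInfo("America/New_York")

# US DST starts 2025-03-09: local clocks jump from 02:00 to 03:00.
before = datetime(2025, 3, 9, 1, 30, tzinfo=NY)

# Naive wall-clock arithmetic lands on 02:30, a local time that
# never occurs on this date:
wall = before + timedelta(hours=1)

# Elapsed-time arithmetic goes through UTC and lands on 03:30 EDT:
elapsed = (before.astimezone(timezone.utc) + timedelta(hours=1)).astimezone(NY)

print(wall.hour, elapsed.hour)  # 2 3
```

Risk items in this category live exactly where these two answers differ.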
Categories · 6–10

Competitive, ops, usability, data, perf.


  • Competitive Inferiority: fails to match competing systems in quality. Requires market research, not just engineering.
  • Operations and Maintenance: backup/restore, runbooks, on-call workflows, ops-staff ability to recover without engineering.
  • Usability: human factors; interface; install flow; onboarding; recovery from user error.
  • Data Quality: silent corruption, precision loss, truncation, encoding, referential integrity.
  • Performance: latency, throughput, responsiveness against SLO.
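"Responsiveness against SLO" reduces to a percentile check in a test harness; a minimal sketch using a nearest-rank percentile over a fabricated 1–100 ms sample:

```python
import math

def percentile(samples, p):
    """Nearest-rank percentile (p in 1..100) of a list of latency samples."""
    ordered = sorted(samples)
    rank = math.ceil(p / 100 * len(ordered))
    return ordered[max(rank - 1, 0)]

latencies_ms = list(range(1, 101))  # fabricated sample: 1..100 ms
print(percentile(latencies_ms, 50), percentile(latencies_ms, 99))  # 50 99
```

A release gate is then a single assertion of the form `percentile(latencies_ms, 99) <= budget_ms`.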
Categories · 11–16

Localization, compat, security, install, docs, interfaces.


  • Localization: locale, dictionary/thesaurus, collation, formatting, error messages.
  • Compatibility: OS/browser/device/runtime/dependency combinations; dependency-upgrade regressions.
  • Security and Privacy: fraudulent/malicious misuse. Deep sub-taxonomy (OWASP Top 10, MITRE ATT&CK, CWE).
  • Installation / Migration: deployment, canary/blue-green/rollback integrity, database migration safety.
  • Documentation: operator/user docs, API ref accuracy, runbook accuracy, deprecation notices.
  • Interfaces: wire formats, contract violations, schema drift, protocol mismatches, silent field removal.

16

Categories. Walked top-to-bottom before every FMEA session closes.
Test-level view

Same taxonomy. Re-cut.

By who catches what.

Test-level · component

Close to the code.


  • States: internal state transitions, state-machine correctness.
  • Transactions: single-component unit-of-work correctness.
  • Code coverage: structural coverage of the implementation.
  • Data flow coverage: variable def/use pairs, data-flow anomalies.
  • Functionality: component-level feature behavior.
  • User interface: component-local rendering / input handling.
  • Mechanical / signal / embedded properties: relevant for physical products.

Fast feedback. High volume. If a category is listed here, component testing is where it's cheapest to catch it.
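The States entry is the most mechanical to test at component level; a sketch using a hypothetical order state machine, with states and transitions invented for illustration:

```python
# Allowed transitions of a hypothetical order component.
VALID = {
    "new":       {"paid", "cancelled"},
    "paid":      {"shipped", "refunded"},
    "shipped":   {"delivered"},
    "delivered": set(),
    "cancelled": set(),
    "refunded":  set(),
}

class Order:
    def __init__(self):
        self.state = "new"

    def transition(self, target):
        if target not in VALID[self.state]:
            raise ValueError(f"illegal transition {self.state} -> {target}")
        self.state = target

# Component test: a legal path succeeds, an illegal edge is rejected.
o = Order()
o.transition("paid")
o.transition("shipped")
try:
    o.transition("refunded")  # shipped -> refunded is not an allowed edge
    rejected = False
except ValueError:
    rejected = True
print(o.state, rejected)  # shipped True
```

Walking every edge of `VALID`, legal and illegal, is the cheap, high-volume feedback this slide is about.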

Test-level · integration

Across boundaries.


  • Component or subsystem interfaces: contract verification, schema compliance.
  • Functionality: feature behavior spanning components.
  • Capacity and volume: subsystem-level load.
  • Error / disaster handling and recovery: failure propagation across boundaries.
  • Data quality: integrity across subsystem boundaries.
  • Performance: subsystem latency and throughput.
  • User interface: UI integration with data layer.

Contract verification. Data flow between subsystems. Integration testing exists for the categories that don't survive component isolation.
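Contract verification and silent field removal both come down to checking a producer's message against the consumer's expectations; a minimal sketch, with field names and types invented for illustration:

```python
# The consumer's expectation of the producer's message shape.
EXPECTED = {"order_id": int, "total_cents": int, "currency": str}

def violations(message: dict) -> list[str]:
    """Report missing fields (silent removal) and type drift."""
    problems = []
    for field, ftype in EXPECTED.items():
        if field not in message:
            problems.append(f"missing field: {field}")
        elif not isinstance(message[field], ftype):
            problems.append(f"wrong type: {field}")
    return problems

good = {"order_id": 7, "total_cents": 1250, "currency": "USD"}
drifted = {"order_id": 7, "total": 12.50, "currency": "USD"}  # schema drift
print(violations(good), violations(drifted))
```

Real deployments would use a schema language and contract-testing tooling; the check itself has this shape.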

Test-level · system and acceptance

Only the full assembly.


  • Functionality
  • User interface
  • States and transactions
  • Data quality
  • Operations
  • Capacity and volume
  • Reliability, availability, stability
  • Error / disaster handling and recovery
  • Stress
  • Performance
  • Date and time handling
  • Localization
  • Networked and distributed environment behavior
  • Configuration options and compatibility
  • Standards compliance
  • Security and privacy
  • Environment
  • Installation, cut-over, setup
  • Documentation and packaging
  • Maintainability
  • Alpha, beta, and live tests


The reason system test exists. These categories are invisible at lower levels.

Added since 2002

Seven categories.

All now required.

Added · accessibility, observability

Out of Usability. Into law.

Accessibility

WCAG 2.2, Section 508, EN 301 549, ADA case law. Screen-reader behavior, keyboard nav, contrast, motion preference, ARIA, cognitive load.

Ignoring this invites lawsuits.

Observability

Instrumentation that lets an operator tell whether the system is working. Logs, metrics, traces, dashboards, span correctness.

Operations = can we run it. Observability = can we tell what it's doing.


Both split out of existing categories because they grew their own testable surface.
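Observability's testable surface includes the log stream itself; a minimal sketch that treats one structured event as something a test can assert on (the event fields are an illustrative assumption, not a published schema):

```python
import io
import json
import logging

buf = io.StringIO()
log = logging.getLogger("checkout-demo")
log.addHandler(logging.StreamHandler(buf))
log.setLevel(logging.INFO)

# Application code emits one structured event per request:
log.info(json.dumps({"event": "checkout", "latency_ms": 42, "trace_id": "abc123"}))

# Test side: the event parses, and the fields an operator needs are present.
record = json.loads(buf.getvalue())
missing = {"event", "latency_ms", "trace_id"} - record.keys()
print(missing)  # set()
```

The same pattern extends to metrics and spans: emit, capture, assert that an operator could have answered "what is it doing?" from the captured output alone.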

Added · FinOps, supply chain

The bill. And the dependencies.

Cost and financial operations

Runaway workers, retry storms, uncapped logging, N+1 queries, uncapped LLM-API calls, storage leaks.

A correctness-passing release can still cause a business incident via the monthly bill.
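Retry storms, first in that list, are cheap to gate in a test: put a hard budget on attempts and assert the budget holds when a dependency stays down. A minimal sketch; the budget of 3 is an illustrative assumption:

```python
MAX_ATTEMPTS = 3  # illustrative retry budget

def call_with_retries(flaky, max_attempts=MAX_ATTEMPTS):
    for attempt in range(1, max_attempts + 1):
        try:
            return flaky()
        except ConnectionError:
            continue
    raise RuntimeError(f"gave up after {max_attempts} attempts")

calls = {"n": 0}
def always_down():
    calls["n"] += 1
    raise ConnectionError("dependency unreachable")

try:
    call_with_retries(always_down)
except RuntimeError:
    pass
print(calls["n"])  # 3: bounded, not a retry storm
```

Uncapped logging, storage, and LLM-API spend yield to the same move: find the unbounded loop, assert the bound.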

Supply chain

Vulnerable packages, transitive drift, build-system compromise, malicious maintainers, SBOM inaccuracy.

Own surface. Own tooling. SCA, SBOM attestation, provenance, pinning.


Both absent from 2002 because cloud-native and dependency-rich ecosystems changed the failure surface.
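One cheap supply-chain gate is full pinning: every dependency resolved to an exact version. A sketch over a requirements-style text, with fabricated file content; real pipelines would lean on SCA tooling and lock files, but the check has this shape:

```python
import re

requirements = """\
requests==2.32.3
urllib3>=1.26          # range specifier: not reproducible
left-pad
"""

# A fully pinned line is "name==exact-version" and nothing else.
PINNED = re.compile(r"^[A-Za-z0-9_.\-]+==\S+$")

def unpinned(text: str) -> list[str]:
    lines = [line.split("#")[0].strip() for line in text.splitlines()]
    return [line for line in lines if line and not PINNED.match(line)]

print(unpinned(requirements))  # ['urllib3>=1.26', 'left-pad']
```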

Added · AI-system accuracy and safety

Probabilistic output changes testing.

AI-system accuracy and calibration

Held-out eval-set accuracy. Calibration (does 80% stated confidence mean correct about 80% of the time?). Adversarial / out-of-distribution / long-tail. Slice-based regression.

Cannot be tested with example-based assertions.
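The calibration question can be made concrete: bucket predictions by stated confidence and compare stated confidence to observed accuracy within each bucket. A minimal sketch on fabricated predictions:

```python
# (confidence, was the prediction correct?) pairs; data is fabricated.
preds = [
    (0.9, True), (0.9, True), (0.9, False), (0.9, True), (0.9, True),
    (0.6, True), (0.6, False), (0.6, False), (0.6, True), (0.6, False),
]

def bucket_gap(preds, lo, hi):
    """|mean stated confidence - observed accuracy| within [lo, hi)."""
    in_bucket = [(c, ok) for c, ok in preds if lo <= c < hi]
    conf = sum(c for c, _ in in_bucket) / len(in_bucket)
    acc = sum(ok for _, ok in in_bucket) / len(in_bucket)
    return abs(conf - acc)

# 0.9 stated vs 4/5 = 0.8 observed: a calibration gap of ~0.1.
print(round(bucket_gap(preds, 0.8, 1.0), 3))  # 0.1
```

Averaging the gap across buckets, weighted by bucket size, gives the standard expected-calibration-error style metric; a regression test then asserts the gap stays under a chosen threshold.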

AI-system safety, integrity, alignment

Prompt injection. Indirect injection via retrieved docs. Over-trust of tool outputs. Jailbreaking. Training-data poisoning. Hallucination. Bias / fairness.

Most of OWASP LLM Top 10 lives here.


These are distinct from Functionality and Security. They need their own test strategy, their own tooling, and their own owner.

Added · explainability and auditability

Why did the system do what it did?


  • Defensible account of specific decisions.
  • Increasingly mandatory in regulated contexts: credit, hiring, healthcare, underwriting, insurance pricing, immigration, education.
  • Decision logs. Feature attribution. Data and model version lineage. Ability to reconstruct a specific production decision six months later.

Pair with Cost-of-Exposure and Compliance risks under Security / Privacy. Explainability is now a category in its own right, not a subsidiary concern.
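The three bullets above can be sketched as one record per decision; the fields here are illustrative assumptions about what reconstruction needs, not a standard schema:

```python
import json

def log_decision(store, *, decision_id, model_version, data_version,
                 inputs, output, top_features):
    """Persist everything needed to replay one decision later."""
    store[decision_id] = json.dumps({
        "model_version": model_version,  # model lineage
        "data_version": data_version,    # training-data lineage
        "inputs": inputs,                # exact inputs as scored
        "output": output,
        "top_features": top_features,    # feature attribution
    })

store = {}
log_decision(store, decision_id="d-001", model_version="credit-3.2",
             data_version="2026-01-15", inputs={"income": 52000},
             output="declined", top_features=["debt_ratio", "tenure"])

# Six months later: reconstruct exactly what happened and why.
replayed = json.loads(store["d-001"])
print(replayed["output"], replayed["model_version"])  # declined credit-3.2
```

The test for this category is the replay itself: pick a past decision at random and verify every field needed for a defensible account is present.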

How to use the list

Four moves.

How to use it · 1–2

Walk it out loud. Cross-check.


  1. Walk the 16-category flat list at the top of the session. "Does this category contain items that apply to this release?" Draft items if yes; log an explicit "not applicable because…" if no. The explicit negative answer is the completeness signal.
  2. Cross-check against the per-test-level view. Every category with items should map to at least one test level. Categories with items but no level mapped = test-plan gap.
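Moves 1 and 2 fit in a few lines of data; a sketch over three of the sixteen categories, with items and answers invented for illustration:

```python
CATEGORIES = ["Functionality", "Performance", "Localization"]  # subset of 16

answers = {
    "Functionality": {"items": ["search returns stale results"],
                      "levels": ["component", "system"]},
    "Performance":   {"items": ["p99 regression under peak load"],
                      "levels": []},   # items but nowhere to test them
    "Localization":  {"items": [], "why_not": "single-locale internal tool"},
}

# Move 1 completeness signal: every category got an explicit answer.
unanswered = [c for c in CATEGORIES if c not in answers]

# Move 2 cross-check: items with no test level mapped = test-plan gap.
gaps = [c for c, a in answers.items() if a.get("items") and not a.get("levels")]
print(unanswered, gaps)  # [] ['Performance']
```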
How to use it · 3–4

2026 additions. ISO mapping.


  3. Apply the 2026 additions as a second pass. For any modern system, at least three of the seven will apply. For AI-backed systems, usually five or more.
  4. Map to ISO/IEC 25010:2023 if a formal framework is required (regulated buyer, compliance audit, procurement scoring). Both views coexist: the named categories drive FMEA items; the ISO mapping supports external reporting.
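The ISO mapping can be carried as a simple lookup alongside the FMEA items. The characteristic names below follow the 2011 edition of ISO/IEC 25010 and are assumptions for illustration; verify them against the 2023 text (which renames some characteristics) before using them in any external report:

```python
# Partial, illustrative mapping: named category -> ISO characteristic.
ISO_MAP = {
    "Functionality": "Functional suitability",
    "Performance": "Performance efficiency",
    "Reliability / Stability": "Reliability",
    "Security and Privacy": "Security",
    "Compatibility": "Compatibility",
}

def iso_view(fmea_items: dict) -> dict:
    """Regroup per-category FMEA items under ISO characteristics."""
    report = {}
    for category, items in fmea_items.items():
        report.setdefault(ISO_MAP[category], []).extend(items)
    return report

items = {"Functionality": ["stale search results"],
         "Performance": ["p99 regression"]}
print(iso_view(items))
```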
Living artifact

When a new failure class bites,

propose a category.


The 2002 list got us 24 years.
No taxonomy survives forever.

Takeaways

Four. Walk them.

Takeaways · 1 of 2

Zero ceremony. Every release.


  • Sixteen categories, three minutes. Say them out loud. Answer "applies" or "not." Move on.
  • Different test levels see different risks. The per-level view keeps each team focused on what they can actually test.
  • Categories with items but no test level mapped = test-plan gap.
Takeaways · 2 of 2

The taxonomy moves forward.


  • Seven 2026 additions. Accessibility, observability, FinOps, supply chain, AI accuracy, AI safety, explainability.
  • Pair with ISO/IEC 25010:2023 when a formal framework is required.
  • When a new failure class bites an engagement, propose a category in the next methodology review.
Since 1994

Thank you.

Rex Black, Inc. · rexblack.com/resources/qa-library/general-quality-risk-categories