The four-day course for testing AI-based systems — and using AI in testing.
ISTQB Certified Tester AI Testing (CT-AI), v1.0.
Official outline for the ISTQB Certified Tester AI Testing course as delivered by Rex Black, Inc. Four days. Eleven chapters. Covers testing AI-based systems and using AI to support software testing.
Key Takeaways
Four things to remember.
Two directions, one course
Testing AI-based systems — and using AI to support testing. CT-AI covers both, which is why it is the fastest-growing ISTQB credential.
Experience-based, not lecture-only
Every chapter includes explanations, discussions, and exercises. Attendees implement and test a real ML model, not just read about one.
AI quality characteristics, front and center
Bias, ethics, transparency, explainability, safety, self-learning, non-determinism — not afterthoughts, but the backbone of Chapters 2 and 8.
Certification-ready
Sample exam, syllabus coverage, and glossary are included. Attendees leave ready to sit the ISTQB CT-AI exam.
Overview
The Certified Tester AI Testing course is for anyone involved in testing AI-based systems and/or AI for testing. This includes people in roles such as testers, test analysts, data analysts, test engineers, test consultants, test managers, user acceptance testers and software developers. The certification is also appropriate for anyone who wants a basic understanding of testing AI-based systems and/or AI for testing, such as project managers, quality managers, software development managers, business analysts, operations team members, IT directors and management consultants.
This course is experience-based and highly interactive, mixing explanation of concepts with hands-on exercises. It covers the ISTQB Certified Tester AI Testing (CT-AI) Syllabus v1.0. Solutions are provided for the exercises demonstrated and discussed in class, along with sample questions, copies of the ISTQB Certified Tester AI Testing Syllabus and Glossary, and more.
01
What attendees will gain
- Understand the current state and expected trends of AI.
- Experience the implementation and testing of an ML model and recognize where testers can best influence its quality.
- Understand the challenges associated with testing AI-based systems, such as their self-learning capabilities, bias, ethics, complexity, non-determinism, transparency and explainability.
- Contribute to the test strategy for an AI-based system.
- Design and execute test cases for AI-based systems.
- Recognize the special requirements for the test infrastructure to support the testing of AI-based systems.
- Understand how AI can be used to support software testing.
02
Course materials
- Course Outline — general description plus learning objectives, materials, and approximate section timings.
- Noteset — approximately 500 PowerPoint slides covering the topics.
- Sample Exam — assess your readiness for the ISTQB CT-AI exam.
- Exercise Solutions — approximately 80 pages of worked solutions for the exercises demonstrated in class.
- ISTQB CT-AI Syllabus — the official syllabus that forms the basis of the certification.
- ISTQB Glossary — the latest glossary of software testing terms from ISTQB.
03
Session plan (4 days)
The course runs for four days, from 9:00 AM to 5:00 PM each day including lunch and other breaks, which works out to roughly 380–400 minutes of class time per day. Timings are approximate and depend on attendee interest and discussion.
- Introduction (30 minutes)
04
Chapter 1 — Introduction to AI (105 minutes)
- Definition of AI and AI Effect
- Narrow, General and Super AI
- AI-Based and Conventional Systems
- AI Technologies
- AI Development Frameworks
- Hardware for AI-Based Systems
- AI as a Service (AIaaS)
- Pre-Trained Models
- Standards, Regulations and AI
05
Chapter 2 — Quality Characteristics for AI-Based Systems (105 minutes)
- Flexibility and Adaptability
- Autonomy
- Evolution
- Bias
- Ethics
- Side Effects and Reward Hacking
- Transparency, Interpretability and Explainability
- Safety and AI
06
Chapter 3 — Machine Learning: Overview (145 minutes)
- Forms of ML
- ML Workflow
- Selecting a Form of ML
- Factors Involved in ML Algorithm Selection
- Overfitting and Underfitting
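To make the last topic concrete, here is a minimal sketch of how a tester might detect overfitting by comparing training accuracy with validation accuracy; the toy dataset, the decision-tree model, and the scikit-learn dependency are illustrative assumptions, not part of the course materials.

    # Minimal sketch: spotting overfitting by comparing training and validation accuracy.
    # The toy dataset, the model choice, and scikit-learn itself are illustrative assumptions.
    from sklearn.datasets import make_classification
    from sklearn.model_selection import train_test_split
    from sklearn.tree import DecisionTreeClassifier
    from sklearn.metrics import accuracy_score

    X, y = make_classification(n_samples=1000, n_features=20, random_state=42)
    X_train, X_val, y_train, y_val = train_test_split(X, y, test_size=0.3, random_state=42)

    model = DecisionTreeClassifier(max_depth=None, random_state=42)  # unconstrained depth invites overfitting
    model.fit(X_train, y_train)

    train_acc = accuracy_score(y_train, model.predict(X_train))
    val_acc = accuracy_score(y_val, model.predict(X_val))
    print(f"train={train_acc:.2f}  validation={val_acc:.2f}")  # a large gap signals overfitting

A near-perfect training score paired with a markedly lower validation score is the classic signature of overfitting; poor scores on both sets point to underfitting.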
07
Chapter 4 — ML: Data (230 minutes)
- Data Preparation as Part of the ML Workflow
- Training, Validation and Test Datasets in the ML Workflow
- Dataset Quality Issues (see the sketch after this list)
- Data Quality and its Effect on the ML Model
- Data Labelling for Supervised Learning
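As a small taste of the dataset quality issues this chapter covers, the hedged sketch below runs a few basic checks (missing values, duplicate rows, label balance) on a toy table; the column names, the data, and the pandas dependency are illustrative assumptions.

    # Minimal sketch: basic dataset quality checks before the data enters the ML workflow.
    # The toy table, column names, and pandas dependency are illustrative assumptions.
    import pandas as pd

    df = pd.DataFrame({
        "age":    [25, 31, None, 44, 31],
        "income": [52000, 61000, 48000, None, 61000],
        "label":  ["approve", "reject", "approve", "approve", "reject"],
    })

    print(df.isna().sum())                            # missing values per column
    print("duplicate rows:", df.duplicated().sum())   # exact duplicate records
    print(df["label"].value_counts(normalize=True))   # class balance of the target label

Checks like these are deliberately simple; the point is that data defects caught at this stage are far cheaper to fix than model defects found after training.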
08
Chapter 5 — ML: Functional Performance Metrics (120 minutes)
- Confusion Matrix (see the sketch after this list)
- Additional ML Functional Performance Metrics for Classification, Regression and Clustering
- Limitations of ML Functional Performance Metrics
- Selecting ML Functional Performance Metrics
- Benchmark Suites for ML Performance
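To illustrate the confusion matrix topic flagged above, here is a minimal sketch that derives accuracy, precision, recall, and F1-score from a confusion matrix; the counts are invented purely for illustration.

    # Minimal sketch: deriving common functional performance metrics from a confusion matrix.
    # The counts below are invented for illustration only.
    tp, fp, fn, tn = 85, 10, 15, 90   # true positives, false positives, false negatives, true negatives

    accuracy  = (tp + tn) / (tp + fp + fn + tn)
    precision = tp / (tp + fp)
    recall    = tp / (tp + fn)
    f1        = 2 * precision * recall / (precision + recall)

    print(f"accuracy={accuracy:.2f}  precision={precision:.2f}  recall={recall:.2f}  f1={f1:.2f}")

Which of these numbers matters most depends on the relative cost of false positives and false negatives, which is the selection question the later topics in this chapter take up.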
09
Chapter 6 — ML: Neural Networks and Testing (65 minutes)
- Neural Networks
- Coverage Measures for Neural Networks
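Coverage measures for neural networks carry structural coverage thinking over to neurons and their activations. The sketch below computes simple neuron coverage, the fraction of neurons driven above an activation threshold by at least one test input, on a toy random network; the network, the stand-in test suite, and the 0.0 threshold are all illustrative assumptions.

    # Minimal sketch: simple neuron coverage over a toy two-layer network.
    # The random network, the stand-in test inputs, and the threshold are illustrative assumptions.
    import numpy as np

    rng = np.random.default_rng(0)
    W1, W2 = rng.normal(size=(4, 8)), rng.normal(size=(8, 3))   # toy two-layer network

    def activations(x):
        h = np.maximum(0, x @ W1)            # hidden-layer ReLU activations
        out = np.maximum(0, h @ W2)          # output-layer activations
        return np.concatenate([h, out])

    test_inputs = rng.normal(size=(50, 4))   # stand-in test suite
    threshold = 0.0
    covered = np.zeros(8 + 3, dtype=bool)
    for x in test_inputs:
        covered |= activations(x) > threshold   # a neuron is covered once it activates above the threshold

    print(f"neuron coverage = {covered.mean():.0%}")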
10
Chapter 7 — Testing AI-Based Systems Overview (115 minutes)
- Specification of AI-Based Systems
- Test Levels for AI-Based Systems
- Test Data for Testing AI-Based Systems
- Testing for Automation Bias in AI-Based Systems
- Documenting an AI Component
- Testing for Concept Drift
- Selecting a Test Approach for an ML System
11
Chapter 8 — Testing AI-Specific Quality Characteristics (150 minutes)
- Challenges Testing Self-Learning Systems
- Testing Autonomous AI-Based Systems
- Testing for Algorithmic, Sample and Inappropriate Bias
- Challenges Testing Probabilistic and Non-Deterministic AI-Based Systems
- Challenges Testing Complex AI-Based Systems
- Testing the Transparency, Interpretability and Explainability of AI-Based Systems
- Test Oracles for AI-Based Systems
- Test Objectives and Acceptance Criteria
12
Chapter 9 — Methods and Techniques for Testing AI-Based Systems (245 minutes)
- Adversarial Attacks and Data Poisoning
- Pairwise Testing
- Back-to-Back Testing
- A/B Testing
- Metamorphic Testing (MT); see the sketch after this list
- Experience-Based Testing of AI-Based Systems
- Selecting Test Techniques for AI-Based Systems
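Of the techniques listed, metamorphic testing lends itself to a short illustration: it sidesteps the missing test oracle by checking a relation between the outputs of a source test case and a follow-up test case. The sketch below assumes the relation "mirroring an image left-to-right should not change its label"; the classify function is a hypothetical stand-in for the model under test, not a real API.

    # Minimal sketch: metamorphic testing of an image classifier without a definitive oracle.
    # classify() is a hypothetical stand-in for the model under test; the metamorphic relation
    # assumed here is "mirroring the image left-to-right should not change the predicted label".
    import numpy as np

    def classify(image: np.ndarray) -> str:
        # placeholder model: labels purely by mean brightness
        return "bright" if image.mean() > 0.5 else "dark"

    rng = np.random.default_rng(1)
    source_images = [rng.random((32, 32)) for _ in range(20)]   # source test cases

    failures = 0
    for img in source_images:
        follow_up = np.fliplr(img)                  # follow-up test case derived from the relation
        if classify(img) != classify(follow_up):    # relation violated -> potential defect
            failures += 1

    print(f"metamorphic relation violated in {failures} of {len(source_images)} cases")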
13
Chapter 10 — Test Environments for AI-Based Systems (30 minutes)
- Test Environments for AI-Based Systems
- Virtual Test Environments for Testing AI-Based Systems
14
Chapter 11 — Using AI for Testing (195 minutes)
- AI Technologies for Testing
- Using AI to Analyze Reported Defects
- Using AI for Test Case Generation
- Using AI for the Optimization of Regression Test Suites
- Using AI for Defect Prediction (see the sketch after this list)
- Using AI for Testing User Interfaces
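As one example of AI supporting testing, the hedged sketch below trains a tiny classifier to flag defect-prone modules so regression effort can be focused where failures are most likely; the module metrics, the toy history, and the logistic-regression choice are illustrative assumptions rather than a recommended approach.

    # Minimal sketch: predicting defect-prone modules to focus test effort.
    # The features, toy history, and model choice are illustrative assumptions only.
    from sklearn.linear_model import LogisticRegression

    # features per module: [lines of code, recent changes, past defect count]
    history = [[120, 3, 0], [2400, 18, 7], [300, 5, 1], [1800, 22, 9], [90, 1, 0], [950, 12, 4]]
    had_defect_next_release = [0, 1, 0, 1, 0, 1]

    model = LogisticRegression().fit(history, had_defect_next_release)

    new_modules = [[2000, 15, 5], [150, 2, 0]]
    for metrics, risk in zip(new_modules, model.predict_proba(new_modules)[:, 1]):
        print(f"module {metrics} -> predicted defect risk {risk:.2f}")   # test the high-risk modules first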
15
Wrap
Question and answer period.
Need a QA program to back this up in your organization?
If an outline is not enough and you want help applying it to a live engagement, we can have a call this week.
Related reading
Articles, talks, guides, and case studies tagged for the same audience.
- Whitepaper
Starting AI Adoption: A Sequence for Mid-Market Engineering Teams
The order of operations we use with mid-market engineering teams that have been told to ship AI and do not know where to start. Six stages, named exit criteria, the anti-patterns that predict failure, and the first-90-days view that ties architecture, evaluation, and model economics into a coherent adoption sequence.
Read →
- Whitepaper
Evaluation Before Shipping: How to Test an AI Application Before It Hits Production
The release-gate playbook for AI features. Covers the five evaluation dimensions, how to build a lean golden set, where LLM-as-judge is trustworthy and where it lies, rollout mechanics with named exit criteria, and the regression suite that keeps a shipped AI feature from quietly rotting in production.
Read →
- Whitepaper
Choosing the Right Model (and Knowing When to Switch)
A practical framework for matching LLM model tier to task. Covers the four axes (capability, latency, cost, reliability), cascade routing patterns that cut cost 60 to 80 percent without measurable quality loss, switching costs you did not plan for, and the worked economics at 10K, 100K, and 1M decisions per day.
Read →
- Whitepaper
Beyond ISTQB: A Multi-Domain Certification Roadmap for Technical L&D
Most engineering L&D programs over-index on a single certification family, usually ISTQB on the QA side or AWS on the infrastructure side, and under-invest across the rest of the technical domains the org actually needs. This paper covers a multi-domain certification roadmap (QA, AI, cloud, data, security, project management, software engineering) with sequencing logic for each level of the engineering ladder, plus the maintenance discipline that keeps the roadmap relevant as the technology shifts underneath it.
Read →
- Guide
The ISTQB Advanced Level path, mapped
The Advanced Level landscape keeps changing — CTAL-TA v4.0 shipped May 2025, CTAL-TM is on v3.0, CTAL-TAE is on v2.0. This guide maps all four core modules, prerequisites, exam formats, sunset dates, and which module a given role should take first. Links directly to the authoritative istqb.org syllabi.
Read →
- Whitepaper
The Case for Investing in Testing: A Board-Level Argument for Enterprise Test-Function Capability
Enterprise organizations regularly face the question of whether to invest in their test-function capability — in hiring, in tooling, in automation infrastructure, in process maturity. The question is often answered by default rather than by analysis, and the default is under-investment relative to the economic case. This whitepaper presents the board-level argument for investing in testing, structured around the four business outcomes that robust testing produces, the cost curve that makes early investment asymmetrically valuable, and the specific organizational patterns that distinguish organizations that treat testing as strategic from those that treat it as overhead.
Read →
Where this leads
- Solution
Risk Reduction & Clear Decisions
Quality programs and decision frameworks that shift risk discussions from anecdote to evidence.
Learn more →
- Service · Quality engineering
Software Quality & Security
Independent test programs, security testing, and quality engineering for systems where defects cost real money.
Learn more →
- Service · AI
AI & Data Governance
Building AI systems that work in production: architecture, governance, and the failure-mode coverage prototypes hide.
Learn more →